Opinion: Star Trek's Data would never work in practice because, like people without emotions, he'd be paralysed with indecision. [Susan Blackmore]
      – Lee J Haywood, 2010-10-20 at 20:43:47   (4 comments)

On 2010-10-20 at 20:46:15, Lee J Haywood wrote...
I think this argument misses the fact that whilst we evolved from creatures with 'emotions', a machine can be perfectly capable of weighing up choices based on a greater set of data, and doing so much more rapidly. The architecture can be different, as can the process of selection. Indeed, Data would even be able to make more rational choices than us simply because he has a much firmer grasp of probability. Humans are notoriously bad at making decisions, and much of that has to come down to emotional bias.
On 2010-10-21 at 15:52:26, Thelevellers wrote...
I think I have to agree with you there, Lee. I was just reading today in (last week's) New Scientist about the (apparently new) science behind morality, and a bit about Sam Harris's new book, which reckons that scientifically derived absolute morals would be better than human ones. i.e. This is better than that because a quantifiably greater number of people would be better off. (There's a bit more detail to it than that, obviously :P )
On 2010-11-14 at 11:13:36, BorgClown wrote...
I think she's projecting her psychology background onto a computer science problem. I imagine that since Dr. Soong doesn't like comatose androids, his algorithm forces a choice if deliberation hits a time limit. If there's doubt, then at least two choices are effectively equivalent, so any of them can be picked at random. That's a common failsafe in chess programs.
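[Editor's note: the failsafe described above can be sketched in a few lines. This is a minimal illustration, not anything from the show or a real chess engine; the names `decide`, `score`, and `time_limit` are made up for the example.]

```python
import random
import time

def decide(options, score, time_limit=0.05):
    """Pick the best-scoring option without deliberating past time_limit.

    Evaluates options until the clock runs out, then chooses the best of
    what has been scored so far. If several options tie for the top score,
    one is picked at random rather than stalling on the tie.
    """
    deadline = time.monotonic() + time_limit
    scored = []
    for opt in options:
        if time.monotonic() > deadline and scored:
            break  # out of time: decide using what's been evaluated so far
        scored.append((score(opt), opt))
    best = max(s for s, _ in scored)
    tied = [opt for s, opt in scored if s == best]
    return random.choice(tied)  # random tie-break prevents indecision
```

So `decide(["left", "right"], some_scoring_function)` always returns something, even when both options score identically, which is exactly the point: the failsafe guarantees a choice, not the optimal one.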
On 2010-11-14 at 11:15:24, BorgClown wrote...
Hell, now that I think of it, people do that too.