Question: Do you think the "Chinese Room" argument is correct? http://en.wikipedia.org/wiki/Chinese_room
      – Melchior, 2010-01-04 at 19:40:43   (17 comments)

On 2010-01-04 at 20:09:10, Lee J Haywood wrote...
Ah, well, it depends on what one understands the Chinese Room to represent. If it's a static program with no 'mind' then Searle is clearly correct in saying that the room possesses neither intelligence nor understanding of its own inputs and outputs. If, on the other hand, you have a learning machine that's capable of making inferences and creating internal models of the world - as the human mind does - then you have something worthwhile. The Chinese Room focuses on a symbol-processing machine, but it's not clear whether that's a limitation placed on the machine's interface to the world or on the machine's own design. If it has a neural network that has learnt about the world from birth to reach the level of an adult human, and is then restricted to symbol processing, then I'd say the argument fails. I'd also say that it may be possible to emulate a neural network with a universal Turing machine - i.e. a state machine / symbol processor, albeit a huge one.
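That emulation point can be made concrete with a toy sketch (the fixed-point scaling and the function are hypothetical, just for illustration): a single 'analogue' neuron computed using nothing but integer arithmetic, i.e. the kind of pure symbol manipulation a universal Turing machine performs.

```python
# A toy illustration: one 'analogue' neuron emulated with pure integer
# (symbol-manipulating) operations. Real values are represented as
# fixed-point integers scaled by 1000.

SCALE = 1000

def neuron_fixed_point(inputs, weights, bias):
    """Weighted sum plus step activation, using only integer operations."""
    total = bias
    for x, w in zip(inputs, weights):
        total += (x * w) // SCALE   # fixed-point multiply
    return 1 if total > 0 else 0    # step activation

# Inputs ~0.5 and ~0.8, weights ~0.9 and ~-0.4, bias ~-0.1 (all scaled).
print(neuron_fixed_point([500, 800], [900, -400], -100))  # prints 1
```

Nothing here is continuous, yet with enough neurons and enough bits of precision the digital version approximates the analogue one as closely as you like - which is the sense in which the huge symbol processor could stand in for the network.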
On 2010-01-05 at 04:44:22, BorgClown wrote...
Unfortunately, I cannot contribute much on this topic. One could argue that an advanced neural network is simply following instructions, be they digital or analog, and equally one could argue that a brain follows biochemical and electrical instructions, and hence is just as dumb. The problem is, we feel very much alive and sentient, and we can even recognize that sentience in lesser beings. Unless we first answer where the mechanism ends and the consciousness begins, thought experiments like this one will remain unresolved.
On 2010-01-05 at 22:58:23, Melchior wrote...
My take on this is similar to borg's - this is really a discussion about the philosophy of mind, not AI. Once you accept that the brain - and thus what we call intelligence - follows (very complicated) logically definable processes, it should follow that a sufficiently complicated program could reproduce the same processes and behaviour. I would argue that once you're at this stage a brain is a brain, no matter what hardware it runs on.
On 2010-01-05 at 23:50:46, Lee J Haywood wrote...
I guess we only know that, and understand that, other people have consciousness because we accept that they're like us and we know what it's like to be self-aware ourselves. We have difficulty with non-human animals because whilst we accept that they might be self-aware we know that their view of the world is somewhat alien. So you might create an AI that is convincingly 'human' and claims that it is self-aware, yet that's all it is - a question of convincing you. It's not necessarily an intrinsic property, if you accept that we humans are deluded by seeing our subconscious choices as being under conscious control after the fact.
On 2010-01-06 at 21:04:48, BorgClown wrote...
I don't believe a conscious being can be created with traditional computers. Our digital computers are intellectual automatons built like clockwork mechanisms. The tiny analog deviations are corrected and expunged continuously in order to keep the mechanism predictable. It could become so complex as to appear self-aware, but below that there's only perfectly synchronized clockwork.
On 2010-01-06 at 21:58:40, Lee J Haywood wrote...
It depends on whether or not you believe that the physical brain can be emulated on a digital computer. I'd say that it's possible but perhaps impractical with current computers, yet future computers will still be digital but much more powerful. You don't actually have to emulate every atom, of course, so it's likely that a digitally emulated brain will actually be more efficient (i.e. physically smaller) than a real one. Our brains are fundamentally based on noise, no matter how much we fool ourselves into thinking that we're rational and logical. The digital/analogue divide does change the game for physically large, inefficient computers, but I doubt it will be so important as digital computers get more powerful - you can emulate analogue processes sufficiently, I'm sure.
On 2010-01-07 at 01:05:47, Melchior wrote...
Indeed. Digital representation of an analogue value is no problem. If you look at the brain, it too is an automaton. The chemical processes that power it are governed by "laws" of a kind - admittedly when you get down to the quantum level it gets tricky, and you certainly couldn't make an exact emulation of a given brain, but you wouldn't need to. A similar degree of randomness on the micro scale would produce a comparable macro result. Throw a true random number generator into your computer and you no longer have "clockwork."
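The "digital representation of an analogue value is no problem" point can be illustrated with a minimal sketch (the function name and bit widths are arbitrary): quantise a continuous value onto a grid of discrete levels, and the error shrinks as you add bits.

```python
def quantise(value, bits, lo=-1.0, hi=1.0):
    """Map a continuous value in [lo, hi] onto 2**bits discrete levels."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    index = min(int((value - lo) / step), levels - 1)
    return lo + (index + 0.5) * step  # centre of the chosen level

# With enough bits, the digital copy is as close as you like:
print(abs(quantise(0.123456, 16) - 0.123456) < 2 ** -16)  # prints True
```

The quantisation error is bounded by half a step, so every extra bit halves the worst-case gap between the analogue value and its digital stand-in.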
On 2010-01-07 at 05:05:21, BorgClown wrote...
@Melchior: Unless you use that random number for binary calculations =) Today's computers are simply binary calculators; even if one gets a true random number generator, the digital computer is still just a very fast calculator and will use the randomness to calculate binary numbers. If digital computers get so random and clever as to appear sentient, I'd still think of them as a very complex Monte Carlo program. OTOH, I don't know if an analog computer could ever achieve true self-awareness, although one could argue that we are living proof that it can.
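The "very complex Monte Carlo program" image can be shown in miniature (estimating pi; the sample count and seed are arbitrary choices for the sketch): genuinely random draws go in, but everything done with them is deterministic binary arithmetic.

```python
import random

# A minimal Monte Carlo program: random samples are funnelled straight
# into a deterministic binary calculation (here, estimating pi by the
# fraction of random points that land inside a quarter circle).
def estimate_pi(samples, seed=42):
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(estimate_pi(100_000))  # close to 3.14159
```

However clever the program built on top, the randomness is only ever an input to the same fast calculator.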
On 2010-01-07 at 17:01:57, Lee J Haywood wrote...
In theory we think using natural selection - we generate a bunch of random connections and prune the ones that aren't working, leaving the 'fitter' pathways to let us get to the right answers to the problems we face. Even when we do mathematics, we do so in a very inefficient way. Think about how long it takes you to do something like long multiplication compared to how long it takes a computer. There's little doubt in my mind that computers (or rather, robots) will become self-aware given the freedom to evolve their minds. The mistake of traditional AI is to try to 'solve problems' with the goal of creating a patchwork intelligence that is fixed.
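The generate-and-prune idea above can be sketched as a toy program (the bit-string 'target' is a hypothetical stand-in for whatever fitness test a real problem imposes): produce a random variation, keep it only if it works better, discard it otherwise.

```python
import random

# A toy 'generate and prune' loop: random variations are generated and
# the ones that aren't working are pruned, leaving the fitter candidate
# to seed the next round.

def fitness(candidate, target):
    """Count the positions where the candidate matches the target."""
    return sum(a == b for a, b in zip(candidate, target))

def evolve(target, generations=1000, seed=1):
    rng = random.Random(seed)
    n = len(target)
    best = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(generations):
        child = best[:]
        child[rng.randrange(n)] ^= 1              # random variation
        if fitness(child, target) > fitness(best, target):
            best = child                          # prune the less fit
    return best

target = [1, 0, 1, 1, 0, 0, 1, 0]
print(evolve(target) == target)
```

It's wasteful next to long multiplication done directly - most variations are thrown away - yet it needs no knowledge of the problem beyond a way to score answers, which is what makes it a plausible cartoon of how evolved minds search.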
On 2010-01-08 at 04:15:43, BorgClown wrote...
Honestly, I don't even understand what self-awareness is. It exists, because if it were an illusion, you would have to be aware to perceive the illusion; yet it looks like it is more of a function than specific wetware.
On 2010-01-08 at 11:01:15, Lee J Haywood wrote...
I guess I equate self-awareness and consciousness, although really you're only self-aware when you think about yourself (as opposed to thinking about the things you're doing or seeing, etc.) I'm largely persuaded by the idea that consciousness really is an illusion. It's a way to gather together the decisions being made by your brain into a coherent whole, but with you seeing your own choices after the fact. This gives the illusion that there's a single 'you' who's in control, but in fact is just a mechanism to allow the mind to check that it's on the right track and make corrections if needed.
On 2010-01-09 at 05:09:56, BorgClown wrote...
If consciousness were just an illusion, technically you couldn't experience it because you wouldn't be conscious. You would just appear conscious to external observers. Are we very complex automatons? I don't think so. There was a time when life was something almost magical, yet biologists can now create simple life forms from scratch. I expect similar breakthroughs in neurology, hopefully in my lifetime.
On 2010-01-09 at 10:57:00, Lee J Haywood wrote...
I don't follow your argument. You seem to be saying that consciousness is something other than an emergent phenomenon but aren't saying what it is. Consciousness being an illusion is indistinguishable from consciousness as an integral state - either way you think you're conscious, therefore you are. 'Consciousness' is a vague term that explains what we feel we experience, not what's actually happening. The alternative is that consciousness is at the head of the decision-making system, that there's a real 'you' who makes decisions and hands them off to the subconscious for processing. What I'm saying is that you're actually a back-seat observer who perceives 'your' decisions and actions around 100ms after they've actually happened - an 'illusion', and there are experiments that support this notion. It's not to say that consciousness isn't real or isn't important, just that it's not what you might think it is.
On 2010-01-10 at 22:56:17, BorgClown wrote...
That's because I don't know what it is. It might be an emergent phenomenon, it might be a yet undiscovered area of biology, or something else; all I can say is that it can't be explained yet, at least not as well as being alive can be explained now. Unless consciousness gets clearly defined I'll abstain from speculating too much about it.
On 2010-02-08 at 15:56:34, Thelevellers wrote...
I would advise giving The Meme Machine by Susan Blackmore a read - it had me pretty willing to believe the 'back seat driver' theory by the end. That was helped a little by attempting (but struggling to fully grasp) 'Consciousness Explained' by Dan Dennett. I'm not sure about the Chinese Room argument, but I think I partly agree, in the sense that I don't think we are conscious in the way we *think* we are. If you understand? I'm not sure I'm being very clear, tbh - it's the end of a college day, so I may give up for now, but I intend to try Consciousness Explained again, as I've enjoyed thinking about this subject again... I may be back! :P
On 2010-02-08 at 22:33:37, Lee J Haywood wrote...
@Thelevellers: Added to my shopping basket.
On 2010-02-09 at 11:49:27, Thelevellers wrote...
@Lee J Haywood: Muahahaaa! And so the spread of The Meme Machine meme continues! Just as it *wants* :P