Michael Hanlon does raise the ethical hurricane that spins at the end of the effort, essentially, to create a human brain with computer technology:
Well, a mind, however fleeting and however shorn of the inevitable complexities and nuances that come from being embedded in a body, is still a mind, a 'person'. We would effectively have created a 'brain in a vat'. Conscious, aware, capable of feeling, pain, desire. And probably terrified.

And if it were modelled on a human brain, we would then have real ethical dilemmas. If our 'brain' - effectively just a piece of extremely impressive computer software - could be said to know it exists, then do we assign it rights?
Would turning it off constitute murder? Would performing experiments upon it constitute torture?
Note the quotation marks around "person." Putting aside questions to which we do not have answers, such as the inherent morality that we should expect from digital life, we can observe that the likely response of our culture is tilted by the very assumptions with which it will achieve the innovation. Earlier, Hanlon writes:
So what is it, in that three pounds of grey jelly, that gives rise to the feeling of conscious self-awareness, the thoughts and emotions, the agonies and ecstasies that comprise being a human being?

This is a question that has troubled scientists and philosophers for centuries. The traditional answer was to assume that some sort of 'soul' pervades the brain, a mysterious 'ghost in the machine' which gives rise to the feeling of self and consciousness.
If this is the case, then computers, being machines not flesh and blood, will never think. We will never be able to build a robot that will feel pain or get angry, and the Blue Brain project will fail.
But very few scientists still subscribe to this traditional 'dualist' view - 'dualist' because it assumes 'mind' and 'matter' are two separate things.
Instead, most neuroscientists believe that our feelings of self-awareness, pain, love and so on are simply the result of the countless billions of electrical and chemical impulses that flit between its equally countless billions of neurons.
So if you build something that works exactly like a brain, consciousness, at least in theory, will follow.
The implication of this sort of non-dualism is that the self isn't real. Consider what Hanlon misses: the possibility that the simulation could tap into, or even generate, a soul. Rather like the mystery of the Trinity, I suspect the relationship of mind to body is more subtle than the binary dualism/non-dualism phrasing allows, but the salient point is that, by relegating soul to the mysteries of the gray jelly, Hanlon implicitly accepts the conclusion that cyber-consciousness would disprove soul, and yet he still wishes to count the creation as a "person."
The problem is that if there is, conceptually, no "ghost in the machine," then there is only machine, and machines can be turned off without moral complication. At some point, a human society with pervasive familiarity with this sort of humanoid lifeform might learn to recoil at the notion that one can simply erase the hard drive, but in the interim, it would have internalized the principle that "personhood" is "simply the result of the countless billions of electrical and chemical impulses." The "simply" is out of place there; whatever the mechanism, there's something substantial about the soul, and our inherent value hinges on its recognition.