Penrose's argument doesn't satisfy me, though, like him, I think that consciousness has something to do with quantum mechanics. So here is my view, which is based on Everett's many-worlds interpretation.
Everybody knows the Schrödinger's cat paradox: in a box we put together a radioactive atom, a bottle of poison, a mechanism that breaks the bottle according to the state of the atom, and a cat. If we suppose that the atom has a one-second half-life and we set up the mechanism to be operational during that time, then quantum mechanics states that after that time the atom will be in a disintegrated/not-disintegrated superposition of states, and the cat in a dead/alive superposition. If we then open the box, we have a 1/2 chance of finding the cat dead and a 1/2 chance of finding it alive.
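The superposition just described can be written out explicitly; here is a minimal sketch in standard ket notation (the "decayed"/"intact" labels are mine):

```latex
% Probability that the atom has decayed after time t, with half-life t_{1/2}:
%   P(t) = 1 - 2^{-t/t_{1/2}}, so P(1\,\mathrm{s}) = 1 - 2^{-1} = 1/2.
% Joint atom-cat state after one half-life, with no collapse:
\[
  \lvert \psi \rangle
  = \tfrac{1}{\sqrt{2}}\,\lvert \text{decayed} \rangle \lvert \text{dead} \rangle
  + \tfrac{1}{\sqrt{2}}\,\lvert \text{intact} \rangle \lvert \text{alive} \rangle
\]
% Each branch carries amplitude 1/\sqrt{2}, hence probability 1/2 on opening the box.
```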
The question is: what is the state of the cat before we open the box? The theory answers that it is superposed. Another question then is: what does it feel like for the cat to be in a superposition of states? Well, we just have to ask it. To be able to do so, we change the protocol so that, instead of killing it (poor animal!), we present two different symbols to it, a yellow one and a blue one for instance, according to the state of the atom.
I don't know about cats, but I know it is possible to train monkeys to recognize that kind of symbol. It is clear that the animal will report one symbol or the other, but not a superposition of the two.
- Objectively (mathematically), the cat is in a superposition of states: "saw the yellow symbol"/"saw the blue symbol". (At least, this is what the many-worlds interpretation says, because it doesn't involve any collapse of the state vector.)
- When we prompt it, the response of the cat is unique: subjectively, the state is well determined; it is either "I saw the yellow symbol" or "I saw the blue symbol", each with a 1/2 chance.
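The subjective statistics in the second point can be illustrated with a small simulation (a sketch with names of my own; a classical random draw only mimics the per-branch outcome, it does not model the superposition itself):

```python
import random

def run_trials(n, seed=0):
    """Simulate n openings of the box: each branch reports one definite symbol."""
    rng = random.Random(seed)
    counts = {"yellow": 0, "blue": 0}
    for _ in range(n):
        # From the inside, exactly one outcome is seen, with probability 1/2 each.
        counts[rng.choice(["yellow", "blue"])] += 1
    return counts

counts = run_trials(10_000)
print(counts)  # roughly 5000 of each
```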
Since the subjective aspect doesn't match the objective state, we have to consider that the cat has its own perception of reality, a "consciousness".
We can apply this reasoning to a digital machine in place of the cat: everyone can see how to get from such a machine the information about which colour excited its artificial eye.
But here an important distinction must be made: the machine must of course be real, must physically exist, to be able to be in a quantum state. What is worth noticing is that most often, by "digital machine" we mean "an algorithm", that is, an abstraction. An algorithm, being a virtual "thing", cannot be in a quantum state. There are also real, embodied digital machines; our desktop computers are good examples. But we have to know what we are talking about: whether by "my computer" we mean the program it is executing, or the physical machine. A computer is designed to behave as much as possible like an abstraction, that is, without making any mistake. But what we have to pay attention to is that a virtual machine could never have a parity error, for instance.
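To make the parity-error remark concrete, here is a minimal sketch of even-parity checking (the names are mine). The point is that the check itself is pure algorithm, but the mismatch it detects can only arise in a physical memory, where a bit can flip spontaneously:

```python
def parity(byte):
    """Even-parity bit of an 8-bit value: 1 if the number of set bits is odd."""
    return bin(byte & 0xFF).count("1") % 2

stored = 0b1011_0010                  # data written to physical memory
p = parity(stored)                    # parity bit recorded alongside it
corrupted = stored ^ 0b0000_1000      # a spontaneous physical bit flip
print(parity(corrupted) != p)         # True: the hardware would flag a parity error
```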
Such is the difference between real and virtual things. A debate in the AI field opposes people who think that a conscious being couldn't be digital, that it would have to be analog. Actually, the right distinction is between abstract and embodied.
That is why Penrose's demonstration of human superiority over digital machines is necessarily unsatisfactory: is the machine real, or abstract? A parity error is precisely the kind of spontaneous event required to discover (say) Euclid's fifth postulate when one is an implementation of (say) the formal system of Euclidean geometry.
That is also why his invocation of microtubules as a possible source of consciousness is superfluous: if we define consciousness as "what it feels like to be in a superposition of states", then all physically existing things are conscious, even inanimate objects.
Roger Penrose, The Emperor's New Mind, Oxford University Press, 1989
Roger Penrose, Shadows of the Mind, Oxford University Press, 1994
The Everett FAQ