The explanatory gap is the claim that consciousness and subjective experiences such as qualia cannot be fully explained merely by identifying the corresponding physical (neural) processes. Bridging this gap is known as “the hard problem”. The explanatory gap has vexed and intrigued philosophers and AI researchers alike for decades and has caused considerable debate.
The homunculus argument is a fallacy arising most commonly in the theory of vision. One may explain (human) vision by noting that light from the outside world forms an image on the retinas of the eyes, and that something (or someone) in the brain looks at these images as if they were images on a movie screen (this theory of vision is sometimes termed the theory of the Cartesian Theater; it is most associated, nowadays, with the psychologist David Marr; see also: solipsism). The question arises as to the nature of this internal viewer. The assumption here is that there is a ‘little man’ or ‘homunculus’ inside the brain ‘looking at’ the movie.
The reason why this is a fallacy may be understood by asking how the homunculus ‘sees’ the internal movie. The obvious answer is that there is another homunculus inside the first homunculus’s ‘head’ or ‘brain’ looking at this ‘movie’. But how does this homunculus see the ‘outside world’? In order to answer this, we are forced to posit another homunculus inside this other homunculus’s head, and so forth. In other words, we are in a situation of infinite regress. The problem with the homunculus argument is that it tries to account for a phenomenon in terms of the very phenomenon that it is supposed to explain.
Another example arises with cognitivist theories that argue that the human brain uses ‘rules’ to carry out operations (these rules are often conceptualised as being like the algorithms of a computer program). For example, in his work of the ’50s, ’60s and ’70s, Noam Chomsky argued that (in the words of one of his books) human beings use Rules and Representations (or, to be more specific, rules acting on representations) in order to cognise (more recently Chomsky has abandoned this view: cf. the Minimalist Program).
Now, in terms of (say) chess, the players are given ‘rules’ (i.e. the rules of chess) to follow. So: who uses these rules? The answer is self-evident: the players of the game (of chess) use the rules; it is not the case (obviously) that the rules themselves play chess. The rules themselves are merely inert marks on paper until a human being reads, understands and uses them. But what about the ‘rules’ that are, allegedly, inside our head (brain)? Who reads, understands and uses them? Again, the implicit answer is (and, some would argue, must be) a ‘homunculus’: a little man who reads the rules of the world and then gives orders to the body to act on them. But again we are in a situation of infinite regress, because this implies that the homunculus has cognitive processes that are also rule-bound, which presupposes another homunculus inside its head, and so on and so forth. Therefore, so the argument goes, theories of mind that imply or state explicitly that cognition is rule-bound cannot be correct unless some way is found to ‘ground’ the regress.
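The structure of this regress can be sketched, purely as an illustrative analogy, as a recursive definition with no base case: each level of explanation simply defers to another level of the same kind. All names here (`interprets_rules`) are invented for the illustration, not drawn from any theory of mind.

```python
def interprets_rules(rules, level=0):
    """To explain how the rules are used, this account posits an
    inner rule-user -- and must posit another for that rule-user,
    so no level ever actually explains rule-following."""
    # No base case: each homunculus's cognition is itself rule-bound,
    # which requires yet another homunculus to interpret those rules.
    return interprets_rules(rules, level + 1)

# The call never terminates on its own; Python aborts the unbounded
# recursion with a RecursionError, mirroring the infinite regress.
try:
    interprets_rules("the rules of chess")
except RecursionError:
    print("infinite regress: no level grounds the rule-following")
```

A theory that ‘grounds’ the regress would correspond, in this analogy, to supplying a base case: some level at which rule-following is explained without invoking a further rule-user.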
This is important because it is often assumed in cognitive science that rules and algorithms are essentially the same: in other words, the theory that cognition is rule bound is often believed to imply that thought (cognition) is essentially the manipulation of algorithms, and this is one of the key assumptions of some varieties of artificial intelligence.
Homunculus arguments are always fallacious unless some way can be found to ‘ground’ the regress. In psychology and philosophy of mind, ‘homunculus arguments’ (or the ‘homunculus fallacies’) are extremely useful for detecting where theories of mind fail or are incomplete.
The question of direct or “naïve” realism, as opposed to indirect or “representational” realism, arises in the philosophy of perception and of mind out of the debate over the nature of conscious experience; the epistemological question of whether the world we see around us is the real world itself or merely an internal perceptual copy of that world generated by neural processes in our brain. Naïve realism is known as direct realism when developed to counter indirect or representative realism, also known as epistemological dualism, the philosophical position that our conscious experience is not of the real world itself but of an internal representation, a miniature virtual-reality replica of the world. Indirect realism is broadly equivalent to the accepted view of perception in natural science, which states that we do not and cannot perceive the external world as it really is but know only our ideas and interpretations of the way the world is. Representationalism is one of the key assumptions of cognitivism in psychology. The representational realist would deny that ‘first-hand knowledge’ is a coherent concept, since knowledge is always via some means. Our ideas of the world are interpretations of sense data derived from an external world that is real (unlike the standpoint of idealism). The alternative, that we have knowledge of the outside world that is unconstrained by our sense organs and does not require interpretation, would appear to be inconsistent with everyday observation.
The hard problem of consciousness is the problem of explaining how and why we have qualitative phenomenal experiences.
The existence of a “hard problem” is controversial and has been disputed by some philosophers. An answer could lie in understanding the roles that physical processes play in creating consciousness and the extent to which these processes create our subjective qualities of experience.
Several questions about consciousness must be resolved in order to acquire a full understanding of it. These questions include, but are not limited to, whether being conscious can be wholly described in physical terms, such as the aggregation of neural processes in the brain. If consciousness cannot be explained exclusively by physical events in the brain, it must transcend the capabilities of physical systems and require an explanation by nonphysical means. For philosophers who assert that consciousness is nonphysical in nature, there remains the question of what, outside of physical theory, is required to explain it.
Various formulations of the “hard problem”:
- “Why should physical processing give rise to any inner life at all?”
- “How is it that some organisms are subjects of experience?”
- “Why does awareness of sensory information exist at all?”
- “Why do qualia exist?”
- “Why is there a subjective component to experience?”
- “Why aren’t we philosophical zombies?”
Chalmers stated the problem as “why does the feeling which accompanies awareness of sensory information exist at all?” in both The Conscious Mind (1996) and in the paper “Facing Up to the Problem of Consciousness” (The Journal of Consciousness Studies, 1995).