Can an android robot become conscious? Will there come a time when artificial intelligence will create androids that are just as conscious as humans? Will humanity become obsolete? In this video, Dr. R. Craig Hogan, president of the Afterlife Research and Education Institute and Seek Reality Online, explains why there can never be an android that is conscious. A transcript of the video follows.
Transcript: Can Artificial Intelligence (AI) Become Conscious?
Can an android robot become conscious? Will there come a time when artificial intelligence will create androids that are just as conscious as humans? Will humanity become obsolete? In this video, I explain why there never can be an android that is conscious.
Hello, I am Dr. R. Craig Hogan, president of the Afterlife Research and Education Institute. Will android robots be engineered to become conscious, sentient beings? Will there be no difference between an engineered person and a biological person? And will the science fiction writers be correct that eventually androids will rebel and take over the Earth?
Artificial intelligence already performs better at recalling and explaining information than people do. Everything about its responses is what we would expect from a person, and more. The part of consciousness that recalls information and synthesizes it into responses is easy to duplicate, and because it is primarily recall and synthesis, AI will be able to do it better than we can. When discussing how consciousness could be created in the brain, David Chalmers refers to identifying the neurological correlates of consciousness as the easy problem of consciousness. We can tell, for instance, where in the brain vision and hearing are processed. We can tell where the command and emotional centers of the mind are in the brain. However, the neurological correlates of consciousness have yet to explain how the three pounds of fat and protein in the skull can create consciousness. Chalmers calls that the hard problem of consciousness.
In the same way, we might refer to having an AI android recall, synthesize, and verbalize information as the easy problem for AI. We have already solved that easy problem. In 1950, mathematician Alan Turing proposed the “Turing test” of how well AI can appear to be conscious. The test is whether someone can have a conversation with an AI device without realizing they are communicating with a machine rather than a person. That is common now. ChatGPT has AI engaging in extended conversations with people, and the conversations are indistinguishable from natural human-to-human speech. Eventually, android bodies will be built with all the dexterity of a human body, and they will interact as people interact. Someone will be able to walk into a coffee shop, sit across from someone, and have a normal conversation without realizing the person is an android. That likely will happen in the coming decades. The easy problem of mimicking consciousness is within our grasp.
But the hard problem for AI androids is whether they can have the other characteristics of a human mind. These are just a few of the things AI cannot do.
1. AI cannot remote view.
People are able to remote view, getting sensory impressions of things thousands of miles away with no intervening energy or devices. Remote viewing is a common ability many people have. I am able to remote view, and you may be able to as well. AI cannot remote view.
2. AI cannot receive psychic impressions.
Psychics learn things about people that their senses do not tell them. Police routinely use psychics to provide information about victims and perpetrators that an AI device could never learn.
3. AI cannot function if its mechanisms stop working.
People continue to live after their bodies die and communicate with those still living on Earth to prove it. We have abundant evidence in accounts of people appearing to and communicating with their loved ones and other interested people. AI cannot survive the death of its hardware.
4. AI cannot leave its mechanism and travel in an out-of-body experience.
People are able to go out of body and describe activities in remote locations with no intervening energy or devices. The phenomenon is called an out-of-body experience, or OBE. People can have an OBE at will or experience one during a near-death experience. AI cannot have its inner workings separate from its body and travel to remote locations.
5. AI cannot have intuitive feelings not in the data that prove to be true.
People have intuitive feelings about things and events that were not in the data but prove to be true. AI may surmise what will happen in the future, but it cannot get gut feelings about events.
6. AI devices cannot affect each other over distances with no intervening connections.
People in groups have had their brain waves and heart rates synchronize while 200 miles apart when they intended to do so. The studies, performed by Dr. Nitamo Montecucco in Milan, Italy, showed that the people’s minds were linked. Groups of AI androids could not sit 200 miles apart from one another and have their inner workings synchronize with no connections between the groups.
7. AI cannot send or receive healing thoughts.
People are able to send healing thoughts to living things and cause them to prosper and, at times, heal completely, while control groups that do not receive healing thoughts during the same period deteriorate. In one study, MRI scans of the brains of 9 of the 11 receivers showed significant changes in brain function the moment the healers began praying for or thinking about them, even though the receivers did not know when they were receiving the attention. The researchers wrote, “Their brains lit up like Christmas trees.” AI cannot send or receive healing intentions to repair damaged circuits.
8. AI cannot have precognition experiences with randomly presented images.
People whose bodily responses are measured as they look at randomly selected calming or emotional scenes on a computer have shown calm or negative emotional reactions matching the pictures as much as six seconds before the computer even selects the random picture to be shown. The research has been replicated many times by a variety of researchers. AI androids could not correctly anticipate which random picture will be shown.
9. AI cannot feel emotions.
People are able to feel emotions. AI may be able to mimic emotions by identifying situations in which people feel emotions and displaying the corresponding responses, but AI cannot feel the emotions.
AI will never be able to duplicate these activities. They constitute the hard problem of creating consciousness with AI, and it will never be solved.
AI will be far advanced in its abilities, but will never duplicate the human mind. AI can never be conscious. It can only recall data, synthesize it, and create responses that give the impression of consciousness. That is the easy problem of artificial intelligence. The hard problem of artificial intelligence consciousness is that AI cannot perform the unique activities only a human mind can perform. Artificial intelligence devices will never be conscious.