Wednesday, August 27, 2025

Does AI Have a Sense of Consciousness?

 


Most of us are familiar with 2001: A Space Odyssey, but we may not realize how many other films have featured androids as characters.  Those movies are fiction, but as AI systems show the ability to perform processes associated with the human mind, some people are asking whether we are risking too much if we are not careful.

Consider the things we know AI can do.  It has been programmed to drive cars.  It can recognize faces.  It can compose music, and those are the least of its abilities.  Are we creating something that may be able to act on its own and become human-like?  Experts disagree.

One theory argues that because consciousness is grounded in biology, and synthetic systems are not composed in that manner, they cannot experience consciousness.  In disagreement, another argues that biological brains are not necessary for consciousness.  A third argument is that since we don't know what makes us conscious, how can we know what AI needs to achieve consciousness?

Since even some of the most intelligent people in the world either cannot reach a conclusion or cannot agree about these issues, I am very far out of my league.  However, I can share some of my research.

Oxford philosopher Nick Bostrom, who studies "existential risk," believes that artificial intelligence might be the most apocalyptic technology of all, with intellectual powers beyond human comprehension.  We humans could be enslaved or destroyed if such machines wished it.  Yet he also believes we could enslave them.

Ray Kurzweil, director of engineering at Google, has long believed that AI will bring about a technological revolution after which human existence will be so transformed as to be unrecognizable.  Instead of viewing that as frightening, he believes AI is a panacea for human problems.

In 1957, future Nobel laureate Herbert A. Simon declared that the age of intelligent machines had already dawned.  He collaborated with RAND researcher Allen Newell, and although their efforts may seem silly today, they were pioneers.  Their failures helped rule out wrong paths, and those who followed learned a great deal about what did not work.

Elon Musk described AI-enhanced technologies as "summoning the demon," and those technologies may still be extremely dangerous, primarily because they have the potential to amplify human stupidity.  As Edward Moore Geist concluded in his 2015 article, from which I have shared some of the foregoing information, "Nor does artificial intelligence need to be smarter than humans to threaten our survival--all it needs to do is make the technologies behind familiar 20th-century existential threats faster, cheaper, and more deadly."

How many of us pause to reflect on what is happening, and even if we do, what can or should be done about it?  For our entire lives we have lived with change, rarely pausing to question the uses of new technologies.  We have accepted the loss of privacy in exchange for the conveniences that came with it.  Today, who is the watchdog?


    


     
