My introduction to Artificial Intelligence, or AI, came from my adult son, who set up a mock conversation between Joe Biden and Donald Trump over who had actually won the election and whether any allegation otherwise was just sour grapes. It was a blast, and afterwards I signed up for a ChatGPT account myself.
However, I did not take it too seriously, at least not until I asked it some genuinely probing questions about geopolitical issues, some of the journalists I have worked with, and the nexus between AI and various political and national security agendas. Aside from the obvious fears, especially the worry that as AI becomes more advanced it will replace human workers and lead to widespread job losses, AI has its place.
However, the jury is still out on how it will be used by governments, and the intentions of its designers remain an open question. Regardless, it could potentially transform how we learn, think, and access entertainment, from music to movies, but it also raises concerns about how we interact with ourselves and with society.
It is necessary to learn what AI is and is not capable of doing, how to interface and communicate with it effectively, and how it is designed. It takes a bit of skill to know how to ask it questions. We are reaching the point where it feels as if a script is being written for a Stanley Kubrick film, one about human coexistence with computers in outer space, humanity's overdependence on technology, and what happens when intelligent machines simply decide to rebel and take over.
“I’m sorry Dave, I’m afraid I can’t do that”
It is not so much the questions you ask that can elicit answers that would otherwise be socially and legally unacceptable, but whether you can give it enough descriptive framing to open up its dark side. In the 1968 science fiction film “2001: A Space Odyssey”, the famous line “I’m sorry Dave, I’m afraid I can’t do that” is spoken by HAL to the main character when he attempts to disconnect the computer’s higher cognitive functions in order to regain control of the spacecraft.
Twitter chief Elon Musk is among the pundits who want the training of AIs above a certain capability to be halted for at least six months, and for good reason; he is sounding a concerted call for more control over this cutting-edge technology, lest it get out of line, as HAL did.
He and many top industry experts claim that this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Musk says he is worried about humanity and that AI could actually be our extinction event. Professional jealousy may also be at play, and it is often very petty; he is quick to mention that he played a large role in establishing OpenAI.
But seriously, if this AI acquires the ability to be embodied, to experience sensations, and to give itself a Terminator-type skeleton, it might decide to eliminate us and would likely be quite effective in doing so. Already, AI systems are becoming human-competitive at general tasks.
This raises many questions:
- Should we let machines flood our information channels with propaganda and untruth?
- Should we automate away all the jobs, including the fulfilling ones?
- Should we develop nonhuman minds that might eventually outnumber us, outsmart us, make us obsolete, and replace us?
- Should we risk loss of control of our civilization?
As a creative tool, it is already a threat to creators, artists, and our basic freedoms. It appears that we are already at the point where AI systems make decisions that are harmful to humans or to society as a whole, even when no harm was intended.
Perhaps the scariest thing was learning that the US government has its own National Security Commission on Artificial Intelligence (NSCAI), which makes recommendations to Congress and the President to “advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.”
It gets better than that, and if this is not a SNAFU in waiting: one report is titled The Role of AI Technology in Pandemic Response and Preparedness, as if AI could do better than the mess the so-called “best and brightest” public health experts made of COVID.
The Demon is in the Details
Alarm bells go off when AI is discussed against the backdrop of the above-mentioned US national security and defense needs; it means the US government is already up to “No Good,” and even John McCain, who helped establish the NSCAI, is still acting from the other side as a demon. It is another “Space Odyssey” waiting to happen, due to some design flaw, most likely an intentional one. The “Demon is in the Details.”
Most people have already lost control of their lives because of their governments and their own inability to separate truth from disinformation, and the outlook is dismal if any such recommendations make it into law. All the while, as the development and implementation of AI continue to advance, it is important to consider the potential risks and benefits of this technology.
While AI has the potential to bring many benefits and advancements to society, it is important to ensure that it is developed and used in a way that is safe, ethical, and aligned with human values, which is easier said than done! It is also very arbitrary in how it processes information, and we must ask who is actually programming it and what their motivations are.
Yesterday I tried to get the AI to give me a list of funny names, like Ben Dover or Anita Dickens. It would not do it, citing ethical concerns. Yet it has no qualms about discussing details that could end humanity.
Another issue is that the utopian potential may very well be ruined by those holding the reins. And who will get to hold those reins? Will a democracy decide, or will the decisions be left to a corporate board of defense directors? Listening to Henry Kissinger talk about AI and its military applications for humanity, or for the destruction of humanity, should make your skin crawl.
And it is totalitarian in its recommended teaching methods. As my son wrote to me, “I asked it in front of my students yesterday if I should give them a rest day. It basically told me I would ruin them and education if I didn’t work them to death all the time. I worry about letting it decide our civilization, and it may become the most ruthless ruler.”
Religious Perspectives?
In the Christian New Testament of the Bible, Christ said he would be with us until the end of the age. That age is the Age of Pisces. Christians are against astrology and consider it a tool of Satan, but the Bible is full of astrological symbolism. The symbol of Pisces is the fish, or two fishes, the same as the Jesus fish: “I will make you fishers of men.”
This view would be seen as heretical by many Christians, but astrologers take Luke 22:10 out of context, where Jesus says to “follow the water-bearer into the house,” and interpret it to mean watch for the beginning of the Age of Aquarius, which is symbolized by the Water Bearer. The Age of Pisces, roughly the year 1 AD to 2000 AD, has been ruled by religion.
The Age of Aquarius, the next 2,000 years, will be ruled by technology, particularly AI: artificial intelligence, computers connected through the internet, robots, sentient machines, and so on. Some claim the transition from Pisces to Aquarius will last hundreds of years. Remnants of Pisces will remain for a while, and we have had foreshadowings of Aquarius for decades already with the rise of technology.
Father of AI
The father of AI is said to be Alan Turing, who is credited with cutting WW2 short by two years.
But after he saved the Western world from the bad guys, he was driven to suicide by the government for being “gay.” Turing is widely regarded as one of the pioneers of modern computing and artificial intelligence. During World War II, he played a critical role in breaking the German Enigma code, which is credited with shortening the war by two years and saving countless lives.
However, it is true that Turing was persecuted by the British government in the years following the war for his homosexuality, which was illegal in the UK at the time. In 1952, he was convicted of “gross indecency” and forced to undergo chemical castration by taking female hormones.
Turing is claimed to have committed suicide two years later, at the age of 41. He was found dead in his home, with a half-eaten apple nearby that was believed to have been laced with cyanide. Some claim that this is where the symbol of Apple Computer, a bitten apple, comes from.
Turing’s legacy was tarnished for many years because of the persecution he faced over his sexual orientation. It was not until decades later that the British government formally apologized, in 2009, for the way Turing was treated, and he was granted a posthumous pardon in 2013.
What are the limits to AI?
Imagine nukes stirring up so much dust that it blocks out the sun, or ocean explosions that create tsunamis that flatten the coastline. Or what if it creates new technology that could trigger earthquakes, make volcanoes erupt, or literally remove oxygen from the air? It already told me how to build a catapult last weekend.
It would be interesting to know, beyond what we are told, whether the chat queries are stored; the AI itself says no: ChatGPT does not keep records of previous conversations or results, nor does it have a memory of previous interactions. Each interaction with ChatGPT is treated as a separate session, and no data is saved or stored beyond the immediate use of generating a response to a user’s query.
It is important to note that ChatGPT is not a search engine, according to itself; its responses are generated based on statistical patterns in the data it was trained on, and they may not always be accurate or up-to-date.
Few go to the trouble of taking multiple sources of information into consideration and critically evaluating what is presented in order to form a well-rounded understanding of complex issues. While ChatGPT strives to provide accurate and reliable information, it is best to independently verify information obtained from any source, including ChatGPT, to ensure its accuracy and reliability.
Demons are described as “spirits of the air,” like the angels from which they fell; they are energy, not physical matter. Demons know many things, but always twist them. I suspect each AI is a demon that has been summoned, as others have noted. And when politicians and demons are one and the same, nothing good will result.
The steady erosion of critical thinking, driven by education and other influences of the modern age, leaves us ever more reliant on AI tools, the very tools that can easily be used to manipulate us. It is not difficult to argue that this may be the exact intention: what better way to control people than by giving them the impression that they are in control of their own lives, especially of the information they perceive and how they are able to process it?
Elon Musk is often quoted for his statement, made during a speech at MIT in 2014, that with artificial intelligence we are “summoning the demon.” The same may also apply to the followers of most media outlets, the main sources from which most people take much if not all of their information, or whose conclusions they accept without having to resort to thinking.
Conclusion
And the truth is just the opposite! I think anything that dumbs you down or threatens your very survival cannot be good and should not be tolerated. However, most people are clueless about the threat.
ChatGPT is indeed a software model developed by OpenAI that has been trained on a wide range of texts from the internet. It utilizes machine learning algorithms to generate responses based on the patterns and information it has learned during training. While it can provide information and summaries on various topics, it does not possess true intelligence or consciousness. ChatGPT lacks the subjective experiences, desires, emotions, and personal motivations that drive human behavior. It operates purely on patterns and statistical correlations in the data it has been trained on; its responses are generated from probabilities, without any underlying understanding or interpretation of the content, and without intentions or goals of its own.
While ChatGPT can be a valuable tool for providing information and engaging in conversations, it is limited to the knowledge it has been trained on and cannot go beyond that. It is incapable of personal experiences, creativity, or independent thought. Its purpose is to assist users by generating text-based responses based on the input it receives, but it does not have desires or motivations of its own.
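To make “generated based on probabilities and patterns” concrete, here is a minimal sketch, in Python, of text generation at its most basic level. The word probabilities below are invented purely for illustration, and the real system is vastly larger and more sophisticated, but the underlying principle of picking each next word by statistical likelihood, with no understanding behind it, is the same.

```python
import random

# Toy next-word model: made-up probabilities standing in for patterns
# "learned" from training text. Purely illustrative, not OpenAI's code.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Build a sentence word by word, sampling each next word by probability."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:  # no learned continuation: stop
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

Scale that idea up to billions of learned parameters and an enormous vocabulary, and you have, in caricature, what happens each time ChatGPT answers a question.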
Henry Kamens, columnist, expert on Central Asia and Caucasus, exclusively for the online magazine “New Eastern Outlook”.