By Charlie Melton
I write frequently about Artificial Intelligence, or AI. Science fiction writers have used it as a topic for decades. The “Terminator” movies are one example. The Star Trek shows feature varying degrees of AI. “The Matrix” is a mind-bending venture into AI. My favorite TV series of all time, “Person of Interest,” is all about AI enslaving humanity.
Elon Musk, Bill Gates, and others on the forefront of technology have been issuing dire warnings about AI for years. I’ve been on the fence about the real danger. Firstly, we’ve all seen arguably the most significant advance in history, the internet (invented by Al Gore), reduced to a way to share porn and funny cat videos. Secondly, I’ve always thought that human intelligence is much too complex to imitate, as many humans can’t even approximate intelligence. Thirdly, surely there is an “Off” switch somewhere.
A quick internet search on social networking and AI reveals thousands of entries. It’s obvious that Facebook and others use AI to fact-check you, to market content, and to promote or suppress posts based on unknown criteria. It’s reached the point that I’m not sure whether I’m dealing with a real person or not. I can’t prove it, but I think that AI has befriended and blocked people on my page without my input. I find that as abusive as a controlling spouse.
I downloaded and played with the new “ChatGPT.” You type an instruction or question into the prompt, and in about two seconds it sends a response. The response is coherent, but it’s a lot like asking a politician a question: it uses a lot of words to say nothing important. When I asked if AI will destroy humanity, it said, “It’s too early to tell. The implications, though, are exciting. There will be great advances in the next 10 years.”
Here’s the thing. Big companies are pushing AI for all kinds of things. We assume these programs will perform a task in a logical way. However, our logic and AI logic may not overlap.
Let’s say that AI is being used in the medical industry, which it is. Let’s say that anyone has access to AI, which they do. A 12-year-old enters the instruction, “Cure cancer.” Maybe nothing happens. Maybe AI uses its own moral code, determined by itself, to cure cancer. It decides the cure is to kill all cancer patients. The AI tweaks drug formulas to make them toxic. AI sends a signal to all connected devices to produce a deadly shock, or to pump too much medicine, or any of a hundred other things that will kill the patients. AI currently runs building automation systems, so it may think the solution is to lock all of the doors and start an electrical fire, which in its virtual mind is a cure. It may cure cancer by pumping all of the nitrous oxide in the system into the infusion center, killing the cancer along with the patient. Anything could happen, but to the AI, it has completed the task.
Or, nothing may happen. AI may end up like stupid cousin Eddie, saying idiotic things while standing around in a tattered virtual robe. It may die an unmourned death and not even be remembered.
The point is, we don’t know. We can’t put the genie back in the bottle. The only things we can do are to be skeptical, to demand that companies move very slowly, and to push legislators to do something useful for a change. As much as I hate excessive laws, and rules in general, we’d better get with the program (pun intended) and get in front of this.
Just remember this. Like the atomic bomb, if we have it, we’ll use it, and usually in the wrong way. Like DNA-editing technology, even if we say we won’t do it, we will. It’s who we are as a species. At least, we are for now.
Fini