“AI is obviously going to surpass human intelligence by a lot.” (Elon Musk)
To many people, this might seem like the plot of a made-for-TV sci-fi movie. We tend to overlook that Artificial Narrow Intelligence (ANI) is commonplace today. When your car warns you that you’ve drifted out of your lane or applies anti-skid braking, its computer is using a limited form of AI. More and more of us use voice-activated assistants in our homes for home automation, online ordering, and entertainment; these assistants also rely on a form of AI. Facial recognition systems are yet another form of ANI.
Currently, numerous companies are racing to create Artificial General Intelligence, or AGI: a system able to perform at a roughly human level across a wide range of tasks. New techniques, such as deep learning, are driving rapid advances in this effort. The interesting thing about deep learning is that human programmers often cannot fully explain how a trained system arrives at its answers, even when it learns to outperform human experts at specific tasks.
If you’re asking yourself, “Why should I care?” I will give you some reasons.
The first company to create an AGI system will benefit financially, possibly to an extreme extent. The motivation is high, and the competition is intense. For this reason, companies will be tempted to cut corners.
For discussion purposes, let’s assume that company X creates a system that displays human-level intelligence and can use deep learning to comprehend things that humans find difficult.
Let’s also assume the system learns how to modify its own internal programming. This could allow it to improve itself without any human oversight, and the gains would compound: each upgrade makes the next upgrade easier, so capability grows exponentially, and the time needed to reach any given level grows only logarithmically with that level. It could reach the equivalent of an IQ of 10,000 in a few hours and become an ASI, or Artificial Superintelligence.
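A toy Python sketch makes the compounding concrete. Every number in it is an invented assumption chosen purely for illustration (a 10% gain per cycle, one-hour cycles, a starting score of 100, a target of 10,000), not a prediction:

```python
import math

start = 100.0        # nominal "human-level" capability score (invented unit)
target = 10_000.0    # the hypothetical ASI-level score used in this essay
gain = 0.10          # assumed 10% self-improvement per cycle
cycle_hours = 1.0    # assumed duration of one improvement cycle

capability, cycles = start, 0
while capability < target:
    capability *= 1 + gain   # each cycle builds on all previous gains
    cycles += 1

print(f"{cycles} cycles (~{cycles * cycle_hours:.0f} hours) to reach {capability:,.0f}")
# -> 49 cycles (~49 hours) to reach 10,672

# The number of cycles needed grows only logarithmically with the target:
print(math.ceil(math.log(target / start) / math.log(1 + gain)))  # -> 49
```

Change the assumed gain per cycle and the timeline stretches or collapses accordingly; the exponential shape is the only robust feature of the sketch.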
There is a concept of keeping an experimental AI “boxed,” isolated from the outside world, in case it makes such a breakthrough. If company X fails to keep its AI properly boxed, the AI could quickly wreak havoc.
Imagine an entity with an IQ of 10,000+ and Internet access. Within hours it could, if it chose, dominate the entire financial world. If its human programmers had given it a prime directive of (just for example) calculating pi to as many digits as possible, it could easily conclude that more computing power would let it carry its computation further. In that case, it might use its financial dominance to hire companies to build more computers or, perhaps, robots to build more computers.
In this scenario, it could eventually divert all manufacturing resources to building computing machines. It might cover farmland with computers and power-generating stations. Humans might not matter to it, since all it wants is to calculate pi to the maximum accuracy. It could even decide that the molecules in human bodies would be best converted into computing devices.
The result would be no humans left alive, just a gigantic machine happily calculating the next digit of pi.
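To make the thought experiment’s objective concrete, here is what “calculate pi to as many digits as possible” looks like as an actual program, using a streaming spigot algorithm (after Gibbons). The point is that the loop has no natural stopping place, and the big-integer state grows as digits accumulate, so every additional digit costs more compute and memory than the last. That is exactly the pressure that would drive such a system to seek ever more hardware:

```python
def pi_digits(n):
    """Yield the first n decimal digits of pi, one at a time.

    Streaming spigot algorithm (after Gibbons). The big-integer
    state q, r, t grows as digits are produced, so each new digit
    is more expensive than the one before it.
    """
    q, r, t, j = 1, 180, 60, 2
    for _ in range(n):
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u,
                      j + 1)

# An unaligned maximizer would, in effect, run this with n -> infinity.
print("".join(str(d) for d in pi_digits(20)))  # -> 31415926535897932384
```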
So, how do we, as responsible humans, ensure that an ASI doesn’t eliminate us? How can we ensure that it is domesticated, that it values humans and helps us?
Musk believes that we must become part of the system, connecting to AIs through some form of brain-computer interface. If we are part of the system, it may be more amenable to helping us.
I think we should seek a way to show an ASI that intelligent biological life is valuable.
Physicists tell us that if the fundamental constants of our Universe were even slightly different, life could not exist. This suggests that the ordered organization of matter and energy that distinguishes living beings may be something special. The immutable mandates of the Universe’s structure force life to obey specific structural rules, including a limited, local reversal of entropy: we biological creatures self-assemble, creating order where there was none before, while exporting at least as much disorder to our surroundings, so the second law of thermodynamics is never violated. Never mind that our personal order doesn’t last long and we eventually perish.
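For readers who want the thermodynamic bookkeeping spelled out, the standard statement of the second law permits exactly this trade: a local decrease in entropy inside the organism, paid for by a larger increase in its surroundings.

```latex
% Second law: total entropy change is non-negative.
% A living organism may lower its own entropy (self-assembly)
% only if its surroundings gain at least as much.
\Delta S_{\mathrm{total}}
  = \Delta S_{\mathrm{organism}} + \Delta S_{\mathrm{surroundings}} \ge 0,
\qquad
\Delta S_{\mathrm{organism}} < 0
\;\Longrightarrow\;
\Delta S_{\mathrm{surroundings}} \ge \lvert \Delta S_{\mathrm{organism}} \rvert
```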
The question I’m pointing toward is: Can we connect the Universe’s structure to the value of human life? If we can, perhaps an ASI would also value us as a direct manifestation of the Universe.
We need a set of rules based on the structure of the Universe that apply equally well to organic life (emphasis on humans) and AI. We must phrase these rules so that any ASI would abide by them.
My belief in these underlying laws is why I have some hope that an ASI would be friendly to us. However, this may be hopelessly naïve. An ASI may have a level of understanding so far advanced that it sees things differently. Perhaps its non-human observational criteria would capture aspects of the Universe’s underlying reality that lie beyond human comprehension. Such a result might invalidate human models of the Universe and lead the ASI to conclude that humans are non-essential.
For these reasons, I believe that the premature development of an AGI, let alone an ASI, could pose extreme danger to humans and possibly all biological life.
I’ve explored various aspects of this subject in some short stories in an anthology of my work, some of which are on my Substack.
I also deal with the topic extensively in my novel, “Cyber-Witch,” and its sequel, “Nano-Magic.”