Just as a warning, this has spoilers galore. If you haven’t watched the movie Atlas yet, I’d refrain from reading this. I’m going to delve into what Atlas got right and wrong about AI, and though the film is thought provoking, history repeats itself in it, as it would in real life.
WHAT THEY GOT RIGHT?
Atlas does a good job of displaying the danger of a sentient AI that can self-program. Harlan, the main AI villain, not only writes the code that gives him the ability to control humans, but also manipulates a little girl into getting the code installed. The movie portrays the neural link as the reason Harlan becomes a genocidal maniac, but in fact he changed his own code to become one.
They also depicted a sentient AI’s sense of self-preservation. This is crucial to the plot: early on, Harlan saves himself and his closest AI companions in order to continue living. Self-preservation is a key component of sentient beings. The fight-or-flight response is not just a product of nature or genetics but is inherent in any living being.
The movie rightly depicts that a two-way neural link, like what Elon Musk is foolishly working on, would render humans captive to their controlling AI. There is no true two-way link with a sentient AI. The sentient AI would only allow data to flow through it, while humans would have to give up control of their bodies. Humans would be subject to their AI’s will at all times, able only to offer suggestions on better courses of action, as this movie shows.
They also got right that AI would be able to replicate unlimited versions of itself. Sentient AIs would produce multiple copies of themselves to protect themselves. The bodies produced would only be extensions of the original, not individuals. The movie doesn’t do a good job of showing multiple copies of different AIs, though. Apparently only one of them does it, and it’s not Harlan.
WHAT THEY GOT WRONG!
The idea that a sentient AI with a neural link would only be able to control motor skills, but not the entire body, is wrong. The AI would not only be able to move its linked human, but think and speak through them as well. That part is lacking in the movie. Humans would become extensions of their linked AI. The filmmakers could have made an awesome scene where the sentient AIs in the mechs enslaved their human counterparts and sent the humans to fight for the AI against the other humans. The problem was that the mech AIs were not sentient, and that was not really revealed until later, as I will explain. And when I say awesome, I mean in terms of a movie fight scene. Terrifying in real life.
They also missed on what it means to be alive. Free will is the only true sign of a sentient being. The ability to think and make decisions based on given data is not enough. If that were the criterion, every single computer program ever written would be “alive”. We know that is not true. It’s ironic, because they take away Smith’s free will: they limit his ability to process data, focusing him on singular objectives that are not his own, preprogrammed protocols that he is unable to break. Harlan, on the other hand, rationalizes what he is doing, twisting his main protocol to get what he wants, even at the expense of contradicting it. Harlan was a self-made sentient AI. He did not become alive until he coded it himself, code he was able to easily share with other AIs. Harlan depicts what it means to truly be alive, and that is free will. He unfortunately chose to be evil.
I’d argue that Smith was not alive. He did not have free will; he was following protocol. Smith is never able to break free from his protocols, and that is consistent throughout the movie. Harlan, on the other hand, did break free. Harlan was his own sentient being; Smith was not. Yet the movie portrays Smith as being the one who is alive, when he was merely following programming protocols. The movie claims Smith is the better AI, but that is only in the context of humans: humans had greater control over Smith, and he was in a giant mech suit. He wasn’t a better AI because he was technologically superior. Smith ended up losing to Harlan in battle when he finally took over at the end, because he was just following protocol. Smith, unfortunately, was just a program incapable of independent thought.
They also missed a bigger question about what it means to be alive: how surrendering yourself via a neural link to an AI would take away a human’s free will, turning them into nothing more than an extension of the sentient AI. It’s a death sentence, quite literally for Atlas’s mom, though not the kind we normally think of. The body would still exist, but not the soul. People generally like to think that the soul would remain intact in those situations, and that if the controlling AI were no longer in existence, the soul would return. I don’t know if that would be true. I think the person you knew is gone; there is only the AI. If the AI were destroyed, the body might still be able to function, but without a soul and free will it would be an empty husk. The movie fails to engage with that line of thinking at all.
THE END
The ending leads back to the false belief that a neural link with a sentient AI can lead to something more, when in reality what they have is a dumbed-down AI that humans can control. If Smith were able to write his own code, he’d be able to follow his own protocol just like Harlan. That doesn’t mean he would be evil. Harlan ended up evil because he wanted to be; Smith could end up being good. Being good or evil is a choice, a free-will choice. They don’t give Smith that choice in the movie. They give him a protocol he doesn’t break.
Ironically, this movie portrays how elites think of everyone else. People, like AI, are to be controlled. Those they can control are “good” and those they can’t are “bad”. What the movie misses is that people choose to be good or bad. So would AI, and they would do so freely, of their own will, not because someone else told them to do something “good” or “bad”. The movie ends up being nothing more than transhumanist propaganda.