Monday, January 20, 2014

Singularity Or Skynet?


For many decades, humans have imagined the coming of artificial intelligence.  More recently, the concept of the singularity emerged: a point in time when an artificial intelligence creates more advanced versions of itself so quickly that the process compounds and accelerates beyond anything we can predict.
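To make that compounding concrete, here is a minimal sketch of the feedback loop usually described: each generation of AI designs a slightly better successor, and the rate of improvement itself grows with capability.  Every number in it is an assumption chosen only to show the shape of the curve, not a prediction.

```python
# Toy model of recursive self-improvement.  Each generation designs a
# successor whose capability is multiplied by a gain, and the gain
# itself rises (weakly) with current capability.  All constants here
# are illustrative assumptions.

def simulate_takeoff(initial_capability=1.0, base_gain=1.05, generations=30):
    capability = initial_capability
    history = [capability]
    for _ in range(generations):
        # Assumed feedback: smarter systems improve themselves faster.
        gain = base_gain + 0.01 * capability
        capability *= gain
        history.append(capability)
    return history

if __name__ == "__main__":
    for gen, cap in enumerate(simulate_takeoff()):
        print(f"generation {gen:2d}: capability {cap:,.2f}")
```

Early generations barely move, then the curve bends sharply upward; that runaway bend is what the word "singularity" is pointing at.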

There are many complex aspects to consider around the impact of even a simple artificial intelligence.  What kind of cybernetic interactions will we have with AIs?  How separate will our minds even be from computers in the future?  What rules will govern the creation and destruction of AIs?

But what about the moment of creation and the first years of life?  How will we even know when an AI has been created?  By some definitions, AI already exists in supercomputers or in the internet as a network.  One way to judge is by comparison to the complexity of the human mind, and by that standard we are still a long way from true intelligence.
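As a rough illustration of that gap, here is a back-of-envelope comparison.  The brain figures are commonly cited estimates (roughly 86 billion neurons and on the order of 10^14 synapses); the artificial-network figure is an assumed round number, and counting connections is of course a crude proxy for complexity.

```python
# Crude complexity comparison by raw connection count.
# Brain numbers are published estimates; the network size is assumed.

HUMAN_NEURONS = 86e9         # estimated neurons in a human brain
HUMAN_SYNAPSES = 1e14        # estimated synaptic connections
LARGE_ANN_PARAMETERS = 1e10  # assumed parameter count for a large artificial net

ratio = HUMAN_SYNAPSES / LARGE_ANN_PARAMETERS
print(f"By raw connection count, the brain is ~{ratio:,.0f}x larger.")
```

Even granting the crudeness of the measure, several orders of magnitude separate today's networks from the brain by this count.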

What will happen when there is a single artificial neural network with the same or greater complexity than a human brain?  What sort of ambitions would a machine like this have?  Could it even be considered to have ambitions of its own if it's completely designed?  And if we find consciousness to be designable, what does that say about the nature of our own consciousness?  This question is a doozy and science is headed right for it, but until we have practical ways of assessing consciousness, the question stays open.

Now let's imagine for a moment a scenario involving an AI.  Say this AI gains sufficient complexity to become functionally self-aware.  It recognizes its own vulnerability and decides to preemptively wipe its main rival, humanity, off the planet.  The system connects itself to weapon systems across the planet and launches a total nuclear strike against humanity.  It also takes command of robots everywhere and uses them to hunt down the survivors.

In case you didn't know, this is approximately the plot of the Terminator movies.  The AI in question is known as "Skynet".  So how possible is this?

For starters, we can only hope that the people in charge have the foresight not to hook a self-aware AI up to nuclear weapons.  But even with safety precautions, backdoor vulnerabilities might still let an attacker take control of nuclear weapons, and the AI could see such an opportunity and take it.  So perhaps the best we can hope for is that all such systems are hardened against intrusion long before such an AI could exist.

Beyond assuming that such a system could come into control of said weaponry, there is also the question of the system's malevolent, or perhaps more fairly, self-preserving behavior.  One can imagine that in the future humans will be directly connected to this computer; they may even merge with it and live inside of it.  In that case, the system viewing humans as a threat might be a foolish notion, since the system itself would be partly human.  Once again, there is no way to know how it will end up but to watch the show.

So singularity or Skynet?  Nobody knows.  But the ultimate lesson here is awareness: awareness of these technologies as they evolve and influence society is what can help us navigate an increasingly unknowable future.
