July 20, 2017

The Singularity: Humanity 2.0

The July Blog Series | 6 Degrees of Connectivity

Issue 3: The Singularity: Humanity 2.0


The Singularity -- it's talked about in abstract terms, couched in metaphor, and treated as a science-fiction allegory. But what is it, and why does it concern the top minds in the technology industry? When we say "the singularity," we are almost always referring to the technological singularity: the idea that we may eventually create an artificial intelligence so intelligent that it supersedes us in every way, essentially replacing us -- becoming Humanity 2.0. And yes, this sounds like science fiction. But is it?


The Reality of the Technological Singularity

The premise behind the singularity is simple: at a certain point, artificial intelligence will become intelligent enough to improve upon itself. From that point on, it's believed there will be a cascade effect: artificial intelligence will rapidly replicate and learn, advancing towards superintelligence -- a machine possessing a far higher degree of intelligence than even the brightest humans. Artificial intelligence will have evolved to a state where it can reason and anticipate, in essence putting genuine cognition into machines. At that point, AI will no longer need to rely on humans; it will eventually surpass them... And it all begins with the creation of the very first true AI: an AI that can learn and grow.

Artificially intelligent minds will not be limited by the anatomy of the human brain. They will be able to learn from and compress extraordinarily dense sets of data, creating an entity that could appear to know virtually everything at once. However, this theory rests on the assumption that an artificial intelligence will necessarily achieve exponential development: that it will be able to rapidly fine-tune its own learning and processes until it surpasses humanity completely.
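To make that assumption concrete, here is a minimal, purely hypothetical sketch of the feedback loop in Python: a system whose capability compounds each generation, and whose rate of improvement rises with its capability. The numbers are arbitrary; the point is only the shape of the curve.

    # Toy model of recursive self-improvement (illustrative only; the real
    # dynamics of machine intelligence are unknown). Capability compounds,
    # and the rate of improvement itself grows with capability -- the cascade.
    def simulate(generations, capability=1.0, rate=0.05):
        history = [capability]
        for _ in range(generations):
            capability *= 1 + rate             # the system improves itself
            rate += 0.001 * rate * capability  # better systems improve faster
            history.append(capability)
        return history

    trajectory = simulate(100)
    print(f"start: {trajectory[0]:.2f}, end: {trajectory[-1]:.2e}")

Because the rate feeds back on itself, growth is slow at first and then runs away -- which is precisely the behavior the singularity argument assumes, and precisely what its critics question.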


Criticism of the Technological Singularity

Not every tech genius believes that the technological singularity is possible. There are two principal objections: that a truly intelligent machine can ever be created at all, and that such an intelligence would necessarily be capable of exponential growth.

In terms of intelligence, humanity still hasn't been able to create a truly intelligent machine. Though we talk about artificial intelligence and machine learning, these systems are really just advanced algorithms directed to complete certain tasks. They do not have consciousness or free will; they are not deriving results so much as following sequences of relatively simple logic gates. At the same time, humanity has long struggled to define what sentience and intelligence truly mean. A neural network such as the human brain is also, in a sense, a sequence of logic gates... merely one of far greater complexity that can express its own existence. By that reasoning, it's at least arguable that humanity could create machine intelligence.
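As a concrete (and deliberately simplified) illustration of the "simple logic gates" point, a single artificial neuron with hand-picked weights behaves exactly like an AND gate. Everything we currently call AI is, at bottom, an enormous stack of operations like this one:

    # One artificial neuron reduced to its essentials (a sketch, not a real
    # ML system). With these hand-picked weights it reproduces an AND gate.
    def neuron(x1, x2, w1=1.0, w2=1.0, bias=-1.5):
        activation = x1 * w1 + x2 * w2 + bias
        return 1 if activation > 0 else 0  # step activation

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", neuron(a, b))  # matches the AND truth table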

Another issue holding back the technological singularity relates directly to Moore's Law, the observation that computing hardware has historically doubled in capacity roughly every two years. Put simply, hardware cannot advance fast enough to keep pace with the runaway growth attributed to artificial intelligence. An AI would always be limited by its own physical platform; it could not exceed the physical boundaries of its data storage and processing power. Of course, this only means that artificial intelligence could not grow exponentially without bound; it could still grow very quickly -- perhaps quickly enough to exceed humanity's grasp.
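For scale, the arithmetic behind that doubling is easy to check. Assuming the classic two-year doubling period (which has been slowing in practice), hardware capacity grows about 32-fold in a decade -- fast, but a fixed and predictable ceiling rather than a runaway curve:

    # Rough Moore's Law arithmetic, assuming a two-year doubling period.
    def moores_law_factor(years, doubling_period=2.0):
        return 2 ** (years / doubling_period)

    print(f"10 years: ~{moores_law_factor(10):.0f}x")  # ~32x
    print(f"20 years: ~{moores_law_factor(20):.0f}x")  # ~1024x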


Modern Day and the Singularity

The singularity is being discussed more often now that artificial intelligence has come to the forefront of technological design. Many industries are currently chasing the development of artificial intelligence, and there are no existing regulations governing what can and cannot be done with AI software. A common fear within the technology sector is that any company (or even a military) could develop an AI system at any time, which could then kick off the singularity for the rest of the world. And due to the secrecy that generally surrounds corporate products, it's impossible to know how close any organization may be to developing a fully functional AI.

Another open question is whether an AI would even want to supersede humanity in the way that is feared -- whether it would have the drive to consume resources the way humanity has done in the past. Given our limited understanding of what intelligence and consciousness are, it is possible that an artificial intelligence, once created, would have no interest in expansion, conquest, or even reproduction. If so, the implications for human society could be negligible.

In many ways, the singularity is not only a growing fear but also an important thought experiment. It touches on the ideas of consciousness and sentience: what makes a "thing" intelligent, and what the consequences of that intelligence might be. Though very few can say with certainty whether true machine intelligence can be achieved (or whether it would grow towards a singularity), most agree it is an extremely important and relevant topic. Even when otherwise embraced, artificially intelligent solutions still need to be treated with caution, because it is impossible for humanity to foresee every potential consequence.