6 reasons the Singularity might be closer than you think

Jan 27, 2020 | General

As we enter a new decade, technology is improving rapidly, and artificial intelligence is hitting new milestones with increasing frequency.

The Singularity is a theoretical point in time at which technological improvement would no longer depend on humans, but on the technology itself. Given how much our modern society depends on technology, the Singularity can be seen as the single most important milestone in human history. Crossing this critical threshold would be unprecedented and would lead to a myriad of unpredictable changes. But one thing we can predict for sure: for the first time in a very long while, the fate of Homo sapiens would no longer lie in its own hands.

Being part of Comrade, I often think about the Singularity. It’s why our cooperative exists in the first place: to find and build a way of coping with these times of turbulent change (and to do so before it’s too late). Both of our big projects are tied to the Singularity: ScyNet is about how we reach that crucial point, and Wetonomy is our proposal for what we as humans do once the Singularity is here.

The ScyNet project addresses the question:

What state will the world be in when the Singularity hits?

Maybe humanity would have a better chance of surviving the aftermath of this technological explosion if certain conditions are met. Currently, the resources for developing artificial intelligence are concentrated among a few technological giants, all of which happen to be profit-driven corporate enterprises. As a result, they are inclined to hide all or part of their artificial intelligence know-how, because exposing it would cost them their edge over industry competitors.

At Comrade, we reason that the development of AI in such an environment would not end well for most of humanity in a post-Singularity world.

Instead, we advocate that artificial intelligence should be developed in an open, transparent, and decentralized manner. That’s why ScyNet is being developed as a decentralized network of AI nodes and its code is fully open-source.

How close are we to the Singularity?

I hear you thinking: “Why bother now? Maybe there’s time, and there are more pressing concerns in the world right now.”

While no one can give an exact date with 100% confidence, it’s worth taking a look at what some of the biggest names on the Artificial Intelligence scene think of this timeline.

According to recent research by Emerj, 45% of experts in the AI field think that the Singularity is coming before 2060, and 21% expect it to happen even before 2035.

And if that doesn’t feel pressing enough, in the past year, the world has witnessed some substantial achievements in the AI sector. We’ve gathered some of them in a list of the top 6 reasons why the Singularity might be coming sooner than we all think.

1. Quantum supremacy

This is the most recent one, but it might have huge implications for computing, which is why we’re starting with it.

What is it?

On October 23, 2019, Google AI officially announced that it had achieved quantum supremacy.

As it turns out, their experimental quantum processor Sycamore solved a problem in only 200 seconds. Not so impressive yet? According to the Google AI team, the most powerful supercomputer in the world would take 10,000 years to deal with this very same task.


Meanwhile, IBM was quick to comment on the situation. While IBM agreed that Sycamore’s feat was impressive, it disagreed with the achievement being labeled ‘quantum supremacy’. Quantum supremacy presumes that a quantum computer solves a problem that is otherwise impossible for any type of classical machine. IBM argued that its Summit supercomputer, the most powerful in the world, could deal with the task in less than 2.5 days.

So did Google AI really achieve quantum supremacy? The jury is still out. But it does look like a major step towards a quantum computer that can outperform the classical computers we currently depend on by orders of magnitude.

Why is this important?

Even if IBM were right and Google did overblow the whole situation, this would still make Sycamore roughly 1,000 times faster than the world’s most powerful supercomputer.

One can only dream what kind of Machine Learning techniques such a quantum leap (pun intended) would enable, including opening up a path to general intelligence.
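The claimed speedups are easy to sanity-check with back-of-the-envelope arithmetic. The figures below are the ones quoted above (Google’s 200-second Sycamore run, IBM’s 2.5-day classical estimate, and Google’s original 10,000-year claim), not independent measurements:

```python
# Back-of-the-envelope comparison of the runtimes quoted in the debate.
sycamore_seconds = 200  # Google's reported Sycamore runtime

# IBM's counter-estimate for a classical supercomputer: 2.5 days.
summit_seconds = 2.5 * 24 * 60 * 60  # = 216,000 seconds

# Google's original claim: 10,000 years on classical hardware.
google_claim_seconds = 10_000 * 365 * 24 * 60 * 60

print(summit_seconds / sycamore_seconds)        # → 1080.0
print(google_claim_seconds / sycamore_seconds)  # → 1576800000.0
```

Even under IBM’s figures, Sycamore comes out roughly 1,000 times faster; under Google’s original claim, the factor is over a billion.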

2. Automated Machine Learning

Machine Learning is a subset of Artificial Intelligence in which a program analyzes data and automatically adjusts itself to the configuration best suited to a particular problem, instead of the programmer trying to predefine every if-else rule. In practice, machine learning automates part of the programmer’s job. But what if we could automate the automation?
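To make the contrast concrete, here is a deliberately tiny sketch in plain Python, with made-up data and no ML library: a hand-coded rule versus a “learner” that picks its own threshold from labeled examples.

```python
# Hand-written rule vs. a learned one: classify numbers as "big" or "small".

def rule_based(x):
    # The programmer hard-codes the threshold.
    return "big" if x > 50 else "small"

def train(examples):
    # A trivial "learner": try each example value as a candidate threshold
    # and keep the one that classifies the most examples correctly,
    # instead of having a human choose it.
    best_t, best_score = None, -1
    for t in sorted(x for x, _ in examples):
        score = sum((x > t) == (label == "big") for x, label in examples)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

data = [(10, "small"), (30, "small"), (70, "big"), (90, "big")]
threshold = train(data)  # the threshold now comes from the data

def learned(x):
    return "big" if x > threshold else "small"
```

The point is only the shift in where the decision boundary comes from: in `rule_based` a human wrote it; in `learned` the data chose it.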

What is it?

The term Automated Machine Learning, or just AutoML, has seen a sharp increase in popularity ever since Google AI released its NASNet paper a couple of years ago. While Automated Machine Learning was already a known concept in the data science community, the Google AI team revealed that their AutoML project was able to perform on par with state-of-the-art AI models designed by their human data scientists.

Evolution of programming has brought us to AutoML

Why is this important?

Although Automated Machine Learning doesn’t exactly mean “an AI creating AI”, it’s a significant stepping stone for even more complex automation.

It takes away some of the more tedious parts of the machine learning workflow, enabling smaller teams of data scientists to be even more productive.
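The core idea can be sketched in a few lines: an outer loop searches hyperparameter configurations automatically instead of a human tuning them by hand. Everything here (the search space, the scoring function) is invented for illustration; real AutoML systems such as NASNet use far more sophisticated search strategies than random sampling.

```python
import random

random.seed(0)

# A made-up hyperparameter space for some hypothetical model.
SEARCH_SPACE = {
    "learning_rate": [0.001, 0.01, 0.1, 1.0],
    "layers": [1, 2, 4, 8],
}

def evaluate(config):
    # Stand-in for "train the model and return its validation score".
    # This fake score peaks at lr=0.01 with 4 layers.
    return -abs(config["learning_rate"] - 0.01) - 0.1 * abs(config["layers"] - 4)

def random_search(trials=20):
    # The "automation of the automation": the loop, not a human,
    # decides which configurations to try and which one to keep.
    best_cfg, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg

best = random_search()
```

The data scientist’s job shifts from tuning each knob to defining the space of knobs and the measure of success.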

Since this milestone, there has been an increase in the number of startups developing and offering automated machine learning solutions. As the technology matures, more small enterprises will be able to incorporate data science into their operations, becoming more competitive against the already established giants.

3. Governments investing in AI research

A global AI arms race is already underway. Recognizing the huge potential of Machine Learning, the world’s superpowers are investing heavily in development resources in a quest to gain an unfair advantage.

What is it?

China has announced plans to invest more than $2 bn in a state-of-the-art AI research center in Beijing, and at least $5 bn more in AI development over the upcoming decade. The goal is a booming AI industry worth more than $150 bn by 2030.

The US isn’t taking this any less seriously: with approximately $24 bn invested in just the last couple of years, the United States’ spending on AI is unmatched. Meanwhile, the US private sector is focused on AI chip development, but more on that a bit later.

Money talks

And while Germany is already shaping up as an AI startup hub, its administration isn’t easing off the gas, with a planned €3 bn investment in the years leading up to 2025.

As for the UK — its top universities Oxford and Cambridge will benefit from an influx of $200 mn for AI research, an initiative led by a partnership between the public and private sectors.

Last but not least, the European Commission has announced plans to pour €50 mn into an effort to develop a network of Artificial Intelligence Excellence centers.

As part of the Bulgarian consortium, the Comrade Cooperative is applying to establish one of these AI excellence centers in Sofia. By working on ambitious projects like ScyNet, these hubs will strive to give the EU a competitive advantage on the global AI scene.

Why is this important?

These are just a few examples of what’s been happening and what’s planned. Some countries are simply not that vocal about their AI spending, which doesn’t necessarily mean they don’t have an AI strategy.

Moreover, money matters. The government sector manages a huge amount of capital, and having its focus switch to AI is a big deal. This will accelerate scientific progress, pushing the envelope towards the Singularity.

4. GPT-2

OpenAI made the headlines when it announced its GPT-2 creation: a state-of-the-art text-generation neural network. It was so good at producing text from prompts that its creators decided to withhold the full model from the public for fear of dangerous misuse.

GPT-2 — too dangerous to be released in public?

What is it?

This created a big controversy, because OpenAI is a non-profit organization that has vowed to publicly share the results of its AI research. Withholding the network was thus unprecedented and was seen as going against its core values.

The sheer fact that OpenAI broke its own rules to keep GPT-2 away from the public shows how high it considers the risk of abuse with malicious intent. The fear is that GPT-2 could be used to easily generate very believable fake news. This is already a problem for modern society: with so much globally accessible information, it has become harder to sift the truth from misinformation in the media. Just look at the 2016 US election fiasco or the developing deepfakes trend.

Why is this important?

Arguably, in order to be really good at creating content, an AI needs to understand context, which has long been a known problem in language modeling. But GPT-2 seems to have crossed a threshold, which might mean we’re closer to an AI capable of really understanding language than we think. Whether that puts mankind closer to the Singularity is up for debate.
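To see why context is hard, consider a toy bigram model (nothing like GPT-2’s actual architecture, and the corpus is made up): it predicts each word from only the one immediately before it, so any dependency spanning more than two words is invisible to it.

```python
import random
from collections import defaultdict

random.seed(1)

# A tiny made-up corpus; real language models train on billions of words.
corpus = "the cat sat on the mat because the cat was tired".split()

# Count, for each word, which words follow it.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def generate(start, length=8):
    # Sample the next word using only the single previous word as context.
    # Words with no recorded successor fall back to a random corpus word.
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(successors.get(word, corpus))
        out.append(word)
    return " ".join(out)

sample = generate("the")
```

Every two-word window of the output is locally plausible, but the model has no memory of what it said three words ago, which is exactly the limitation that longer-context models like GPT-2 attack.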

5. AlphaStar

OpenAI became famous for another of its experiments, OpenAI Five, which showed great results playing Dota 2 against professional teams. But in terms of game-playing AI, we will focus on another team’s very recent achievement: AlphaStar.

What is it?

In October 2019, DeepMind’s AlphaStar broke through to the Grandmaster tier of the StarCraft 2 competitive scene. This put the neural network above 99.8% of officially ranked players, with only the world’s very best keeping AlphaStar from being declared superhuman in the StarCraft 2 domain.

AlphaStar knows how to play Protoss

DeepMind is a Google-owned research team focused on achieving human-level AI, also known as Artificial General Intelligence. Its scientists used reinforcement learning to train the neural networks, playing them against each other in a virtual environment for a total of 44 days.

Why is this important?

After chess and Go were mastered, many researchers saw modern RTS (real-time strategy) games like StarCraft 2 as the next logical milestone for AI. What makes RTS games different from the previous two examples is their much higher complexity, along with being games of imperfect information. It’s worth noting that before unleashing their creation on StarCraft’s official online arena, DeepMind limited AlphaStar’s vision of the map and capped its performance at 22 non-duplicated actions every five seconds.

Taking away most of the machine’s innate advantages leveled the playing field, so AlphaStar had to rely on its understanding of the game to strategize against its opponents, just as humans do when playing each other. This makes its Grandmaster achievement even more impressive and might pave the way for even more complex neural networks. Given enough computing power, who can say for sure that this won’t lead to the Singularity?

6. Computational power and microchips

Speaking of computing power, the last few years have seen a resurgence in the microchip industry.

What happened?

The development of Graphics Processing Units (GPUs) has been on the rise because a new market, cryptocurrency mining, created fresh demand. But besides rendering high-definition game graphics and calculating hashes, GPUs also do a great job of training neural networks.

Cerebras: “Size matters.”

And it’s not just GPUs. One notable example of where the industry might be going is the recently released Cerebras WSE chip. Unlike typical chips, the Cerebras creation is not afraid of going large: it’s the biggest AI chip ever built. And that isn’t the only record it breaks. It also packs a record 1.2 trillion transistors, well over an order of magnitude more than what the competition has been rolling out in 2019. Furthermore, being designed with AI in mind, its 400,000 cores are optimized for neural network compute primitives, making the Cerebras chip capable of three or even four times the performance of a typical GPU.

Why is this important?

As we’ve already seen from the AlphaStar example above, deep learning has shown great promise (and results) in the quest for Artificial Intelligence. But while we’ve seen proof that complex neural networks can simulate complex behavior, it comes at the cost of a lot of computational power. AI feeds on data but breathes processing power. That’s why the development of faster and more efficient computing chips is the foundation of the AI revolution.

In conclusion — indifference is dangerous

Surrendering the steering wheel to AI is kind of scary

The Singularity is about humanity not being in control. And surrendering this control is as exciting as it is frightening. The excitement stems from the creative problem-solving potential of advanced artificial intelligence. But how would these thinking machines reason about their inferior creators? Would they perceive us as a threat, as something irrelevant, or as a pet?

The subject of evolving automation should not be taken lightly (or, even worse, ignored); it might be the most important topic of our times. Where we, as humanity, stand on this spectrum of thought will determine our collective future: whether a sci-fi-esque tale of abundance… or none at all.

But what is your opinion on the Singularity? Is it impending, is it inevitable, or do you find the idea of machine overlords absurd? Are there other major events from the last couple of years that support your views? Join me for a discussion in the comments or the ScyNet Discord channel here.