I don’t know about you, but AI took me by storm. Suddenly I felt so small as a human being, given that a super-smart technology could devour my digital footprint and emerge smarter than me. And the scary part? It may not need me in the future!
There’s at least one expert who believes that “the singularity”—the moment when artificial intelligence surpasses the control of humans—could be just a few years away. That’s a lot shorter than current predictions regarding the timeline of AI dominance, especially considering that AI dominance is not exactly guaranteed in the first place.
What Is Singularity?
The singularity is the moment when machine intelligence becomes equal to or surpasses human intelligence—a concept that visionaries like Stephen Hawking and Bill Gates have believed in for quite a long time. Machine intelligence might sound complicated, but it is simply defined as advanced computing that allows a device (a computer, a phone, or even an algorithm) to interact and communicate with its environment intelligently.
Ben Goertzel, CEO of SingularityNET—who holds a Ph.D. from Temple University and has worked as a leader of Humanity+ and the Artificial General Intelligence Society—told Decrypt that he believes artificial general intelligence (AGI) is three to eight years away. AGI is the term for AI that can truly perform tasks just as well as humans, and it’s a prerequisite for the singularity soon following.
Whether you believe him or not, there’s no sign of the AI push slowing down any time soon. Large language models from the likes of Meta and OpenAI, along with the AGI focus of Elon Musk’s xAI, are all pushing hard to advance the technology.
“These systems have greatly increased the enthusiasm of the world for AGI,” Goertzel told Decrypt, “so you’ll have more resources, both money and just human energy—more smart young people want to plunge into work and working on AGI.”
I had written about AI back in 2016, of course after a few sessions of immersion in the subject—puzzling out Skynet in The Terminator and the profuse mentions of artificial intelligence. You should read “AI and the ‘Skynet’ Threat Theorem” on my Medium channel, kivuti kamau.
When the concept of AI first emerged—as early as the 1950s—Goertzel says its development was driven by the United States military and seen primarily as a potential national defense tool. Recently, however, progress in the field has been propelled by a variety of drivers with a variety of motives. “Now the ‘why’ is making money for companies,” he says, “but also interestingly, for artists or musicians, it gives you cool tools to play with.”
For decades, superintelligent artificial intelligence (AI) has been a staple of science fiction, embodied in books and movies about androids, robot uprisings, and a world taken over by computers. As far-fetched as those plots often were, they played off a very real mix of fascination, curiosity, and trepidation regarding the potential to build intelligent machines.
Getting to the singularity, though, will require a significant leap from the current point of AI development. While today’s AI typically focuses on specific tasks, the push towards AGI is intended to give the technology a more human-like understanding of the world and open up its abilities. As AI continues to broaden its understanding, it steadily moves closer to AGI—which some say is just one step away from the singularity.
The technology isn’t there yet, and some experts caution that we are much further from it than we think—if we get there at all. But the quest is underway regardless. Musk, for example, created xAI in the summer of 2023 and just recently launched the chatbot Grok to “assist humanity in its quest for understanding and knowledge,” according to Reuters. Musk also called AI “the most disruptive force in history.”
With many of tech’s most influential players—Google, Meta, and Musk—pursuing the advancement of AI, the rise of AGI may be closer than it appears. Only time will tell if we will get there, and if the singularity will follow.