You have all seen the Terminator films, and if you have not, maybe you should. I may not have the luxury of explaining the plot in detail, but the story goes that machines get smart enough to annihilate the human race. Why? Because we made them smarter than us!
This AI issue is a very exciting and advanced phase in computer technology. There are those who are optimistic about it (I am one of them), and there are those who are pessimistic and question the technology at every step.
Just recently, DARPA (the Defense Advanced Research Projects Agency), a U.S.-based agency, announced the Cyber Grand Challenge, a seven-team, $3.75 million-prize-pool hacking tournament that concludes in Las Vegas this August.
DARPA’s stated goal is to find new strategies for countering cyberwarfare. This had Elon Musk, CEO/CTO of SpaceX and co-founder of Tesla, raising an eyebrow; he simply did not buy it. In fact, he thinks that the Cyber Grand Challenge might just leave us with something like Skynet, the hostile A.I. that routinely tries to destroy humanity in the Terminator franchise. Wooo, that’s a mouthful, but if you have seen Terminator, the prospect should send those chills up your spine.
On paper, DARPA wants:
To build an automated artificial intelligence that is capable of detecting and resolving bugs in a computer security system. Essentially, they want to create an unsupervised, autonomous A.I. hacker extraordinaire that will detect system vulnerabilities and patch them itself.
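To make the detect-and-patch idea concrete, here is a minimal sketch of that loop. Everything in it is invented for illustration: a real Cyber Grand Challenge system reasons about compiled binaries, crashes, and exploits, not a single regex. This toy merely scans source text for one known risky pattern (`eval` on untrusted input) and rewrites it to a safer call.

```python
import re

# Hypothetical toy example: detect a risky pattern, then patch it.
# eval() executes arbitrary code; ast.literal_eval() only parses literals.
RISKY_PATTERN = re.compile(r"\beval\(")

def detect(source: str) -> bool:
    """Return True if the source contains the risky pattern."""
    return bool(RISKY_PATTERN.search(source))

def patch(source: str) -> str:
    """Rewrite eval(...) calls to ast.literal_eval(...)."""
    return RISKY_PATTERN.sub("ast.literal_eval(", source)

vulnerable = "value = eval(user_input)"
if detect(vulnerable):
    print(patch(vulnerable))  # value = ast.literal_eval(user_input)
```

The point of the sketch is the shape of the loop (discover a flaw, then deploy a fix without human input), not the specific pattern; scaling that loop to unknown flaws in arbitrary systems is exactly what makes DARPA's goal so ambitious.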
Now hold it right there. Did I say I really admire the advancement of technology into artificial intelligence?
Well, I do. But having a technology that can solve problems by itself, without my input, kind of scares me. How long will it be before I become irrelevant? So, in a way, looking at it from Elon Musk’s point of view, my eyebrow is raised too.
As it stands today, of course, people perform the often thankless duties of cybersecurity. Expert hackers are adept at finding and fixing vulnerabilities, but as cyberwarfare becomes more prevalent, demand for their skills could surpass the supply.
The process of fixing a flaw, DARPA writes, “can take over a year from first detection to the deployment of a solution, by which time critical systems may have already been breached.” And, the demand for quick fixes to ever-present security issues continues to rise as more and more everyday devices communicate information over the internet. DARPA is arguing that a cybersecurity A.I. system would be “the first generation of machines that can discover, prove and fix software flaws in real-time, without any assistance,” making the whole world more secure.
I could agree with all this. In the recent past, cyber attacks have increased exponentially, and every day CIOs, CTOs and cybersecurity experts have to come up with more advanced methods to counter intrusions and to sustain and protect data integrity. So having an AI designed for this purpose sounds okay, but ___________ (I will let you fill in this blank).
Let’s explore AI a little, maybe just to cover the scope of this technology’s capability:
AI research is progressing steadily, and its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty is not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.
The DARPA challenge probably won’t spawn Skynet, but it seems important that people consider the possibility of malevolent artificial intelligence. Once we as a society unleash such a beast, doomsayers say, there will be no holding it back.
So, as we get down to designing and developing the best and most advanced AI models, we should consider adding a ‘morality script’ to the code, so that these programs have a guide for how they interact with each other and with us.
Recent advances in artificial intelligence have made it clear that our computers need to have a moral code. Disagree?
Consider this: A car is driving down the road when a child on a bicycle suddenly swerves in front of it. Does the car swerve into an oncoming lane, hitting another car that is already there? Does the car swerve off the road and hit a tree? Does it continue forward and hit the child?
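The dilemma above can be reduced to a crude sketch of what such a "morality script" might look like: score each option by estimated harm and pick the least harmful. The option names and harm scores below are invented for illustration; a real system would need a far richer model of risk, and choosing the weights is itself the hard ethical problem.

```python
# Minimal sketch of a least-harm decision rule. The harm scores are
# hypothetical placeholders, not real risk estimates.
def least_harmful(options: dict) -> str:
    """Return the action with the lowest estimated harm score."""
    return min(options, key=options.get)

choice = least_harmful({
    "swerve_into_oncoming_lane": 0.8,  # likely collision with another car
    "swerve_off_road_into_tree": 0.5,  # endangers the car's occupants
    "continue_forward": 1.0,           # hits the child
})
print(choice)  # swerve_off_road_into_tree
```

Notice that the code itself is trivial; all the moral weight sits in the numbers we feed it, which is precisely why encoding ethics into machines is so contentious.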
What can make you sleep better tonight after reading this is that we have yet to reach human-like artificial intelligence; at least there is no apocalypse in sight yet. The ultimate aim of AI is to learn human patterns and perform tasks as flawlessly as an unaided human. The human mind and its behavior are extremely complex, yet they execute tasks effortlessly, with a uniquely deep understanding; machine learning has a long way to go, but I am sure mega-strides are being made in that direction.
I just hope we do not design the path to our own extinction. I would appreciate your views on AI and your contributions to this article.
By Peter Kivuti