Back in 2017, a self-taught computer became the world’s best player of Go, an abstract strategy board game. It was the first of its kind, teaching itself purely through reinforcement learning and self-play. The computer ran AlphaGo Zero, a new program from DeepMind, a British artificial intelligence company. This version marked a new development in artificial intelligence (AI) technology since the original AlphaGo, which analyzed games between expert human players to discover winning moves: the new program starts solely with knowledge of the rules and objectives of the game. Venture capitalists have jumped on the AI bandwagon, heavily funding startups working on this technology and encouraging the compilation of big data (data sets too large and complex for traditional data-processing software).
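To give a sense of what “learning from the rules alone” means, consider a minimal sketch of self-play reinforcement learning, applied here to tic-tac-toe with a simple table of learned values. This is an illustration of the general idea only, with invented parameters throughout; AlphaGo Zero’s actual system pairs deep neural networks with Monte Carlo tree search and is vastly more sophisticated.

```python
import random
from collections import defaultdict

# Learned values: (board_state, move) -> expected outcome, from self-play alone
Q = defaultdict(float)
ALPHA, EPSILON = 0.3, 0.1  # toy learning rate and exploration rate (assumptions)

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

def choose(board, player):
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if random.random() < EPSILON:                       # explore occasionally
        return random.choice(moves)
    state = ''.join(board) + player
    return max(moves, key=lambda m: Q[(state, m)])      # otherwise exploit

def self_play_episode():
    board, player, history = [' '] * 9, 'X', []
    while True:
        move = choose(board, player)
        history.append((''.join(board) + player, move, player))
        board[move] = player
        result = winner(board)
        if result:
            # Propagate the final outcome back to every move played
            for state, m, p in history:
                reward = 0 if result == 'draw' else (1 if result == p else -1)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(50_000):
    self_play_episode()
print(f"{len(Q)} state-action values learned purely from self-play")
```

Nothing here is hand-crafted strategy: the program starts knowing only the rules and the win condition, and improves by playing itself, which is the core insight the article describes.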
One field in which AI will be especially useful is cyber security. Gaurav Banga, CEO and founder of Balbix, warned that “it is now mathematically impossible for humans to manage cyber security without the assistance of artificial intelligence.” He expects a “new royal wedding of IT: AI and cybersecurity.” Jason Parry, vice president of Client Solutions for Force 3, shares Banga’s view. According to Parry, AI will be powerful in cyber security because it can harness the “power of advanced analytics and machine learning to take action against real-time threats in an automated, predictive way.” Jyan Hong-wei, director of Taiwan’s department of cyber security, goes further and warns that we are currently in an “AI hacking arms race,” in which “software training other software” will effectively replace humans in writing the code for cyber attacks, making hackers faster and more capable.
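To make Parry’s point concrete, here is a minimal, hypothetical sketch of machine-learning-driven threat detection: an anomaly detector learns what “normal” network sessions look like and flags outliers automatically. The feature values and thresholds below are invented for illustration; a real deployment would use far richer telemetry and tuning.

```python
# Toy illustration of automated, predictive threat detection with an
# Isolation Forest anomaly detector (scikit-learn).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per session: [bytes sent, bytes received, failed logins]
# (synthetic "normal" traffic, purely for demonstration)
normal_traffic = rng.normal(loc=[500, 2000, 0.2],
                            scale=[100, 400, 0.5],
                            size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)  # learn the shape of routine activity

# Score new sessions as they arrive; -1 marks a suspected threat
new_sessions = np.array([
    [480, 1900, 0],      # looks routine
    [50000, 100, 30],    # exfiltration-like: huge upload, many failed logins
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    print("ALERT" if label == -1 else "ok", session)
```

The appeal for defenders is that the system needs no hand-written signature for each new attack: anything sufficiently unlike learned normal behaviour is flagged in near real time.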
Security Implications
Cyber warfare is nothing new, but the current trend of escalating cyber attacks by state and non-state actors raises the possibility of escalation into open conflict, as was seen in Georgia and Ukraine. In both instances, Russia used cyber attacks to destabilize the countries’ communications and to push pro-Russian propaganda. Eric Trexler, vice president of global governments and critical infrastructure at Forcepoint, predicts that “cyber adversaries will increasingly push our limits.”
What makes self-learning AI technology alarming in terms of real-life security implications is its potential to facilitate cyber attacks. Such attacks could cause financial harm, loss of life, and violations of national sovereignty. They could also be interpreted as hostile actions and prompt an active response from states, triggering conflict.
Interstate cyber attacks are already a critical issue. States like China, Russia, North Korea, and Iran are heavily engaged in hostile cyber activities against other states, with the objective of furthering their foreign policy agendas. Examples include Russian cyber activities in Ukraine, interference in the 2016 US presidential election, hacks of the German parliament in 2015 and 2019, and Chinese attacks on Taiwan and the US.
Cyber attacks will only increase in the future with the introduction of AI technology. According to Zulfikar Ramzan, chief technology officer at RSA, nation-states are the likeliest first targets of AI-based attacks.
Furthermore, AI-powered machines are “likely to become primary cyber fighters on the future battlefield,” according to Alexander Kott, chief scientist at the Army Research Lab. The Pentagon has already embraced this idea, releasing its AI strategy in August and announcing a five-year, $885-million contract with Booz Allen Hamilton, a management and information technology consulting firm. The US Department of Defense wants to automate mundane tasks and boost productivity by using AI technology to process data faster.
International Call to Action
The growing threat of cyber attacks is gaining the attention of lawmakers. Leaders of the U.S. Senate Intelligence Committee have claimed that China has hacked American secrets. This sentiment matches a 2017 Government Accountability Office report, which exposed gaps and a lack of leadership in critical technology protection. As such, Senators Marco Rubio (R-Florida) and Mark Warner (D-Virginia) introduced a bill on January 4th that aims to stop state-sponsored theft of technology by promoting better “cyber hygiene.” To do so, the bill would establish the Office of Critical Technologies and Security, which would develop a long-term strategy to achieve and maintain the United States’ technological supremacy.
On the international stage, a small country is making big waves on this issue. Estonia wants to shape global cyber law at the UN Security Council, if it can be elected to one of the non-permanent seats up for grabs in 2020. Jonathan Vseviov, the Estonian Ambassador to the United States, spoke at Cybercon 2018 on the necessity of creating international norms of behaviour in cyberspace, in hopes of avoiding escalating conflicts.
AI is a rapidly developing technology, and it is important that lawmakers start paying more attention to this phenomenon and to the cybersphere. From an industry perspective, experts are warning that AI can and will be used with malicious intent. AI has enormous potential to improve our everyday lives, but equal potential to be misused. For example, AI technology could help crack passwords faster, craft custom phishing messages by analyzing social networks, and eventually be used to process sensitive or personal data.
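As a rough, hypothetical illustration of the first of these points: even a very simple statistical model, trained on leaked password lists, can rank likely guesses far better than blind brute force. The tiny training list below is invented, and the model is deliberately primitive; published research systems train neural networks on millions of leaked credentials.

```python
# Toy illustration: a first-order character Markov model, learned from a
# small (invented) list of leaked passwords, generates guesses that share
# the statistical habits of real human passwords.
import random
from collections import defaultdict

leaked = ["password1", "passw0rd", "letmein1", "sunshine1", "password123"]

# Count character transitions, with start (^) and end ($) markers
transitions = defaultdict(lambda: defaultdict(int))
for pw in leaked:
    chars = ["^"] + list(pw) + ["$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a][b] += 1

def sample_guess():
    out, cur = [], "^"
    while True:
        nxt = random.choices(list(transitions[cur]),
                             weights=transitions[cur].values())[0]
        if nxt == "$" or len(out) > 20:
            return "".join(out)
        out.append(nxt)
        cur = nxt

random.seed(1)
guesses = {sample_guess() for _ in range(200)}
print(sorted(guesses)[:10])  # candidates mimic human password patterns
```

The same mechanism explains the phishing concern: a model trained on a target’s public social-media text could generate messages in a familiar style, at scale.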
As such, experts are advising that extensive norms on AI and cyber security be adopted within the next five to ten years. For a healthy AI future, the development of AI technology should be an open process, and vulnerabilities that would allow the technology to be misused should be fully disclosed. For this to happen, there need to be checks and balances in the form of human involvement.
The opinions expressed in this article are solely those of the author and they do not reflect the position of the McGill Journal of Political Studies or the Political Science Students’ Association.
Featured image by Markus Spiske, via Unsplash.