ARTIFICIAL INTELLIGENCE: A FRIEND OR FOE OF CYBER LAWS

This article was written by Princess Kalyani, a student of National University of  Study and Research in Law, Ranchi.

Abstract

Artificial intelligence is changing the way we live. Recent years have seen a boom in the use of artificial intelligence across almost all sectors. It is undoubtedly beneficial to mankind, but that is not the whole picture: if artificial intelligence has the power to change our civilization for the good, it also has the power to harm us. Whether artificial intelligence will prove a boon or a bane for the future cannot yet be said, and the debate becomes all the more complex when it comes to law. Governments need to analyze the benefits and pitfalls of AI in cyber law and frame strong policies for its regulation and governance. Risk analysis is crucial for regulating AI in the cyber world. This paper analyses the benefits as well as the disadvantages of AI for cyber law.

Introduction

Artificial Intelligence is not only expected to amplify human effectiveness but to also exceed human intelligence and capabilities to perform complex decision-making and analytical functions in the coming years. Artificial Intelligence can be used for a wide range of functions in several sectors like retail, banking, manufacturing, healthcare, telecom, and process manufacturing.

The growth of Artificial Intelligence in the cyber world is remarkable. Companies around the world are looking to implement AI techniques for better, more efficient business. A 2017 survey conducted by BCG and MIT Sloan Management Review[1] showed that almost 70% of company executives believe that AI will play a major role in their companies over the coming five years. Around 20% of companies today have already incorporated some form of AI into some aspect of their organization.

The feature that makes Artificial Intelligence so unique and innovative is its “learning” power. AI uses training data to perform according to the requirements of the operator. Artificial Intelligence systems can incorporate new data as they work, and their responses get refined over time through this “training” experience.[2] This not only makes AI a powerful tool on the internet but also gives it massive powers that can be exercised in the cyber world.

It is predicted that business investment in AI technology could reach almost $79.2 billion by the year 2022. A report by the Capgemini Research Institute states that around 70% of enterprises believe that AI is necessary to deal with cyber attacks.[3] This shows how prevalent AI is becoming in the modern world. The benefits of AI to the cyber world are immense, and there is no doubt that using it within limits can change our lives for the good.

ARTIFICIAL INTELLIGENCE: A BOON FOR THE CYBER WORLD

Detection and prevention of cybercrime

AI has enabled intrusion detection systems that help protect us not only from external threats but from internal attackers as well. The new AI-based methods for protecting organizations involve processes like data mining, neural networks, and heuristic methods, which help improve their efficiency.[4] There are also intelligent mobile agent mechanisms powered by AI for better threat detection in cyber security.[5] One example is the AI-driven firewall used to defend organizations from cybercrime. Such a firewall both protects the system and alerts it to any form of intrusion. It can also perform functions like auditing the system configuration, detecting and identifying unusual activity, and adapting traps to record information about an attack.[6]

The reason intelligent agents are so widely utilized for security functions is their ability to communicate, cooperate, plan, and implement responses to cyber security threats on their own.[7] AI can collect and analyze security data, track threats, and prioritize responses without external support. In case of a breach, AI systems can provide recommendations for containing the threat along with detailed forensic reports. AI ensures deeper detection and faster response, which makes cyber security stronger than it has ever been.[8]
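The detection idea described above can be illustrated with a minimal sketch. The Python example below is purely conceptual (real intrusion detection systems use far richer features and learned models than this): it builds a statistical baseline from historical failed-login counts and flags observations that deviate sharply from it.

```python
from statistics import mean, stdev

def build_baseline(event_counts):
    """Learn a simple per-hour baseline from historical event counts."""
    return mean(event_counts), stdev(event_counts)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations from
    the baseline, a classic heuristic used before raising a security alert."""
    mu, sigma = baseline
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Historical failed-login counts per hour (illustrative "training" data).
history = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2, 4, 3]
baseline = build_baseline(history)

print(is_anomalous(3, baseline))   # typical traffic -> False (no alert)
print(is_anomalous(40, baseline))  # burst of failures -> True (alert)
```

The point of the sketch is the workflow, not the model: learn from past data, then judge new events against what was learned, which is the "training" behaviour the article attributes to AI systems.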

Detection of potential crimes

AI can be used to detect bribery, fraud, compliance issues, and even potential litigation based on the data and documents a company possesses. AI can search through a company’s records and raise an alert when malicious activity is spotted. Police in the UK are also planning to use AI systems to prevent violent crime, through a system called the National Data Analytics Solution (NDAS). NDAS uses a combination of AI and crime statistics to flag individuals who are more likely to commit crimes in the UK.[9]

There are several other AI products that can not only make business decisions but also identify suspicious claims and flag them for investigation. Products like SparkCognition are created to monitor and guard against market manipulation and abuse. Others, like OutsideIQ, use artificial intelligence to help companies ensure regulatory compliance. These are just a few of the many products that use AI technology to detect and prevent misconduct.[10]

Banks can now use AI to detect and halt financial crimes much faster. Artificial Intelligence helps banks and other financial organizations detect suspicious activities related to money laundering. The Royal Bank of Scotland has successfully used AI technologies to prevent losses of more than 9 million dollars.[11] Artificial neural networks can even help predict the next moves of unidentified criminals who have triggered the AI bank security systems.[12]

Smart contracts

Another great use of Artificial Intelligence can be seen in smart contracts. With smart contracts, AI essentially creates a self-service system: a client can log in, select the kind of contract required, and the system produces a standard agreement for them.[13] Smart contracts not only define the rules, obligations, and penalties of the agreement but also automatically enforce them. The contract controls the transfer of digital currencies according to the conditions laid down in the agreement.[14] Smart contracts provide very high efficiency at very low cost and will prove very beneficial to the legal industry in the future.

AI for data privacy

AI can also be a boon for protecting the extensive amount of personal data on the internet. AI can not only help monitor who is looking at an individual’s data but can also be used to respond to the wrongful use or theft of personal data. There are several privacy bots and privacy policy scanners that can give users a simplified picture of their privacy rights when using a particular site. For example, Polisis (privacy policy analysis) uses machine learning to extract a readable summary of how much data a particular service collects, as well as what the service intends to do with that data.[15] This can give individuals a comprehensive idea of their privacy rights, a fundamental step in protecting them from privacy breaches. AI is also expected to be used to develop techniques that enhance privacy by allowing encrypted data to be evaluated.[16]
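The idea behind a privacy-policy scanner can be illustrated with a deliberately simple sketch. A system like Polisis uses trained neural classifiers; the keyword-matching version below is only a hedged approximation of the concept, with categories and keywords chosen for illustration.

```python
# Illustrative categories of data practice and keywords that signal them.
# A real scanner learns these associations from labelled policies rather
# than using a hand-written keyword list.
PRACTICES = {
    "data collection": ["collect", "gather", "obtain"],
    "third-party sharing": ["share", "third party", "partners"],
    "retention": ["retain", "store", "keep"],
}

def summarize_policy(text):
    """Return the set of data practices a policy clause appears to describe."""
    lowered = text.lower()
    return {label for label, keywords in PRACTICES.items()
            if any(k in lowered for k in keywords)}

clause = ("We collect your email address and may share it "
          "with advertising partners.")
print(sorted(summarize_policy(clause)))
# -> ['data collection', 'third-party sharing']
```

Even this crude version shows the value proposition: a dense legal clause is reduced to a short, readable list of what the service does with user data.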

AI can be used by companies to prevent malicious content from being posted online. For example, Facebook is using AI to monitor inappropriate content on its platform: the AI monitors, detects, and blocks suspicious activity. Using AI, Facebook reportedly removed 99% of ISIS and other terror-related content before it was even flagged.[17]

Beyond these advantages, AI is a boon to cyber law when it comes to handling huge volumes of security data, automating threat responses, accelerating detection, and lowering the workload of security teams.[18] Besides its utility in various sectors, AI is widely used for functions such as fraud detection, risk scoring, behavioral analysis, and malware detection. But AI, like any other technological innovation, does not come without its pitfalls. AI exposes us to the risk of credit fraud, cyber interference, privacy breaches, and cyber attacks, among others. With the rise of AI in almost every industry, the cyber law world is bound to see unprecedented, controversial issues in the coming years.

THE PITFALLS OF AI IN CYBER LAW

Given the focus on the benefits of AI for cyber security, it is important to take a closer, analytical look at the risks involved. A proper risk assessment of AI in the cyber law world is needed to recognize the areas most in need of attention from policymakers.

The question of liability

There are several legal complications when it comes to AI. Since it is completely uncharted territory for judges and lawyers alike, the path ahead will be difficult. The main legal question to be answered about AI in the cyber world is that of liability. For example, if a device purchases illicit substances using the internet, who would be held liable? Would it be the engineer, the manufacturer, the customer, or the device?[19]

The hypothetical question of liability when a self-driving car kills a pedestrian has been widely discussed.[20] AI systems are autonomous and mostly work without an operator. It could be suggested that where the machine is being instructed by another person, that person shall be held liable. However, the real issue arises when an AI program built with good intentions commits a mistake. Consider the well-known motorcycle factory case,[21] in which a robot programmed with artificial intelligence killed an employee because its system identified the employee as a “threat to the mission”. In such cases, there is no settled legal position on who would be held responsible.

It could be suggested that whenever an AI system is involved in a crime, legal frameworks should determine whether it is to be treated as an innocent agent, an accomplice, or a perpetrator.[22] However, it is not that simple: several people are involved in producing a functioning AI system. The real challenge for cyber law is to devise clear rules on the liability of designers, operators, and the AI systems themselves.[23]

The question is not just ‘who’ is to be held liable when an AI device fails to function appropriately; it is also how liability should be applied to AI systems. As the first few cases start to appear, the law is still grappling with inadequate liability frameworks.

Automated decision and social scoring: A crucial downside to AI that cannot be ignored

AI can very easily be used to infer sensitive information from the inputs we feed into a system. Through machine learning and analysis of our activity logs, AI can predict not only health and ethnic identity but also other critical information, including political views and location preferences. The bigger problem is that AI is not limited to gathering such information; the information can also be used to rank and classify individuals without their consent, as is already being implemented in China.[24] The use of such a ranking system can give rise to an Orwellian society. Automated decision-making and social scoring are major examples of how AI can be used to encroach upon our fundamental rights.[25]

AI and the question of data privacy

AI is already predominant when it comes to personal data. It not only tracks but also predicts our preferences regarding shopping, politics, and even locations. Since there are no clear-cut regulations about consent for AI, it is difficult to identify a legal basis for personal data processing.

Big Data analytics and artificial intelligence (AI) can draw inferences that are unverifiable, yet are used to make predictions about the preferences and private lives of individuals. These inferences can create new opportunities for discriminatory, biased, and invasive decision-making.[26]

All the devices that surround us, right from smart home devices to computers, have increased our vulnerability to data manipulation. It remains a grey area where and how exactly our data is being shared and processed. The more we rely on digital technology, the more exposed we are to the threats of data manipulation.

AI is in conflict with the privacy protections laid down by the OECD in its Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, adopted in 1980. AI systems do not work in compliance with the eight principles laid down by the OECD,[27] which have formed the basis of the cyber laws of several countries. This lack of compliance suggests the enormity of the challenges nations will face when regulating the data privacy of AI under their cyber laws.

The black box problem: transparency

The cyber laws of most nations require data processing systems to be transparent. Individuals are to be provided with the specifics of not only how their data was processed but also why a certain decision was made by the system.[28] However, the complex algorithms of AI make it extremely difficult to meet this transparency requirement.

AI has grown beyond the individual machine system to the point where its algorithms are so complex that their decisions cannot be traced back.[29] This has led to the rise of the black box society.[30] Black box AI refers to the lack of any rationale being provided for how the AI comes to a certain conclusion. This not only means a lack of transparency but could also lead to unfair decisions.[31]
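One family of techniques surveyed in the explainability literature probes a black-box model from the outside: perturb each input in turn and observe how much the output moves. The sketch below uses a hypothetical scoring function as a stand-in for an opaque model; in practice the model's internals would be unknown, which is exactly why such probing matters.

```python
def black_box_score(features):
    """Hypothetical stand-in for an opaque model (internals assumed unknown)."""
    income, age, clicks = features
    return 0.6 * income + 0.1 * age + 0.3 * clicks

def sensitivity(model, features, delta=1.0):
    """Perturb each input by `delta` and record how the output changes:
    a simple perturbation-based probe for auditing black-box decisions."""
    base = model(features)
    impacts = []
    for i in range(len(features)):
        probe = list(features)
        probe[i] += delta
        impacts.append(round(model(probe) - base, 6))
    return impacts

print(sensitivity(black_box_score, [50.0, 30.0, 7.0]))  # -> [0.6, 0.1, 0.3]
```

Even without seeing the model's code, the probe reveals which inputs drive the decision most, which is the kind of after-the-fact rationale that transparency requirements demand and that pure black-box deployment fails to supply.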

The AI black box is also problematic because the burden of proving causation in the initial phases of litigation is on the plaintiff. Since it is difficult even to track down the system that caused the breach, this will be a huge problem for the aggrieved.[32]

AI: A tool for cybercriminals

The growing use of AI is also expanding the existing threats to the cyber world, and AI has brought with it several profound new threats. When it comes to criminal acts, Artificial Intelligence could play a very significant role in increasing the rate of criminal activity. In a major experiment conducted in 2016,[33] AI was used to send people phishing links through messages on Twitter. The experiment used machine learning techniques to construct a customized message for each target based on their preferences. Clicking on the phishing link would allow the criminal to obtain the victim’s personal information. This could enable major frauds using AI, with virtually no effort by the criminal.

Several AI-powered vehicles have also been used to smuggle illicit substances across borders.[34] For example, remote-controlled cocaine-trafficking submarines have been discovered in the US.[35] Unmanned vehicles powered by AI make smuggling networks all the more difficult to detect. AI is also capable of manipulating markets by placing false orders to profit the criminal.[36] Apart from these, AI can be involved in various other criminal activities such as drug trafficking, offences against the person, sexual offences, theft, fraud, and forgery.

CONCLUSION

If a proper risk analysis were conducted, the risks of AI in the cyber world would outweigh the benefits. The good news is that most of the threats AI poses can be mitigated by a strong legal framework. For example, strict privacy regulations, along with a clear framework for determining the liability of the various parties to a cyber crime, could help enhance the role of AI in the cyber world.

Strong command over AI is difficult to maintain without effective policies designed to maximize the benefits and minimize the losses. As for India, its talent and determination cannot be questioned; however, the country lacks strong legal directives to govern AI.

The formulation and implementation of legal policies is going to be a slow, continuous process. The intelligent machines and computers of the coming years will not only have an “understanding” of various concepts but will also be able to detect changing environments. The role of computers in our society is going to be significant in the coming years.

Since the changes are unprecedented, the legal decisions on various aspects of AI are also going to be landmark, novel judgments. The issue then is that this will be uncharted territory for most. Judges may or may not be able to grapple with innovative technology, and faulty precedents could be set in the absence of strong legal frameworks. It is therefore important to act soon, so that we can derive the utmost benefit from AI and minimize the threats it can pose.

Another challenge is the lack of uniformity in the cyber laws of various countries. Since people from different countries could be involved in building an AI system that is then used in a different location altogether, it will be difficult to determine exactly which laws should apply. It is therefore important for nations to harmonize their cyber laws. The challenges that AI poses to the legal system are not only conceptual but also practical, and all of them must be confronted to ensure a fair and just system for compensating aggrieved parties.[37]

AI, like any other technology, comes with its advantages and disadvantages. On one hand, AI can be used to provide cyber security and protect organizations and individuals from cyber threats; on the other, it can be used maliciously to boost cyber crime. Analyzing the risks and benefits of AI for cyber law is necessary to understand the importance of strong legal frameworks for AI. Sooner or later, the legal system will have to confront these challenges; the sooner it happens, the better for all of us.

[1] Sam Ransbotham, David Kiron, Philipp Gerbert & Martin Reeves, Reshaping Business With Artificial Intelligence: Closing the Gap Between Ambition and Action, MIT Sloan Management Review (September 06, 2017), https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/

[2] Philipp Gerbert, Sukand Ramachandran, Jan-Hinnerk Mohr & Michael Spira, The Big Leap Toward AI at Scale, BCG (June 13, 2018), https://www.bcg.com/publications/2018/big-leap-toward-ai-scale.aspx

[3] Reinventing Cybersecurity with Artificial Intelligence: A New Frontier in Digital Security, Capgemini (last visited: 26th August, 2019), https://www.capgemini.com/research/reinventing-cybersecurity-with-artificial-intelligence/

[4] X. B. Wang, G. Y. Yang, Y. C. Li & D. Liu, Review on the Application of Artificial Intelligence in Antivirus Detection System, 506 Proceedings of the IEEE Congress on Cybernetics and Intelligent Systems (2008)

[5] Yu Chen, NeuroNet: Towards an Intelligent Internet Infrastructure, 543 Proceedings of the 5th IEEE CCNC (2008)

[6] J. S. Mohan & Nilina T, Prospects of Artificial Intelligence in Tackling Cyber Crimes, 1717 IJSR 4 (2015)

[7] Selma Dilek, Hüseyin Çakır & Mustafa Aydın, Applications of Artificial Intelligence Techniques to Combating Cyber Crimes: A Review, 6 IJAIA 21 (2015)

[8] Rajat Mohanty, Will AI Change the Game for Cyber Security in 2018?, Paladion (last visited: 26th August at 7 pm), https://www.paladion.net/hubfs/Whitepaper%20PDF/Will%20AI%20Change%20the%20Game%20for%20Cyber%20Security%20in%202018%20-%20Whitepaper.pdf?hsLang=en-us

[9] Chris Baraniuk, Exclusive: UK police wants AI to stop violent crime before it happens, New Scientist, (26 November 2018), https://www.newscientist.com/article/2186512-exclusive-uk-police-wants-ai-to-stop-violent-crime-before-it-happens/

[10] Adam C. Uzialko, 6 Incredible Ways Businesses are Using Artificial Intelligence Today, Business News Daily (November 7, 2016, 04:07 pm EST), https://www.businessnewsdaily.com/9542-artificial-intelligence-businesses.html

[11] NatWest teams up with Vocalink Analytics to help protect corporate customers from fraud, RBS (10 April 2018), https://www.rbs.com/rbs/news/2018/04/natwest-teams-up-with-vocalink-analytics-to-help-protect-corpora.html

[12] Lisa Quest, Anthony Charrie, Lucas du Croo de Jongh & Subas Roy, The Risks and Benefits of Using AI to Detect Crime, Harvard Business Review (Aug 09, 2018), https://hbr.org/2018/08/the-risks-and-benefits-of-using-ai-to-detect-crime

[13] Sterling Miller, Benefits of artificial intelligence: what have you done for me lately?, Thomson Reuters (last visited: Aug 26 at 6 pm), https://legal.thomsonreuters.com/en/insights/articles/benefits-of-artificial-intelligence

[14] Oscar W, AI Smart Contracts — The Past, Present, and Future, Hackernoon (November 18th, 2018), https://hackernoon.com/ai-smart-contracts-the-past-present-and-future-625d3416807b

[15] Andy Greenberg, An AI That Reads Privacy Policies So That You Don’t Have To, Wired (9 February 2018), https://www.wired.com/story/polisis-ai-reads-privacy-policies-so-you-dont-have-to/

[16] Artificial Intelligence and Data Protection: Delivering Sustainable AI Accountability in Practice. First Report: Artificial Intelligence and Data Protection in Tension, Centre for Information Policy Leadership (October 10, 2018), https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_ai_first_report_-_artificial_intelligence_and_data_protection_in_te….pdf

[17] Community Standards Enforcement Preliminary Report, Facebook (2018), https://transparency.facebook.com/community-standards-enforcement#terrorist-propaganda

[18] Amit Tewary, Why Artificial Intelligence in cyber security is the need of the hour, Paladion (last visited: August 25th at 7 pm), https://www.paladion.net/why-artificial-intelligence-in-cyber-security-is-need-of-the-hour

[19] Mike Power, What happens when a software bot goes on a darknet shopping spree?, The Guardian (Fri 5 Dec 2014 13.56 GMT), https://www.theguardian.com/technology/2014/dec/05/software-bot-darknet-shopping-spree-random-shopper

[20] G. Hallevy, The Criminal Liability of Artificial Intelligence Entities, SSRN (February 15, 2010), available at http://ssrn.com/abstract=1564096

[21] Weng Y-H, Chen C-H and Sun C-T, Towards the Human-Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots, 1 Int.J.Soc.Robot. 267, 273 (2009)

[22] M. E. Gerstner, Comment, Liability Issues with Artificial Intelligence Software, 33 Santa Clara L. Rev. 239 (1993)

[23] John Kingston, Artificial Intelligence and Legal Liability, Research and Development in Intelligent Systems XXXIII: Incorporating Applications and Innovations in Intelligent Systems XXIV, 269–279 (2016)

[24] Charlie Campbell, How China Is Using “Social Credit Scores” to Reward and Punish Its Citizens, TIME (last visited: 24th August, 2019 at 4 pm), https://time.com/collection/davos-2019/5502592/china-social-credit-score/

[25] Tweaklibrary team, Artificial Intelligence a Threat to Privacy, (4th April, 2019), https://tweaklibrary.com/artificial-intelligence-a-threat-to-privacy/

[26] Brent Mittelstadt and others, ‘The Ethics of Algorithms: Mapping the Debate’, 3 Big Data & Society (2016)

[27] OECD Revised Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, (2013), available at http://oecd.org/sti/ieconomy/oecd_privacy_framework.pdf.

[28] GDPR, article 12 (transparency)

[29] Michael Krigsman, Artificial Intelligence and Privacy Engineering: Why It Matters NOW, ZDNet (18 June 2017), http://www.zdnet.com/article/artificial-intelligence-and-privacy-engineering-why-it-matters-now/

[30] F. Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information, Harvard University Press (2015)

[31] R. Guidotti et al., A Survey of Methods for Explaining Black Box Models, 51(5) ACM Computing Surveys 93 (2018)

[32] Yavar Bathaee, The Artificial Intelligence Black Box and the Failure of Intent and Causation, 31(2) Harvard Journal of Law & Technology 906 (2018)

[33] John Seymour and Philip Tully, Weaponizing data science for social engineering: Automated E2E spear phishing on Twitter, (2016), https://www.blackhat.com/docs/us-16/materials/us-16-Seymour-Tully-Weaponizing-Data-Science-For-Social-Engineering-Automated-E2E-Spear-Phishing-On-Twitter-wp.pdf.

[34] Thomas C. King & Nikita Aggarwal, Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions, Sci Eng Ethics (2019), https://link.springer.com/article/10.1007/s11948-018-00081-0

[35] N. Sharkey, M. Goodman & N. Ross, The Coming Robot Crime Wave, 43(8) IEEE Computer Magazine 6 (2010)

[36] E. Martínez-Miranda, P. McBurney & M. J. Howard, Learning Unfair Trading: A Market Manipulation Analysis from the Reinforcement Learning Perspective, Proceedings of the 2016 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS 2016), 103–109 (2016)

[37] Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harvard Journal of Law & Technology 354 (2016)
