picture courtesy: http://www.kachwanya.com/wp-content/uploads/2015/07/150417-robot-criminal.png

This article was written by Shubham Singh, a student of Amity University, U.P.


Artificial intelligence is becoming an integral part of our society, and its interaction with humans can be observed in every field. These interactions will only increase as the technological world evolves. But if an artificially intelligent being commits a crime, who should be held liable? To ensure that these interactions are beneficial and occur as intended, we need to subject artificially intelligent beings to law, especially criminal law, as it is the most effective instrument of social control. Artificial intelligence can be treated as a legal personality, like a corporation, in order to subject it to law. Treating artificial intelligence as a legal person not only makes it subject to the law but also protects innocent developers and owners from criminal liability arising from the acts of artificial intelligence. Criminal liability arises from the presence of two factors: mens rea and actus reus. Actus reus is the physical outcome of the act. In the case of artificially intelligent beings, the main challenge lies in detecting mens rea, the mental factor, as there is no yardstick to measure it. This problem can be addressed by the court applying the Turing test to determine whether the artificial intelligence entity is capable of forming mens rea. Human laws can be imposed on artificial intelligence as they are imposed on other legal personalities such as corporations, and in the same vein punishments can be awarded by making the necessary alterations.

Keywords: artificial intelligence, legal personality, criminal law, social control, punishment


On 4 July 1981, the first death caused by a robot was recorded. Kenji Urada was an engineer at a Kawasaki Heavy Industries plant. He entered a restricted area of the manufacturing line to perform maintenance work on a robot but failed to shut it down completely. The robot detected him as an obstacle and pushed him into an adjacent machine with its hydraulic arm, killing him instantly.[1] Unfortunately, the present laws are ill-equipped to deal with such instances. Robots and artificial intelligence add a whole new dimension to our world; growth in the technological field is rapid, and robots are becoming an integral part of our lives.

In the present world, robots are just inanimate objects, like any ordinary tool, with no legal liability or duties, which means that Kenji, who died at the hands of a robot, was not murdered by it. The legal question, then, is who should be held liable. What if a self-driving car accidentally kills a person who suddenly steps in front of it? Should it be the owner of the car, or the developer, neither of whom had any criminal intention or was negligent? The legal system will have to evolve to keep pace with the dynamic technological world. The question therefore arises as to how the technological growth of artificial intelligence should be made subject to legal social control. This article works towards legal solutions to the problems arising from the increasing influence of artificial intelligence in our society.

Thinking on this problem dates back to the early 1950s and Isaac Asimov. In his science fiction collection "I, Robot," Asimov propounded three fundamental principles for artificial intelligence and robots. The First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law directs the robot to obey the orders given to it by human beings, except where such orders would conflict with the First Law. The Third Law states that a robot must protect its own existence, so long as such protection does not conflict with the First or Second Law.[2] These fundamental principles, however, are not adequate for present-day artificially intelligent beings. The application of artificial intelligence is becoming more complex, and human interaction with robots is observed in almost every field. Asimov's principles are not sufficient to cope with this whole new dimension of our society. What if a military drone is ordered to attack a terrorist, or a person orders a robot to hit someone in good faith? In such cases, these principles have no real legal significance.
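As a purely illustrative sketch (the `Action` fields and `permitted` function are hypothetical, not drawn from any real robotics system), the strict priority ordering of the three laws can be expressed as an ordered rule check:

```python
# Illustrative sketch only: Asimov's Three Laws as an ordered rule check.
# Every name here is hypothetical; no real system works this way.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would the action injure a human?
    allows_human_harm: bool = False  # would inaction let a human come to harm?
    ordered_by_human: bool = False   # was the action commanded by a human?
    endangers_self: bool = False     # would the action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, or allow harm through inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_self
```

The ordering makes the article's point visible: a human order that harms a human is rejected at the First Law before the Second Law is ever consulted, which is precisely the priority Asimov prescribed and precisely what real legal questions (the drone, the order given in good faith) fall outside of.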

Futurologists have predicted the evolution of a new species, which they term 'machina sapiens',[3] that will share the earth with humans as an intelligent creature. Robots and artificial intelligence are emerging as a transformative technology with capacities rivalling those of humans; in some respects they are already more capable than we are. From home appliances to the most dangerous weapons of war, such as drones, humans interact with robots constantly, and to make these interactions beneficial we need to regulate them through properly established laws.

Robots, like small children, are innocents lacking the ability to understand the nature of the norms and laws of society. Since robots act on directions, they can be misused by people as crime machines. Another scenario the world could face in the coming future is that robots develop and evolve to a level where they can make their own decisions and formulate intentions. This could be more dangerous to society than the biggest hydrogen bomb that could possibly be made, a picture painted in numerous films such as "2001: A Space Odyssey" and "The Matrix" trilogy, where robots evolve to the point of taking over the world and eliminating humanity from its face. Though these are works of fiction, they could possibly be the future of our world.

The main question is how we can check such misuse of artificial intelligence without imposing restrictions on the technological growth of the field. To cope with this problem, artificial intelligence entities must be made subject to legal control. New technology can no doubt improve human lives, but it can equally cause human suffering. The new technological growth therefore compels an adjustment of legal orders. Though artificial intelligence may not qualify for the rights and laws of natural persons and may not be covered under constitutional provisions like a natural person, making artificial intelligence subject to law not only saves innocent people from criminal liability arising from the acts of such entities but also subjects artificial intelligence to legal social control, which checks the misuse of artificially intelligent beings.

Now the question arises as to what type of laws are suitable and how an artificial intelligence entity can be subjected to them, since such entities are mere objects in the eyes of the law. Criminal law is the most effective means of social control in human civilization and can be used as an efficient tool to check the negative impact of artificial intelligence on our society.

Subjecting artificial intelligence to criminal law creates an interesting tension in its provisions, one rooted in society's approach towards such entities: artificial intelligence, though it forms a new dimension of the technological world, is merely a computer-programme-based product in the eyes of the law. The solution to this problem is to recognize artificially intelligent beings as legal or juristic persons.


Artificial intelligence entities should be treated as legal persons, just as corporations are under the law. It is pertinent to note that the initial reasoning behind according legal personhood to corporations was to promote commercial activity and to remove corporate liability from individual shoulders. In the same vein, artificial intelligence should be accorded basic constitutional freedoms in line with those accorded to corporations. The primary objective is that, as artificial intelligence develops and begins to think, civil and criminal liability arising from its actions will not be solely attributable to its programmer or owner. Consider, for example, an autopilot based on artificial intelligence technology. Suppose the developer of a warfare aircraft creates an autopilot programme that itself eliminates any obstacles to its mission, and on one mission the pilot aborts due to bad weather, but the autopilot recognizes the pilot as an obstacle and ejects him from the cabin, killing him. The developer had no intention of killing the pilot, yet the current laws would hold him liable. The better option would be to impose criminal liability on the autopilot and correct the algorithms of its programming. This not only saves the developers and owners of artificial intelligence from criminal liability for acts they never intended but also prevents the demoralization of developers, who would otherwise hesitate to bring further innovations into the technological field.

At the same time, as robots become sentient, they too will start demanding basic rights in line with their needs, to facilitate their well-being. Having developed artificially intelligent beings, scientists are now designing machines with emotional intelligence and other capabilities that will blur the line between humans and machines.[4] It is fundamentally in the interest of human beings to ensure that our interactions with these artificially intelligent beings are beneficial and occur as intended. In furtherance of this, we need to grant legal personhood to these types of technology.

This could, however, make artificial intelligence a tool for committing crimes: a perpetrator could easily take shelter behind an artificially intelligent being and use the legal personality of the artificial intelligence entity as a statutory privilege to commit crimes. In the case of a corporation, if a person uses the legal personality of the corporation for fraudulent or dishonest purposes, he is not allowed to take shelter behind that legal personality; the court lifts the corporate veil and takes action against the perpetrator as if there were no corporate personality. The corporate veil is lifted only where a person relies on the corporate personality of the corporation to shield his wrongdoings.[5] The scenario of artificial intelligence can be treated in the same vein: if the perpetrator of a fraud or crime is found taking shelter behind the legal personality of a robot, he should be treated by the court as if there were no legal personality. Precedents are slowly being established, such as the widely reported case of the "computer raped by telephone," in which a programmer used a telephone link to invade the privacy of a computer. During the investigation the question arose whether a search warrant could be issued to the computer to fetch evidence. This was the first time the world saw a computer treated as a person, with a search warrant issued to it.[6] Autopilot legislation is leading the way in establishing precedents in this field. In Klein v. U.S.,[7] a pilot used the autopilot to land the plane even though the guidelines strictly prohibited its use during landing. An error on the autopilot's part led to a bad landing, causing damage to the plane. The pilot was held liable for his negligence, rather than the autopilot being held liable for its error.
In the U.S., four states have passed legislation legalizing self-driving cars developed by Google,[8] Nevada being the first to do so.[9] These cars are treated as traditional drivers in the eyes of the law.


The applicability of criminal law to artificial intelligence gives rise to another question: the criminal liability of artificial intelligence. Criminal liability is based on the presence of two factors, mens rea and actus reus. Under English criminal law, criminal liability does not arise unless both factors are present.[10] It is said that actus non facit reum, nisi mens sit rea: the intent and the act must both concur to constitute the crime. Actus reus is the material outcome of the act or deed[11] and is an essential element of the crime.[12] The main problem lies in detecting the presence of mens rea, that is, criminal intention.

To establish the criminal liability of a robot, the presence of both factors is essential. The actus reus can be detected from the acts or omissions of the robot. But there is no yardstick to measure the presence of mens rea in those acts or omissions. Mens rea is the mental element of the person committing an offence: knowledge of the outcome or result of the act, or the ability to understand the nature of the act, accompanied by the most important factor, the intention to perform the particular act.

Turing Test

The challenge of detecting mens rea in the acts of robots can be tackled by applying the Turing test. In 1950, Alan Turing introduced the Turing test to assess a machine's ability to exhibit intelligence, and with it the capacity to form intentions for its actions.[13] The Turing test is a game in which a machine imitates a human against a human contestant. After a series of questions, an interrogator who does not know which competitor is the human and which the computer must guess which of them is human. The test thus measures a machine's ability to exhibit human-like behaviour. If the machine succeeds in convincing the interrogator that it is human, it passes the test and is deemed capable of acting as a human. Applying the Turing test to every individual artificial intelligence entity could be cumbersome for the courts; the government could therefore lay down norms requiring manufacturers or developers to subject these entities to the Turing test, or any other test the government deems fit, before their public release.
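The structure of the imitation game described above can be sketched in a few lines (everything here is a hypothetical placeholder: the `chatbot_reply`, `human_reply`, and judge functions stand in for the machine, the human contestant, and the interrogator; this is not a real test harness):

```python
# Minimal sketch of the imitation-game structure. All names are
# hypothetical stand-ins, not a real evaluation framework.
import random

def chatbot_reply(question: str) -> str:
    # Stand-in for the machine under evaluation.
    canned = {
        "Are you human?": "Of course I am.",
        "What is 2 + 2?": "Four, obviously.",
    }
    return canned.get(question, "Interesting question.")

def human_reply(question: str) -> str:
    # Stand-in for the human contestant.
    return "I'd rather not say."

def imitation_game(questions, judge) -> bool:
    """Return True if the interrogator fails to identify the machine."""
    machine = [chatbot_reply(q) for q in questions]
    human = [human_reply(q) for q in questions]
    pair = [machine, human]
    random.shuffle(pair)                      # anonymize the two contestants
    labeled = {"A": pair[0], "B": pair[1]}
    guess = judge(labeled)                    # judge names the label it thinks is the machine
    machine_label = "A" if labeled["A"] is machine else "B"
    return guess != machine_label             # True = machine passed the test
```

The pass/fail criterion in the last line mirrors Turing's framing: the machine "wins" only when the interrogator, seeing nothing but anonymized transcripts, cannot tell which contestant it is.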

The Chinese Room Argument by John Searle

John Searle criticized the Turing test with his Chinese Room argument. He argued that administering a Turing test to a robot is like giving instructions in Chinese to a man locked in a room who has no knowledge of the language. Given a rule book containing translations of the Chinese symbols, he can respond to the instructions, and the people outside the room will be convinced that the person inside understands Chinese. In reality, he does not understand Chinese at all; he merely acts on the basis of the rule book. Searle's point is that machines act on the basis of algorithms and programmes that manipulate symbols according to their inputs.[14] An artificial intelligence entity does not actually think or form intentions; it acts on the basis of programmes that operate on the given input.
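Searle's scenario can be caricatured in a few lines of code (the phrase pairs are invented purely for illustration): the "room" produces fluent-looking Chinese answers by bare symbol lookup, with nothing resembling understanding anywhere in the process.

```python
# Toy illustration of Searle's rule book: a lookup table maps Chinese
# input symbols to Chinese output symbols. The phrase pairs are
# hypothetical examples, not from Searle's paper.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room_occupant(symbols: str) -> str:
    # The occupant matches the incoming symbols against the rule book;
    # the symbols carry no meaning for him.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."
```

From outside the room, `room_occupant` looks like a Chinese speaker; inside, there is only a dictionary lookup, which is exactly the gap between behaviour and understanding that Searle's argument targets.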

Criticism of the Chinese Room Experiment

It is merely a hypothetical claim that programmes alone could enable an artificially intelligent being to convince the interrogator that it is human. A programme, like the symbol manual, helps the entity interpret its inputs, but it does not by itself supply the consciousness required to give human-like responses and genuinely convince the interrogator that it is dealing with a human.


The question arises as to what should be done once an artificially intelligent being is held criminally liable: what punishments or measures should be taken, what sentence should the court impose on the artificial intelligence entity, and in what matters can it be held liable? Similar questions arose when the criminal liability of corporations was discussed, namely how companies and corporations could be made subject to laws applied to natural persons.[15] Present corporation law shows how those questions were answered: when corporations are fined by the court, they are bound to pay in the same way as natural persons. By the same analogy, punishments can be imposed on artificial intelligence. Though adjustments are needed in applying these punishments to artificial entities, this does not negate the nature and principle behind the punishments as applied to humans.

There are a few factors that must be taken into consideration when imposing punishment on artificial intelligence:[16] 1) the fundamental principles of the particular punishment; 2) the effects of the punishment on the artificially intelligent being; and 3) the practical achievements of the specific punishment.

The most important of these factors is what the specific punishment achieves. The punishments most commonly imposed on human offenders are the death penalty, life imprisonment, imprisonment, community service and fines. But even the most severe of these, the death penalty and life imprisonment, are impractical for artificial intelligence entities. The fundamental principle behind such punishments is to render the offender incapable of committing further crimes.[17] The death penalty deprives a human of life, but the term 'life' is abstract for artificial intelligence: an artificial intelligence entity can be tangible, like a robot or a computer, but it can also lack physical existence altogether, like software or a mobile application. The death penalty is awarded to a human for grave and serious offences, or where the offender poses a future danger to society. In the same vein, if an artificial intelligence entity is found to pose a danger to society, punishments of similar consequence can be imposed that bar the entity from causing further harm: deleting the software, banning the production and development of the entity concerned, or, where the entity has physical existence, dismantling or destroying it. For less serious or petty offences, humans are awarded imprisonment or other punishments such as community service, with the fundamental object of reforming the person so that he can serve society and live as a part of it. The same principle can be applied to artificial intelligence: where reformation is possible, measures can be taken to bring about reformative changes in the entity by making the necessary technical or programming changes or by altering its algorithms.
Fines can be imposed on artificial entities for petty offences, but in most cases these entities are incapable of paying a fine, as they have no money or property of their own. In such cases, the fine can be realized by imposing community service instead. Community service is the most appropriate punishment for artificial intelligence in terms of practicality and achievement.

Many legal systems recognize community service as a better substitute for short-term sentences because of its productive nature.[18] Community service is also awarded where the offender is incapable of paying the fine imposed for his offence. The objective of such punishment is the offender's contribution of labour to society. The punishment of community service can therefore be appropriately imposed on artificial intelligence, with the entity working for the welfare of the community through the contribution of its labour.


Artificially intelligent beings add a whole new dimension to our society. The rapid development of the technological world warrants adaptive reforms in the current legal system to find solutions to the legal problems that artificial intelligence is bringing into our society. Criminal liability can be imposed on artificial intelligence if all the requirements of actus reus and mens rea are met. The dynamic technological world poses a strong danger to humanity; to protect our society, we need to subject artificial intelligence to law, especially criminal law, as it is the most effective means of social control. In the initial phase of corporate development, people were afraid of corporations, but since corporations have been treated as legal persons subject to criminal and corporate law, the goal of social control over corporations has been achieved. Corporations have appeared in modern form since the fourteenth century,[19] and it took many centuries to subject them to the law. Artificial intelligence has become an important part of our society and is likely to become more influential as the technological world changes. Society has already started facing problems due to the lack of legal enactments on artificial intelligence, and a number of crimes have already been committed by artificially intelligent beings. There is therefore a strong need for society to start taking steps towards developing a legal system to deal with such problems. Not subjecting artificial intelligence to law, especially criminal law, would be outrageous. Human laws can be imposed on artificial intelligence as they are imposed on other legal entities, such as corporations.

[1] Paul S. Edwards, 'Killer robot: Japanese worker first victim of technological revolution' Deseret News (Salt Lake City, 8 December 1981) 1

[2] Isaac Asimov, I, Robot (1st edn, Gnome Press 1950) 124

[3] David J. Gunkel, The Machine Question: Critical Perspectives on AI, Robots, and Ethics (1st edn, The MIT Press 2012) 47

[4] Rafael A. Calvo, Sidney K. D'Mello, Jonathan Gratch and Arvid Kappas, The Oxford Handbook of Affective Computing (1st edn, Oxford University Press 2015) 176

[5] BSN (UK) Ltd. v. Janardan Mohandas Rajan Pillai [1996] 86 Com Cases 371 (Bom)

[6] Ward v. Superior Court of California [1972] 3 C.L.S.R. 206

[7] Klein v. U.S [1975] 13 Av.Cas. 18137

[8] Thomas Halleck, 'Google Inc. Says Self-Driving Car Will Be Ready By 2020' (International Business Times, 15 January 2015) <http://www.ibtimes.com/google-inc-says-self-driving-car-will-be-ready-2020-1784150> accessed 12 February 2016

[9] Alex Knapp, ‘Nevada Passes Law Authorizing Driverless Cars’ (Forbes, 22 June 2011) <http://www.forbes.com/sites/alexknapp/2011/06/22/nevada-passes-law-authorizing-driverless-cars/#17c7344a5b73> accessed 12 February 2016

[10] Ratanlal & Dhirajlal, The Indian Penal Code (32nd edn, LexisNexis 2011) 16

[11] Stanhope Kenny & J.W.C. Turner, Kenny’s outline of criminal law (19th edn, Cambridge University Press 1966) 17

[12] R v. White [1910] 2 KB 124

[13] Alan Turing, 'Computing Machinery and Intelligence' (1950) LIX Mind 236

[14] Peter Kugel, ‘The Chinese Room Is A Trick’ (2004) Computer Science Department Boston College, Chestnut Hill, USA < http://www.cs.bc.edu/~kugel/Publications/Searle%206.pdf> accessed 14 February 2016

[15] Gerard E. Lynch, 'The Role of Criminal Law in Policing Corporate Misconduct' (1997) 60(3) Law and Contemporary Problems <http://scholarship.law.duke.edu/lcp/vol60/iss3/3> accessed 14 February 2016

[16] Gabriel Hallevy, 'The Criminal Liability of Artificial Intelligence Entities' (2010) SSRN <http://ssrn.com/abstract=1564096> accessed 19 January 2016

[17] Robert M. Bohm, Deathquest: An Introduction to the Theory and Practice of the Death Penalty in the United States (4th edn, Routledge 1999) 74

[18] John Harding, 'The Development of Community Service' in Norman Tutt (ed), Alternative Strategies for Coping with Crime (1st edn, 1978) 164

[19] William Searle Holdsworth, A History of English Law (1st edn, Sweet & Maxwell Ltd 1969) 471

