Modern AI agents are learning about our behaviors to provide tailored responses in specific situations. You may have experienced this with your speech recognizer or virtual assistant – and the way it recognizes and describes the things and people around you. But that’s only the beginning.
AI is adapting to natural human language, moods, needs, and habits at an alarming rate. But it’s not just the speed of adaptation that’s alarming; it’s the increasing tendency of robots to make harmful, even lethal, mistakes rooted in what they gleaned from human trainers. Among other things, this development raises the question: who’s to blame for a robot’s choices? Below are some probable answers.
Past and Potential Consequences of Autonomous Robots
There have been hundreds of Tesla Autopilot crashes – some fatal – igniting fresh liability debates. Some parties blame negligent drivers who fail to monitor the system properly. Others blame Tesla for marketing what is arguably a semi-autonomous system as if it were fully autonomous.
Elsewhere, autonomous AI has acted – and can act – in discriminatory or harmful ways without any malicious intent on the part of its human creators. For instance, Microsoft’s “Tay” chatbot was manipulated in 2016 by adversarial users who fed it inflammatory content until it made racist and offensive comments. Although no lawsuits followed, reputations had to be repaired after “Tay” went haywire.
Similarly, an experimental recruitment tool trialled at Amazon downgraded CVs from female applicants after “learning” from historical data in which successful tech candidates came overwhelmingly from male-dominated backgrounds. Amazon halted the project before any lawsuits emerged.
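To make this failure mode concrete, here is a minimal, hypothetical sketch of how a model trained on historically skewed hiring data reproduces that skew. The dataset, features (`years_exp`, `is_male`), and model choice are illustrative assumptions only, not a reconstruction of Amazon’s actual system.

```python
# Toy illustration (synthetic data): a classifier trained on historically
# biased hiring outcomes reproduces that bias for otherwise identical candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
years_exp = rng.uniform(0, 10, n)        # a legitimate signal
is_male = rng.integers(0, 2, n)          # a protected attribute (0 or 1)

# Historical labels: past hiring favored men regardless of experience.
hired = (0.3 * years_exp + 2.0 * is_male + rng.normal(0, 1, n)) > 2.5

X = np.column_stack([years_exp, is_male])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical experience, differing only in gender:
# the model scores the male candidate noticeably higher.
print(model.predict_proba([[5.0, 1], [5.0, 0]])[:, 1])
```

The point of the sketch is that no one coded the bias explicitly; it was absorbed from the historical record the model was asked to imitate.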
Why A Seemingly Autonomous AI Can’t Be Blamed
Even when an autonomous system initiates the harmful act, a human or organization must bear the responsibility. Machines can’t be tried, nor can they pay damages. They have no moral agency, so “arresting” an AI machine would be wholly impractical. Assigning blame to the AI itself would also let creators and manufacturers off the hook for their creations’ actions.
However, one relevant question remains: who among the network of organizational and human actors – security and legal teams, designers, end-users, or regulators – should bear the ultimate liability? A self-learning algorithm can swiftly pick up harmful habits or biases from new data, or fall prey to unforeseen adversarial attacks, which makes it hard to pin the harm on any single party’s foreseeable failure.
Yet most legal jurisdictions insist that the human and corporate entities behind the development and supervision of such systems should be held liable. The challenge is that the unique harm caused by malfunctioning “autonomous” AI often falls outside the definition of a “defective product” under existing product liability statutes.
Legal Frameworks Globally
In the United Kingdom (UK), the government has worked towards incorporating AI systems into existing legal frameworks while providing sector-specific updates. The UK Government’s 2023 White Paper on AI sets out five principles that AI products are expected to comply with – safety, transparency, fairness, accountability, and contestability.
While the UK has yet to pass a dedicated AI liability statute, it does have pertinent legislation for specific sectors. For instance, the Automated and Electric Vehicles Act 2018 requires insurance to cover accidents caused by automated vehicles, with compensation recoverable from the manufacturer if a technical defect caused the harm. Elsewhere, the UK relies on the Consumer Protection Act 1987 for its general product liability rules.
The EU AI Act adopts a risk-based classification system. Providers of “high-risk” systems, such as certain healthcare applications and autonomous vehicles, must meet strict requirements for accuracy, transparency, and human oversight. Complementary legislative amendments have been introduced to bring AI software and other less tangible applications within scope.
In the US, legal principles vary from one state to another, and there is no dedicated AI liability law at the federal level. Typically, AI-linked liability cases are channeled through conventional negligence and consumer-protection law. For instance, in self-driving-car crash cases, US courts consider whether the manufacturer misled customers about the technology’s capabilities or omitted reasonable safety features.

Possible Solutions for Practical Situations
One practical step involves implementing strategic governance and supervisory frameworks within companies. These would ensure that the legal, technical, and ethical aspects of a product are thoroughly reviewed before launch.
Contractual arrangements between the organizations involved in a robot’s development, outlining each party’s actual responsibilities, are potentially helpful. These arrangements can also go a long way toward simplifying any subsequent legal proceedings. Insurance for high-risk AI uses, such as surgical robots or large-scale recommendation engines, may also help clarify compensation-related questions.
Conclusion
Until recently, artificial intelligence mostly meant generic, all-purpose systems that behaved the same way with every user, a limitation many have decried. Newer AI agents, however, are undergoing a dramatic change, personalizing their behavior in ways that could potentially cross ethical and safety limits.
Robots can’t be arrested or meaningfully accused of wrongdoing. But improved legal provisions, contractual arrangements that allocate liability, and more specific insurance policies can help better attach blame (and compensation) to a robot’s choices.