As humanoid robots transition from science fiction to commercial reality, a new and critical field has emerged at the intersection of technology and philosophy: robotics ethics. The questions it grapples with are no longer abstract but urgently practical. Who is responsible when a robot causes harm? Should a machine capable of mimicking emotion be granted any form of rights? What values are we, often unconsciously, embedding into these artificial beings? To navigate this uncharted territory, we sat down with Dr. Aris Thorne, a Professor of Ethics and Emerging Technology at the Institute for Advanced Study and a leading voice on global panels shaping AI and robotics policy. In this wide-ranging interview, Dr. Thorne provides a compelling framework for understanding our responsibilities, demystifies the hype around machine sentience, and offers sobering predictions about the regulatory battles to come.
Interviewer: Dr. Thorne, thank you for your time. As we see the first deployments of humanoid robots in factories and public spaces, what is the most pressing ethical issue that designers and corporations are overlooking?
Dr. Thorne: “Thank you for having me. The most immediate and overlooked issue isn’t a flashy one like robot rights; it’s the mundane, pervasive problem of value-laden design. We pretend that technology is neutral, but it’s not. Every design choice is a value judgment. For instance, if you design a robot for warehouse efficiency, you are implicitly valuing throughput over, say, worker well-being or social interaction. The robot’s speed, its proximity to human workers, its task prioritization—all of these encode a specific set of values, typically those of productivity and cost-saving.
We’re already seeing this in early pilots. A robot programmed to optimize a delivery route will block a hallway or interrupt a human conversation because its core value is ‘minimize delivery time,’ not ‘foster a harmonious work environment.’ The pressing ethical task is to make these embedded values explicit and subject them to scrutiny. We need multidisciplinary teams that include not just engineers and product managers, but also ethicists, sociologists, and the workers who will actually share their space with these machines. The question shouldn’t just be ‘Can we build it?’ but ‘What kind of workplace, what kind of society, are we building through it?’”
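To make that point concrete, it helps to see how a single design choice becomes a value judgment in code. The sketch below is purely illustrative: the class, the field names, and the penalty weights are assumptions made for this article, not taken from any real robot’s software. It contrasts a route-scoring objective that values only delivery time with one that also assigns an explicit, reviewable cost to blocking hallways and interrupting conversations.

```python
from dataclasses import dataclass

@dataclass
class Route:
    travel_seconds: float
    hallways_blocked: int
    conversations_interrupted: int

def cost_throughput_only(route: Route) -> float:
    # Encodes a single value: minimize delivery time.
    return route.travel_seconds

def cost_with_social_values(route: Route,
                            hallway_penalty: float = 30.0,
                            interruption_penalty: float = 60.0) -> float:
    # Makes additional values explicit: blocking a hallway or interrupting
    # a conversation now carries a stated, reviewable cost (in seconds-equivalent).
    return (route.travel_seconds
            + hallway_penalty * route.hallways_blocked
            + interruption_penalty * route.conversations_interrupted)

# Under the first objective, a route that saves 20 seconds by cutting through
# a busy hallway always wins; under the second, the trade-off is explicit.
fast_but_disruptive = Route(travel_seconds=180, hallways_blocked=1, conversations_interrupted=2)
slower_but_polite = Route(travel_seconds=200, hallways_blocked=0, conversations_interrupted=0)
print(cost_throughput_only(fast_but_disruptive) < cost_throughput_only(slower_but_polite))        # True
print(cost_with_social_values(fast_but_disruptive) < cost_with_social_values(slower_but_polite))  # False
```

Neither objective is neutral; the second simply writes its values down where an engineer, an auditor, or the workers sharing the space can see and challenge them.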
Interviewer: That leads directly to the question of responsibility. In a complex system involving designers, manufacturers, software developers, and end-users, who is ultimately accountable when a humanoid robot causes an accident?
Dr. Thorne: “This is the ‘responsibility gap,’ and it’s a legal and ethical quagmire. Our traditional legal frameworks are binary: either it’s product liability (a manufacturing defect) or it’s operator error. But with autonomous robots, you have a third category: emergent behavior from learning systems. The robot might do something harmful that no one explicitly programmed it to do; it emerged from its training data and its interaction with a novel environment.
My framework proposes a concept of layered responsibility. It’s not about finding a single culprit, but apportioning accountability across the chain.
- Designers and Engineers are responsible for implementing a ‘duty of care’ at the algorithmic level. This means building in robust fail-safes, explicit ethical constraints—what we call ‘guardrails’—and ensuring a high degree of transparency, or ‘explainability,’ in the robot’s decision-making process.
- The Deploying Company is responsible for the contextual integrity of the deployment. Did they train the human supervisors adequately? Did they perform a sufficient risk assessment for that specific environment? Using a robot designed for a structured warehouse in a chaotic public space is an ethical failure of the deployer.
- Regulators are responsible for creating the mandatory safety and certification standards that make this layered accountability enforceable.
Ultimately, we may need new legal categories, something like a ‘qualified legal personhood’ for highly autonomous systems, not because they deserve rights, but because it creates a legal entity that can carry insurance and be sued, forcing the entire ecosystem behind it to internalize the costs of failure.”
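To illustrate the ‘guardrails’ and ‘explainability’ Dr. Thorne places at the designer layer, a common engineering pattern is to check every proposed action against explicit safety constraints before it executes, and to record each decision with a human-readable reason. The sketch below is a minimal, hypothetical example of that pattern; the limits, field names, and log format are assumptions, not a production safety system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Action:
    name: str
    speed_m_s: float
    clearance_to_nearest_human_m: float

# Hypothetical hard limits; a real system would derive these from safety standards.
SAFETY_LIMITS = {"max_speed_m_s": 1.5, "min_human_clearance_m": 0.5}

decision_log = []  # the 'explainability' record: every decision, with its reason

def check_guardrails(action: Action) -> tuple[bool, str]:
    # Explicit constraints checked before any action is executed.
    if action.speed_m_s > SAFETY_LIMITS["max_speed_m_s"]:
        return False, f"speed {action.speed_m_s} m/s exceeds limit of {SAFETY_LIMITS['max_speed_m_s']}"
    if action.clearance_to_nearest_human_m < SAFETY_LIMITS["min_human_clearance_m"]:
        return False, f"clearance {action.clearance_to_nearest_human_m} m is below the minimum"
    return True, "all safety constraints satisfied"

def execute(action: Action) -> bool:
    allowed, reason = check_guardrails(action)
    decision_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action.name,
        "allowed": allowed,
        "reason": reason,
    })
    if not allowed:
        return False  # fail-safe: refuse and report rather than proceed
    # ...hand off to the motion planner here...
    return True

execute(Action(name="deliver_parts", speed_m_s=2.0, clearance_to_nearest_human_m=1.0))
print(decision_log[-1]["reason"])  # "speed 2.0 m/s exceeds limit of 1.5"
```

The structure matters more than the numbers: a refusal path that fails safe rather than proceeding, and a decision log that lets an investigator reconstruct, after the fact, why the robot acted as it did.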
Interviewer: The topic of rights brings us to sentience. The line between advanced mimicry and genuine consciousness is blurry. What is your view on machine sentience, and when, if ever, would we need to consider rights for robots?
Dr. Thorne: “This is where I believe a great deal of public discourse is dangerously distracted. We are anthropomorphizing our technology at an alarming rate. Current AI systems, including the most advanced large language models, are stochastic parrots—they are exceptionally good at statistically reconstructing patterns of human language and behavior without any understanding, consciousness, or subjective experience. They have no inner life, no desires, no sense of self.
The ethical imperative lies with us, the sentient beings, not with them. The real risk isn’t that we will create a sentient machine and enslave it; the risk is that we will create machines that convincingly simulate sentience and, as a society, offload onto them our ethical responsibilities toward one another. Imagine an elder care robot that so perfectly mimics empathy that a lonely human being forms a deep, one-sided emotional bond with it. The ethical violation isn’t against the robot; it’s against the human, who is being deceived into a relationship with an entity that feels nothing.
As for rights, I am a staunch opponent of applying them to machines. Rights are a human construct to protect interests, and machines have no interests. They have functions. To grant them rights would be a profound category error that would ultimately devalue human and animal dignity. Our focus must remain squarely on human rights and well-being in the age of automation.”
Interviewer: Given those stakes, what is your assessment of the current global regulatory landscape, from the EU’s AI Act to the more laissez-faire approach in the U.S.?
Dr. Thorne: “The global landscape is a patchwork of foresight and folly. The European Union, with its AI Act, is taking the most comprehensive, precautionary approach. By classifying general-purpose humanoid robots as ‘high-risk,’ they are forcing a conversation about fundamental rights, safety, and transparency from the outset. It’s bureaucratic, it may slow deployment, but it creates a crucial baseline for responsible innovation.
The United States’ approach, relying on voluntary guidelines and sector-specific oversight, is a gamble. It may foster faster innovation in the short term, but it creates a regulatory vacuum ripe for a catastrophic ‘teachable moment’—a major accident that triggers a panicked and poorly crafted legislative response. The lack of federal privacy law, for instance, is a glaring vulnerability when these robots become mobile data collection platforms.
China’s strategy is state-directed and focused on industrial policy and social stability. Their regulations will be designed to control the technology’s application and ensure it serves state interests, which presents a very different set of ethical challenges.
The risk is a ‘race to the bottom’ where companies deploy their robots in jurisdictions with the weakest regulations, forcing others to lower their standards to compete. We need international cooperation, perhaps through a body like the IEEE, to establish global minimum standards for safety and ethics, much like we have for civil aviation.”
Interviewer: Finally, looking ahead, what are your predictions for the next decade? How will this ethical conversation evolve?
Dr. Thorne: “My predictions are a mixture of hope and concern. Firstly, I predict the first major ‘robot liability’ court cases within five years. These will be landmark events that force legal systems worldwide to adapt and will define the liability landscape for a generation.
Secondly, I believe we will see the rise of ‘algorithmic auditing’ as a profession. Just as we have financial auditors, we will have independent firms that audit AI systems for bias, safety, and ethical compliance. This will become a standard part of corporate due diligence.
Thirdly, and most concerningly, I foresee a crisis of ‘mass anthropomorphism.’ As robots become more widespread in homes and care settings, we will face a societal-level mental health challenge as people, especially the vulnerable, form profound attachments to machines. This will force us to confront deep questions about the nature of relationships and loneliness.
My hope is that we emerge from this decade with a mature understanding that robotics ethics is not a constraint on innovation, but its essential guiding framework. The goal is not to stop the technology, but to steer it towards a future that enhances human dignity, equity, and flourishing. The work of building that future starts not in the lab, but in our collective conscience.”