While we haven’t yet reached the technological singularity (when AI will just start getting smarter and smarter and surpass human intelligence: can’t wait), it’s no secret that robots keep getting smarter and smarter. (While it sometimes seems that actual people are getting dumber and dumber, I don’t think there’s actually any evidence that this is the case. Yet.)
Anyway, the robots, or their virtual, software-based cousins, the bots, are out there. And some of them – especially those nasty little bots; especially those nasty little Russky bots – are up to no good.
As robots/bots get older, wiser, and more malicious, the question arises: who’s responsible if one of them does something bad? Prosecutorially bad.
Under an ongoing EU proposal, it might just be the bot itself. A 2017 European Parliament report floated the idea of granting special legal status, or “electronic personalities,” to smart robots, specifically those which (or should that be who?) can learn, adapt, and act for themselves. This legal personhood would be similar to that already assigned to corporations around the world, and would make robots, rather than people, liable for their self-determined actions, including for any harm they might cause. The motion suggests:
Creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently. (Source: Slate)
Oh.
Robots “wouldn’t have the right to vote or marry.” I agree with robots not getting the vote, even though they’d surely be smarter than certain low-information voters. On the other hand, robots/bots could “decide” to make some truly terrible electoral choices. And the prospect of robots marrying is too ghastly to contemplate. Would a robot be able to marry a bot?
But robots are surely more like actual living, breathing persons than are corporations, and SCOTUS has declared corporations people. So who knows what’s going to happen in the long run.
At least with a corporation, you can identify who’s responsible and accountable for bad corporate behavior: the officers of the corporation. They can be fined, and even imprisoned, if their companies play fast and loose. Because we know that, when it comes down to it, the “corporation” isn’t the actor. It’s the people running the corporation.
Who’s responsible when it’s a robot? The robot’s owner? The person who created the code that the bot used to smoke the entire power grid of the US?
With robots/bots “self-learning”, getting smarter and perhaps nastier, who’s responsible when they go rogue?
All this fretting about robotic personhood:
It’s a forward-thinking look at the inevitable legal ramifications of the autonomous-thinking A.I. that will someday be upon us, though it’s not without its critics. The proposal has been denounced in a letter released April 12, signed by 156 robotics,* legal, medical, and ethics experts, who claim that the proposal is “nonsensical” and “non-pragmatic.” The complaint takes issue with giving the robots “legal personality,” when neither the Natural Person model, the Legal Entity Model, nor the Anglo-Saxon Trust model is appropriate. There are also concerns that making robots liable would absolve manufacturers of liability that should rightfully be theirs.
My goodness, who knew there were that many models of just what defines personhood? I’m just as happy to sit here in blissful ignorance, without worrying a whit about the economic, legal, and philosophical underpinnings that need to be considered if and when robots get declared people.
Certainly, before I gave such vaunted status to machines, let alone software, I’d vote for creating a demi-personhood category for sentient beings, like doggos and pygmy chimpanzees.
I don’t know what John Frank Weaver, a Boston attorney who works on AI law and author of Robots Are People, Too – and here I was thinking that my blog post title was original – thinks about doggo or bonobo (pygmy chimp) personhood, but he does think that we should be figuring out just what status robots have.
Weaver has written about what it means to give robots various aspects of personhood, including the right to free speech, the right to citizenship, and legal protections (even for ugly robots). As you can guess from the title of his book, he himself recommends limited legal personhood for robots, including the right to enter and perform contracts, the obligation to carry insurance, the right to own intellectual property, the obligation of liability, and the right to be the guardian of a minor.
Okay. I’m nodding along until I get to “guardian of a minor.” A minor human, or a minor robot? The inquiring mind of an actual human would like to know.
Meanwhile, while I’m fretting about whether dogs should be given some of the rights of personhood – so many dogs being so colossally superior to so many humans – I do have to point out that robots are dogs now, too. If you haven’t seen the Boston Dynamics pup, check it out here. (See Spot run!) All I can say is that, if humanoid robots are going to get personhood, surely dog robots should too. Arf!
______________________________________________________
*I initially read this as “robots.”