The European Parliament Thinks Ahead

The technology of autonomous cars and other machines is off and running — sometimes, it seems, on its own, with insufficient regard for the potentially massive social and economic consequences.

Following the recent deaths of two people in accidents involving autonomous vehicles, concern over the safety of the new technology has heightened. The carmakers behind autonomous travel, which are investing billions of dollars worldwide in this wave of the future, acknowledge certain risks but say they are working hard to test and perfect the technology. When the cars are ready for market, they assert, they will be much safer than conventional vehicles driven by accident-prone humans.

Time will tell whether autonomous vehicles will ultimately be safe at any speed. In the meantime, there are other aspects of robotics that deserve more attention — for example, the current discussion at the European Parliament about whether robots should be accorded legal status as “electronic persons.”

The idea proposed by some MEPs is that giving legal status to autonomous robots would make it possible to hold the machines accountable for any damage they may cause.

While the MEPs should be given credit for thinking ahead about the role of robots in the near future, more credit should be given to the more than 150 experts in the related fields of robotics, artificial intelligence, law, medical science and ethics who pounced on the proposal as pernicious nonsense.

The experts said that such legislation could allow manufacturers, programmers and owners of robots to absolve themselves of responsibility for them. They could conceivably claim that “the car ran him over, not me!” Or “the drone launched those missiles, not us!”

The language employed in statements opposing “electronic persons” was notably mild. Take, for example, the statement of Nathalie Nevejans, an expert in the ethics of robotics at Artois University in France, who called the notion “as unhelpful as it is inappropriate.”

Nevejans further warned that “the legislator could progressively move towards the attribution of rights to the robot…utterly counterproductive to the extent that we develop them to serve us.”

Inappropriate and counterproductive, indeed! Obviously, this is a prescription for a society run amok (even more than is already the case).

In Europe and the United States, an ongoing debate already rages about assigning liability for damages when an autonomous vehicle is involved in an accident. Who’s to blame — the individual owner, or the company providing the technology?

“Most fully autonomous vehicles will not be owned by individuals, but by auto manufacturers such as General Motors, by technology companies such as Google and Apple, and by other service providers such as ride-sharing services,” says the Harvard Business Review.

But GM, for one, has been lobbying for loopholes that would exempt it from liability where a vehicle has not been properly maintained, e.g., if the tires were slightly underinflated or the oil had not been changed as frequently as recommended. Liability for the semi-autonomous systems that are hitting the road now will not be the same as for fully autonomous models.

Definitive answers to such perplexing questions will have to await determinations by insurers and court rulings in actual cases. But it’s rather unlikely that either will solve the problem by finding the car guilty.

It was not without good reason, then, that the author of the European Parliament proposal, parliamentarian Mady Delvaux, was not immediately available to comment on a letter against it signed by the experts.

It’s hard to imagine how anyone could seriously propose that robots be treated like people and be held legally responsible for their actions. The experts said the idea was born of science fiction and inspired by overly extravagant claims for artificial intelligence and robot autonomy.

It is as if they are saying that, if robots can sooner or later do everything that humans can do, including thinking and making decisions for themselves, then it is they who must bear the moral responsibility for those decisions. That is, if it looks like a moral decisor, and it walks like a moral decisor, and it talks like a moral decisor, then it must be a moral decisor…

But without a conscience and a fear of punishment (not to mention its own bank account or social security number), any application of civil or criminal justice to robots, no matter how sophisticated, can itself be only artificial and meaningless.

Intentionally or not, Delvaux’s proposal serves as a wry comment on the prospects of AI, a kind of parliamentary joke, especially in view of the fact that scientists have yet to succeed in programming robots to have a sense of humor. It could be that the 150 experts showed a lack of humor of their own by taking Delvaux’s proposal seriously.

In any case, the “electronic person” clause will presumably be deleted from the final draft of an initiative on artificial intelligence expected for presentation at the European Parliament at the end of April.

The robot tasked with writing and revising laws for the EP should see to that.
