IN THE future, when a car deliberately cuts you off in traffic, take down the licence plate number and phone a lawyer. Chances are, if it is driven by an angry robot, you could be rolling in cash.
A research paper comparing self-driving vehicles, such as Google’s driverless car, with smart weapons has found that legal roadblocks surrounding the risk of a crash, rather than the technology itself, are more likely to halt robotic car roll-outs.
The paper, presented overnight at the WeRobot conference in the US, compares the legal and ethical problems facing robotic cars with those confronting automated lethal weapons systems. It says human drivers often resort to aggression when competing for the same bit of space on the road, and robotised cars will need to do the same.
Engineering the robotic car to take the safest option every time, without showing aggression, may not be the best thing for all road users, it says, and may even increase the risk of a crash in situations such as merging with freeway traffic, where a rush of blood becomes a factor.
However, making autonomous cars man up on the road also means that, instead of protecting life, an aggressively slanted robot car that gets it wrong increases the risk of death and injury.
“Calibrated aggressiveness ... closes certain paths and opens others, and stops the oscillation between this and that,” the paper says.
“Nonetheless, this is striking behaviour for an autonomous vehicle that is presumably programmed to engage in the obviously safest behaviour.
“The judgment — and the correct one — is that the safest behaviour actually includes some amount of aggression in driving.”
That aggression, it says, will be shaped by road rules that spell out who can occupy a given bit of road, right up to the point where the suggestion of aggression becomes a real threat “expressed by aggressive moves of the vehicle”.
The paper also says that robot-driven cars need to see the red mist so that human drivers do not take advantage of their more safety-oriented attitude towards fighting for space on the road.
“This is the fact of human drivers rapidly realising that the self-driving car, in the end, has to defer to the more aggressive human driver,” it says.
“As they realise this, they simply do what they want in relation to the self-driving car, secure in the knowledge that it has no ability to carry through on threats of aggression that match the human.
“The human driver, in the absence of a police officer willing to pull him or her over, can always marginally top the self-driving car in aggression.”
Even if it were allowed to express human levels of aggression, the paper suggests, the robot car “will always be one incremental step behind” a human driver.
“On the other hand, does anyone want the self-driving car aggressively sticking up for itself at high speed on the freeway?”
The problem for car-makers, the paper says, is that building aggressive self-driving cars increases the chance of causing fatalities if the cars misjudge the level of aggression of human drivers.
“One thing we are pretty certain of, however, is ... (that it) will have a notable effect on how much (money) you need to set aside just to cover your liability costs,” the paper says.
“It might push a significant (industry) actor to lobby to get out of some of these risks by getting special dedicated robot lanes, separate from (human-operated vehicle) traffic, not because it’s especially efficient (the self-driving car actually is safer and better among humans than humans), but because it is legally cheaper.
“It might also push smaller players out of the business, or ensure that they never start into the business from the beginning because of the liability costs and perhaps an inability to secure insurance at all.”