“The highly intriguing theory – supported by extensive geological evidence, including the bacteriological analysis of deep-lying hydrocarbons – about the abiotic nature of oil and its practically infinite regeneration in the lower geological formations of the earth was presented some 25 years ago. These findings were quickly dismissed, and the theory itself largely ignored and forgotten. The same happened with the highly elaborate plans of Nikola Tesla to exploit a natural geo-electrical phenomenon for the wireless transfer of high energy for free. Why? Infinity eliminates the premium of deeper psychologisation, as it does not necessitate any emotional attachment – something abundantly residing in nature cannot efficiently mobilize our present societies…”
Following these lines from the seminal work of prof. Anis H. Bajrektarevic on Energy, Technology and Geopolitics, let us present an interesting take on e-cars and similar driverless technologies, and the legal implications that will mark our near future.
Self-driving cars react in a split second: quicker than even the most attentive driver. Self-driving cars don’t get tired, they don’t lose concentration or become aggressive; they’re not bothered by everyday problems and thoughts; they don’t get hungry or develop headaches. Self-driving cars don’t drink alcohol or drive under the influence of drugs. In short, human error, the number one cause of road traffic accidents, could be made a thing of the past in one fell swoop if manual driving were banned immediately. Is that right? It would be, had there not recently been reports of two deaths: one during the test drive of a self-driving car (Uber) and one while a semi-autonomous vehicle was driving on a motorway using its lane assist system (Tesla), both of which regrettably occurred in the USA in March 2018. In Tesla’s case it seems that the semi-autonomous driving assistant was switched off at the moment of the accident.
Around the globe, people die every day due to careless driving, with around 90% of all accidents caused by human error and just a small percentage by a technical fault in the vehicle. Yet despite human error, we have not banned driving on these grounds. Two fatal accidents involving autonomous vehicles under test have attracted the full glare of the media spotlight, and call into question the technical development of a rapidly progressing industry. Are self-driving cars now just hype, or a trend that cannot be contained, despite every additional human life lost to mistakes made by self-driving technology?
The legal side
For many, the thought that fully autonomous vehicles (self-driving cars without a driver) might exist in the future is rather unsettling. The two recent deaths in the USA involving (semi-)autonomous cars may, if anything, deepen that fear. From a legal perspective, it makes no difference whatsoever to the injured party whether the accident was caused by a careless human or by technology that functioned inadequately. The reason a line is nevertheless drawn between the two is probably that every human error represents a separate accident, whereas the failure or malfunction of technology cannot be seen as a one-off: rather, understandably and probably correctly, it is viewed as a system error or series error inherent in a certain technology available at a particular point in time.
From a legal angle, a technical defect generally also represents a design defect that affects the entire run of a particular vehicle range. Deaths caused by software malfunctions cause people to quickly lose trust in other vehicles equipped with the same faulty software. Conversely, if a drunk driver injures or kills another road user, it is not assumed that the majority of other drivers (or all of them) could potentially cause accidents due to the influence of alcohol.
The desirability side
The fundamental question for all technological developments is this: do people want self-driving cars?
When we talk of self-driving (or autonomous) vehicles, we mean machines guided by computers. On-board computers are common practice in aviation, without the pilot him- or herself flying the plane – and from a statistical point of view, airplanes are the safest mode of transport. Couldn’t cars become just as safe? A comparison between planes and cars, however, cannot be justified, due to the different user groups, the number of cars driven every day, and the constantly imminent risk of a collision with other road users, including pedestrians.
While driver assistance systems, such as lane assist, park assist or adaptive cruise control, can be found in many widespread models and are in principle permitted in Europe, current legislation in Europe, and also in Austria, only permits (semi-)autonomous vehicles to be used for test purposes. Additionally, in Austria these test drives can, inter alia, only take place on motorways or with minibuses in an urban environment following specially marked routes (cf. the test drives with minibuses in the towns of Salzburg and Velden). Test drives have been carried out on Austria’s roads under particular legal requirements for a little more than a year, and a person must be in the vehicle at all times. This person must be able to intervene immediately if an accident is imminent, to correct wrong steering by the computer or to bring the vehicle back under (human) control.
Indeed, under the legislation in the US states that do permit test drives, people still (currently) need to be inside the car (even before the two accidents mentioned above, California had announced a law that would have made it no longer necessary to have a person in the vehicle). As a result, three questions arise regarding the Uber accident which occurred during a test drive in the US state of Arizona, resulting in a fatal collision with a cyclist:
- Could the person who was inside the vehicle to control it for safety reasons have activated the emergency brake and averted the collision with the cyclist who suddenly crossed the road?
- Why did the sensors built into the car not recognize the cyclist in time?
- Why did the vehicle not stick to the legal speed limit?
Currently, driving systems are being tested in Europe and the USA. In the USA, testing can take place on national roads and, contrary to European legislation, also on urban streets. As long as we are still in the test phase, we cannot speak of technically proven, let alone officially approved, driving systems. The technical development of self-driving cars, however, has already made one thing clear: legal responsibility is shifting away from the driver and towards vehicle manufacturers and software developers.
Whether, and when, self-driving cars could become an everyday phenomenon is greatly dependent on certain (future) questions:
- Are we right to expect absolute safety from self-driving cars?
- What decisions should self-driving cars make in the event that one life can only be saved at the cost of another?
- How should this dilemma be resolved?
If artificial intelligence (AI) and self-learning systems were also included in the technology of self-driving cars, vehicles of this type might one day become “humanoid robots on four wheels”, but they could not be compared to a human being with particular notions of value and morality. Whereas every individual personally bears responsibility for their intuitive behavior in a specific accident situation, the limits of our legal system are laid bare when algorithms trained on huge quantities of data make decisions in advance for a subsequent accident situation: such decisions can no longer be wholly ascribed to a particular person or software developer once a self-driving car is involved. It will be our task as lawyers to offer legal support to legislators as they attempt to meet these challenges.
Dr. Andreas Eustacchio LL.M. (London LSE),
Vienna-based attorney-at-law; born in Zambia