Autonomous Cars: Safety, Laws & Drivers
The future is here: autonomous vehicles are increasingly available on the market.
But the idea of self-driving isn't limited to cars. From autonomous ships to autonomous trucks, the technology is advancing at an unprecedented pace. With innovation, however, comes responsibility and a whole new set of challenges to navigate. These challenges concern not only the safety and reliability of the technology but also the legislative framework governing its use and, for cars in particular, the role of human drivers in this new vehicular landscape.
Levels of Automation in Autonomous Vehicles
To fully appreciate the intricacies of safety, legislation, and the role of drivers in autonomous vehicles, it's important to understand the varying degrees of vehicular automation. These degrees are typically classified into six levels, from 0 to 5.
Level 0: No Automation
At this level, the human driver is in total control of the vehicle: steering, braking, accelerating, and monitoring the environment. These are traditional vehicles with no automated driving features. The journey is entirely in the driver's hands, from start to finish.
Level 1: Driver Assistance
Vehicles at this level are equipped with a single automated feature, such as cruise control, designed to assist the driver, who must still handle all other driving tasks. For example, the car might adjust its speed to maintain a safe distance from the vehicle ahead, but the driver remains responsible for steering and every other aspect of driving.
Level 2: Partial Automation
These vehicles combine at least two automated features that work together, such as adaptive cruise control and lane-keeping. This relieves the driver of controlling those functions, but the driver must remain attentive to the driving environment and be ready to take full control at any time.
Level 3: Conditional Automation
At this level, the vehicle can handle safety-critical functions under certain conditions, but the driver must be prepared to retake control when the system reaches its limits. The handover period is crucial: the driver must be able to respond in a timely manner when the vehicle requests it.
Level 4: High Automation
Vehicles with high automation can perform all driving tasks within defined conditions, even if a human driver does not respond to a request to intervene. However, this functionality may not operate in all environments. For example, a Level 4 vehicle might still require a human driver to take over in severe weather or off-road scenarios.
Level 5: Full Automation
This is the zenith of vehicular automation. A Level 5 vehicle can perform all safety-critical driving tasks for an entire trip, whether or not a driver is present. These vehicles can operate under any environmental conditions and perform any driving task a human could.
Understanding these levels of automation is key to shaping the laws and safety standards around autonomous vehicles. It also helps determine the role of drivers, which changes significantly as we move up the automation ladder. With each advancing level, the need for human intervention diminishes, paving the way for fully autonomous driving and a future where the line between driver and passenger becomes increasingly blurred.
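To make the taxonomy concrete, here is a minimal Python sketch of the six levels. The enum and helper names are illustrative inventions, not part of any standard library or of the SAE specification itself:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The six SAE driving-automation levels, 0 through 5."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def human_fallback_required(level: AutomationLevel) -> bool:
    """True when a human must be ready to take over driving.

    Levels 0-2: the human supervises at all times.
    Level 3: the human is the fallback when the system hits its limits.
    Levels 4-5: the system handles its own fallback.
    """
    return level <= AutomationLevel.CONDITIONAL_AUTOMATION

def restricted_operating_domain(level: AutomationLevel) -> bool:
    """True when the system only works in limited conditions;
    only Level 5 is meant to drive anywhere a human could."""
    return level < AutomationLevel.FULL_AUTOMATION

for level in AutomationLevel:
    print(f"Level {level.value} ({level.name}): "
          f"human fallback required={human_fallback_required(level)}, "
          f"restricted domain={restricted_operating_domain(level)}")
```

Encoding the levels as an ordered enum mirrors the ladder described above: each safety or legal question about a given vehicle reduces to a simple threshold comparison on its level.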
Real-Life Implications of Autonomous Driving: Tesla’s Autopilot & Full Self-Driving
In the ongoing discourse about autonomous vehicles, it's crucial to understand the real-world implications of the technology and its risks. A chilling case exemplifies this: in March 2023, a Tesla Model Y struck a 17-year-old boy on a North Carolina highway while reportedly operating in Autopilot mode.
The incident, which occurred when the car failed to slow down for a stopped school bus, left the teenager with severe injuries, including a fractured neck and a broken leg. The boy's great-aunt, Dorothy Lynch, expressed grave concern about the repercussions of such technology, asserting that it could have killed a smaller child.
This accident is not an isolated one. The Washington Post reports a disturbing increase in crashes involving Teslas in Autopilot mode, with 736 such U.S. crashes recorded since 2019, far more than previously reported. The data also shows alarming growth in fatalities and serious injuries linked to Autopilot: at least 17 fatal incidents have been definitively tied to the technology, along with five serious injuries.
While Tesla's chief executive, Elon Musk, firmly believes that vehicles operating in Autopilot mode are safer than those driven solely by humans, incidents like the North Carolina accident stand in stark contrast to that claim. The data not only shows distinct patterns in fatal accidents involving Tesla's vehicles but also suggests how decisions by Musk, such as expanding the availability of these features and removing radar sensors, might be contributing to the rise in incidents.
However, it's important to note that a reported crash involving driver assistance doesn't imply the technology was the cause. As Veronica Morales, a spokeswoman for the National Highway Traffic Safety Administration (NHTSA), points out, the human driver must always be in control and fully engaged in the driving task. Nevertheless, the surge in Tesla crashes is troubling.
The expanded rollout of Tesla's Full Self-Driving, which brings driver assistance to city and residential streets, is a likely cause of the increased accident rate. To be fair, the total number of crashes involving automation technology remains minuscule compared with overall road incidents. But the fact that Tesla accounts for the vast majority of reported automation-related crashes underscores the risks of aggressive experimentation with automation.
Tesla's aggressive expansion of Full Self-Driving, from 12,000 users to nearly 400,000 in just over a year, has coincided with a significant increase in crashes: almost two-thirds of all reported Tesla driver-assistance crashes occurred in the past year. While the exact reasons behind this correlation need further investigation, experts are concerned by the sheer number of Tesla incidents in the data.
The teenager's case, the devastating result of automation complacency, serves as a stern warning about the potential dangers of autonomous driving technology. As we continue to embrace this groundbreaking technology, it is vital to balance ambition with safety, stringent regulation, and comprehensive user education. Critics like Lynch are already calling for a ban on automated driving until safety can be ensured for all road users.
How Autonomous Cars Affect Driving Laws
With the rise of autonomous cars, the legal framework governing traffic and driving must be reevaluated to accommodate these advancements, because autonomous vehicles represent a paradigm shift in mobility: control passes from human hands to artificial intelligence.
Traditional traffic laws and regulations apply to autonomous vehicles just as they do to conventional ones. Nonetheless, the "driverless" concept poses profound legal challenges, particularly around liability and responsibility.
In a conventional accident, where a human driver is in control, establishing fault and the subsequent legal process are fairly straightforward: responsibility lies with the driver who breached a traffic law, such as running a red light, or committed a human error, such as distracted driving.
However, autonomous vehicles complicate this framework. Who should be held responsible if an autonomous vehicle causes an accident: the human occupant, the vehicle manufacturer, the software developer who designed the AI algorithm, or the company operating the autonomous system? These questions have opened a fresh debate in traffic law.
For instance, the role of a human operator, or "fallback-ready user", becomes blurred in a fully autonomous vehicle. If the car is fully self-driving, with no need for human intervention, it becomes difficult to hold the human occupant responsible for the vehicle's actions; the individual may not even be in a position to intervene if the car makes a wrong decision.
On the other hand, the role of manufacturers and software developers in accidents involving autonomous cars is another crucial area of examination. If an accident is caused by a malfunction or error in the self-driving system, responsibility may lie with the companies that developed and implemented the software or hardware. Product liability laws may then come into play, holding companies responsible for faults in their products that lead to accidents.
Another key issue revolves around data. Autonomous vehicles are typically equipped with various sensors and cameras to help navigate the road. In the event of an accident, data from these sources could be crucial in determining what went wrong and who or what is at fault.
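As a rough illustration of why that data matters, the sketch below uses a hypothetical record format, not any manufacturer's actual logging API, to show how timestamped sensor events could be filtered into the window leading up to a crash:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class SensorEvent:
    """One timestamped reading relevant to post-incident analysis."""
    timestamp: datetime
    source: str   # e.g. "front_radar", "camera_1", "autopilot_state"
    reading: str  # human-readable summary of the raw value

def events_before(log: list[SensorEvent],
                  incident: datetime,
                  window: timedelta = timedelta(seconds=30)) -> list[SensorEvent]:
    """Return the events recorded in the window leading up to the
    incident, oldest first, so a timeline can be reconstructed."""
    return sorted(
        (e for e in log if incident - window <= e.timestamp <= incident),
        key=lambda e: e.timestamp,
    )
```

In a liability dispute, a timeline like this could help show whether the automation or the human was in control in the seconds before impact.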
As autonomous vehicles continue to evolve and become more prevalent, there's a pressing need for regulatory frameworks to evolve alongside them. Legal bodies worldwide are now grappling with these complexities, aiming to devise laws that can ensure safety while fostering innovation.
The Future of Autonomous Driving
The dawn of autonomous vehicles is not just transforming our roads but also driving the evolution of our legal systems. As we continue to embrace this new era of mobility, it becomes crucial to balance the exciting potential of autonomous vehicles with the intricate legal challenges they present.
In this rapidly evolving landscape, one thing is clear: while autonomous vehicles represent a thrilling leap forward in technology, they also call for rigorous scrutiny, comprehensive legislation, and a reconsideration of the role of the human driver. As we navigate this new terrain, the goal must be to ensure that the drive towards innovation doesn't compromise safety, accountability, and the welfare of road users.