The Basics of Autonomous Vehicles, Part II: Legal Challenges and Opportunities


The technology industry is well-known for its agility, often launching products that not only respond to the latest consumer trends but disrupt consumer consciousness and industries at large. And while legal service providers also innovate meaningfully, the legal system itself often lags behind innovation. This is because our legal system is built on the judicial principle of stare decisis, which is intended to provide predictability and consistency in the application of the law. With this predictability can come slower adaptation to fast-moving technological issues. Congressional action can also change the law by statute, but statutes are usually slow to pass and require additional rule-making by administrative agencies before they are fully fleshed out. These statutes and administrative rules are, in turn, often the subject of litigation requiring judicial interpretation and application. This lag is already becoming apparent in the autonomous vehicle (AV) industry.

Below is Part II of our AV series focusing on the legal challenges and opportunities facing the AV industry. Read Part I here.


Ethics

While we don't necessarily punish actors for acting "unethically" per se, the determination of whether an actor has acted within what society considers "ethical" will inform a fact-finder's decision-making. The safe operation of fully autonomous AVs will require the vehicles to make swift value judgments to avoid collisions or to choose the lesser of two harms. Humans make these types of judgments easily; consider, for example, the decision to swerve into a neighboring vehicle to avoid striking a pedestrian. But how should an AV behave when it must make such a decision, especially if every available outcome could cause harm?

Consider the classic ethical thought experiment involving a runaway trolley. Ahead of the trolley, there are five people tied up on the tracks and unable to move. The subject in this experiment is standing next to a lever that is capable of diverting the trolley from the main track onto a sidetrack, thereby saving the five people. However, on the sidetrack stands a worker who is completely unaware of what is occurring and who will be hit by the trolley if the subject pulls the lever. The subject is then asked whether it would be more ethical to do nothing and allow the trolley to kill the five people on the main track or pull the lever and divert the trolley onto the sidetrack, where it would kill the worker. The problem asks, in essence, whether it is better to take an action that will cause harm or to allow events to unfold without intervention, even if doing nothing results in greater harm.

The trolley problem is not merely hypothetical in the context of AVs; if a vehicle finds itself in a situation where any action could cause harm, it must be programmed to respond, and one of its options is simply doing nothing. Should the vehicle treat all human lives as equal? Should it prioritize the safety of its driver? Should it prefer vehicle-to-vehicle collisions over vehicle-to-pedestrian collisions? Or should the outcome not matter, so long as the vehicle is not programmed to intentionally cause harm? Such programming will require engineers, standards bodies, and regulatory agencies to develop algorithms capable of making these complex value judgments at or near human speed, and there is no guarantee that the outcome in a particular scenario will be judged to be the "correct" one, particularly as a matter of law.
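To make the programming challenge concrete, one can imagine the value judgments above encoded as an explicit, rule-based policy. The following minimal sketch is purely illustrative: the maneuver names, harm scores, and weighting scheme are hypothetical assumptions for the sake of the example, not any manufacturer's actual logic. The point is that someone must choose the weights, and that choice is precisely the ethical and legal question.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A hypothetical evasive option available to the AV."""
    name: str
    pedestrian_harm: int    # rough 0-10 severity estimate (illustrative only)
    occupant_harm: int
    other_vehicle_harm: int

def choose_maneuver(options, weights=(3, 1, 1)):
    """Pick the option minimizing weighted expected harm.

    The weights encode a value judgment: here, pedestrian harm counts
    three times as heavily as occupant or other-vehicle harm. Whether
    that is the "correct" weighting is exactly the open question.
    """
    w_ped, w_occ, w_veh = weights
    return min(
        options,
        key=lambda m: (w_ped * m.pedestrian_harm
                       + w_occ * m.occupant_harm
                       + w_veh * m.other_vehicle_harm),
    )

options = [
    Maneuver("brake_straight", pedestrian_harm=6, occupant_harm=1, other_vehicle_harm=0),
    Maneuver("swerve_left",    pedestrian_harm=0, occupant_harm=2, other_vehicle_harm=4),
    Maneuver("do_nothing",     pedestrian_harm=8, occupant_harm=0, other_vehicle_harm=0),
]
print(choose_maneuver(options).name)  # prints "swerve_left" under these weights
```

Note that with different weights (say, one that ignores pedestrian harm entirely), the same code selects "do_nothing" — the algorithm is trivial, but the parameterization is a policy decision with legal consequences.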


Tort Liability

One of the more extensive impacts AVs will have on the law likely will be in the tort realm. Traditionally, liability for collisions between non-AVs was primarily based in the law of negligence, which apportions liability according to fault. Under this scheme, every driver owes every other driver the duty to act as a reasonably prudent person would act under the same or similar circumstances. If a driver's conduct falls below that standard, the driver has breached that duty and will be liable for any damages caused by the breach.

The law of negligence assumes the existence of human beings who are capable of acting prudently to produce a particular outcome under any given set of circumstances. But how would the law of negligence apportion liability in collisions between AVs, where no human decision-making or action occurred? Could the driver of an AV truly be considered "at fault" for an accident if he or she had no control over the operation of the vehicle? And could the "reasonably prudent person" standard ever be applied to AVs themselves under the existing scheme? One framework would hold the manufacturer of the AV responsible when it is determined that the AV is at fault for the accident, particularly if it can be determined that the accident was directly attributable to functionality beyond the driver's control. Another would hold the driver of the AV responsible for all accidents under the theory that AV drivers assume the risk of using an AV. A third solution in a future where AVs become the dominant mode of transportation is a vast expansion of no-fault insurance schemes, thus avoiding the issue of liability altogether.

As AVs become more commonplace, products liability law may play a much larger role in determining tort liability than the law of negligence. While products liability occasionally comes into play when apportioning liability in accidents between non-AVs (e.g., when a blown tire is determined to be the cause of the accident rather than driver error), determinations of fault in the vast majority of non-AV collisions turn on questions of negligence.[1] Products liability law imposes liability on the manufacturers and sellers of defective products that cause harm to consumers. Such liability can be predicated on allegations that the product was improperly designed, that the product suffered a manufacturing defect, or that the product should have come with a label warning of its potential dangers. Most products liability claims proceed on a theory of strict liability, which does not require the plaintiff to prove negligence. Rather, the plaintiff must show that the product suffered a defect and that the defect caused the plaintiff's injury.

A key inquiry when applying the law of products liability to AV collisions is whether the harm caused by the functioning of the AV's artificial intelligence (AI) systems can truly be considered to be a defect—i.e., whether it is a "feature" or a "bug." For example, consider an AV equipped with an automatic braking system that fails to engage when the AV is approaching an intersection, causing it to strike another vehicle and injure that vehicle's driver. The plaintiff in that case may have a strong argument that his or her injuries were caused by a defect, as the automatic braking system did not engage when it should have.

But what about a situation in which the AV's AI systems cause harm even though they worked as intended? For example, referring back to the ethical dilemmas discussed earlier, consider an AV that is programmed to prefer striking vehicles to striking pedestrians. If that vehicle causes an accident in which a neighboring driver is seriously injured, the driver could pursue a claim under the less-common risk-utility theory of products liability, which imposes liability for design defects if the risks of the design outweigh the utility of the design. Alternatively, courts could employ a modified negligence standard by which to judge the reasonableness of the AV's performance in such cases. Under this hypothetical scheme, an AV system could be said to perform unreasonably if either (a) a human driver or (b) a comparable AV system could have avoided the accident under the same circumstances.[2]

Intellectual Property (IP)

Under § 101 of the Patent Act, patent protection is available for any new and useful process, machine, manufacture, or composition of matter.

Courts and the United States Patent and Trademark Office have struggled with the issue of software patent eligibility for decades, as software is considered in some contexts a patent-eligible process and in others a patent-ineligible abstract idea. For software to qualify as a patent-eligible process, the claims of the patent application must recite something "significantly more" than an abstract idea or include an "inventive concept" that moves the claims beyond the mere software implementation of tasks previously performed by humans. This nebulous standard has been applied inconsistently and presents a difficult hurdle for software patent applicants.

As AV adoption becomes more widespread, the AV industry likely will begin to establish standards to enable connectivity across various system platforms, similar to those already established in the communications industry. When a company owns a patent on an invention that must be used to comply with a certain industry standard (as set by various standards-setting organizations), the patent is known as a Standard-Essential Patent (SEP). Owners of SEPs must license their patents to other users on fair, reasonable, and nondiscriminatory (FRAND) terms, which can expose them to licensing disputes and severely limit their ability to obtain injunctive relief for patent infringement.


Regulation

At the state level, jurisdictions across the country have raced to develop and implement AV rules and regulations, many of which are focused on allowing the state's roadways to be used as a testing ground for AV technologies. As of February 2020, most states have taken steps, either through legislation or executive order, to permit and regulate the operation of AVs on their roadways (see Figure 1).


Each state that has regulated the operation of AVs has done so differently. For example, some states allow AV operation for testing purposes only, while others allow full deployment and commercial use. As of the date of publication, 12 states (Arizona, California, Florida, Georgia, Michigan, Nebraska, Nevada, North Carolina, Ohio, Tennessee, Texas, and Washington) allow testing or full deployment of AVs without a human operator.[3]

There has been comparatively little activity at the federal level. Two bills—the AV START Act in the Senate and the SELF DRIVE Act in the House—were introduced in 2017, but have made little substantive progress since then. In 2020, however, the U.S. Department of Transportation released the fourth version of its AV guidance document, titled "Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0." While the DOT guidance is voluntary, it outlines federal principles for the safe development and integration of AVs into the American transportation landscape.


[2] Bryant Smith, Automated Driving and Product Liability, 2017 MICH. ST. L. REV. 1, 6 (2017).


Authors: Phillip Goter and Joseph Herriges