Automated Driving System Monitoring and Safety Validation Standards
No federal performance standard exists for driver monitoring in vehicles with partial driving automation (SAE Level 2), nor is there a mandatory pre-deployment safety validation framework for higher-level automated driving systems. Level 2 systems automate steering and speed control but require continuous human supervision, a monitoring task humans are cognitively poorly suited to sustain because of automation complacency and vigilance decrement. Without operational design domain (ODD) enforcement, these systems operate in conditions they were not designed for, and without validated driver monitoring, the human backup fails silently. Multiple NTSB recommendations to NHTSA on these issues remain classified "Open—Unacceptable Response."
Level 2 automation is deployed in millions of vehicles on U.S. roads, and NHTSA's Standing General Order crash reports already document hundreds of crashes involving vehicles with these automated features engaged. As deployment scales to more vehicles and more use cases, the absence of standards creates compounding risk. NHTSA crash-causation data attribute the critical pre-crash factor to the driver in approximately 94% of crashes, and the human-automation handoff problem at the core of Level 2 systems is well documented in human factors research but unaddressed by regulation.
Eye-tracking, head-pose detection, and cabin monitoring technologies exist and are already deployed by some manufacturers; GM's Super Cruise, for example, uses infrared-camera driver monitoring. However, there is no federal minimum standard for what constitutes adequate monitoring, no requirement to deploy it, and no requirement to prevent system operation outside the ODD. Tesla continues to rely on steering-wheel torque sensing, which the NTSB found inadequate in at least four fatal crash investigations (Williston, FL; Mountain View, CA; Culver City, CA; Delray Beach, FL). In each case, Autopilot permitted prolonged driver disengagement and operated in conditions outside its design domain. The 2018 Uber ATG fatal pedestrian strike in Tempe, AZ exposed the same gap from a different angle: no pre-deployment safety validation was required, and Uber's safety risk assessment processes were inadequate. NHTSA's New Car Assessment Program (NCAP) does not evaluate driver monitoring effectiveness, and NHTSA has taken a reactive posture overall, relying on voluntary safety self-assessments rather than pre-deployment validation.
A federal performance standard for driver engagement monitoring (building on the technology GM and others already deploy) would establish a regulatory floor. ODD enforcement requirements — preventing system activation in conditions outside its validated domain — would close the most dangerous gap. A pre-deployment safety validation framework for ADS would shift the paradigm from reactive crash investigation to proactive safety assurance.
A team could benchmark existing driver monitoring technologies (eye tracking, head pose, torque sensing) on engagement-detection accuracy using publicly available driving datasets; a minimal benchmarking sketch follows. Another approach: develop an ODD boundary detection system that fuses weather, lighting, and road geometry data to determine when conditions fall outside a system's validated domain (see the second sketch below). Relevant skills: computer vision, human factors, automotive systems engineering.
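As a starting point for the benchmarking idea, here is a minimal sketch in Python. The per-frame labels, the 10 Hz frame rate, and the toy traces are illustrative assumptions, not any vendor's real output format; a real evaluation would use annotated driving datasets and each monitor's actual signal stream.

```python
# Minimal sketch of a driver-monitoring benchmark: compare per-frame
# "driver disengaged" decisions from different monitors against ground
# truth, reporting frame-level accuracy and alert latency.
# All field names, frame rates, and toy data are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class MonitorTrace:
    name: str
    flags: list[bool]  # per-frame disengagement decisions from one monitor


def benchmark(truth: list[bool], trace: MonitorTrace, fps: float = 10.0) -> dict:
    """Frame-level accuracy plus seconds from disengagement onset to first alert."""
    pairs = list(zip(truth, trace.flags))
    accuracy = sum(t == f for t, f in pairs) / len(pairs)
    latency = None
    if True in truth:
        onset = truth.index(True)
        for i in range(onset, len(truth)):
            if trace.flags[i]:
                latency = (i - onset) / fps
                break
    return {"monitor": trace.name, "accuracy": accuracy, "latency_s": latency}


# Toy episode: the driver disengages at frame 4 of a 12-frame clip.
truth = [False] * 4 + [True] * 8
eye_tracker = MonitorTrace("eye_tracking", [False] * 5 + [True] * 7)
torque = MonitorTrace("torque_sensing", [False] * 10 + [True] * 2)

for trace in (eye_tracker, torque):
    print(benchmark(trace=trace, truth=truth))
# Here eye tracking flags ~0.1 s after onset; torque sensing ~0.6 s,
# with lower frame-level accuracy, mirroring the NTSB's core finding.
```

Latency matters as much as frame-level accuracy here: a monitor that flags disengagement seconds late cannot support a safe handover, so a benchmark should report both.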
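For the ODD direction, a rule-based gate is a plausible first cut. The limits, field names, and thresholds below are assumptions for the sketch; in practice they would come from the envelope a manufacturer actually validated, and the gate would run continuously, both before activation and during operation.

```python
# Illustrative rule-based ODD gate: fuse weather, lighting, and road
# geometry inputs and refuse activation outside the validated envelope.
# All limits and field names are assumptions for this sketch.
from dataclasses import dataclass, field


@dataclass
class OddSpec:
    allowed_road_types: set[str] = field(default_factory=lambda: {"divided_highway"})
    max_rain_mm_per_hr: float = 4.0      # light rain only
    min_illuminance_lux: float = 400.0   # daylight / well-lit roads
    max_curvature_per_m: float = 0.01    # minimum ~100 m curve radius


@dataclass
class Conditions:
    road_type: str
    rain_mm_per_hr: float
    illuminance_lux: float
    curvature_per_m: float


def check_odd(spec: OddSpec, c: Conditions) -> tuple[bool, list[str]]:
    """Return (within_odd, violations); block activation or begin a
    handover whenever within_odd is False."""
    violations = []
    if c.road_type not in spec.allowed_road_types:
        violations.append(f"road type {c.road_type!r} outside ODD")
    if c.rain_mm_per_hr > spec.max_rain_mm_per_hr:
        violations.append("precipitation above ODD limit")
    if c.illuminance_lux < spec.min_illuminance_lux:
        violations.append("ambient light below ODD limit")
    if c.curvature_per_m > spec.max_curvature_per_m:
        violations.append("road curvature above ODD limit")
    return (not violations, violations)


# A divided highway at night: geometry is fine, lighting is not.
ok, why = check_odd(OddSpec(), Conditions("divided_highway", 0.0, 50.0, 0.002))
print(ok, why)  # False ['ambient light below ODD limit']
```

Returning explicit violation reasons makes the gate auditable, which is exactly what a pre-deployment validation framework would need to inspect.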
- NTSB Automated Vehicles Investigative Outcomes: https://www.ntsb.gov/Advocacy/SafetyTopics/Pages/ADS.aspx
- NTSB Tesla Investigation HWY16FH018: https://www.ntsb.gov/investigations/Pages/HWY16FH018.aspx
- NTSB Uber Investigation HWY18MH010: https://www.ntsb.gov/investigations/Pages/HWY18MH010.aspx
- NTSB Comments to NHTSA on ADS Framework: https://www.ntsb.gov/Advocacy/safety-topics/Documents/2021-Comments-to-NHTSA-Framework-for-ADS-Safety-ANPRM.pdf
- NTSB Recommendations H-19-47, H-19-48, H-17-38, H-17-39: all "Open—Unacceptable Response" from NHTSA.
NTSB Automated Vehicles Investigative Outcomes, Tesla/Uber investigations, NTSB comments to NHTSA; https://www.ntsb.gov/Advocacy/SafetyTopics/Pages/ADS.aspx; accessed 2026-02-19