By Roni Raviv, CTO of GreenRoad

Imagine you're on the road to a business conference. You've put the car into cruise control and automatic steering mode, and you're engrossed in a business call. Suddenly your vehicle signals that it needs you to take control immediately: a sensor is not registering data. You are slow to react, automatic steering has disengaged, and you find your vehicle swerving into the median.

This particular scenario may be hypothetical, but it's inspired by real concerns. As we progress farther along the road to fully autonomous vehicles, automakers and regulators alike are pointing out that the impressive but limited semi-autonomous capabilities of level 1, 2 and 3 vehicles can lull drivers into a state of unawareness that slows their reaction time when human intervention is needed. Several automakers have vowed to skip level 3 autonomy altogether, recognizing that the dangers of driver inattention will only escalate until level 4 autonomy enables humans to take their hands off the wheel once and for all. But when it comes to the level 1 to 3 semi-autonomous vehicles already on the road, it's too late to turn back.

The aviation industry confronted many of the same challenges with automation, and responded by replacing some of the burdens of manually controlled flight with new responsibilities for pilots, namely understanding and monitoring automated systems. The idea of "situational awareness" emerged, defining the need for pilots to remain aware of the many layers of their tasks throughout a flight, including the health and status of the automated processes taking place. Ideally, this philosophy will soon carry over to enterprise fleets, and commercial drivers will begin to appreciate and practice situational awareness when operating semi-autonomous vehicles. However, we can't expect untrained consumers to do the same, at least not on a consistent basis. And full autonomy is still many years away from mass availability.
So it seems that, if we can't remove semi-autonomous features, we must at least find ways to make the handoff between vehicle and driver safer and more effective. There are a few ways vehicle engineers can approach this.

Using multi-sensor data, crowdsourcing and other information to make handoffs sooner

In much the same way that Waze crowdsources traffic data from drivers on the road, level 1 to 3 autonomous vehicles can use vehicle-to-vehicle communication or cloud-based apps to collect, share and act on information about road conditions and other obstacles. If a vehicle in autonomous mode knows there's an accident at the next exit or a slick stretch of road two miles ahead, it may choose to reroute around the obstacle. If rerouting isn't an option, it may simply alert the driver and perform the needed handoff earlier, giving the driver more time to react and engage. The default goal should be to alert drivers as soon as possible, even if it means occasionally delivering a false alert.

Engineers are also working to improve the accuracy of semi-autonomous vehicles' risk predictions by combining information from multiple sensors into more informed decisions. Typically, multi-sensor fusion is focused on improving the vehicle's ability to identify impediments on the road and respond. But vehicle makers may also be able to use sensor fusion to improve the handoff from automated driving system to human by alerting the driver as early as possible that a handoff is needed.

Expanding the use of sensor fusion to judge driver awareness and ability, not just external conditions

Sensor fusion can be used to gain context not only on the driver's surroundings, but on the driver's state of awareness. Inward-facing cameras or sensors may be able to use facial recognition technology or biometrics to infer the driver's cognitive state.
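To make the ideas above concrete, here is a minimal, hypothetical sketch of how shared road-condition reports and an inferred driver state might combine into an earlier-handoff policy. All names (RoadReport, DriverState, plan_handoff) and the scaling rule are illustrative assumptions, not any production system's logic.

```python
from dataclasses import dataclass

@dataclass
class RoadReport:
    """A crowdsourced or vehicle-to-vehicle report about a hazard ahead."""
    distance_miles: float   # how far ahead the hazard is
    reroutable: bool        # can the vehicle route around it?

@dataclass
class DriverState:
    """Attentiveness inferred from inward-facing sensors (0 = asleep, 1 = fully alert)."""
    attentiveness: float

def plan_handoff(report: RoadReport, driver: DriverState,
                 base_warning_miles: float = 1.0) -> str:
    """Decide how to handle an upcoming hazard while in autonomous mode.

    The warning distance grows as attentiveness drops, so a distracted
    driver is alerted sooner -- erring toward early (possibly false) alerts.
    """
    if report.reroutable:
        return "reroute"              # avoid the handoff entirely
    # Scale the alert threshold by driver state: less alert -> earlier alert.
    warning_miles = base_warning_miles * (2.0 - driver.attentiveness)
    if report.distance_miles <= warning_miles:
        return "alert_driver"         # begin the handoff now
    return "continue_monitoring"

# Example: slick road 1.5 miles ahead, no detour, distracted driver.
action = plan_handoff(RoadReport(distance_miles=1.5, reroutable=False),
                      DriverState(attentiveness=0.2))
# attentiveness 0.2 widens the warning distance to 1.8 miles -> "alert_driver"
```

A real system would fuse far richer signals, but the design choice it illustrates matches the argument above: bias the handoff toward happening early, and let the driver's measured state, not a fixed timer, set the threshold.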
If the driver is occupied with something else, the vehicle may activate a voice prompt or seat vibration to regain his or her attention, much as aircraft do when they sense they are flying too low or otherwise outside safe parameters. Alternatively, if the driver appears drowsy, the vehicle may limit its speed to reduce the reaction time needed in the event of a handoff. If the driver appears highly incapacitated (for example, under the influence of alcohol) the vehicle may reroute itself to reduce the chances of a handoff, or simply pull over if one becomes necessary. It may even refuse the trip altogether, requiring the driver to get a ride from a fully capable human driver.

Learning from each driver's past behavior and adjusting alert methods to what works for that individual

Autonomous vehicle features will depend on machine learning to continuously improve their driving behavior based on experience, knowledge of a particular area, and even knowledge of their human drivers' preferences. For instance, some drivers may be comfortable with the vehicle driving as fast as it safely can in foggy conditions, while others may prefer it to slow to their level of comfort in case intervention is needed. Semi-autonomous cars should also be designed to learn individual drivers' reaction times and adjust alert methods and thresholds accordingly. For example, if a vehicle notices its driver is particularly slow to respond when asked to take control, it may need to limit that driver to using semi-autonomous features only on clear days or for short stretches of time. It may also need to "ping" the driver more often to make sure he or she retains some level of alertness while semi-autonomous features are engaged.

Will drivers be annoyed if they have to drive manually more often and leverage semi-autonomous features less?
Absolutely, but engineers should remember what's at stake: bowing to consumer pressure and pushing autonomy further than it's ready to go will result not only in human casualties but also in reduced consumer trust in autonomous vehicles for years to come.

Those high stakes underscore the importance of getting semi-autonomy right, starting now. They may even be an argument for regulators to ban some semi-autonomous features thought to be higher risk, even those already in mass production. Would this be tantamount to trying to put toothpaste back in the tube? No doubt consumers would complain and automakers would have to take a few lumps, but it beats the unsettling alternative.

It's fair to say that driver inattention is more a problem of human error than machine deficiency. But isn't human error precisely what we're hoping to solve with autonomy and semi-autonomy? If we accept that humans are not reliable driving machines, we must engineer around human limitations, or we're simply creating technology for technology's sake. Skipping level 3 vehicles is a positive step, but what are we doing to make the handoff between semi-autonomous vehicles and drivers safer today? And can autonomy afford to proceed without addressing these concerns?