
Tesla's Autopilot Tricked by Split-Second 'Phantom' Images

New Research Shows How Some Autopilots Misinterpret Images, Projections
Inserting split-second images into video billboard advertisements can confuse some autopilot systems in vehicles. (Source: Ben-Gurion University of the Negev)

The McDonald’s video billboard advertisement features crispy bacon landing gently on a slice of cheese atop a beef patty. Then, for 500 milliseconds, a stop sign flashes.


It may sound like a subliminal ploy designed to steer someone away from a burger, but it’s actually new research from Israel’s Ben-Gurion University of the Negev into how some autopilot systems from Tesla and Mobileye, owned by Intel, can be tricked into reacting after seeing split-second images or projections.


The success of the experiments highlights yet another potential risk for drivers who rely on autopilot systems. Hackers have compromised digital billboard systems before, sometimes as a prank but also to demonstrate how vulnerable internet-connected infrastructure can be.

The researchers say the problem is not one of poor code implementation or a security issue, per se. Rather, the issues “reflect a fundamental flaw of models that detect objects that were not trained to distinguish between real and fake objects,” writes Ben Nassi, one of the researchers, in a preview on his website.

For an attacker, such a trick could be conducted remotely and leave no evidence at the scene of an accident, the researchers say. Their paper, “Phantom of the ADAS: Securing Advanced Driver Assistance Systems from Split-Second Phantom Attacks,” is due to be presented at the virtual ACM Conference on Computer and Communications Security on Monday. It was first reported by Wired.

Efforts to reach Tesla and Mobileye weren’t immediately successful.

Fooling Autopilots

The researchers focused on two advanced driver assistance systems, or ADAS: Tesla's HW Autopilot system, which is considered semi-autonomous, and Mobileye's 630 Pro, an external camera-based system that relies on computer-vision algorithms. A demonstration video shows how a Tesla Model X running the company's HW3 Autopilot reacts when a brief image is displayed.

The Tesla is shown at night on a two-lane road with the digital billboard on the left side of the road. Upon “seeing” the display, which flashes a stop sign, it slows to a stop just after the billboard. The experiment was conducted at a very low speed.

A Tesla comes to a stop after seeing a 500-millisecond image of a stop sign embedded in a McDonald's advertisement.

In the second demo, the researchers show how an even quicker phantom image, displayed for just 125 milliseconds, can fool Mobileye's 630 Pro system. The flashed image shows a speed limit of 90 kph, and the car, a 2017 Renault Captur, displays that speed limit to the driver.
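To put those durations in perspective, a phantom's on-screen lifetime can be expressed as a handful of video frames. The sketch below is a simple illustration of that arithmetic; the 30 Hz and 60 Hz refresh rates are assumptions for the example, not figures from the research.

```python
# Illustrative only: convert phantom durations into display frames.
# The refresh rates are assumed values, not details from the paper.

def frames_for_phantom(duration_ms: float, refresh_hz: float) -> float:
    """Number of frames a phantom shown for `duration_ms` occupies."""
    return duration_ms / 1000.0 * refresh_hz

for duration_ms in (500, 125):        # durations from the two demos
    for refresh_hz in (30, 60):       # assumed billboard refresh rates
        frames = frames_for_phantom(duration_ms, refresh_hz)
        print(f"{duration_ms} ms at {refresh_hz} Hz ≈ {frames:g} frames")
```

At an assumed 60 Hz, even the 125-millisecond phantom spans roughly seven frames, long enough for a frame-by-frame object detector to register it while a human glancing at the billboard notices nothing unusual.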

Although drivers are supposed to maintain awareness while letting assisted driving technology do most of the work, they are unlikely to notice the phantom images, let alone suspect that something is wrong.

In another test, the researchers used projectors to display phantom images and observed how the ADAS systems reacted. Tesla's system recognized a projection of a person wearing a tuxedo as a real person and automatically applied the brakes. It also registered a projection of a vehicle as a real vehicle, the researchers write.

In another projection experiment, a 90 kph speed limit sign was projected onto a tree's leafy branches. Mobileye's system recognized it as a sign, even though it is difficult for a human eye to make out. In another twist, the researchers used a small drone to flash the same speed limit onto a pillar as the vehicle passed.

Tesla and Mobileye’s system also interpreted projections of objects as real objects. (Source: Ben Gurion University)

Calling Ghostbusters

Not all autopilot systems are vulnerable to projections or flashed images. Wired spoke to Charlie Miller, who, along with Chris Valasek, remotely compromised a Jeep Cherokee in 2015 by exploiting a vulnerability in the Uconnect telematics system.

Miller tells Wired that Tesla’s autopilot largely relies on cameras and a bit of radar, while systems developed by Waymo, Uber and GM’s Cruise use laser-based lidar, which would not be susceptible to such attacks.

Solving the problem for the Mobileye and Tesla systems requires ensuring that the vehicles can judge the "authenticity" of a detected object. The researchers developed a deep-learning system called Ghostbusters that they say reduces successful attacks by between 81% and nearly 100%.

Ghostbusters uses five lightweight deep convolutional neural networks, the researchers write: four examine a detected object's reflected light, context, surface and depth, and a fifth combines their determinations to make the final call.

How the Ghostbusters system distinguishes real objects from phantom images (Source: Ben-Gurion University)
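As an illustration of that committee-of-experts layout, the sketch below wires four small convolutional experts (one each for reflected light, context, surface and depth) into a combiner that produces the final real-or-phantom verdict. It is a minimal PyTorch sketch of the design as described above, not the researchers' released code; the module names, layer sizes and 64x64 input crops are assumptions.

```python
# Minimal sketch of a committee-of-experts phantom detector, loosely
# following the description above. Layer sizes, the 64x64 crop size and
# module names are illustrative assumptions, not the released Ghostbusters code.
import torch
import torch.nn as nn

class Expert(nn.Module):
    """Lightweight CNN scoring one aspect of a detected-object crop."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(16, 1)  # per-aspect "realness" score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(self.features(x).flatten(1))

class PhantomCommittee(nn.Module):
    """Four aspect experts plus a fifth model that makes the final call."""
    def __init__(self):
        super().__init__()
        self.experts = nn.ModuleDict(
            {name: Expert() for name in ("light", "context", "surface", "depth")}
        )
        self.combiner = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, views: dict) -> torch.Tensor:
        # Each expert sees its own view of the object (e.g. the crop itself,
        # its surroundings, or a depth estimate) and emits a single score.
        scores = torch.cat([self.experts[n](views[n]) for n in self.experts], dim=1)
        return torch.sigmoid(self.combiner(scores))  # probability the object is real

# Usage with random stand-in crops: batch of 2, 3-channel 64x64 views per aspect.
views = {n: torch.randn(2, 3, 64, 64) for n in ("light", "context", "surface", "depth")}
print(PhantomCommittee()(views))
```

Because each expert is deliberately small, an ensemble like this can sit downstream of an existing object detector, which is consistent with the deployment claim quoted below.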

“The GhostBusters can be deployed on existing ADASs without the need for additional sensors and does not require any changes to be made to existing road infrastructure,” Nassi writes. The code is available on GitHub.


About the Author

Jeremy Kirk

Executive Editor, Security and Technology, ISMG

Kirk was executive editor for security and technology for Information Security Media Group. Reporting from Sydney, Australia, he created "The Ransomware Files" podcast, which tells the harrowing stories of IT pros who have fought back against ransomware.



