Automated vehicles (AVs) appear to be the future, with more and more automatic features being added to modern vehicles. With self-driving cars seemingly on the horizon, experts and consumers alike are concerned about the security of these systems and how susceptible they are to cyber attackers.
What They Are and How They Work
An AV is a vehicle with a driving system capable of not only observing its surroundings, but digesting and understanding them, so that safe and straightforward decisions can be made to reach a selected location.
To achieve this, AVs face many difficult challenges, including adverse external conditions such as weather, obstacle avoidance and following traffic laws, among other problems. However, through a combination of artificial intelligence (AI), particularly machine learning (ML), and sophisticated hardware components, automated driving becomes a real possibility.
A variety of exteroceptive sensors are employed in order to gather accurate information from the environment to feed into AI systems.
Light detection and ranging systems (LIDARs), alongside cameras, are the primary sensory equipment. LIDARs function by firing pulses of light that reflect off environmental objects; these reflected signals are then analysed, which is particularly useful as it allows for the creation of a depth map (3D mapping). They have an accurate range of around 200 metres with a wide field of view, but occasional unexpected reflections can obscure the data collected. Cameras are incorporated in conjunction with this technology due to their ability to detect colour, assisting with the recognition of traffic lights and road signs - but this comes at the cost of data collection being compromised by weather conditions such as rain or fog.
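The depth mapping described above rests on simple time-of-flight arithmetic, which can be sketched as follows (illustrative values only):

```python
# Minimal sketch of LIDAR time-of-flight ranging. A pulse travels to the
# object and back, so the one-way distance is half the round-trip time
# multiplied by the speed of light.

C = 299_792_458  # speed of light in m/s

def distance_from_echo(round_trip_seconds: float) -> float:
    """Convert a pulse's round-trip time into an object distance in metres."""
    return C * round_trip_seconds / 2

# An echo arriving after ~1.33 microseconds corresponds to an object
# near the sensor's ~200 m maximum range.
print(round(distance_from_echo(1.33e-6)))  # ~199 m
```

Repeating this calculation for each pulse across the sensor's field of view is what produces the depth map.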
Additional sensors including ultrasound, radars and global navigation satellite systems (GNSSs) are also in use, but they are typically reserved for one specific task, such as parking assistance, or to validate data gathered by other sensors. The figure below details where exactly these sensors are incorporated on an AV:
AI and Machine Learning
When driving, the driver must be able to anticipate and appropriately respond to unexpected or even dangerous situations. To achieve this in AVs, AI software must be able to accurately perceive the data it is receiving and then interpret it before coming to a decision on how to act; this is achieved through three data-processing stages.
First is the perception system, which converts the raw sensory information it receives into data that can represent the environment surrounding the vehicle; this representation is adaptive and frequently updated. This system is built on scene understanding: identifying road markings, traffic signs and traffic lights, detecting other moving vehicles and agents, and even classifying sound events. Features associated with scene understanding are already implemented on cars on the road today, such as lane assistance. Alongside scene understanding is scene flow estimation, a process used to understand how a scene may evolve and to predict the motion of moving obstacles like other vehicles or pedestrians. Once these two processes have helped build a dynamic environment, scene representation is used to help the vehicle understand where it is relative to all other agents and obstacles that have been detected.
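As a rough illustration of how these pieces fit together, the sketch below tracks detected agents relative to the ego vehicle and applies a naive constant-velocity scene flow prediction. The types and values are hypothetical, not drawn from any real AV stack:

```python
# Hedged sketch of a scene representation: detections from scene
# understanding are held relative to the ego vehicle, and a simple
# constant-velocity model stands in for scene flow estimation.
from dataclasses import dataclass

@dataclass
class Agent:
    label: str        # e.g. "vehicle", "pedestrian"
    x: float          # metres ahead of the ego vehicle
    y: float          # metres to the left (negative = right)
    vx: float = 0.0   # estimated closing velocity along x, m/s

def predict_position(agent: Agent, dt: float) -> tuple[float, float]:
    """Naive scene flow: where will the agent be in dt seconds?"""
    return (agent.x + agent.vx * dt, agent.y)

scene = [Agent("vehicle", 30.0, 0.0, vx=-5.0), Agent("pedestrian", 12.0, 3.5)]
# The closing vehicle is predicted to be 25 m ahead one second from now.
print(predict_position(scene[0], 1.0))  # (25.0, 0.0)
```

Real perception systems fuse many sensors and use learned motion models, but the output is conceptually the same: a frequently updated map of agents and their predicted trajectories.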
Next is the planning system, which aims to complete all the calculations required to ensure the vehicle can act autonomously. This ranges from route planning, which involves trying to find the most efficient route based on the AV's current coordinates, to behavioural planning, which aims to determine the most desirable behaviour to enact in a specific circumstance (such as knowing to wait in the right lane when turning right at a roundabout). This system is one of the most complex features of AVs, as it takes fine motion planning to move safely and appropriately on the road whilst simultaneously ensuring progress is being made towards the final destination.
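At its simplest, the route-planning part of this system is a shortest-path search. The sketch below solves a toy road graph with Dijkstra's algorithm; the junction names and edge costs are invented, and real planners work on far richer map data, but the principle is the same:

```python
# Illustrative route planning as shortest-path search over a toy road
# graph, solved with Dijkstra's algorithm.
import heapq

def shortest_route(graph, start, goal):
    """Return (total_cost, route) for the cheapest path from start to goal."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if node in seen:
            continue
        seen.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + edge_cost, neighbour, route + [neighbour]))
    return float("inf"), []

# Hypothetical junctions A-D with travel costs on each road segment.
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
print(shortest_route(roads, "A", "D"))  # (4.0, ['A', 'B', 'C', 'D'])
```

Behavioural and motion planning then refine this coarse route into lane-level decisions and smooth trajectories.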
Lastly, the control system is responsible for executing the behaviour determined by the planning system. A challenge presents itself here, as it is difficult to map a desired behaviour to the actual commands supplied to the vehicle's hardware components, especially at high speeds, where variables become far more complicated and are affected by external conditions such as weather and current vehicle weight. Nonlinear controls have to be used to account for this complexity, with ML techniques showing great promise in improving the validity of control model predictions.
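To make the planning-to-control mapping concrete, here is a deliberately simplified proportional controller turning a planned target speed into a throttle command. The gain and limits are invented, and real AV controllers are nonlinear and model far more state (mass, weather, tyre slip), which is exactly why the simple linear form below breaks down at high speeds:

```python
# Simplified sketch of a control step: proportional control mapping a
# planned target speed to a throttle/brake command in [-1, 1].

def throttle_command(target_speed: float, current_speed: float, kp: float = 0.1) -> float:
    """Command grows with the speed error, clamped to [-1, 1]."""
    error = target_speed - current_speed
    return max(-1.0, min(1.0, kp * error))

print(throttle_command(30.0, 25.0))  # 0.5 -> half throttle to close a 5 m/s gap
print(throttle_command(30.0, 40.0))  # -1.0 -> braking command, clamped
```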
AV Vulnerabilities and their Remediations
With such new and complicated technology being utilised to create AVs, there is a wider attack surface and greater likelihood of cyberattacks. Here I will detail the potential security issues present with this modern technology.
CWE-1039 (Automated Recognition Mechanism with Inadequate Detection or Handling of Adversarial Input Perturbations) describes products with automated recognition functionality that cannot properly detect or handle maliciously perturbed inputs; the machine learning used in AVs is affected by this weakness. It was found that an AV surrounded by two rings of salt, one solid and one dotted, would be unable to move from its current position, as this pattern resembles no entry road markings. Similar faults have been found by placing a traffic cone on the bonnet of the car, as this too is registered as no entry.
CWE-1039 would also apply to the incorrect detection of traffic signs, which may occur if an attacker obscures a road sign, causing the system to error or behave in an improper fashion that may be hazardous to other vehicles on the road. An example attack consisted of placing black tape on a 35 mph sign.
When this was done, the Tesla Model S speed assist functionality interpreted the sign as an indication of an 85 mph speed limit, causing the vehicle to rapidly accelerate. Fortunately, this was only seen in earlier test models of the car, and is not an issue in 2020 and newer versions of the vehicle.
A study conducted by the Georgia Institute of Technology found that in modern object-detection models, people with darker skin tones were five percentage points less likely to be detected, with this difference observable even when variables like time of day or obstruction were controlled for, demonstrating that the technology still has a long way to go.
When signals identical to those a LIDAR transmits are sent from an unexpected position, they can alter where the vehicle perceives objects to be, so an AV may be deceived into perceiving an object as 20 m away when in reality it is only 1 m away, causing a collision. This attack can be achieved with photodetectors as cheap as £0.53 each, as these can generate an output voltage equivalent to the LIDAR's.
Jonathan Petit, alongside other researchers, has demonstrated a proof of concept for this attack using a LIDAR and two photodetectors. In this practical example, the LIDAR originally detects a wall as one metre away, yet once fake echoes are created, it detects the wall as 20 and 50 metres away.
Signal spoofing attacks are an extension of signal relaying, going as far as to create fake objects, as opposed to making real objects appear in a different position. LIDARs have a range of approximately 200 m, and light travels this distance back and forth in approximately 1.33 microseconds. LIDARs must therefore listen for at least this amount of time for incoming reflections - this is also the window of opportunity for injecting spoofed signals.
This is done by sending a counterfeit signal after the first echo that makes a point seem further away, as the LIDAR believes the signal has travelled a longer distance. However, if too much time passes between the first echo and the fake signal, the LIDAR will not detect it. This is demonstrated in the figure below:
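The numbers behind this attack follow directly from the time-of-flight relationship; the sketch below derives the ~1.33 microsecond listening window and shows how much extra delay an attacker must inject to shift the perceived distance (the 1 m to 20 m figure matches the relaying example above):

```python
# The spoofing window and the distance shift from a delayed fake echo.

C = 299_792_458    # speed of light, m/s
MAX_RANGE = 200.0  # LIDAR maximum range, metres

# Round trip to an object at maximum range: ~1.33 microseconds.
listen_window = 2 * MAX_RANGE / C
print(f"{listen_window * 1e6:.2f} us")  # 1.33 us

def perceived_distance(true_distance: float, injected_delay: float) -> float:
    """A fake echo arriving injected_delay seconds late appears c*delay/2 further away."""
    return true_distance + C * injected_delay / 2

# Delaying the counterfeit pulse by ~127 ns makes a 1 m wall appear ~20 m away.
print(round(perceived_distance(1.0, 126.8e-9)))  # ~20 m
```

The narrowness of this window is what the attacker must synchronise with, and what the random probing defence discussed later exploits.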
Supply Chain Attacks
One major concern is the regulation of manufacturing electronic control units (ECUs) and other AI systems used to create AVs. Most car manufacturers integrate ECUs obtained from third parties, resulting in these car components coming from a multitude of companies. As a result, managing the security of these components becomes increasingly difficult as the number of ECU manufacturers grows. However, it is worth noting that newer car companies such as Tesla are beginning to use their own hardware, which makes this regulation easier.
The complexity of regulating security in AI systems is also rapidly increasing, simply because it is a relatively new branch of technology. Detecting security issues in pre-trained models, such as intentional back doors, is a formidable challenge, as the complexity and open-source nature of machine learning make it difficult to determine the origin or even the existence of such vulnerabilities.
CVE-2020-10558: Tesla Model 3 DoS
Security researcher Jacob Archuleta discovered a web-based denial of service attack against the Tesla Model 3 web interface. This was achieved with a specially crafted webpage which, when visited by the Chromium-based browser on the Tesla's infotainment system, caused it to crash. Due to improper process separation, this also resulted in the entirety of the Tesla Model 3's interface failing, preventing the user from seeing the speedometer and from using the turn signals, climate control or navigation features; this issue was rated with a High (7.1) severity.
CVE-2015-5611: UConnect Vulnerability
In 2015, vehicles such as the Jeep Cherokee and Dodge Challenger, along with a handful of other Fiat Chrysler models, were fitted with the vulnerable UConnect 8.4AN/RA3/RA4 infotainment systems. The UConnect system had direct access to vehicle controls and could be taken over remotely by a malicious third party without any form of authentication, due to a vulnerable port being open on Sprint's network. Once compromised, an attacker could interact with not only the UConnect system but also various other connected control systems within the vehicle; this allowed the threat actor to control the information displayed within the car (such as speed) as well as the braking, steering and A/C fans.
Remediating issues regarding image recognition technology will continue to be a slow and difficult challenge as we approach fully autonomous cars. The most effective way to get ahead of the problem is to have researchers and vendors work in conjunction with one another, so these issues are identified and resolved prior to the public release of AVs. Unfortunately this in itself is a problem, with many vendors not letting academics test actual models and training sets used by the manufacturers, due to fear of public scrutiny. While this doesn’t necessarily dilute the technical accuracy of any studies on that matter, it does mean these tests cannot be directly applied to models the public may actually use.
The key principles of cyber security for connected and automated vehicles, published by HM Government, detail the procedures and assessments that should be in place at automotive companies and retailers in order to ensure the security of the final product, as well as incident response measures in the event of an attack. It is vital that all manufacturers and engineers abide by these instructions in order to maximise the security of autonomous vehicles.
Attacks that attempt to provide the LIDAR with invalid sensory information can be resolved in a handful of ways. Redundancy is one technique that can be used to prevent spoofing and signal relaying: implementing a multiple-wavelength LIDAR means a threat actor has to attack both signals simultaneously, which is not only more technically challenging but also typically more expensive.
Software adjustments to the LIDAR can also improve its security, such as random probing. In spoofing attacks, an attacker must synchronise with the LIDAR so they know precisely when to fire a pulse back - yet if this interval is random and constantly changing, an attacker will have a hard time executing the attack, as they won't be able to fire a pulse that will be detected by the LIDAR (or at least won't be able to do so consistently).
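A toy simulation makes the benefit of random probing visible. In this sketch (timings and the 100 microsecond frame are invented for illustration), the sensor fires at a random offset each frame and only accepts echoes falling inside its own listening window, while the attacker keeps injecting on a fixed schedule guessed from earlier pulses:

```python
# Hedged sketch of random probing: the LIDAR randomises its pulse timing
# and only accepts echoes consistent with its own last pulse, so an
# attacker replaying on a fixed schedule mostly misses the window.
import random

C = 299_792_458
MAX_RANGE = 200.0
WINDOW = 2 * MAX_RANGE / C  # ~1.33 us listening window per pulse

def accept_echo(pulse_time: float, echo_time: float) -> bool:
    """Only echoes arriving within the listening window of *our* pulse count."""
    return 0.0 <= echo_time - pulse_time <= WINDOW

rng = random.Random(42)
hits = 0
trials = 1000
for _ in range(trials):
    # Sensor fires at a random offset within a 100 us frame...
    pulse = rng.uniform(0.0, 100e-6)
    # ...while the attacker, synced to a fixed schedule, injects at t = 50 us.
    if accept_echo(pulse, 50e-6):
        hits += 1
print(f"spoofed echoes accepted: {hits}/{trials}")  # only ~1-2% get through
```

Since the window is only ~1.33 microseconds out of each frame, the attacker's fixed-schedule injections are rejected the vast majority of the time, which is the intuition behind the defence.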
To prevent supply chain attacks, a strictly enforced, widespread strategy should be employed. In this instance, AI security policies should be established across the supply chain, including third-party manufacturers. An effort should also be made to identify potential risks and threats relating to ML in autonomous driving, although this may be impossible to do with 100% accuracy.
CVE-2020-10558 is now resolved in software versions 2020.4.10 and later, as the security researcher worked alongside the Tesla team to provide an efficient and quick remediation.
CVE-2015-5611 was mitigated by FCA issuing a voluntary recall of the 1.4 million vehicles affected by the issue, so that a patch could be applied to the UConnect infotainment system's software. Alongside this, Sprint disabled traffic to the vulnerable port on its network.
ICS-CERT also recommends that users of these technologically advanced vehicles take defensive measures, such as ensuring any Wi-Fi in use is encrypted with WPA2, not inserting media such as USB drives into a vehicle unless they come from a trusted source, and using VPNs when remote access is required.
Manufacturers and developers of autonomous vehicles, as well as the people behind their components, should place a strong focus on the security of their products. Sensory systems used to map the environment around the AV should be developed in a way that ensures they don't supply erroneous information to AI systems, whilst also ensuring that valid sensory data cannot be misinterpreted by these AI systems in a way that results in hazardous behaviour.