Article By : Stefania Sesia, u-blox
While lidar, radar and cameras come to mind when considering V2X apps, precise positioning information should also be part of the mix in the drive toward Level 3+ autonomous vehicles.
In the run-up to fully self-driving vehicles, autonomous driving functionality will be restricted to clearly defined scenarios. Highly accurate and trustworthy positioning frameworks will complement lidars, radars, and cameras and enable vehicle-to-everything applications, allowing Level 3+ autonomy to take hold.
The road to autonomous driving has been bumpier than expected. You may already own a car that feels like it drives itself, while (legally) remaining fully under your control. But the technological increment from such advanced driver assistance systems (Level 2 ADAS in the SAE’s classification) to actual hands-off, eyes-off autonomous driving (Level 3+ AD), along with the legal implications it brings, has given the world’s automakers pause.
![Stefania_Sesia_u-blox](https://i0.wp.com/www.eetimes.com/wp-content/uploads/Stefania_Sesia_u-blox.jpg?resize=225%2C300&ssl=1)
Undoubtedly, it will be years before the first commercially available vehicles are qualified to drive autonomously everywhere, all the time (Level 5). Until then, however, the operational footprint will advance to include increasingly challenging scenarios, allowing drivers to get distracted, but still requiring them to reclaim control within several seconds when requested (Level 3). As the technology matures to Level 4 and then Level 5, that requirement will gradually disappear.
Until then, a vehicle’s ability to reliably determine whether it can safely switch on its autonomous driving features – based on traffic conditions, the weather, and, crucially, which portion of the road it is on – will be vital to protecting its passengers, fellow drivers and pedestrians. As a result, precise, reliable, and trustworthy positioning will be an essential enabler for Level 3+ autonomous vehicles.
Four main functions
On the road, autonomous vehicles are continuously engaged in four main tasks: perception, scene detection and prediction, decision and then actuation.
First, there is the perception of each individual vehicle’s surrounding environment, including objects, traffic signs and lane markers. The perceived objects and the environment are then used for scene detection and prediction – recognizing the situation the vehicle is facing with reference to a digital map and predicting how it will evolve in the future.
At any given moment, the vehicle faces a decision: Based on its understanding of the reality unfolding around it, what is the optimal maneuver that will allow the vehicle to meet a specific driving strategy? That decision is turned into reality by the autonomous driving system in the final step: actuation.
Global Navigation Satellite System (GNSS) technology is an essential component of the positioning system used to localize the vehicle and match its position to a digital map of its surroundings. In practice, OEMs differ in how they fuse GNSS output with information gathered by local sensors such as lidars, radars, and cameras. Precise localization of the vehicle also allows perceived objects to be positioned, enabling precise identification of the overall scene in a specific environment.
Integrity demands trustworthy position data
Positioning systems are expected to reliably deliver trustworthy position data – in other words, with a high level of integrity. Over the past decade, the technology has come a long way, extending the availability of positioning beyond the reach of GNSS signals (into tunnels, underground) and overcoming weaknesses that are baked into standard GNSS technology (atmospheric error sources, slow convergence, multi-path effects).
GNSS receivers have dramatically improved their performance, concurrently tracking all major satellite constellations on multiple frequency bands to improve availability, accuracy, and convergence times. Inertial measurement units (IMUs) made up of gyroscopes and accelerometers detect changes to the vehicle’s trajectory, thereby bridging satellite signal outages.
More recently, a new generation of GNSS correction services geared towards mass market applications broadcasts GNSS augmentation data that can be integrated by advanced GNSS receivers across broad areas. GNSS correction services improve absolute position accuracy to achieve decimeter-level accuracy even in challenging environmental conditions such as in urban areas. They do so by delivering data that can be used to correct the main sources of position errors such as ionospheric delays and satellite orbit errors.
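The principle behind these correction services can be sketched in a few lines: the service models the dominant error terms (ionospheric delay, satellite orbit and clock error) and the receiver subtracts them from each raw satellite range measurement before solving for position. The helper name and the numbers below are illustrative assumptions, not any vendor's format.

```python
# Toy sketch of applying GNSS correction data: modeled error terms (in
# meters) are subtracted from a raw pseudorange before the position solve.
# Function name and values are illustrative, not a real correction protocol.

def apply_corrections(raw_pseudorange_m, corrections_m):
    """Subtract each modeled error term (meters) from the raw range."""
    return raw_pseudorange_m - sum(corrections_m)

# Example: a few meters of modeled ionospheric delay and orbit/clock error
# removed from one satellite range measurement.
corrected = apply_corrections(20_200_105.8, [4.2, 1.6])
```

Removing meters of correlated error from every satellite measurement is what shrinks the final position error from the meter level to the decimeter level.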
Ultimately, GNSS receiver output and augmentation data, IMU measurements, mechanical sensor data (a wheel-tick sensor, for example), and a dynamic model constraining motion to that expected of a four-wheeled vehicle are processed using sensor fusion algorithms. The output is a highly accurate “dead reckoning” position that is available almost everywhere, anytime.
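A loosely coupled fusion of this kind can be sketched as a minimal one-dimensional Kalman filter: wheel-tick speed dead-reckons the position forward every step, and a GNSS fix, when one is available, pulls the estimate back in proportion to the filter's confidence. The `fuse` helper and its noise figures are illustrative assumptions, not a production algorithm.

```python
# Minimal sketch of loosely coupled dead reckoning: a 1-D Kalman filter that
# propagates position from wheel-tick speed and corrects it with intermittent
# GNSS fixes. Noise figures are illustrative assumptions only.

def fuse(gnss_fixes, wheel_speeds, dt=0.1,
         q=0.05,    # process noise added per step (m^2): wheel-tick drift
         r=0.25):   # GNSS measurement noise (m^2), ~0.5 m sigma
    """Return the fused position trace; gnss_fixes[i] is None during outages."""
    x, p = 0.0, 1.0          # position estimate and its variance
    trace = []
    for fix, v in zip(gnss_fixes, wheel_speeds):
        # Predict: dead-reckon forward using the wheel-derived speed.
        x += v * dt
        p += q
        if fix is not None:
            # Update: blend in the GNSS fix, weighted by relative confidence.
            k = p / (p + r)          # Kalman gain
            x += k * (fix - x)
            p *= (1.0 - k)
        trace.append(x)
    return trace
```

During a GNSS outage the loop simply keeps predicting from the wheel ticks, which is exactly the bridging role the IMU and mechanical sensors play in the real system.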
Localization data
Clearly, falsely identifying a vehicle’s location to enable autonomous driving functionality in unauthorized locations can put lives at risk. As a result, the localization function used to enable Level 3+ autonomous driving applications can be subject to stringent safety and integrity requirements.
Among these is functional safety, defined in ISO 26262 as a methodology specifically designed to avoid unreasonable risk due to hazards caused by malfunctioning behavior of the electrical and electronic systems used in vehicles. ISO 26262 applies to hardware and software components and defines the requirements to be met by safety relevant functions of the system as well as by processes, methods and tools used throughout the vehicle’s development process and lifecycle.
Functional safety is complemented by safety of the intended functionality (SOTIF), which is defined in ISO 21448 and covers integrity along the entire chain. While functional safety is focused on the solution’s design, SOTIF asks how safety might be affected by external factors such as misuse, signal reflections, outages and deliberate or accidental interference, or internal weaknesses stemming, for example, from the reliance on probabilistic algorithms.
For example, typical requirements for safe automotive applications are Automotive Safety Integrity Level B (ASIL-B) compliance, with an integrity risk on the order of one failure per one million to 10 million hours and a protection level at the meter level.
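Operationally, such requirements reduce to a simple gate: autonomous functions may engage only while the receiver-reported protection level stays below the application's alert limit and the residual integrity risk stays below the target. The sketch below illustrates that gate; the alert limit and risk figures are example values in the spirit of the article, not normative thresholds.

```python
# Illustrative integrity gate for engaging autonomous functions. The alert
# limit and target integrity risk are example figures, not normative values.

ALERT_LIMIT_M = 1.5            # example meter-level alert limit
TARGET_INTEGRITY_RISK = 1e-7   # example: one failure per 10 million hours

def position_trusted(protection_level_m, integrity_risk_per_h):
    """True only if the bounded position error and its residual risk are
    both within the application's limits."""
    return (protection_level_m <= ALERT_LIMIT_M
            and integrity_risk_per_h <= TARGET_INTEGRITY_RISK)
```

The key idea is that the system acts on the guaranteed error bound (the protection level), not on the typically much smaller nominal accuracy.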
Level 3+ cars when?
Now that all the building blocks are in place, including automotive-grade, high-precision positioning as well as frameworks that comply with ISO 26262 and ISO 21448, we expect highly automated driving to take off starting in 2024. More car makers will launch Level 3 programs for passenger cars, even as Level 2+ systems continue to dominate the market until the end of the decade.
Elsewhere, the truck market looks set to leapfrog Level 3, focusing directly on Level 4. The bumpy road to Level 3+ autonomous driving will then, finally, open onto a smooth highway.
This article was originally published on EE Times.
Stefania Sesia is head of application marketing for automotive at u-blox.