Video: (AM2) LiDAR Is Key to Safer ADAS and Autonomous Vehicles | Duration: 2100s | Summary: (AM2) LiDAR Is Key to Safer ADAS and Autonomous Vehicles | Chapters: LiDAR System Availability (14.585s), LiDAR for Autonomous Driving (108.04s), LiDAR Technology Explained (240.315s), LiDAR Technology Explained (327.925s), LiDAR Obstacle Detection (523.675s), Autonomous Driving Levels (642.65s), LiDAR for Autonomy (739.265s), LiDAR Safety Features (1041.02s), Sensor Pros and Cons (1209.215s), LiDAR Reliability Questions (1388.39s), Sensor Redundancy Systems (1471.02s), LIDAR Interference Management (1621.295s), LiDAR vs Radar (1700.735s), Innovus Three Update (1767.495s), Ideal Sensor Stack (1789.35s), LiDAR and Connectivity (1865.04s), LiDAR Technology Comparison (1952.17s), Concluding Remarks (2076.985s)
Transcript for "(AM2) LiDAR Is Key to Safer ADAS and Autonomous Vehicles": Ask most people what separates a level two LiDAR from one designed for level three or level four, and the answer is usually the same: more range, higher resolution, lower cost. Those factors matter, but they are not the dividing line. The real distinction is availability. In a level two system, a LiDAR can momentarily fail and the driver remains the fallback. At level three, performance thresholds tighten: longer range at high speeds, higher resolution, stricter validation, because responsibility begins to shift. But at level four, the equation changes fundamentally. There is no human to defer to. The sensor must function consistently and predictably under all validated conditions. If a family is seated in the back, the system cannot pause because of condensation, road spray, or partial blockage. Reliability is no longer a feature specification; it's a safety obligation. Over the past decade, Innovus has worked with automakers to define those scenarios and engineer a LiDAR system capable of meeting them, including deployment in the level three program launched by BMW. In his keynote, Elad Hofstetter will explore why availability and the quality of the underlying sensor data ultimately determine how far autonomy can go. Elad is the chief business officer at Innovus Technologies. He has over fifteen years of experience in R&D and product management positions in sectors like biosensing and pharmaceuticals. Before Innovus Technologies, he worked for LifeBeam Technologies and Teva Pharmaceutical Industries. Elad holds a master of science in biomedical engineering from Tel Aviv University. His keynote will be followed by a Q&A, so ask your questions. The floor is yours, Elad. Hello everyone. I'm Elad, the CBO at Innovus. In this session, I will talk a little about LiDAR as a key to safer ADAS and autonomous driving.
First, a little about Innovus. For the past ten years, we have been developing LiDARs and perception for autonomous driving. We have a series of automotive products, starting with Innovus one, which was part of the BMW level three launch in 2024. It's on the road, including our perception stack. Then there is the Innovus two platform, which includes multiple configurations for level three and level four: long range, and medium and short range. The Innovus two platform is being used by platforms of the default driving group, Mobileye, Dynamo Truck, and more. It allows level three and level four autonomy. All these products were designed and developed to meet the specs that enable level three and level four autonomous driving, in addition to automotive standards such as lifetime, durability, quality, and more. It's not some magic powder one sprinkles over a LiDAR to make it automotive grade; this must be done by design, across the whole process, in advance. I will start by explaining briefly what a LiDAR is. LiDAR stands for laser imaging, detection, and ranging. A LiDAR measures the distance to a given point it's directed at, creating a 3D image of the scene. A LiDAR is an active sensor, meaning it creates its own signal and is not dependent on external lighting like a camera. This is important, since the laser from the LiDAR is not affected by night or weather, and it doesn't suffer the optical illusions a camera will have. LiDAR is not a new technology; it has existed for several decades. However, in the last decade, LiDAR has been adjusted and designed to meet automotive needs, like automotive grade standards and quality, and the specs that enable autonomy levels two, three, four, and so on. These are a few examples of illusions in a camera, where a large and faraway object can seem close. A LiDAR is also not confused by light, dark, and shadow; for a LiDAR, it really doesn't make any difference. So how does a LiDAR work?
So basically a LiDAR is made of four key components. There is a laser transmitter that emits the light. It goes through the optics and scanner, and the beam goes out of the LiDAR. The beam will bounce off some object outside, and a portion of it is reflected back into the receiver. The signal from the receiver is processed, and the output is a 3D image, which is called a point cloud. By measuring the time it took the light to bounce off the object and come back into the receiver, you can measure the distance for any given pixel. We can also see here a simplification of a LiDAR with a 1D scan, in which the LiDAR emits the laser; it hits the scanning mirror and is reflected back from the box with the sphere on it. Basically, it's creating a 2D point cloud in this case. This is just a top-down view of the scene being created. Using multiple laser beams and other mechanisms, you can create a 3D point cloud such as the one we see here, which is very dense and rich. From a high performance LiDAR, objects and obstacles can be easily detected thanks to the detection range and point cloud density. Even with the naked eye, it's very easy to see that there is a vehicle here, there's a truck, there's a two-wheeler, there's a pedestrian, and there's something on the road like a small obstacle. Even if I don't know what it is, obviously there is an obstacle here on the road. With a high performance LiDAR with the right specs, you can also detect lane markings, the drivable area, road boundaries, and a lot more things which are needed for autonomous driving. We will use the LiDAR point cloud to detect the objects on the road, using machine learning and perception algorithms running over the point cloud. The objects in the scene will be detected and classified. The green boxes are bounding boxes around the objects; these are detected and classified as vehicles in this example.
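The time-of-flight ranging principle just described can be made concrete with a small numerical sketch. This is an illustration only, not Innovus code; the round-trip time used below is a made-up example value.

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance to the target given the measured round-trip time of a laser pulse."""
    return C * round_trip_s / 2.0

# A pulse that returns after ~1 microsecond corresponds to roughly 150 m.
print(round(tof_distance(1e-6), 1))  # -> 149.9
```

Doing this per pixel, across many beams and scan positions, is what builds up the point cloud described above.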
These objects need to be detected beyond two hundred fifty meters in order to enable high speed autonomous driving on the highway. In this video we can see, in addition to the object detection and classification, lane markings identified from the LiDAR point cloud. All of these are running in real time. This is key for lane keeping control and similar features, which are important for level three and four autonomous driving. An example of the importance of obstacle detection comes from Yosemite Park. Unfortunately, several cars which were not using LiDARs failed to identify some obstacles while using their autonomous features. This happened repeatedly at the same Y junction inside the park. So we decided to hit the road and test the same scene, just using a LiDAR in this case. In the video, you will see the point cloud colored in different colors. The green is for the drivable area, where a vehicle can drive since it's a flat surface. The red or pink parts show areas in which there is an obstacle, something which rises more than 14 centimeters above the ground, where a passenger vehicle cannot drive. This we call non-drivable. So you can easily see the surface where it's flat here. Then, reaching that Y junction, you can easily see the areas which are detected as non-drivable in red. This is inherent as part of the LiDAR feature set. A LiDAR is also not so sensitive to rain, unlike cameras: a few droplets on the window and it's really hard for a camera to see, while for the LiDAR this has barely any impact. Moreover, even in harsher conditions, such as when it's snowing and it's dark, the camera will barely see anything while the LiDAR will function very well. This is a case where it's heavily snowing, driving through the dark; the camera can hardly see anything in front of it, while for the LiDAR the impact is hardly even detectable.
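The 14-centimeter non-drivable threshold used in the Yosemite example can be sketched as a toy classifier. This is a deliberate simplification: a real perception stack first estimates the ground plane, while here it is assumed flat at z = 0, and the point values are made up.

```python
# Toy drivable-area labelling: a point is "non-drivable" if it rises more than
# 14 cm above the ground plane (the passenger-vehicle threshold from the talk).
NON_DRIVABLE_THRESHOLD_M = 0.14

def label_points(points):
    """points: list of (x, y, z) tuples in meters; returns a parallel list of labels."""
    return ["non-drivable" if z > NON_DRIVABLE_THRESHOLD_M else "drivable"
            for (_, _, z) in points]

cloud = [(10.0, 0.0, 0.02),   # road-surface ripple  -> drivable
         (42.0, 1.5, 0.15),   # pallet-height object -> non-drivable
         (80.0, -2.0, 0.05)]  # low debris           -> drivable
print(label_points(cloud))  # -> ['drivable', 'non-drivable', 'drivable']
```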
To place everyone on the same page: a while back, the Society of Automotive Engineers constructed an autonomous level scale from zero to five. Most of the vehicles on the road today have level zero up to level two or two plus autonomy. To date, there are only two passenger vehicle OEMs with level three autonomy on the road: Mercedes, and BMW with our Innovus one. They are using front facing LiDARs for highway applications. There are also level four vehicles, such as robotaxis, which can be found in cities like San Francisco, Shanghai, and other locations. For the most part, they use multiple LiDARs to create full coverage of the long and short range around the vehicles, as these are designed for urban environments, which are more challenging for autonomy due to their density and vibrant scenes. These are commercial vehicles, not owned by customers or end users. There's a very important line that sets the difference between level three and above, and every level under it. It's related to responsibility. From level three autonomy upwards, responsibility is on the OEM, while below it, it's not. Hence, the OEMs will go through extreme measures to validate the system and utilize high performance and safe systems, nothing less than that. So what value does the LiDAR bring to meet high levels of autonomy? Let's touch on this in a nutshell. A high performing LiDAR is needed to meet the use cases. We'll touch on some of those, meaning we will need high resolution, a wide field of view, long detection range, and many more attributes. As in this scene, which is clear and clean, anything in it can be easily detected and identified, from a bicycle rider to a pedestrian, a pole, and even just the height of the curb. All these can be easily seen and identified. There are lots of relevant use cases to handle. An important one is small obstacle detection, which requires long range, high resolution, especially in the vertical, and a quick refresh rate.
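To see why long detection range effectively sets the achievable autonomous driving speed, here is a rough stopping-distance estimate. The deceleration and system-latency figures are illustrative assumptions, not Innovus requirements.

```python
# Rough stopping-distance estimate: reaction/latency travel plus braking distance.
def stopping_distance(speed_kmh: float, latency_s: float = 0.5,
                      decel_ms2: float = 5.0) -> float:
    """Distance (m) covered before a full stop, under the assumed latency and deceleration."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    return v * latency_s + v * v / (2 * decel_ms2)

# At 130 km/h this estimate gives roughly 148 m, which is why obstacles
# must be confirmed (not just glimpsed) well beyond that distance.
print(round(stopping_distance(130), 1))  # -> 148.5
```

With any margin for detection confirmation and planning, this simple model already points to the 170 to 200 meter figures discussed next.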
Detecting a small obstacle at a distance with short latency is key for driving on the highway. This is one of the most important parameters that eventually sets the driving speed under autonomy. As in this example, we have a Euro pallet and a black tire that are detected above 170 meters. These are obstacles which are challenging to detect from afar. The lower picture was taken at the real distance from the obstacles. Can someone see the obstacles in the picture from that distance? What we can see on the left is the point cloud showing the Euro pallet and the black tire. We can see there are multiple pixels from each of them, showing them with very high confidence in the right position and location. It's not enough just to detect the small obstacles from a distance. The system has to differentiate between drivable and non-drivable obstacles above 200 meters. Multiple pixels on the obstacle are needed for this. Non-drivable is defined as an obstacle above 14 centimeters from the surface; this is for a passenger vehicle. If anything on the road is detected as non-drivable which actually is not, there will be multiple false alarms and braking events. The LiDAR must possess high vertical resolution and separability for such capabilities and purposes. We can see here on the top a drivable and a non-drivable obstacle on the road. The drivable one is small and low in height, and the vehicle can drive over it, while for the non-drivable one that cannot be the case, as it's 15 centimeters high, like a Euro pallet. Furthermore, for other use cases like pedestrian detection, a high refresh rate is needed, along with high horizontal resolution and long range. The acceptance criterion is not a large adult, but rather a small toddler. First, it has to be detected from a distance with multiple pixels. In this video, the moving target, which is actually a toddler mannequin, is detected from afar. Again, we can see the multiple pixels which are needed, like here on the top. You can see both the stand
and the vertical part of this toddler mannequin. Often, a LiDAR will be mounted on the roof due to multiple functional advantages. This needs to be accounted for, also for cut-in scenarios of a low profile vehicle, like in the demonstration we can see here on the left. For these scenarios, a large horizontal field of view and vertical field of view are needed, in addition to low latency, a high refresh rate, and accuracy, in order to detect such a cut-in quickly and precisely. In other use cases when mounted up on the roof, you need to handle not only things that are low, but also overhanging loads, like a protruding log from a truck or trailer. The LiDAR needs a very large vertical field of view to see downwards, to detect the ground and the cut-in scenarios, and, simultaneously, any potential overhanging load from a vehicle on the side. Sufficient resolution density is needed as well, as these protruding objects can be long and narrow and hard to detect. Having a high performance LiDAR on its own is not enough. The LiDAR needs features such as handling icing. Ice could accumulate on the LiDAR surface under certain conditions, so we need to integrate a heater or some other mechanism in order to de-ice it, remove it from the surface, and remove any residues. Safety is key for autonomy, though it's not enough for adoption. Without high availability of the autonomous system, there will be no adoption. It starts from the component level, like being resilient to water droplets on the window. On the top we can see the point cloud with no droplets on the LiDAR window itself. On the bottom, there are lots of droplets scattered all over the LiDAR window. The difference between them in performance is truly negligible. In this video, we can see how the LiDAR is covered with droplets from the rain and from the splash trail of the vehicle in front of it. The LiDAR is resilient to the drops and does not identify the splash as an obstacle.
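The roof-mounting geometry discussed above can be illustrated with a simple calculation of the downward angle the vertical field of view must cover. The mount height and nearest-ground distance below are made-up example values, not the specs of any Innovus product.

```python
import math

# Toy geometry for a roof-mounted LiDAR: the downward half of the vertical
# field of view must reach the ground close to the vehicle for cut-in scenarios,
# while the upward half must still cover overhanging loads.
def required_down_angle_deg(mount_height_m: float, nearest_ground_m: float) -> float:
    """Downward angle from horizontal needed to see the ground at nearest_ground_m."""
    return math.degrees(math.atan2(mount_height_m, nearest_ground_m))

# A sensor mounted 1.8 m up that must see the road 5 m ahead needs ~20 degrees
# of downward coverage, on top of whatever upward coverage is required.
print(round(required_down_angle_deg(1.8, 5.0), 1))  # -> 19.8
```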
Even in cases where a thick layer of ice covers the LiDAR window, it will not go blind. Some degradation is to be expected, though the LiDAR's availability will be maintained. Without it, level three or level four autonomy will not be available. You can see here the LiDAR with a thick layer of ice covering almost the entire window. Nevertheless, the point cloud of the scene can still be seen. Moving on to this environment: even snowflakes are detected in real time. They are not identified as an obstacle, an object, or some optical phenomenon, to avoid any false alarms or braking by the vehicle. Yet this is just in a nutshell; there's a lot more: the use cases, the needs, the features, the quality standards, and much more than that. To summarize: although the focus until now was on the LiDAR advantages, at the end of the day there are pros and cons to each sensor type in use. These are just a few examples, but there are many more. Starting with lane detection, which is a strong point for cameras, being able to detect colors and contrast in the scene; still, LiDARs today are performing better and better at these things. Small obstacle detection: this is where the LiDAR really shines. A radar can go far and demonstrate long range detection, but the separability, the point cloud density, and avoiding ambiguity in differentiating between drivable and non-drivable obstacles are a big challenge for radars. Stationary object detection: this is another point of advantage for the LiDAR. In multiple use cases it will be tough for a radar, while a camera could have some localization challenges. Night performance, or non-lit areas: this is excellent for the LiDAR and radar, which, as we said before, are active sensors; they create their own illumination. Radar: in the case of fog, that's where the radar will shine. It's the least impacted by heavy fog in the scene. Color semantics and understanding.
This is a key point for the camera, like reading signs, though LiDAR is able to fill some of those use cases today. Last but not least in this list is depth accuracy. Inherently, this is the true LiDAR know-how: it measures the distance to each point in the scene. At the end of the day, the LiDAR, the camera, and the radar are the eyes, the ears, and the senses of the vehicle. Removing any of these sensors will limit the vehicle's autonomy level, safety, and availability. They all complement each other, just as we have with our own senses today. We'll now move on to the question part. Thank you for listening until now. Elad, can you come on stage to answer the questions? Elad, there you are. Wonderful. Thank you so much for this presentation. So now we have a few questions for you. I'll start, and I'll include a few questions from the audience. Listening to your presentation, I noticed that you said the LiDAR needs to endure, needs to remain available, and needs proven longevity. As we move from level three to level four, it's all about how reliable the LiDAR is, because you don't have anyone to defer to and expect to engage. So the LiDAR, if I understand correctly, needs to be available under all conditions. My question is about availability and redundancy: what redundancy exists if the LiDAR becomes unavailable? Because I believe that sometimes happens. Okay, yeah, good question. And this is tough. It's a big part of the system to enable this. But first of all, you do need to ensure that the availability is very high, meaning 99% and more, because if it's not available, then you cannot meet any of the requirements for the ODDs of level three, level four, and so on. However, there are also other measures which are taken on the system level to enable degrees of redundancy for different use cases.
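One common system-level redundancy pattern is a primary sensing path, a secondary path, and a minimum risk maneuver (MRM) fallback, which the answer goes on to describe. Here is a toy sketch of that selection logic; it is an illustration only, not any OEM's or Innovus's actual implementation.

```python
from enum import Enum

class Mode(Enum):
    PRIMARY = "primary path"
    SECONDARY = "secondary path"
    MRM = "minimum risk maneuver"

def select_mode(primary_ok: bool, secondary_ok: bool) -> Mode:
    """Pick the active path: primary if healthy, else secondary, else trigger an MRM."""
    if primary_ok:
        return Mode.PRIMARY
    if secondary_ok:
        return Mode.SECONDARY
    return Mode.MRM

print(select_mode(False, True).value)   # secondary path takes over
print(select_mode(False, False).value)  # both paths down -> minimum risk maneuver
```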
So they could do, let's say, an overlap of LiDARs and other sensors to cover different cases which are challenging. There could be several systems, for example a primary path and a secondary path, so that if one system, like the primary one, is not available or has some problem, the secondary path will come into play. For example, MRM, minimum risk maneuver, systems are designed for these cases. Well, if we think of a use case where the LiDAR drops offline, temporarily or fully, for whatever reason, can cameras or radars take over and safely maintain operation, or must the vehicle transition immediately to minimal risk? So eventually, like we touched on in the presentation, the sensors complement each other: the radar, the camera, and the LiDARs, and each one of them has its strong points and also weak points. In the case that one of the sensors is not available, you're only left with the strong points of the others, but you also need to keep in mind that you still have the weak points of the others, and you no longer have the strong points of the missing sensor. LiDAR is key for level three and level four applications. Without it, many of the relevant use cases for an ODD of level three and level four will not be available. A level four or level three system without a LiDAR will have to turn into level two or two plus, as it cannot meet some of those functions without it. It will not be enough. I'm sorry, I have a question from the audience. Someone is asking: can LiDAR systems interfere with each other? LiDAR systems can interfere with each other, and this is something that has to be taken into account by design, to ensure you avoid such interference. For example, level four systems could have multiple sensors and LiDARs mounted on them, as you want 360 degree coverage, and you might want coverage of short range and long range, and in some cases ultra long range as well.
You need to ensure that all these LiDARs don't interfere with each other. And you can also assume that if there's one vehicle driving with these applications and sensors on the road, there could be others as well: others on the adjacent road, or on other roads to the sides or in front, and so on. You have to ensure not only that your LiDARs don't interfere with your own LiDARs, but also that other systems and LiDARs out there do not interfere with each other. So yes, this is something that has to be taken into account by design, and there are different means to do so. Someone is saying: you talk a lot about LiDAR versus camera. What about LiDAR versus radar? Yeah, well, there wasn't enough time maybe to go into everything. Both LiDAR and radar are active sensors, meaning they produce their own energy and, from that, detect their targets. At the end, I touched a little bit on the radar versus the LiDAR and the camera as well. So there are some things in common, like the fact that they're active, and so on. However, there are key differences between the LiDAR and the radar. For example, the LiDAR's resolution and ability to separate between objects and obstacles and anything else in the scene is by far higher. However, the radar has some other advantages; for example, it's less affected by fog conditions and so on. And these are some of the reasons that they complement each other. Someone is asking if you have any design wins for Innovus three. Okay, good question. So Innovus three is an interesting and compelling product. We don't have anything public to share at this point; at the right time, we will share. Someone is asking: what is the ideal level four sensor stack for commercial vehicles such as buses? Well, like in many things, especially in autonomous driving, it depends who you ask.
Talking to different OEMs and platform providers, each of them has their own vision of what the ideal system is for level three, and for level four as well. And you can also see it with systems on the road today: there is no standard for the best practice in mounting the sensors on the platforms and so on. We also need to take into account that it's not only about the optimized way to mount the sensors. There are many other things you need to take into account, for example the vehicle design, the shape, the size, how it looks, how many people you want to put inside, and what it does. All these are factors that need to be taken into account. So today, I don't think there is an ideal design; it's based a lot on the case, the vehicle, and the application. Another question — well, we have many questions from the audience, that's wonderful. How do you see the evolution of LiDAR, given that C-V2X has slowed down globally, and is it a key part of car connectivity? Sorry, I couldn't hear the last part of the sentence. I'll say it again: how do you see the evolution of LiDAR, given that C-V2X has slowed down globally, and is it a key part of car connectivity? Okay, I see. Well, interesting. I don't have good visibility into all of the connectivity between the vehicles and the infrastructure and so on; this is not our bread and butter, working on this. We try to focus on the autonomy of the vehicles. However, we do see a lot of things around in terms of the infrastructure and the connectivity between them, including also LiDARs, like ITS, physical AI, and so on, coming into place. And I assume the communications with those will come together. Our focus today is providing the sensors, the system above them, and the perception for these applications and needs. Communication and other parts will come from other partners and players.
I have a question — well, that's my question. Traditional time-of-flight LiDARs have evolved; they are field proven and mass produced, but they are known to have issues with reliability, integration limits, and cost. What about silicon photonics and frequency modulated continuous wave (FMCW) LiDARs? Is that something you envision, or not really? Okay, yeah, good question. There's a lot of talk about this and other technologies out there, and FMCW is definitely a very interesting technology. But if you look out there, for example at the different implementations on vehicles on the road, from level two through level three and level four, the dominant one by far is time-of-flight LiDARs, especially at the 905 nanometer wavelength. And the main reasons are the maturity of this technology, the availability of the subcomponents, the infrastructure of the sub-suppliers and fabs to support it, and eventually also the cost. FMCW is very interesting, but there is still, I think, some way to go for it to mature, including also the supporting infrastructure for it. I can say that several years ago there was a lot of talk about 905 not meeting performance and so on, but at the end of the day, it's about implementation. Just look at the adoption of 905: you can compare the range performance and anything else needed, like resolution, field of view, and so on. It's there with 905. Will there be other technologies, like FMCW and others, in the future as well? Yeah, I guess so. But today, this is the dominant one. We are reaching the end of this session. We had other questions; maybe we can send them to you by email so that you can answer. But thank you so much, it was really insightful. We had a great conversation. Thank you. Thank you so much, it was my pleasure. Talk to you soon. Bye bye. Bye.