Competition & Collaboration – Who is teaming up for Autonomous Driving?

Competition is good for business, and that holds for autonomous driving too. Nevertheless, every organization has to keep its eyes open for promising collaboration partners. Both traditional car manufacturers and new market players like Waymo (Google) are looking for suitable partnerships to fill their gaps.

Autonomous driving drives the change

The race towards autonomous driving has accelerated this trend in particular. Autonomous driving is not just a new way of driving; it is disrupting the industry. Current automotive profit margins are nothing compared to what companies could earn with driverless taxis. Just imagine the variety of services that could be offered inside an autonomous car.
Car manufacturers are slowly transforming into mobility service providers, but they need help from other companies to make the change. The outcome: several strong blocs of companies competing with each other.

Bloc building in automotive

The different blocs are fighting for the lead in the race towards autonomous driving. The goal is to offer a robot-taxi service by 2021. BMW, for example, has gathered companies such as Continental, Mobileye, Intel and Hyundai to pool their expertise, whereas Waymo has been buying cars from FCA and modifying them on its own.
Like BMW, Mercedes-Benz counts on strong partnerships. The Swabians have teamed up with Bosch and the Chinese company Geely to give Waymo a hard time. A Mercedes-Benz representative recently announced plans to build autonomous cars from scratch instead of using modified versions of existing models, as Waymo does.
More blocs worth mentioning are:
– General Motors with Cruise Automation
– Aptiv (Delphi) with nuTonomy and Lyft
– Volvo and Autoliv (marketed as Zenuity)
Another notable bloc has emerged around Baidu in China. The company has more than 70 partners and is currently optimizing its Apollo 2.0 platform.

About the author:

David Fluhr is a journalist and owner of the digital magazine “Autonomes Fahren & Co”. He reports regularly on trends and technologies in the fields of autonomous driving, HMI, telematics and robotics. Link to his site:

American Center for Mobility Gives Automakers a Safer Venue for AV Testing

Autonomous vehicles were given a boost this spring when the American Center for Mobility opened in Michigan. Located at the historic former site of the Willow Run Bomber Plant in Ypsilanti Township, ACM is hoping to be the premier destination for AV testing.
“We built ACM on a collaborative approach, working with industry, government and academia on a local, regional, national and even international level,” said John Maddox, president and CEO of the American Center for Mobility. He spoke to attendees at ACM’s ribbon cutting ceremony, which brought together a number of political supporters and auto industry execs.
Michigan Governor Rick Snyder referred to ACM as “another huge step forward” for the state as it strives to maintain its leadership as the auto capital of the world. “[Mobility] does three things,” said Snyder. “It’s going to bring us a safer world in terms of saving lives and quality of life. It’s going to create opportunities for people – people that may be economically disadvantaged, have disabilities and other challenges in their lives. It will provide options to their lives they have not seen in the past.” Snyder added that as mobility evolves it will also bring a new level of efficiency to the infrastructure. “This is a place to be on the forefront of improving lives, of creating opportunities for our citizens in this state, but also the entire world,” Snyder continued.
Lieutenant Governor Brian Calley concurred, adding, “This is going to make such a big difference for our infrastructure, for our safety, but especially mobility for people that don’t have the same types of opportunities that many of the rest of us have.” Calley praised the way corporations, associations, state representatives and others came together to build ACM from the ground up. “It’s so special, so important,” Calley added. “It’s going to have such a profound impact on the entire world and it’s happening right here.”
U.S. Congresswoman Debbie Dingell, one of many staunch supporters of AV technology, expressed the importance of building a place where self-driving cars can be tested and validated. “One of the things that has surprised me is the public resistance to autonomous vehicles,” said Dingell. “Let’s be honest, the Uber accident [in March] has made people concerned. That’s why we need this test site.”
Kevin Dallas, corporate vice president, artificial intelligence and intelligent cloud business development at Microsoft, also joined the stage to discuss how the company will serve ACM as its exclusive data and cloud provider. “We see it as an opportunity to invest deeply in the first safe environment where we can test, simulate and validate connected autonomous vehicles,” said Dallas. “And then accelerate the delivery of applications and services around autonomous systems. We’re taking that very seriously.”
After the ceremony, William “Andy” Freels, President of Hyundai America Technical Center, Inc. (HATCI), took a moment to share his thoughts on ACM. “We became founding members at the end of last year,” said Freels. “We are literally about 15 minutes from this facility. It’s a real investment in our local R&D facility here. Initially we will start using ACM for sensor development and sensor fusion testing. Connectivity is obviously a very important part.”
While ACM is designed to serve many areas of autonomous car development, Freels thinks the primary benefits will come from testing the potential interactivity and communication between cars (V2V) and infrastructure (V2I).
“Like never before, vehicles are going to need to work together to communicate [with each other] and the infrastructure,” Freels added. “That’s really quite different from the way it has been done in the past, where we could do something completely independently. I think that’s a key point of this facility – being able to collaborate with the industry, as well as the government and the academia side of it.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

To Mirror or not to Mirror: How Camera Monitoring Systems are expanding the Driver’s Perspective

This article was authored by Jeramie Bianchi – Field Applications Manager at Texas Instruments.
Objects in the mirror are closer than they appear: this tried-and-true safety warning has reminded drivers for decades that their rearview mirrors reflect a slightly distorted view of reality. Despite their limitations, mirrors are vital equipment on the car, helping drivers reverse or change lanes. But today, advanced driver assistance systems (ADAS) are going beyond a mirror’s reflection to give drivers an expanded view from the driver’s seat through the use of cameras.
Camera monitoring systems (CMS), also known as e-mirrors or smart mirrors, are designed to provide the experience of mirrors but with cameras and displays. Imagine looking into a rearview mirror display and seeing a panoramic view behind your vehicle. When you look to your side mirror, you see a high-resolution display showing the vehicles to your side. These scenarios are becoming reality, as are other features such as blind-spot detection and park assist.
It’s important to understand the current transition from mirrors to CMS. It’s no surprise that systems in today’s vehicles are already leveraging ADAS features for mirrors. Most new vehicles in the past decade have added a camera to the back of the vehicle or attached a camera to the existing side mirror, with a display inside the vehicle to give drivers a different perspective of what’s behind or at the side of the vehicle.
Figure 1 shows the routing of this rearview camera and display system. The backup display is embedded in the rearview mirror and a cable routes to the rear of the vehicle.

The side mirror is different because the camera is located on the mirror. The side mirror still exists for viewing, but typically its camera works when the driver activates a turn signal or shifts in reverse. During a turn or a lane change, the camera outputs a video feed to the infotainment display in the dashboard and may show a slightly different angle than the side mirror itself, as shown in Figure 2.

Having reviewed current CMS configurations that pair a mirror with a camera and display, let’s look at a full CMS rearview mirror replacement, achieved by adding one or two cameras installed on the rear of the vehicle.
From the rear of the vehicle, video data from an imager feeds into TI’s DS90UB933 parallel-interface serializer or DS90UB953 serializer with Camera Serial Interface (CSI)-2. This data is serialized over a flat-panel display (FPD)-Link III coax cable to a DS90UB934 or DS90UB954 deserializer, then passed to an applications processor for video processing, such as a Jacinto™ TDAx processor, before being shown on a rearview mirror display. If the display is located far from the Jacinto applications processor, you will need a display-interface serializer and deserializer to route the data over a coax cable again. You could use the DS90UB921 and DS90UB922 red-green-blue (RGB) format serializer and deserializer, respectively, or, for higher-resolution displays, the DS90UB947 and DS90UB948 Open Low-Voltage Differential Signaling Display Interface (OpenLDI) devices.
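The signal chain above can be sketched as a simple data-flow model. This is purely illustrative Python, not a TI API: only the device pairings come from the text, while the function, dictionary names and stage labels are my own.

```python
# Illustrative model of the FPD-Link III CMS chain described in the article.
# Device pairings are from the text; the data structures are hypothetical.

CAMERA_SERDES = {
    "parallel": ("DS90UB933", "DS90UB934"),   # parallel-interface imager
    "csi2":     ("DS90UB953", "DS90UB954"),   # CSI-2 imager
}

DISPLAY_SERDES = {
    "rgb":  ("DS90UB921", "DS90UB922"),       # RGB-format display link
    "oldi": ("DS90UB947", "DS90UB948"),       # OpenLDI, higher resolutions
}

def rearview_chain(camera_if: str, remote_display: bool,
                   display_if: str = "rgb") -> list:
    """Return the ordered component chain for a CMS rear-view path."""
    ser, des = CAMERA_SERDES[camera_if]
    chain = ["imager", ser, "FPD-Link III coax", des, "Jacinto TDAx processor"]
    if remote_display:
        # A display far from the processor needs a second serdes hop.
        dser, ddes = DISPLAY_SERDES[display_if]
        chain += [dser, "FPD-Link III coax", ddes]
    chain.append("rearview mirror display")
    return chain

print(" -> ".join(rearview_chain("csi2", remote_display=True, display_if="oldi")))
```

The two dictionaries mirror the article's two decision points: the camera interface picks the first serializer/deserializer pair, and a remote display adds the second hop.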
Figure 3 shows the connections between these devices when using a display onboard with an applications processor.

The second CMS use case is side mirror replacement. The camera must be located where the mirror used to be, and its video data displays the view the driver would have seen in the mirror. To achieve this, the camera data is serialized and sent over an FPD-Link III coax cable to a remote display located in the upper part of the door panel, or included in the rearview mirror display. With a camera and display, the side view can now sit in a more direct line-of-sight location for the driver. For example, if the displays for both side view and rear view are included in the rearview mirror, the driver only needs to look in one place.
Another option available in a side mirror replacement is to add a second co-located camera with the first, but at a different viewing angle. The benefit of this setup versus a standard mirror is that with two differently angled cameras, one camera can be used for the view that a side mirror would have provided and the second camera can provide a wider field of view for blind-spot detection and collision warning features. Figure 4 shows a two-camera side mirror replacement system.

The question you may be asking now is why drivers need cameras and displays if a mirror achieves most of the same functionality. The answer lies in the features that cameras can provide beyond mirrors alone. If only a side mirror exists, side collision avoidance is solely up to the driver. With a camera, detecting a potential collision before a lane change could trigger vehicle warning alerts that keep drivers from making an unwise maneuver. Panoramic rear views from wide field-of-view (FOV) rear cameras, or a separate narrowly focused backup camera, give the driver different lines of sight and reduce or eliminate blind spots in a way that would not be possible with mirrors alone.
This is just the beginning, though, because as vehicles move from driver assistance systems to autonomous systems, a CMS can be incorporated into sensor-fusion systems. CMS has the opportunity to incorporate ultrasonics and possibly even radar. Fusing rear and side cameras with ultrasonics adds the capability to assist drivers in parking, or even to park the vehicle for them. Radar fused with the side cameras will add an extra measure of protection for lane changes and even side collision avoidance.
To learn more about how to implement sensor fusion, check out the follow-up blog posts on park assist sensor fusion using CMS and ultrasonics or front sensor fusion with front camera and radar for lane departure warning, pedestrian detection and even assisted braking.

What Happens When Autonomous Vehicles Turn Deadly?

Consumers and auto execs alike were horrified by the news that a self-driving Uber vehicle had hit and killed a pedestrian. The incident prompted Uber to ground its fleet of self-driving cars while the National Transportation Safety Board (NTSB) and the National Highway Traffic Safety Administration (NHTSA) reviewed the accident to determine who was at fault.

Uber is only one part of the growing autonomous vehicle sector, but the accident sent shockwaves throughout the entire industry. It’s the kind of incident that could thwart plans for AV deployment, attract a new level of scrutiny from lawmakers, and erode consumer confidence in a vehicle’s ability to drive itself.

Many in the auto industry wouldn’t even respond to a request for comment, but Nicolas de Cremiers, head of marketing at Navya, shared his reaction to what happened last March.

“As with any sector, human error is a possibility,” said de Cremiers, whose company produces autonomous shuttles and cabs. “It is crucial that we, as suppliers of autonomous mobility solutions, come together with communities and municipalities to begin taking steps towards creating safety standards and comprehensive measures for the upcoming Autonomous Vehicle Era in smart cities.”

de Cremiers remained optimistic about the future of AVs, adding, “In working towards a more fluid and sustainable future, by improving traffic flow and reducing congestion in urban centers, we will ultimately upgrade the quality of life while raising safety standards for a world in perpetual motion.”

As far as regulations are concerned, Danny Atsmon, CEO of Cognata, a startup specializing in AV simulations, said there needs to be some “common procedures” before these vehicles are publicly deployed.

“It’s not a bad idea to have some commonality and standards among the different AV providers,” said Atsmon. “I do believe that after this incident, there are high chances that it will lead to some regulations.”

Gil Dotan, CEO of Guardian Optical Technologies, said it is the industry’s responsibility to “make sure we learn the most and make our tech smarter and more robust.”

“This will push carmakers and tech providers to be more cautious and responsible,” said Dotan, whose company is developing sensing tech for inside the cabin. “This has precedents in other industries, like aviation and space travel, where unfortunate events have occurred. The last thing we should take out of this is to stop our efforts.”

Dotan is among those who see the greater good in what AVs could achieve by eventually reducing the number of fatal car accidents. Atsmon agrees, but he said the incident is a reminder that AVs “still have years of development and a long validation process before it can be released on the road.”

Where does this leave Uber, the company at the center of it all? Jeffrey Tumlin, principal at Nelson\Nygaard, a transportation planning consultancy, said the video released by the Tempe Police Department is “remarkably damning.”

“Yes, the victim crossed a high-speed road – in the dark, on a curve,” said Tumlin. “But all the tech on the vehicle did nothing to slow the car or alert the human observer. While I still believe that AV tech can result in huge safety benefits, whatever tech was out there on the roads should not have yet been cleared for human trials.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

In-Car Monitoring Technology for a New Level of Safety

Automakers are hastily working on new ways to monitor everything around the vehicle, but what about the driver sitting behind the wheel, or the passengers sitting in back? Thus far, car interiors haven’t received nearly as much attention.

That could change with the arrival of Tel Aviv-based Guardian Optical Technologies. The company is building sensing technology that will allow its potential users (such as OEMs) to take a closer look at what goes on inside the cabin. Its goal is to help manufacturers produce the first generation of automobiles that are “passenger-aware.”

“The data that our sensor can supply is very valuable to all sorts of applications inside the vehicle,” said Gil Dotan, co-founder and CEO of Guardian Optical Technologies.

While this technology could be applicable to other industries, Guardian chose to focus on automotive, which offers a number of possible use cases. For example, it can determine if a driver is drowsy, distracted or holding onto something, such as a smartphone. This information could be used to identify dangerous situations before it’s too late.

“And if you are an insurer, you would want to have this data so you can make sure that you optimize all your algorithms when it comes to charging for insurance,” said Dotan. “Both insurers and OEMs want to figure out what kind of behavior usually leads to accidents. Specifically, if you’re an OEM, you would want to optimize the safety systems inside the vehicle, including proactive systems trying to avoid an accident.”

Could this lead to an autonomous driving mode that’s automatically turned on when drivers aren’t paying attention? It’s too early to say for sure, but it’s one possibility as manufacturers grapple with the rise in auto accidents.

“Saving lives is something we’re definitely interested in,” Dotan added.

Learning from the Road

Guardian’s technology has yet to be deployed, but early tests have revealed an interesting look at the way passengers behave when the vehicle hits a bump in the road. Dotan found that while objects jostle with the car’s movement, humans tend to come back to their original posture. This could be helpful in designing better, more supportive seats for tomorrow’s automobiles.

What about non-human passengers, such as pets? Dotan said it would be “very hard” to tell the difference between the various types of dogs, particularly those that differ in size. He believes that machine learning could help, along with the addition of 3D depth-mapping, which offers a greater level of in-car monitoring.

“Once we add the 3D aspect, you will find the outcomes to these algorithms are much more reliable and faster to provide an indication,” he said.

In December Guardian announced that it had raised $5.1 million in Series A funding from Maniv Mobility and Mirai Creation Fund. The company plans to use the funds to bring on more talent and to prepare its technology for production.

“We want to be in the assembly line,” said Dotan. “That’s our first go-to-market objective. Our sensor would also be very well suited for the aftermarket, but our first focus is OEMs.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

Could In-Car Ads Lead to Cheaper Mobility?

Ride-hail services like Uber and Lyft are expected to be among the biggest beneficiaries of autonomous vehicles, since the cars could theoretically pick up passengers 24 hours a day. On the downside, these services will then need to purchase or lease new vehicles from automakers, an expense they currently avoid by having drivers use their own automobiles.
With so much technology going into them, self-driving cars are likely to be very expensive. How will Uber and Lyft – or any taxi-type service – pay for them without increasing their fees?

Is In-car Advertising the Solution?

One solution is personalized in-car advertising. Using data gathered from customers’ phones, the car could deliver targeted ads that provide steady revenue for ride-hail services.
“I’m sure everything will be tried,” said Marc Weiser, managing director at RPM Ventures, which invests in seed and early stage companies, including automotive IT. “We’re not doing it yet, so why would we start? As long as the economics work…[but] if they don’t, that’s when you could start to see people paying for ads not to be there.”
Higher fees for no ads? That sounds like the model used by some streaming video services, including YouTube.
Jeffrey Tumlin, principal at Nelson\Nygaard, a transportation planning consultancy, thinks this model is inevitable.
“Of course that’s the market we’re heading to,” said Tumlin. “And given the business model for mobility, I would expect it’s going to be much more like the Hulu model, where the only thing you pay for is to turn the ads off. Think about advertising: when you are [in a car], you’re in a confined space. You’re surrounded by surfaces. The [car] knows who you are, where you are, where you’re going. It has your credit card information. It has anything that’s available about you online.”

Fearing the Ad Invasion

Grayson Brulte, co-founder and president of Brulte & Company, hopes that is not the case. He is not looking forward to a ride-hail service that bombards passengers with any form of advertising.
“[But] I am very fascinated and utterly interested in hyper-local experiences that can be monetized,” said Brulte, whose company develops and implements technology strategies for a global marketplace. “If you’re driving through a city and you have an augmented reality windshield, and it pops up an offer for Chipotle or something else – a paid or sponsored advertisement inside the augmented reality world – I think that’s interesting.”
Brulte said he is not bullish on the idea of a car that’s covered in ads or forces passengers to watch a screen. “It’s too invasive,” he said, comparing it to the loud and repetitive commercials that are displayed in some taxis. “It takes away from the whole experience. People try to turn it off all the time. They hate it. I don’t see that transferring to autonomous vehicles because there will be a backlash.”
Before that happens, Weiser theorized that ride-hail services might experiment with offering faster pickups for a monthly fee.
“Will I pay $500 a month to ensure that when I hit the ‘Lyft’ button to hail a ride, the car is there within five minutes every time?” Weiser questioned. “And if not, my fare is entirely free? We haven’t seen any of that. Right now the only differentiating payment models are [based on the] vehicle or the number of people inside the vehicle.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

Biofuels for the greener Future?

Apart from producing huge amounts of pollution, fossil fuels are bound to run out. That’s a fact. Scientists have been searching for renewable power sources for ages, and the most successful one to date is electricity. Electric vehicles have become a very popular topic recently, and car manufacturers of all sorts are interested in making the best electric car, from uber-expensive Lamborghinis to more affordable Nissans.
However, just because we have one good method (that still needs improving in production and the charging time), we shouldn’t overlook other, maybe even better, alternatives to fossil fuels.

What are Biofuels?

Although many consider biofuels the most promising alternative, the industry is still in its infancy and, for the time being, provokes as much controversy as the question of how green electric cars really are. Each type of biofuel presented below can be debated in terms of its “eco value”, viability and efficiency, but what we know for sure is that they are all worth a closer look.

Made up of hydrocarbons, fossil fuels such as natural gas, fuel oil and coal formed from organic materials like plants and animals. Thanks to the heat and pressure of the earth’s crust (and the odd hundred million years), they have been converted into fuel.

Biodiesel is produced through the chemical reactions known as transesterification and esterification, in which vegetable or animal fats and oils react with short-chain alcohols such as methanol or ethanol. Ethanol tends to be used thanks to its low cost, but methanol yields greater conversion rates.

Bioalcohol is made with crops such as corn, sugarcane and wheat, or with cellulosic plants like corn stover, wood and some grasses. These crops aren’t naturally rich in sugars, but the grains are high in starch, and the rest of the plant is rich in cellulose and hemicellulose. 

Instead of being sent to landfill, waste can go through an anaerobic digestion process that creates gases known as landfill gases (LFGs). Since the gas produced is roughly 50% methane, it can be used like any other fuel gas. It is estimated that 1 million tonnes of municipal solid waste (MSW) could yield about 450,000 cubic feet of biogas per day.
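As a back-of-the-envelope check, the figures above scale linearly: 450,000 cubic feet of biogas per day per million tonnes of MSW, of which about half is methane. A small sketch, with the function name my own:

```python
# Scaling the article's figure: ~450,000 cubic feet of biogas per day
# per 1 million tonnes of landfilled MSW, with the gas ~50% methane.

BIOGAS_FT3_PER_DAY_PER_MT = 450_000  # per 1 million tonnes MSW (article figure)
METHANE_FRACTION = 0.5

def daily_methane_ft3(msw_million_tonnes: float) -> float:
    """Estimated daily methane yield in cubic feet for a given MSW mass."""
    return msw_million_tonnes * BIOGAS_FT3_PER_DAY_PER_MT * METHANE_FRACTION

# A landfill holding 3 million tonnes of MSW:
print(daily_methane_ft3(3))  # 675000.0
```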

Algae-derived fuels go through a process similar to that of biodiesel, in that it is the oil from the algae that is used to produce fuel. There are over 100,000 diverse strains of algae, all with unique properties that could be tailored for fuel production. Researchers say that algae could be 10-100 times more productive than other bioenergy feedstocks.

PV panels (also known as solar panels) use the photovoltaic effect to harvest the sun’s rays and generate electricity, although not very efficiently; the biggest (and best) PV modules have an efficiency of around 22%, which means these units produce (on average) anywhere between 100 and 365 W of power. An electric car uses around 34 kWh to travel around 100 miles.
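Those numbers give a rough sense of scale: dividing the 34 kWh the article quotes per 100 miles by a single module's output shows how many hours of full sun one panel would need per 100 miles of driving. A quick sketch (the function name is my own):

```python
# How many hours of full output one PV module needs to supply
# the ~34 kWh an EV uses per 100 miles (article figures).

EV_KWH_PER_100_MILES = 34.0

def panel_hours_per_100_miles(panel_watts: float) -> float:
    """Hours of full module output needed to cover 100 miles of driving."""
    return EV_KWH_PER_100_MILES * 1000 / panel_watts

for watts in (100, 365):
    print(f"{watts} W module: {panel_hours_per_100_miles(watts):.0f} h per 100 miles")
```

Even the best module in that range would need roughly four days of uninterrupted full output per 100 miles, which is why single-panel charging is not practical.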

Possible Biofuel Ingredients

Amidst hard work and effort to invent the best biofuel possible, scientists have been coming up with truly wacky ideas when it comes to eco-friendly car fuelling. Filling up on potatoes, poop or leftovers from your Sunday roast? Check out what could theoretically power your car.

Fossil Fuels vs Biofuels

While researchers regularly develop new variants of fuels, it must be said that biofuels are still some way from matching fossil fuels. A gallon of E85 fuel (85% ethanol, 15% petrol) produces 80,000 BTU of energy, whereas a gallon of regular petrol produces 124,000 BTU, so E85 is less energy-dense. However, that same gallon produces 39% less carbon dioxide (CO2).
Equally, biodiesel produces 75% fewer emissions than its fossil counterpart. There are arguments over the validity of biofuels as a mass-produced alternative to fossil-based fuels, particularly regarding manufacturing, but as a pollutant they are certainly less toxic for the environment.
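The per-gallon figures above can be turned into a direct comparison, showing how much more E85 you would burn to get the same energy as regular petrol:

```python
# Comparing the article's per-gallon energy figures for E85 and petrol.

E85_BTU = 80_000      # BTU per gallon of E85 (85% ethanol, 15% petrol)
PETROL_BTU = 124_000  # BTU per gallon of regular petrol

energy_ratio = E85_BTU / PETROL_BTU            # fraction of petrol's energy
gallons_e85_per_petrol = PETROL_BTU / E85_BTU  # E85 needed per petrol gallon

print(f"E85 delivers {energy_ratio:.0%} of petrol's energy per gallon")
print(f"so ~{gallons_e85_per_petrol:.2f} gallons of E85 match one gallon of petrol")
```

In other words, E85 carries about 35% less energy per gallon, so fuel consumption rises even as per-gallon CO2 emissions fall.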

The topic of biofuels is an endless one. We’ve only presented a few main facts regarding biofuels and, as you can see, it’s a complicated matter. Each problem leads to another and at the moment, it would be impossible to choose the best fuel for the future.
What we know for sure is that the increasing need for sustainability in the automotive industry, along with today’s technological development, does give hope. By being aware of the possibilities, spreading the knowledge and supporting green fuel initiatives, we can make a change! Let’s hope that scientists and engineers work along with technology to develop great solutions.

About the author:

Giles Kirkland is a car tyres expert at Oponeo and a dedicated automotive writer researching new technological solutions. His interests revolve around the revolutionary technologies used in modern cars. Giles passionately shares his knowledge and experience with automotive and technology enthusiasts across the globe.

Uber Post-Mortem: Perception and Distraction

Originally published on Autonomous Driving Research:

With the recent release of video footage of the Uber accident, in which the company’s AV collided with and killed a pedestrian walking a bicycle, we are able to reach some initial conclusions concerning this unfortunate and avoidable incident. This analysis is based on the released Uber camera footage, Uber’s deployed sensor equipment, and experience with current AV system behavior.


From reviewing the video, we see the pedestrian (“Ped”) crossing perpendicular to traffic from Lane-1 to Lane-2 in the dark, with the ultimate collision in Lane-2. Uber’s fleet uses onboard Lidar, a version of Velodyne’s HDL-64. This unit provides 360-degree coverage from its central viewpoint, with a viewable distance of 120 meters (~400 feet). With the roof-mounted position (~80 inches from the ground) and that range, reliable object detection at even half that distance is achievable. And to clarify Lidar capability: it works in complete darkness, with no visible illumination required. Given that the Ped originated in the median and proceeded from Lane-1 into Lane-2, we have an object moving on a trajectory into the vehicle’s planned path. Uber’s AI should have detected the Ped as an object in motion towards the vehicle with a possible threat of collision. In addition, Uber claims to have 360-degree Radar coverage; however, we question the coverage radius, elevation, and range. On the current Uber fleet we see no recognizable lower Radar units (they could be hidden), and we have not been able to confirm any Radar in the roof sensor rack.

Therefore, with Radar properly sensor-fused with the Lidar data, the Ped should have been detected even in Lane-1 at the reported 38 mph, at a distance of at least 150 feet. The Uber-AI would have had the opportunity to slow and/or brake to better assess the moving object approaching the vehicle in the adjacent lane.
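A quick calculation supports this point: at the reported 38 mph, detection at 150 feet leaves well over two seconds of warning and far more distance than is needed to stop. The ~7 m/s² deceleration below is my assumption for firm braking on dry pavement, not a figure from the analysis.

```python
# Checking the margin at the reported speed and detection range.
# The deceleration value is an assumption (firm braking, dry pavement).

MPH_TO_MS = 0.44704
FT_TO_M = 0.3048

speed = 38 * MPH_TO_MS           # ~17.0 m/s
detection_range = 150 * FT_TO_M  # ~45.7 m
decel = 7.0                      # m/s^2, assumed

time_to_reach = detection_range / speed    # seconds of warning
braking_distance = speed**2 / (2 * decel)  # v^2 / (2a), distance to stop

print(f"time to cover 150 ft: {time_to_reach:.1f} s")
print(f"stopping distance:    {braking_distance:.1f} m of {detection_range:.1f} m available")
```

Even with a more conservative deceleration, the stopping distance remains well inside the 150-foot detection range, which is why the absence of any braking is so striking.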

There has been discussion about the challenge of “object classification” in this situation, given that it was a person walking a bike with bags, and whether the Uber-AI was confused about what the object was. If the AI is unable to classify an object, should it just ignore it? Was this stretch of roadway designated to exclude possible pedestrians or foreign object debris (FOD)? This is sometimes done on highways to reduce sensitivity and eliminate excessive, jerky braking/steering for small miscellaneous items like plastic bags and minor road debris. However, this is a low-speed 35 mph zone. Is it possible that in Uber’s software pedestrians are only tracked while in dedicated crosswalks, or bikes only in bike lanes? I hope not. We must be prepared for any possible scenario; pedestrians and bikes should be monitored in all areas, just like other objects that could come into conflict with travel.
If there was any confusion as to what the Ped was in this incident (a pedestrian and a bicycle together make a larger single detectable object), brake application should have been the first response, given the larger object obstructing the traveling lane. From the released footage, the Ped obstructed at least 50% of the lane, which constitutes a lane blockage. The question is why the Uber-AI did NOT determine the Ped to be a collision factor early, while in Lane-1, and slow/brake, or emergency-brake or make an evasive lane change when the Ped was directly in front in Lane-2. (To date it has been reported that no brake application was made until after the collision.)
The deeper issue is the claimed Lidar and Radar coverage of the vehicle: 1) are the sensors fused properly and the detection distances optimal; 2) are the sensors properly calibrated; and 3) are objects crossing the planned path properly detected as collision potentials? We would need to see Uber’s proprietary UI view to get the full story of the software’s perspective on the surrounding objects.
(No such release has been made as of this writing.)

Safety Driver

The purpose of safety drivers is to ensure that vehicles in development are acting safely and performing as well as human drivers or better. The expectation is that those acting as safety drivers need to be even MORE conscientious and aware of their surroundings, as they shoulder the weight of not only their personal safety, but also that of the people and property around the vehicle, the company they represent, and the AV industry as a whole. In this situation, we see the Uber safety driver is clearly distracted by something in the lower driver-right of the dash. The eye movements continuously shift down and at times are pinned to that lower, non-driving visual plane. We are not sure if the driver is focused on the center dashboard screen or a cellphone, but the driver’s focus is shared between driving and something else.
Distracted driving by a safety driver cannot be tolerated, given this event. Given the location where this occurred, the driver should have been more vigilant in scanning the darkened road, focused OUTSIDE the vehicle. The driver would then have had the opportunity to brake and swerve to avoid the collision. But without safety drivers continuously focusing outside the vehicle, we will continue to see the potential for collisions until AV systems can be taught to work in highly dense, unpredictable pedestrian environments such as downtowns, campuses, and business parks. A static object in the median is one case, but a moving object heading from the median into the vehicle’s travel lane is a more critical scenario to attend to sooner rather than later. In this case, the last safety net (the safety driver) was not effective, given the gap in this L5 platform.

With the rapid development of AVs towards L4/5 driverless mode, we may need to take a more conservative approach to ensure vehicles can operate in unexpected “edge” cases. Although testing and development costs may rise, a safety-driver attentiveness tool, similar to the eye-motion monitors now being developed, should be used to ensure safety drivers stay focused and are properly prepared to intervene when necessary. This has been the issue with L3 vehicle development and the “handoff conundrum” we have all been grappling with. In these situations where safety drivers are needed to operate AVs, the vehicles are technically L3/4 vehicles that require human intervention during operation.

Therefore, we can learn from this situation about the handoff, and about what can happen if you are distracted at 40 mph at night versus 75 mph on a freeway. Different levels of human interaction are necessary during testing, and we will need to bridge this human gap as soon as possible while the AI trains towards L5 capability. But until we have accumulated millions of miles per platform in all conditions, we will need to address driver attentiveness to eliminate future incidents.


Given the unfortunate distinction of this being the first self-driving car casualty for our industry, we must all carefully learn from this incident.
In this case, the fault resides in two distinct areas: 1. Sensor/AI false-negative object detection resulting in non-evasive maneuvering, and 2. distracted driving by the safety driver.
This situation is not uncommon: pedestrians crossing the road in non-designated areas. Anyone who has driven in denser environments such as downtowns, crowded campuses, or highly populous geographies such as China and India knows that unexpected pedestrian crossings are a common occurrence, even at night. AVs will need to be vigilant and leverage their advanced capabilities to see beyond human sight, understand the objects they recognize (even unclassified ones), and act accordingly to save pedestrians and passengers alike. Those are the advanced capabilities that will allow the AV industry to drive deaths and accidents towards zero, and they are the culmination of the resources we are all committing to this mobility effort. A single death, even while developing these technologies, is unacceptable, and all participants must clearly understand why it occurred.
We ask Uber to share further AI data related to this incident with the AV community so we can all learn from the gap.

About the author:

Marcus Strom is currently Founder and Executive Director of Autonomous Driving Research (ADR), where he has been based in Phoenix, Arizona since 2015. ADR is an independent organization focused on autonomous driving, with the mission to evaluate, challenge, inform, and propose solutions for the successful integration of autonomous vehicles with current traffic systems. Through ADR, Marcus completed contract work in Technical Operations at Google/Waymo, based in Chandler, Arizona. He has maintained self-driving vehicle fleets, installing, diagnosing, calibrating, qualifying, maintaining, troubleshooting, and operating advanced self-driving vehicle hardware and software systems and OEM platform systems. He consults for many autonomous start-up ventures, financial investment firms, and third-party suppliers in the industry. Marcus received his B.B.A. in Management Information Systems at Texas Tech University and his M.B.A. in Logistics, Materials, and Supply Chain Management from Arizona State University.

Is Image Quality relevant for Cameras in ADAS?

“Cameras are everywhere” is a claim that can be easily justified when looking around and seeing cameras in everything from consumer electronics to surveillance systems and cars. Cameras in a car, which assist the driver with various tasks including driving at night and reversing, are part of an array of sensors that make ADAS possible. The image quality performance of a camera is always assessed relative to its intended function. For example, if a camera is intended to deliver an image to a human observer, then the observer should perceive all of the necessary information in a “clean looking image.” What makes a “clean looking image” for a human observer is of course highly subjective, and this naturally creates the main challenge of image quality testing. Nevertheless, many objective measurements are available that correlate well with a human observer’s assessment of image quality.
An automotive camera treated as a sensor for an ADAS, however, has very different image quality requirements, and its output is not described as “clean looking.” Nonetheless, it is extremely important that we analyze the image quality of ADAS cameras to ensure their safety and effectiveness.
Image quality is FUN: Fidelity, Usefulness, and Naturalness. While the importance of fidelity is low when an algorithm (as in ADAS) rather than a human observer is the recipient, usefulness becomes the most important aspect. An image without content or information is not useful for an ADAS. Thus it has become imperative that we create a way to evaluate how much relevant information the image contains.

Spatial Frequency Response

An ADAS has to be able to detect objects at a defined distance; a spatial frequency response (SFR) measurement provides information about the optical performance of the camera. In other words, what level of detail can the device under test reproduce in a way that an algorithm can still detect something? The SFR can be used to derive different objective metrics that show how spatial frequencies in object space are reproduced. The ISO 12233:2014 standard describes the most common ways to measure this. Even though this standard is aimed at digital photography, it is widely accepted in other applications, such as ADAS and other industrial camera applications.

Many engineers have a specific chart in mind when they hear “ISO 12233” (see Figure 1), one based on subjective evaluation by an observer. The chart in Figure 1, however, is not recommended because it only provides the resolution limits and not the entire SFR. It is much better to use the s-SFR method described in the standard, which is based on a sinusoidal Siemens star. Another very popular method is the e-SFR method, based on the reproduction of a slanted edge. It is important to keep in mind that the e-SFR method is easily influenced by sharpening algorithms. Sharpening is a method to improve the visual sharpness of an image for a human observer and a popular image enhancement algorithm.

Unfortunately, what benefits the human observer is counterproductive for object detection and classification, a typical task for ADAS.
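The core signal chain behind the slanted-edge idea can be sketched on a synthetic one-dimensional edge. This is an illustration only, not an ISO 12233-conformant implementation (the real e-SFR method uses a slanted 2-D edge with sub-pixel binning and windowing), and the Gaussian blur width is an arbitrary assumption:

```python
import numpy as np

# Sketch of the e-SFR idea on a synthetic 1-D edge (illustrative only).
# Chain: edge spread function (ESF) -> derivative gives the line spread
# function (LSF) -> FFT magnitude gives the SFR/MTF.
x = np.linspace(-5, 5, 256)
blur = np.exp(-x**2 / 2.0)        # assumed Gaussian lens blur
esf = np.cumsum(blur)
esf /= esf[-1]                    # ideal step edge after blurring

lsf = np.gradient(esf)            # differentiate ESF into LSF
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                     # normalize to 1 at zero frequency
freq = np.fft.rfftfreq(lsf.size)  # spatial frequency in cycles/pixel

# MTF50: first frequency where contrast drops below 50%, a common sharpness metric
mtf50 = freq[np.argmax(mtf < 0.5)]
print(f"MTF50 = {mtf50:.4f} cycles/pixel")
```

Because the chain ends in a derivative, any edge enhancement applied by the camera pipeline inflates the measured response, which is exactly why the e-SFR method is sensitive to sharpening.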

HDR & Noise

Common scenarios for an automotive camera tend to have a very high dynamic range, i.e., a large difference between the darkest and brightest regions of a scene. A typical example is a car approaching the entrance or exit of a tunnel. The ADAS needs to be able to detect objects such as lane markings or other cars both inside and outside the tunnel, across a huge difference in luminance. To provide information in all parts of the image, it is very common for cameras in ADAS to combine over- and underexposed images into a high dynamic range (HDR) image. Using this technology, which is performed at the sensor level in the latest systems, the camera can potentially reproduce a contrast of 120 dB or more. 120 dB corresponds to a contrast of 1:1,000,000 between the brightest and darkest regions of a scene.
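The dB figure can be checked directly, since dynamic range in decibels is defined as 20 times the base-10 logarithm of the contrast ratio:

```python
# Sanity check of the dynamic range figure: dynamic range in dB is
# 20 * log10(contrast ratio), so 120 dB is a 1:1,000,000 contrast.
def db_to_contrast(db):
    return 10 ** (db / 20)

print(int(db_to_contrast(120)))  # → 1000000
```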
As always, nothing comes easy: the HDR algorithms may also introduce artifacts typical of these kinds of cameras, namely SNR (signal-to-noise ratio) drops. The higher the SNR, the lower the disturbing impact of noise. When plotting SNR versus scene light intensity, we normally see the SNR increase with intensity. Therefore, we can assume that we have low SNR in low light and good SNR at all intensities above that. When the camera merges several images into a single HDR image, this assumption no longer holds, and intensity ranges in the midtones can also show poor SNR. A low SNR leads to a loss of information and serious image quality problems for ADAS.
The SNR is a very common metric for all kinds of signals, but the meaning of SNR values for the application is limited. In photography, SNR has been superseded by the ISO 15739:2013 “visual noise” metric, which correlates much better with the human observer. A new approach to evaluating a camera system with a direct correlation to the usability of the image is the contrast detection probability (CDP). Unlike SNR, CDP directly indicates whether the system under test can detect a specific object contrast or not. Essentially, it can also answer the question of how serious an observed “SNR drop” really is. Engineers from Bosch have presented the CDP measurement, and it is currently under discussion within the IEEE-P2020 initiative to become an official industry standard.


Color Reproduction

Cameras in photography are designed to mimic human color perception, while cameras for ADAS do not have the same constraints. Cameras used as detectors of objects, rather than as a source of video or images for a human observer, do not focus that much on color. Nevertheless, these cameras need at least a basic understanding of the difference between a white line and a yellow line on a road. It also does not hurt to know whether a traffic light is red or green, but the camera does not need to reproduce colors very accurately.
Some ADAS provide an image to the human observer (such as a bird’s-eye view for parking assistance) that is merged from the signal streams of several cameras. While it is not very important that the colors be perfect, it is important that all cameras be nearly equal in color reproduction. If the manufacturer fails to calibrate the various cameras consistently, it will be very obvious where the stitching between the different images occurs.


LED Flicker

Light sources based on LED technology have manifold advantages over traditional technologies and are spreading rapidly across applications. Headlamps have become a design element and brand differentiator, while LED traffic lights require far less maintenance and use less energy. The problem for camera systems lies in how these LED light sources are controlled. Pulse width modulation (PWM) turns the LED on and off at a defined frequency and for a defined fraction of each cycle (the “duty cycle”). As the human visual system integrates intensity over time, the human driver does not perceive any of this and sees a light source of constant intensity. A camera system integrates light energy only during the exposure time, and if this exposure time is short, the camera might show an artifact known as “flicker.” Camera systems that use a short exposure time suffer the most from this effect.
As HDR technologies are often based on combining images with different exposure times, highlights can be noticeably affected. This artifact can lead to low-frequency blinking of visible light sources, e.g., the headlamps of approaching cars or the brake lights of cars in front. Both are visually annoying and, more importantly, can feed wrong information to the ADAS. It is important for the ADAS to be able to differentiate between brake lights and a turn indicator, and between a normal car and an emergency vehicle approaching from behind. As we might lose information or receive wrong information due to flicker, today’s automotive cameras require strategies (in hardware or software) that suppress the flicker effect.
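The interaction between PWM and exposure time can be illustrated with a toy simulation. The 90 Hz frequency and 25% duty cycle here are illustrative assumptions, not measurements of any real lamp:

```python
import numpy as np

# Toy flicker model (illustrative assumptions, not a real lamp):
# a 90 Hz PWM LED with a 25% duty cycle, and an exposure window whose start
# phase is random relative to the PWM cycle. We estimate the probability that
# the exposure misses the LED "on" phase entirely, i.e. a hard flicker frame.
rng = np.random.default_rng(0)
pwm_hz, duty = 90.0, 0.25
period = 1.0 / pwm_hz
on_time = duty * period           # LED is on during [0, on_time) of each cycle

def miss_probability(exposure_s, trials=20_000):
    starts = rng.uniform(0.0, period, trials)
    # The window misses the LED only if it lies entirely in the off interval
    # [on_time, period); any wrap-around reaches the next cycle's on phase.
    misses = (starts >= on_time) & (starts + exposure_s <= period)
    return misses.mean()

for exp_ms in (0.1, 1.0, 5.0):
    print(f"{exp_ms} ms exposure: miss probability = {miss_probability(exp_ms / 1000):.2f}")
```

The result reflects the usual rule of thumb: once the exposure time exceeds the PWM off-time (here about 8.3 ms), the camera can no longer miss the pulse entirely and the hard flicker case disappears, while short exposures frequently capture the LED as completely dark.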
To effectively test a camera system for flicker, the device has to be tested in an environment that can produce a large variety of PWM frequencies and duty cycles and allow phase shifts between the camera capture frequency and the PWM frequency. The IEEE-P2020 initiative has a workgroup developing a standardized way to measure and benchmark the flicker behavior of camera systems used in ADAS.

About IEEE-P2020:

The IEEE-P2020 initiative is a joint effort of experts from various fields of image quality in the automotive environment. The workgroup has given itself the task of creating a standard that describes how camera systems shall be benchmarked and evaluated, to ensure that these devices deliver the performance an ADAS requires to make the best possible decisions. Members of the workgroup are affiliated with all parts of the typical automotive supply chain (OEM, Tier 1, Tier 2, …), test institutes (like Image Engineering), and academia. The first output of this workgroup will be a whitepaper containing an overview of the requirements for a test system for automotive cameras and a gap analysis of existing standards.

Expert Interview with Zielpuls: Changing the existing Automotive Market Landscape

Markus Frey
Managing Director, Zielpuls

Arnd Zabka
Managing Partner, Theme Cluster Manager for Connected and Autonomous Driving, Zielpuls

We spoke with Zielpuls MD Markus Frey and Arnd Zabka, Managing Partner, Theme Cluster Manager for Connected and Autonomous Driving, about the company’s relation to the evolution of ADAS, vehicle automation and new mobility concepts. The two experts pointed out upcoming market changes for the automotive sector, important hurdles to overcome on the way to level 5 autonomy, and exciting future projects for Zielpuls.

we.CONECT: What is your company’s/your relation to the evolution of ADAS, vehicle automation and new mobility concepts?

Zielpuls: From our point of view, the transformation of the mobility market that is just beginning is the biggest challenge. It goes hand in hand with new technologies, new development and validation methods, new players, new customer groups, and new business models. It will change the game. Looking back at mobility studies from 2013, we think this should have happened much faster. We at Zielpuls help our customers prepare their own organizations for this new challenge and act as a catalyst to bring these new technologies to the customer.

we.CONECT: What sets you apart from similar service providers in the industry?

Zielpuls: We at Zielpuls think holistically. Our business is the link between strategy and realization. On the one hand, we help to start development immediately; on the other, equally important, we start the change management in the internal organization and its way of collaborating. We bring these two workstreams together.

we.CONECT: In your opinion: What are the big hurdles towards autonomous driving (autonomy level 5) that need to be surpassed by different stakeholders? How should they be resolved?

Zielpuls: The first prototypes will be available quickly. The critical question is: how do we get safety, security, privacy, and sustainability into these systems, and in a way that is cheap enough to be available to everybody? In the short term, test data acquisition and the test and validation strategy for autonomous systems are the hurdles. We have to work hard on test automation and simulation of the complete system to handle this.

In the long term, there will be massive disruption in how mobility works, and long-term success requires massive change from today’s mobility suppliers. There will be new business models associated with a new definition of the car. As an example, additional value could come from transporting and picking up children, or from personal delivery services; your financial consultant might pick you up at home and bring you to work, and in return get your time for his service and product presentation.

we.CONECT: According to your expectations: How will autonomous driving technology change the existing automotive market landscape?

Zielpuls: With the shift from ownership to usership, there will be many more different cars and use cases in the world. The market will grow with new user groups, and new players will enter it. By lowering the entry barrier through car-as-a-service, mobility will become accessible to new customer groups such as children, people without a driving license, or retired persons. Developing and producing a car will become much easier for new players in the future and less heavy-industry focused.

we.CONECT: With smart cities beginning to roll out in the near future, how do you see the car markets in less developed cities, countries, or regions being affected?

Zielpuls: There is an unaddressed market for mobility as big as today’s car market. So I think that the markets in less developed cities will grow rapidly, hand in hand with people’s increasing need for mobility without owning a car. Take the market for consumer goods in emerging nations as an example: the big players had problems selling their products in large sizes, such as laundry detergent, so they successfully offered single portions that people could afford.

we.CONECT: Do you think this is being overlooked by the OEMs & Tier 1s to a certain degree?

Zielpuls: The OEMs are not focusing on the new market groups without a driving license, so new players can easily enter the market. The next seven years will lay the foundation for the new mobility business; every gap you do not fill will be filled by a new company. OEMs will have to change from car manufacturers into service providers for their own cars. Additionally, they will need solutions for fleet management, billing systems, and various service provisioning.

we.CONECT: What project would you like to work on next in this sector if given the opportunity?

Zielpuls: We would like to work on the step from level 3 (4) to level 5: from strategy to realization. We want to take our customers on a journey into the future: push new technologies and services quickly to pilot customers, incubate them into mass-market products, and in parallel bring our own organization up to speed while connecting development across industries and sectors.

we.CONECT: With the automotive industry moving into a new era, what do you think the car makers are not focusing enough on and how could this be a problem?

Zielpuls: It is all about (development) speed and talent.

By moving into a new era, car manufacturers have to focus on multiple aspects at the same time. On the one hand, development of the “classical” car has to go on, with more and more new systems being integrated (electric, connected, autonomous, service orientation, new interiors, etc.). On the other hand, completely new cars and architectures have to be developed in parallel. These cars will carry significant added value in software, new engine systems, and a completely new architecture. To succeed, this architecture has to be open to new technologies and platform systems. One important step is to open up systems engineering and think in terms of collaboration. No player in the market has enough financial power to star in every technology field. The star will be whoever brings all technologies together into one service and can offer it to the customer.

we.CONECT: What do you believe would be a solution to this? And what can Zielpuls offer to OEMs & T1s to work towards this solution?

Zielpuls: A spin-off company in an attractive location and environment can be a possible solution, as it attracts highly educated employees. Zielpuls is such a company. We can help car manufacturers develop new state-of-the-art solutions, we accompany them on their new path, and we build new organizations.

About the Interviewees:

Markus Frey: has been managing partner of Zielpuls GmbH since its foundation in 2008 and is responsible for Finance & Controlling and Information Technology. At the same time, he is Managing Director of the Zielpuls subsidiary in China, with offices in Shanghai and Beijing. Before becoming an entrepreneur, he worked for several years as a consultant for clients in the automotive and logistics industries. His consulting focus at Zielpuls lies in the areas of IT strategy and digitalization. In particular, Mr. Frey advises in the fields of autonomous driving, driver assistance systems, new mobility concepts, smart connected products and digital transformation. Mr. Frey made Zielpuls GmbH known to technology groups in more than 30 markets worldwide for the sustainable implementation and acceleration of internal development processes.

Arnd Zabka: has been a Managing Partner at Zielpuls GmbH since January 2018 and is responsible for the further development of the technology cluster “Connected and autonomous driving”. Before joining Zielpuls, he worked for Altran. Mr. Zabka has 17 years of consulting and project management experience in the areas of automotive electronics development, HAF, ADAS, infotainment, connected products and e-mobility.