Ayrton Boldt

Digital Hubs Lead Editor we.CONECT Global Leaders GmbH

OPTIS Webinar: Take the Lead on the HUD Revolution in Automotive

Involved in Augmented Reality development? Discover OPTIS’ virtual prototyping solution dedicated to the development of automotive Head-Up Displays.
Head-Up Displays (HUDs) will certainly become fundamental in the coming years: they are a major safety feature and, above all, by inspiring drivers’ confidence, they are essential to easing the adoption of Autonomous Driving.
As a new feature, HUDs, and Augmented Reality HUDs in particular, require innovation in design and optimization, with specific attention to Perceived Quality, since the image sits permanently in the driver’s line of sight. On top of that, you still face the traditional constraints of the automotive industry, from frequent design changes to cross-department collaboration.
OPTIS’ dedicated HUD Solution supports you during the virtual prototyping of your HUDs, from entry-level to high-end models. From optical design through analysis to dynamic visualization, discover how its unique simulation capabilities and ease of use support rapid HUD design iterations, automatic optimization and delivery according to your specifications and Perceived Quality targets.
Join our webinar and discover how to test and validate the HUD of your future vehicles with OPTIS’ physics-based simulation solution.

Webinar speakers:

Cedric Bellanger

Product Marketing Manager
OPTIS

Ludovic Manillier

Business Development – Augmented Reality / HUD
OPTIS

Webinar time and date:

The webinar will be held twice on June 12th to give you the chance to join us when it suits you best.

June 12th, 2018 – 10 to 10:30 AM CEST

June 12th, 2018 – 5 to 5:30 PM CEST

Find out more about OPTIS' dedicated HUD solution:

Lite-On Technology Success Story: Lite-On accelerates the development process of HUD products with SPEOS.

OPTIS and EB combine their expertise in automotive solutions to provide a unique, commercial off-the-shelf solution that can be used by carmakers to develop and assess sensor fusion and augmented reality content.

To Mirror or not to Mirror: How Camera Monitoring Systems expand the Driver’s Perspective

This article was authored by Jeramie Bianchi – Field Applications Manager at Texas Instruments.
Objects in the mirror are closer than they appear: this tried-and-true safety warning has reminded drivers for decades that their rearview mirrors reflect a slightly distorted view of reality. Despite their limitations, mirrors are vital equipment on the car, helping drivers reverse or change lanes. But today, advanced driver assistance systems (ADAS) are going beyond a mirror’s reflection, using cameras to give drivers an expanded view from the driver’s seat.
Camera monitoring systems (CMS), also known as e-mirrors or smart mirrors, are designed to provide the experience of mirrors but with cameras and displays. Imagine looking into a rearview mirror display and seeing a panoramic view behind your vehicle. When you look to your side mirror, you see a high-resolution display showing the vehicles to your side. These scenarios are becoming reality, as are other features such as blind-spot detection and park assist.
It’s important to understand the current transition from mirrors to CMS. It’s no surprise that systems in today’s vehicles are already leveraging ADAS features for mirrors. Most new vehicles in the past decade have added a camera to the back of the vehicle or attached a camera to the existing side mirror, with a display inside the vehicle to give drivers a different perspective of what’s behind or at the side of the vehicle.
Figure 1 shows the routing of this rearview camera and display system. The backup display is embedded in the rearview mirror and a cable routes to the rear of the vehicle.

The side mirror is different because the camera is located on the mirror. The side mirror still exists for viewing, but typically its camera works when the driver activates a turn signal or shifts in reverse. During a turn or a lane change, the camera outputs a video feed to the infotainment display in the dashboard and may show a slightly different angle than the side mirror itself, as shown in Figure 2.

Now that I’ve reviewed current CMS configurations that pair a mirror with a camera and display, it’s worth noting that a full CMS rearview mirror replacement can be achieved by installing one or two cameras on the rear of the vehicle.
From the rear of the vehicle, video data from an imager is input to TI’s DS90UB933 parallel-interface serializer or DS90UB953 serializer with Camera Serial Interface (CSI)-2. This data is then serialized over a flat panel display (FPD)-Link III coax cable to a DS90UB934 or DS90UB954 deserializer, passed to an applications processor such as a Jacinto™ TDAx for video processing, and finally shown on a rearview mirror display. If the display is located far from the Jacinto applications processor, you will need a display-interface serializer and deserializer to route the data over coax again: the DS90UB921 and DS90UB922 red-green-blue (RGB) format serializer and deserializer, respectively, or, if you’re implementing higher-resolution displays, the DS90UB947 and DS90UB948 Open Low-Voltage Differential Signaling Display Interface (OpenLDI) devices.
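A back-of-the-envelope calculation shows why a serialized high-speed link like FPD-Link III is needed at all for such a camera feed. The resolution, frame rate and bit depth below are assumed example figures, not values from the article:

```python
# Raw bandwidth of an uncompressed CMS camera stream.
# 1280x800 @ 60 fps, 24-bit RGB are assumed example figures.
width, height = 1280, 800          # pixels
fps = 60                           # frames per second
bits_per_pixel = 24                # RGB888
raw_bps = width * height * fps * bits_per_pixel
print(f"{raw_bps / 1e9:.2f} Gbit/s")  # ≈ 1.47 Gbit/s of uncompressed video
```

Gigabit-per-second rates like this are far beyond what a simple parallel cable can carry across a car body, which is why the video is serialized onto coax.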
Figure 3 shows the connections between these devices when using a display onboard with an applications processor.

The second CMS is the side mirror replacement portion. The camera must be located in the same location where the mirror used to be. This camera’s video data displays a view of what the driver would see in the mirror. To achieve this, the camera data is serialized and sent over an FPD-Link III coax cable to a remote display located in the upper part of the door panel or included in the rearview mirror display. With a camera and display, now the side view can be in more direct line-of-sight locations for the driver. For example, if both the displays for side view and rear view are included in the rearview mirror, the driver only needs to look in one location.
Another option available in a side mirror replacement is to add a second co-located camera with the first, but at a different viewing angle. The benefit of this setup versus a standard mirror is that with two differently angled cameras, one camera can be used for the view that a side mirror would have provided and the second camera can provide a wider field of view for blind-spot detection and collision warning features. Figure 4 shows a two-camera side mirror replacement system.

The question you may be asking now is why drivers need cameras and displays if they can achieve most of the same functionality with a mirror. The answer lies in the features that cameras can provide over mirrors alone. If only a side mirror exists, side collision avoidance is solely up to the driver. With a camera, detecting a potential collision before a lane change could trigger vehicle warning alerts that stop drivers from making an unwise move. Panoramic rear views with wide field-of-view (FOV) rear cameras, or a separate narrowly focused backup camera, can give a driver different lines of sight and reduce or eliminate blind spots in a way that would not be possible with mirrors alone.
This is just the beginning, though, because as vehicles move from driver assistance systems to autonomous systems, a CMS can be incorporated into sensor fusion systems. A CMS has the opportunity to incorporate ultrasonics and possibly even radar. Fusing rear and side cameras with ultrasonics adds the capability to assist drivers in parking, or even to park the vehicle for them. Radar fused with the side cameras will add an extra measure of protection for lane changes and even side collision avoidance.
To learn more about how to implement sensor fusion, check out the follow-up blog posts on park assist sensor fusion using CMS and ultrasonics or front sensor fusion with front camera and radar for lane departure warning, pedestrian detection and even assisted braking.


CGI Studio – Our Tip for Automotive HMI Design

Digital instrument clusters, head-up displays, infotainment systems (IVI), rear-seat entertainment – these are only a few of the HMI innovations demanded in an ever-converging automotive industry. We would like to present an HMI solution that covers all of them: CGI Studio. This scalable, hardware-independent software platform is developed by Socionext Embedded Software Austria GmbH, a leading Austrian HMI tool provider and development partner for automotive and industrial customers worldwide.

CGI Studio

CGI Studio is Socionext’s 2D/3D software development solution for creating automotive Human-Machine Interfaces (HMIs). The end product is a solution that meets essential automotive requirements such as fast boot-up time, a small memory footprint and compliance with ever more stringent automotive standards and safety regulations. CGI Studio’s special USP lies in the representation of 2D/3D display elements, but it also includes features such as multilingualism, particularly sophisticated safety functions (functional safety) and reliable, worldwide customer service. Many automotive manufacturers and suppliers worldwide already rely on this software tool; at the moment there are more than 50 million cars with CGI Studio on the road.

About SESA

Socionext Embedded Software Austria (SESA) was founded in 2000 in Linz and is a subsidiary of Socionext Inc., based in Shin-Yokohama, Japan. Despite being part of the Socionext Group, SESA operates self-sufficiently, providing a product range developed entirely at its location in Linz, Austria. SESA supports its customers with the CGI Studio tool environment as well as software services, mainly in the areas of HMI development and embedded software.
For more information, visit www.cgistudio.at.

Socionext is one of the speaking companies at the Car HMI Digital Days, the first digital event on automotive HMI & UX. The digital event will take place on May 15-17 and feature numerous presentations and webinars from industry players and hidden champions. Registration is free so be sure to join us in May to witness cutting-edge insights from industry leaders!

Biofuels for the greener Future?

Apart from producing huge amounts of pollution, fossil fuels are bound to run out. That’s a fact. Scientists have been searching for renewable power sources for ages, the most successful to date being electricity. Electric vehicles have become a very popular topic recently, and car manufacturers of all sorts are interested in making the best electric car, from uber-expensive Lamborghinis to more affordable Nissans.
However, just because we have one good method (one that still needs improvement in production and charging times), we shouldn’t overlook other, perhaps even better, alternatives to fossil fuels.

What are Biofuels?

Although many consider biofuels the most promising alternative, the industry is still in its infancy and, for the time being, provokes as much controversy as the question of how green electric cars really are. Each type of biofuel presented below can be debated in terms of its “eco value”, viability and efficiency, but what we know for sure is that they’re all worth a closer look.

Made up of hydrocarbons, fossil fuels such as natural gas, fuel oil and coal were formed from organic materials like plants and animals. Thanks to the heat and pressure of the earth’s crust (and the odd hundred million years), they’ve been converted to fuel.

Biodiesel is produced through chemical reactions known as transesterification and esterification, in which vegetable or animal fats and oils are reacted with short-chain alcohols such as methanol or ethanol. Ethanol tends to be used thanks to its low cost, but methanol produces greater conversion rates.

Bioalcohol is made with crops such as corn, sugarcane and wheat, or with cellulosic plants like corn stover, wood and some grasses. These crops aren’t naturally rich in sugars, but the grains are high in starch, and the rest of the plant is rich in cellulose and hemicellulose. 

Instead of being sent to landfill, waste can go through an anaerobic digestion process which creates gases known as LFGs (landfill gases). Since the gas produced is roughly 50% methane, it can be used like any other fuel gas. It is estimated that 1 million tonnes of municipal solid waste (MSW) could give about 450,000 cubic feet of biogas per day.
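To get a feel for that estimate, here is a quick conversion of the daily biogas volume into electrical-energy terms. The ~500 BTU per cubic foot heating value for 50%-methane biogas is an assumed figure, not from the article:

```python
# Rough energy equivalent of 450,000 cubic feet of biogas per day.
cubic_feet_per_day = 450_000
btu_per_cubic_foot = 500            # assumed heating value at ~50% methane
btu_per_day = cubic_feet_per_day * btu_per_cubic_foot
kwh_per_day = btu_per_day * 0.000293   # 1 BTU ≈ 0.000293 kWh
print(f"{kwh_per_day:,.0f} kWh/day of thermal energy")
```

That works out to tens of megawatt-hours per day of thermal energy, before any conversion losses.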

Algae-derived fuels go through a similar process to biodiesel, in that it’s the oil from the algae that is used for fuel production. There are over 100,000 diverse strains of algae, all with unique properties that could be tailored for fuel production. Researchers say that algae could be 10 to 100 times more productive than other bioenergy feedstocks.

PV panels (also known as solar panels) use the photovoltaic effect to harvest the sun’s rays and generate electricity, although not very efficiently; the biggest (and best) PV modules have an efficiency of around 22%, which means these units produce (on average) anywhere between 100 and 365 W of power. An electric car uses around 34 kWh to travel around 100 miles.
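Putting those two figures together gives a sense of scale. This is an idealized sketch assuming a 365 W panel running at full rated output with no losses; real-world charging would take longer:

```python
# How long a 365 W panel would need to generate the ~34 kWh
# the article cites for 100 miles of EV driving (idealized, no losses).
panel_watts = 365
energy_needed_kwh = 34
hours = energy_needed_kwh * 1000 / panel_watts
print(f"{hours:.0f} hours at peak output")  # ≈ 93 hours
```

Even under these generous assumptions, a single panel needs the better part of a hundred peak-sun hours per 100 miles, which is why solar panels supplement rather than replace grid charging.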

Possible Biofuel Ingredients

Amidst hard work and effort to invent the best biofuel possible, scientists have been coming up with truly wacky ideas when it comes to eco-friendly car fuelling. Filling up on potatoes, poop or leftovers from your Sunday roast? Check out what could theoretically power your car.

Fossil Fuels vs Biofuels

While researchers regularly develop new fuel variants, it must be said that biofuels are still some way from parity. A gallon of E85 fuel (85% ethanol, 15% petrol) produces 80,000 BTU of energy, whereas a gallon of regular petrol produces 124,000 BTU, so E85 is less energy-dense. However, that same gallon produces 39% less carbon dioxide (CO2).
Equally, biodiesel produces 75% fewer emissions than its fossil counterpart. There are arguments about the validity of biofuels as a mass-produced alternative to fossil-based fuels, particularly regarding manufacturing, but as a pollutant they are certainly less toxic for the environment.
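The energy gap can be quantified directly from the BTU figures above:

```python
# How much more E85 is needed to match a gallon of petrol, by energy.
e85_btu = 80_000       # BTU per gallon of E85
petrol_btu = 124_000   # BTU per gallon of regular petrol
gallons_needed = petrol_btu / e85_btu
print(f"{gallons_needed:.2f} gallons of E85 per petrol gallon")  # ≈ 1.55
```

In other words, a driver would burn roughly 55% more E85 by volume for the same energy, which is the efficiency trade-off set against its lower CO2 output.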

The topic of biofuels is an endless one. We’ve only presented a few main facts regarding biofuels and, as you can see, it’s a complicated matter. Each problem leads to another and at the moment, it would be impossible to choose the best fuel for the future.
What we know for sure is that the increasing need for sustainability in the automotive industry, along with today’s technological development, does give hope. By being aware of the possibilities, spreading the knowledge and supporting green fuel initiatives, we can make a change! Let’s hope that scientists and engineers work along with technology to develop great solutions.
A download link for the full version of graphic: http://bit.ly/oponeo-biofuels

About the author:

Giles Kirkland is a car tyres expert at Oponeo and a dedicated automotive writer researching new technological solutions. His interests revolve around the revolutionary technologies used in modern cars. Giles passionately shares his knowledge and experience with automotive and technology enthusiasts across the globe.

Codeplay Software Connecting AI to Silicon in Automotive

Charles Macfarlane
VP Marketing, Codeplay Software

Our Interview Partner:

Charles Macfarlane is VP Marketing at Codeplay Software, where he has led marketing, sales and business development for the past four years. Charles also engages with the leading global processor and semiconductor companies to provide leading software solutions. Before Codeplay, Charles held positions as a chip designer, applications engineer, product manager and marketer at major companies such as Broadcom and NXP, working on multimedia products for imaging, video and graphics in successful products for Nokia, Samsung, Sony Ericsson and Raspberry Pi. Charles holds an honours degree in Electronic Systems and Microprocessor Engineering from Glasgow University.
Codeplay is internationally recognized for expertise in heterogeneous systems and has many years of experience with open standards software such as OpenCL™, SYCL™ and Vulkan™ for complex processor architectures. Codeplay is enabling advanced vision processing and machine learning applications using ComputeAorta, an implementation of OpenCL for heterogeneous processing, and ComputeCpp™, a product based on the SYCL open standard for single-source programming using completely standard C++.
Codeplay is based in Edinburgh, Scotland, with over 60 highly skilled software engineers, and has earned a reputation as one of the global leaders in compute processing systems.

we.CONECT: What are the challenges facing your industry at the moment?

Charles Macfarlane: Moore’s law is slowing down; CPUs have been stuck at around 3 GHz for many years. New heterogeneous processors can provide massive performance for artificial intelligence (AI), but only through specialist programming techniques. For example, most AI uses “graph programming” methods that let individual AI operations be combined, reducing the bandwidth used and maximizing processing throughput.
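As a toy sketch of what combining operations buys, here is the idea in plain NumPy. This is illustrative only; real frameworks fuse at the graph-compiler level, emitting a single kernel rather than relying on the array library:

```python
import numpy as np

# Unfused: y = relu(x * w + b) written as separate graph nodes,
# each step materializing a full intermediate array in memory.
def unfused(x, w, b):
    t1 = x * w                 # intermediate buffer 1
    t2 = t1 + b                # intermediate buffer 2
    return np.maximum(t2, 0)   # third full pass over memory

# Fused: the three nodes combined into one expression. A graph compiler
# would emit this as a single kernel, so each element is read once and
# written once, saving memory bandwidth.
def fused(x, w, b):
    return np.maximum(x * w + b, 0)

x = np.array([1.0, -2.0, 3.0])
w = np.array([2.0, 2.0, 2.0])
b = np.array([0.5, 0.5, -7.0])
assert np.allclose(unfused(x, w, b), fused(x, w, b))
```

The result is identical; the saving is in memory traffic, which is usually the bottleneck on heterogeneous AI processors.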

AI and Machine Learning usage in almost all market segments has brought a hunger for specialist processors, and therefore a demand for skilled engineering resources. Product development will therefore face the following barriers:
– Availability of familiar development frameworks and languages
– Availability of engineers with existing relevant skills
– Support during a product’s lifetime (especially in automotive)
– Avoiding lock-in implementations
– Avoiding legal and commercial issues
– Benefiting from mature and proven standards used in other markets
– Tracking the latest processor architectures
– Allowing application development and hardware processor solutions to evolve independently

we.CONECT: How do your solutions address this?

Charles Macfarlane: Codeplay implements solutions based on established and widely adopted open standards. Codeplay works closely with The Khronos Group, an industry consortium focused on the creation of open standard, royalty-free application programming interfaces (APIs). Applications can now be developed using standard high-level C++ and deployed across heterogeneous processor systems without the need for specialized knowledge or skills for the underlying system. Our solutions also help connect AI to Silicon by using OpenCL. An example of this is our work on TensorFlow, Google’s popular AI framework. Codeplay’s SYCL implementation can be used to execute TensorFlow applications on any OpenCL enabled hardware.

Codeplay enables this by providing the following frameworks:
– ComputeAorta, an OpenCL open standards based solution for new specialized processors, making complex programmable devices easier to develop for by using well known programming standards, and
– ComputeCpp, a SYCL implementation enabling applications to be developed using standard high-level C++ and deployed across heterogeneous processor systems without the need for specialized knowledge or skills for the underlying system.

we.CONECT: What sets you apart from your competitors?

Charles Macfarlane: Reasons Codeplay is a leader and the first supplier considered for tough systems:
– Most Supported Platforms
– Working with the right customers driving the AI market
– Products already available and implemented
– Safest for product-ready implementation
– Fastest performance
– Based on widely adopted and understood standards
– Easiest to integrate

we.CONECT: You have recently partnered up with Renesas, could you tell us more about this?

Charles Macfarlane: Automotive is experiencing huge growth in intelligent vision processing for Advanced Driver Assist Systems (ADAS) and ultimately into fully autonomous vehicles. Safety is a major driver for enabling cars with the latest AI innovations allowing cars to avoid accidents and save lives. Renesas is a leading global supplier of advanced system processors for cars and trucks, with their second-generation R-Car series enabling automotive firms to successfully implement a full range of emerging smart-car strategies. Codeplay’s open standards-based technology will be included in future cars so that Renesas’ R-Car solutions can interpret the surroundings and safely take control to avoid accidents or aid with driving functions.

we.CONECT: How do you see your industry changing in the next few years?

Charles Macfarlane: Artificial Intelligence, in many forms, is already making an appearance in our lives, from voice devices to image recognition on our phones. We are incredibly early in the creation and adoption of smart devices, with greater intelligence in handheld devices, around the home, in the car and in industry, agriculture and medical – artificial intelligence can impact all parts of our lives in a very positive way.

So in the coming year we will see more specialised processor systems available for AI processing, rather than re-purposing existing solutions. Also the adoption of powerful AI frameworks such as TensorFlow will empower the programmer to build highly intelligent systems. These two advances in 2018 will bring greater-than-Moore’s law returns with huge steps forward in user experience. Codeplay has ensured ComputeAorta and ComputeCpp can address all market domains and processor types, providing open standards platforms for software developers. Codeplay’s extensive work with TensorFlow ensures programmers can benefit from the most popular processor platforms.

Elektrobit Webinar: Customer Feedback & Engagement Through the Connected Car

The benefits of the connected car are many and varied. For consumers, it offers enhanced opportunities for infotainment, communication, productivity, route guidance and convenience. For well-organised service providers, it can be a promising revenue stream and a major data source for customer satisfaction insights.

But how to collect customer insights in a way that is convenient for both you and your customers?

Elektrobit developed an automated service providing real-time feedback from spoken input by the driver. Watch the webinar session with EB Senior Software Engineer Ajay Rammohan and experience how the connected car can become a direct feedback channel between consumers and vehicle manufacturers – from customer input via speech to visualized feedback in your CRM.

Expert Interview with Zielpuls: Changing the existing Automotive Market Landscape

Markus Frey
Managing Director, Zielpuls

Arnd Zabka
Managing Partner, Theme Cluster Manager for Connected and Autonomous Driving, Zielpuls

We spoke with Zielpuls MD Markus Frey and Arnd Zabka, Managing Partner, Theme Cluster Manager for Connected and Autonomous Driving, about the company’s relation to the evolution of ADAS, vehicle automation and new mobility concepts. The two experts pointed out upcoming market changes for the automotive sector, important hurdles to clear on the way to level 5 autonomy and exciting future projects for Zielpuls.

we.CONECT: What is your company’s/your relation to the evolution of ADAS, vehicle automation and new mobility concepts?

Zielpuls: From our point of view, the incipient transformation of the mobility market is the biggest challenge. It comes hand in hand with new technologies, new development and validation methods, new players, new customer groups and new business models. It will change the game. Looking back at mobility studies from 2013, this change should have happened much faster. We at Zielpuls help our customers prepare their own organizations for this new challenge and act as a catalyst to bring these new technologies to the customer.

we.CONECT: What sets you apart from similar service providers in the industry?

Zielpuls: We at Zielpuls think holistically. Our business is the link between strategy and realisation. On the one hand, we help start development immediately; on the other, equally important, we start change management in the internal organization and its ways of collaborating. We bring these two workstreams together.

we.CONECT: In your opinion: What are the big hurdles towards autonomous driving (autonomy level 5) that need to be surpassed by different stakeholders? How should they be resolved?

Zielpuls: The first prototypes will be available quickly. The critical question is: how do we bring safety, security, privacy and sustainability to these systems, and in a way that is cheap enough to be available to everybody? In the short term, test data acquisition and the test and validation strategy for autonomous systems are the hurdles. We have to work hard on test automation and simulation of the complete system to handle this.

In the long term, there will be massive disruptions in how mobility works, and long-term success will demand massive change from today’s mobility suppliers. There will be new business models associated with a new definition of the car. For example, an additional value could be the transport and pick-up of children, or personal delivery services; your financial consultant might pick you up at home and drive you to work, gaining your time for his service and product presentation in return.

we.CONECT: According to your expectations: How will autonomous driving technology change the existing automotive market landscape?

Zielpuls: In the move from ownership to usership, there will be many more different cars and use cases in the world. The market will grow with new user groups, and new players will enter it. By lowering the entry barrier through car-as-a-service, mobility will become accessible to new customer groups such as children, people without a driving licence or retired people. Developing and producing a car will also become much easier for new players, and less heavy-industry focused.

we.CONECT: With smart cities beginning to roll out in the near future, how do you see the car markets in less developed cities, countries or regions being affected?

Zielpuls: There is an unaddressed market for mobility as big as today's car market. So I think the markets in less developed cities will grow rapidly, hand in hand with people's increasing need for mobility without owning a car. Take the market for consumer goods in emerging nations as an example: the big players struggled to sell products like laundry detergent in large pack sizes, so they successfully offered single portions that people could afford.

we.CONECT: Do you think this is being overlooked by the OEMs and Tier 1s to a certain degree?

Zielpuls: The OEMs are not focusing on the new market groups without a driving licence, so new players can enter the market easily. The next seven years will lay the foundation for the new mobility business, and every gap left open will be filled by a new company. OEMs will have to transform from car manufacturers into service providers for their own cars. Additionally, they will need solutions for fleet management, billing systems and various forms of service provisioning.

we.CONECT: What project would you like to work on next in this sector if given the opportunity?

Zielpuls: We would like to work on the step from level 3 (or 4) to level 5, from strategy through to realisation. We want to take our customers on a journey into the future: pushing new technologies and services quickly to pilot customers, incubating them in mass-market products, and in parallel bringing their own organisations up to speed while connecting development across industries and sectors.

we.CONECT: With the automotive industry moving into a new era, what do you think the car makers are not focusing enough on and how could this be a problem?

Zielpuls: It is all about (development) speed and talent.

In this new era, car manufacturers have to focus on multiple aspects at the same time. On the one hand, development of the "classical" car has to continue while more and more new systems are integrated (electric drivetrains, connectivity, autonomy, service orientation, new interiors, and so on). On the other hand, completely new cars and architectures have to be developed in parallel. These cars will derive much of their added value from software, new drive systems and an entirely new architecture. To succeed, that architecture has to stay open to new technologies and platform systems. One important step is to open up systems engineering and think collaboratively: no player in the market has the financial power to be the star in every technology field. The star will be whoever brings all these technologies together into one service and can offer it to the customer.

we.CONECT: What do you believe would be a solution to this? And what can Zielpuls offer OEMs and Tier 1s to work towards that solution?

Zielpuls: A spin-off company in an attractive location and environment can be one solution, because it attracts highly educated employees. Zielpuls is that kind of company. We help car manufacturers develop state-of-the-art solutions, we accompany them on their new path, and we build new organisations with them.

About the Interviewees:

Markus Frey has been a managing partner of Zielpuls GmbH since its foundation in 2008 and is responsible for Finance & Controlling and Information Technology. He is also Managing Director of the Zielpuls subsidiary in China, with offices in Shanghai and Beijing. Before becoming an entrepreneur, he worked for several years as a consultant for clients in the automotive and logistics industries. His consulting focus at Zielpuls lies in IT strategy and digitalisation; in particular, Mr. Frey advises on autonomous driving, driver assistance systems, new mobility concepts, smart connected products and digital transformation. Mr. Frey has made Zielpuls GmbH known among technology groups in more than 30 markets worldwide for the sustainable implementation and acceleration of internal development processes.

Arnd Zabka has been a Managing Partner at Zielpuls GmbH since January 2018 and is responsible for the further development of the technology cluster "Connected and Autonomous Driving". Before joining Zielpuls, he worked for Altran. Mr. Zabka has 17 years of consulting and project management experience in automotive electronics development, highly automated driving, ADAS, infotainment, connected products and e-mobility.

Has Smartphone Connectivity Missed the Mark?

Technology is expected to make our lives easier, but that isn't always the case. Deployed too quickly, a new idea, even the best idea, can become a thorn in the user's side.
This is particularly apparent when examining smartphone integration in today’s automobiles. It all sounds pretty great on paper: connect your phone (Android or iOS) to a properly equipped vehicle and enjoy the benefits of Android Auto or Apple CarPlay. Apps that were once exclusive to phones and tablets can now be used inside the car.
The problem is that their inclusion, while useful in some areas, is nowhere near the quality consumers have come to expect from modern-day devices.

Hands Off

The first thing you’ll notice when stepping into a modern vehicle is that the touch screens are absolutely horrendous. Poor color saturation can be forgiven and low resolutions can be ignored, but when touching the screen you’ll expect it to respond flawlessly, just like a smartphone. It doesn’t. In fact, there will likely be times when you will simply use the vehicle’s buttons (when possible) to avoid the clunky and unreliable touch screen.
It’s a good thing CarPlay and Android Auto offer voice recognition options; without them, some consumers may be too annoyed to use either solution.
The touch screen is only part of the problem, however. Both connectivity options bury functions in layers of menus, forcing consumers to jump through hoops to select even the simplest things. And while CarPlay comes pre-loaded on all of Apple's current phones, Google Pixel owners have to download the Android Auto app separately.

Benefits Among the Chaos

Despite these imperfections, in-car connectivity may still provide a significant benefit to society, even where the features fall short.
“The nature of human beings is that they have to have their device and they have to be able to look at it,” said Dillon Blake, senior director of business at Runzheimer, a mobile workforce and software solutions provider. “It’s hard to find people who will put their phone down. They have to see their texts and emails.”
Blake said that CarPlay and Android Auto “take away the distraction of looking at the device, trying to type on the device, all those pieces.”
“The issue is, some of the technology is not there yet,” he added, noting the difference between generations of SYNC, a connectivity solution in Ford vehicles. “SYNC 3 is intuitive, absolutely nothing like SYNC itself. They’re two very different experiences.”
Blake would like to see additional solutions that help consumers keep their eyes on the road. He mentioned the concept of texts that pop up within the driver's view, as well as messages read aloud so the driver does not have to look at anything. These features could help improve road safety, but they may also widen the divide between vehicles, as not all features will reach all makes and models at the same time.

Unavoidable Distractions?

Connectivity may prove beneficial to those who use it properly. But what about consumers who are just as distracted by the car's bells and whistles as by the smartphones in their hands?
Phil Moser, VP of Advanced Driver Training Services, said this poses another danger but emphasized the difficulty in determining how many accidents can be attributed to distracted driving.
“People will not readily admit, ‘Oh yeah, I was trying to program my radio,’ or, ‘I was playing with my GPS,’” said Moser. “If you reconstruct a crash and you realize a person could have stopped but they kept accelerating, then you know they were distracted.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

Algolux Interview: Enabling Autonomous Vision

Dave Tokic
VP Marketing & Strategic Partnerships at Algolux


About Dave:

Dave Tokic is vice president of marketing and strategic partnerships at Algolux, with over 20 years of experience in the semiconductor and electronic design automation industries. Dave most recently served as senior director of worldwide strategic partnerships and alliances for Xilinx, driving solution and services partnerships across all markets. Previously, he held executive marketing and partnership positions at Cadence Design Systems, Verisity Design, and Synopsys and has also served as a marketing and business consultant for the Embedded Vision Alliance. Dave has a degree in Electrical Engineering from Tufts University.
What drives Dave in his job? Helping make cars safer by enabling the auto industry to provide next-generation computer vision and imaging capabilities today.

The Interview:

Ahead of his presentation at Auto.AI 2017, we had a chat with Dave about his vision of autonomous driving and its connection to Algolux. The company was part of our startup lounge, showcasing applications and technologies for autonomous and connected cars.