Startup PerceptIn developed an Autonomous Vehicle for less than $10,000

Until now, the race for autonomous driving has mainly been about vehicles that can drive on public roads. But there are also alternative models that target other settings.

According to most studies, Waymo (the autonomous-driving subsidiary of Google's parent company Alphabet) and General Motors are leading the race to introduce autonomous driving. Both aim to build a robo-taxi driving service based on vehicles that can handle all roads.

Competition for Speed

However, this still requires considerable development work. The faster a car drives, the better the technology around it must be – and this applies to autonomous driving as well. At high speed, accidents can be fatal, as the recent incidents involving Tesla and Uber have shown. Fast-driving autonomous cars require capable sensors that are able to "see" what is happening far ahead. This applies to all sensor types, whether radar, camera or lidar.

Additionally, the data collected by the sensors has to be processed quickly enough to initiate the right actions in an emergency at high speed – not to mention the longer braking distance. This also implies high computing capacity.
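
To see why speed drives both sensor range and computing requirements, consider the classic stopping-distance calculation, which combines the distance covered during the system's reaction time with the physical braking distance. The following minimal sketch is illustrative only; the reaction time and deceleration values are assumptions chosen for the example, not figures from the article.

```python
# Stopping distance = reaction distance + braking distance.
# The reaction time and deceleration below are illustrative assumptions.

def stopping_distance_m(speed_kmh, reaction_s=0.5, decel_ms2=6.0):
    """Distance travelled from hazard detection to standstill."""
    v = speed_kmh / 3.6                 # km/h -> m/s
    reaction = v * reaction_s           # distance covered before braking begins
    braking = v ** 2 / (2 * decel_ms2)  # kinematics: v^2 / (2a)
    return reaction + braking

for speed in (20, 50, 100):
    print(f"{speed:>3} km/h -> {stopping_distance_m(speed):5.1f} m")
# ~5 m at 20 km/h versus ~78 m at 100 km/h: fast vehicles need sensors
# that see much further and processing that reacts much sooner.
```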

Why not go slower?

The problem is that powerful sensors and high computing capacity are very expensive, and this is an obstacle for the technology. That is why the startup PerceptIn follows another path, one that makes it possible to significantly reduce the cost of an autonomous vehicle. Former Baidu employee Shaoshan Liu founded PerceptIn in 2016 with the goal of creating a reliable vehicle intended not for public roads but for confined areas – e.g. university campuses, company grounds or parks.

To this end, the company developed an electric car that can be produced in China for less than $5,000. The hardware and software for autonomous driving doubles that price. Here the speed factor comes into play again: the slower the vehicle, the less computing capacity is needed. The vehicle does not use a lidar system either, relying instead on inexpensive camera, radar and ultrasonic technology.

The omission of lidar sensors is compensated for by camera technology that produces a 3D image via point clouds. This is certainly not suitable for high speeds, but it is perfectly adequate for vehicles travelling no faster than 20 km/h. The vehicle's position is determined to an accuracy of 20 centimeters using GPS and an odometry sensor. The medium-range radar sees only 50 meters, and the inexpensive ultrasonic sensor has a range of five meters.
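
The article does not describe PerceptIn's localization pipeline in detail, but a common way to combine an absolute yet infrequent GPS fix with fast, drift-prone odometry is a simple complementary filter. The sketch below (1D for brevity) is a hypothetical illustration of that general idea, not PerceptIn's implementation.

```python
# Hypothetical complementary-filter fusion of GPS and wheel odometry (1D).
# Odometry updates often but drifts; GPS corrects rarely but is absolute.

class PositionEstimator:
    def __init__(self, x0=0.0, gps_gain=0.2):
        self.x = x0               # current position estimate (meters)
        self.gps_gain = gps_gain  # how strongly a GPS fix pulls the estimate

    def predict_odometry(self, wheel_distance_m):
        # Dead reckoning: integrate wheel travel since the last update.
        self.x += wheel_distance_m

    def correct_gps(self, gps_x):
        # Blend the absolute fix into the drifted estimate.
        self.x += self.gps_gain * (gps_x - self.x)

est = PositionEstimator()
for step in range(10):
    est.predict_odometry(0.55)              # biased odometry: true step is 0.5 m
    if step % 5 == 4:
        est.correct_gps((step + 1) * 0.5)   # occasional GPS fix at true position
print(f"estimate: {est.x:.2f} m (true: 5.00 m)")
```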

Sales and Customers

PerceptIn has already found its first customer: ZTE. The Chinese telecommunications company purchased five units, which will be used on its own premises. Overall, PerceptIn hopes for sales figures in the six-figure range.

About the author:

David Fluhr is a journalist and the owner of the digital magazine "Autonomes Fahren & Co". He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

Introducing Tesla 9.0

Tesla has introduced a new update – autonomous driving features included.
Recently the Tesla project started to falter, hit by economic problems, lawsuits and, not least, the fatal crash of a car under Autopilot control. Now Tesla has announced a new update to manage the turnaround.
Tesla's Autopilot was one of the first such systems sold in production cars, offering a Level 2 semi-autonomous driving solution. The cars were able to drive by themselves, but the manual required the human driver to keep their hands on the wheel and stay alert. The driver had to be able to take over control at any time because the technology was not ready to master all traffic situations.

Accidents on Autopilot

Lately there has been an increase in incidents involving Tesla's Autopilot. The system seems to tempt drivers to lose their attentiveness in traffic and take up other activities. In Great Britain, a driver lost his license for sitting in the passenger seat while his car drove itself.
On March 23, 2018, a Tesla Model X crashed into a freeway divider in California, killing the driver and causing a fire. The National Transportation Safety Board (NTSB) recently took over the investigation of the case – its report stated that the driver was not paying attention. The car had been tracking the vehicle ahead and lost its point of reference at the divider. At that point the driver should have taken over control of the vehicle – unfortunately, he didn't. The car accelerated, pulled to the left and crashed into the divider.
There have been further incidents in which Tesla models caused collisions by accelerating for no apparent reason. One explanation could be a faulty update. Many users sued Tesla for selling life-threatening technology; these cases have been settled out of court. However, the company is facing further lawsuits initiated by consumer protection organizations.

Update 9.0

Elon Musk announced on Twitter that a new update with additional autonomous driving features will roll out starting in August. Which features those are, and what level of autonomy Tesla models will reach with them, was not communicated. The software update does not include the integration of a lidar system. Lidar produces a 3D image of the vehicle's environment and is used for orientation and positioning. Last year a GM manager claimed that Tesla cannot build autonomous cars because its vehicles aren't equipped with lidar. In fact, Tesla relies above all on camera sensors.
However, researchers at Audi and MIT have developed algorithms that allow 3D information to be calculated from camera images. Whether that is Tesla's plan is unclear, but it cannot be ruled out. We can only hope that the August update will improve safety. Not only Tesla's reputation is at stake, but also that of autonomous driving technology as a whole.
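
For readers wondering how cameras alone can yield distance information, the classic two-camera (stereo) relationship is depth = focal_length × baseline / disparity. The sketch below is a generic illustration of that geometry, not the specific Audi/MIT algorithms mentioned above.

```python
# Classic stereo triangulation: depth = f * B / d.
# Generic illustration only; not the Audi/MIT algorithms referenced above.

def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.3):
    """Depth of a point seen by two horizontally offset cameras."""
    if disparity_px <= 0:
        raise ValueError("point must appear shifted between the two images")
    return focal_px * baseline_m / disparity_px

for d in (60, 15, 3):  # larger disparity = closer object
    print(f"disparity {d:>2} px -> depth {depth_from_disparity(d):6.1f} m")
# 5 m, 20 m and 100 m respectively: distant objects produce tiny disparities,
# which is why camera-only depth estimation gets harder at long range.
```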

About the author:

David Fluhr is a journalist and the owner of the digital magazine "Autonomes Fahren & Co". He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

Waymo takes the Gloves off

Alphabet subsidiary Waymo is considered the market leader in a technology that is not yet marketable. That is, Waymo is closest to commercializing autonomous driving, according to several research studies. The company plans to set up on-demand transportation services as a competitor to conventional cabs. Originally based in California, Waymo has received authorization to take the project to Arizona.

Waymo’s vehicles

Initially Waymo wanted to build its own cars but eventually moved away from the idea. Today Waymo's fleet consists of Fiat Chrysler (FCA) models, more precisely Chrysler Pacifica vehicles. Additional collaborations with Jaguar Land Rover and Honda follow the same model. It is likely that the Jaguar models will cover the luxury segment, FCA the mid-range segment and Honda the compact car division. Waymo recently ordered 62,000 new Chrysler Pacifica vehicles, which are currently being equipped with autonomous driving technology. Waymo said nothing about acquisition and modification costs, but industry experts estimated the costs at about 31 billion USD – for the modifications alone!

Waymo & Uber?

Meanwhile, Uber has raised its hand and proposed a collaboration with Waymo – which seems utopian after the two companies settled a legal dispute forcing Uber to pay Waymo 2.6 billion USD. The subject of the lawsuit: stolen trade secrets, in this case information about lidar.

Moreover, Uber has had to take responsibility for a fatal crash in Arizona, where an autonomous car ran over a woman after its emergency braking system had been deactivated. Uber has since halted all testing activities in Arizona and will not resume testing in the state. The incident cost the technology a great deal of trust and led to stricter testing regulations in the USA.

When Will It Be Ready?

It is questionable whether Waymo will be able to offer an autonomous transport service in 2018. The modification of more than 60,000 FCA vehicles is expected to take more than a year. However, Waymo should not take its foot off the gas: main competitor GM plans to mass-produce a highly autonomous (Level 4) vehicle in 2019.

About the author:

David Fluhr is a journalist and the owner of the digital magazine "Autonomes Fahren & Co". He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

Mobileye and Intel test Autonomous Vehicles in Jerusalem

In March 2017, Intel bought the Israeli tech company Mobileye – not Intel's first step towards autonomous driving, but definitely one of the most important ones. Mobileye provides computing capacity and sensors, and is testing them in the area around Jerusalem. The company also uses its expertise to advise several countries on the implementation of autonomous driving. Mobileye's expertise has been indisputable at least since it sent 100 self-driving test vehicles onto the streets of Jerusalem.

The operation's slogan: if you can do it in Jerusalem, you can do it anywhere – the city's traffic is said to be extremely heavy and exhausting for human drivers. Apart from that, Jerusalem is also Mobileye's home base.

Camera Sensors & True Redundancy

For the moment the test fleet is equipped with camera sensors only. Eight cameras capture images to detect obstacles and traffic signs and to support positioning and mapping. This enables the vehicle to work out optimal routes by itself. Mobileye calls this camera-only procedure "true redundancy"; its advantage over using several different kinds of sensors ("real redundancy") is the small amount of data that has to be processed.

AI & RSS

The data is processed by AI and converted into corresponding driving actions. To prevent the AI from commanding dangerous maneuvers, Mobileye developed Responsibility-Sensitive Safety (RSS) – a mathematical model that checks the AI's commands against formal safety rules. If a planned action or maneuver violates those safety rules, RSS prevents its execution. Intel has published the standards behind these rules.
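
A central element of the published RSS model is a formula for the minimum safe longitudinal distance between a rear and a front vehicle: the rear car is assumed to accelerate at its maximum for one response time and then brake gently, while the front car brakes as hard as physically possible. The sketch below implements that published formula; the response time and acceleration bounds are illustrative parameter choices, not Mobileye's production values.

```python
# Minimum safe following distance from the published RSS model.
# Parameter values are illustrative assumptions, not Mobileye's settings.

def rss_min_safe_distance(v_rear, v_front, rho=0.5,
                          a_max_accel=3.0, a_min_brake=4.0, a_max_brake=8.0):
    """Worst case: rear car accelerates for rho seconds, then brakes gently
    (a_min_brake), while the front car brakes at full force (a_max_brake)."""
    v_rear_worst = v_rear + rho * a_max_accel       # rear speed after response time
    d = (v_rear * rho                               # travel during response time
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_worst ** 2 / (2 * a_min_brake)    # rear car's braking distance
         - v_front ** 2 / (2 * a_max_brake))        # front car's braking distance
    return max(d, 0.0)

# Both cars travelling at ~100 km/h (27.8 m/s):
print(f"{rss_min_safe_distance(27.8, 27.8):.1f} m")  # ~73 m safe gap
```

If a planned maneuver would shrink the gap below this bound, an RSS-style supervisor rejects it.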

Computing Power

Today all test vehicles are equipped with the EyeQ4 chip. However, Mobileye has already unveiled its successor, the EyeQ5, with ten times the computing power of the current chip. The EyeQ5 is slated for full mass production by 2020 and has already been ordered by BMW for 2021.

First Troubleshooting

Shortly after the test fleet was sent out, the first issues emerged: one car ran a red light despite the efforts of a safety driver. Mobileye has at least already identified and fixed the cause of the malfunction: a TV camera interfered with the transponder signal of the traffic lights. Because of the missing signal, the car crossed the intersection as if there were no traffic lights.

About the author:

David Fluhr is a journalist and the owner of the digital magazine "Autonomes Fahren & Co". He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

VW to introduce Autonomous Parking in Hamburg

Volkswagen is moving towards autonomous parking, testing the technology in a parking garage in Hamburg, Germany.
Last year Bosch and Daimler presented their joint valet-parking concept at the IAA, implemented inside the Mercedes-Benz Museum in Stuttgart. More recently Bosch presented eGo, a similar concept developed at the RWTH Aachen Campus. Now Volkswagen has joined the trend, testing an autonomous valet service in Hamburg.

Parking today: time- and money-consuming

Looking for a parking spot costs not only nerves but also time and money. According to an INRIX study, searching for parking is one of the major hidden costs of driving, devouring more than $3,000 per driver in the US in 2017 – including the fuel burned while hunting for a space. In New York, a driver spends 107 hours per year looking for a space on average. A possible solution would be to interlink vehicles and parking spaces in order to save drivers' time and money.

Parking in the future

The valet-service concept is built around these goals. Autonomous cars can communicate with free parking spaces and can park much closer to each other, because no human has to get out of the car. The search for a spot may still take time, but the driver no longer has to care, having left the car before it entered the parking garage. Watching the cars do their thing won't be possible anyway: humans won't be permitted to enter the autonomous parking garages.
VW plans to test autonomous valet parking for the next two years. As a first step, the parking garage in Hamburg has been mapped and equipped with signs that guide the autonomous vehicles. The innovative parking garage will work with autonomous car models from VW, Audi and Porsche. Drivers can exit their cars at the entrance gate and initiate the search for a parking space with a single swipe on their smartphone, as sketched below. Future plans include mixed traffic, i.e. manually driven and autonomous vehicles looking for spaces in the same garage.
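
The article does not specify how car and garage talk to each other; the toy sketch below merely illustrates the coordination idea – the garage assigns a free spot on request and the car navigates there on its own. The class and method names are hypothetical, not VW's actual protocol.

```python
# Hypothetical toy model of valet-parking coordination (V2I).
# Names and logic are illustrative; not VW's actual system.

class Garage:
    def __init__(self, spots):
        self.free = set(spots)

    def request_spot(self, car_id):
        """Assign a free spot, or None if the car must wait at the gate."""
        if not self.free:
            return None
        spot = min(self.free)      # e.g. closest spot first; cars can pack tightly
        self.free.remove(spot)
        return spot

    def release_spot(self, spot):
        self.free.add(spot)        # owner summoned the car back

garage = Garage(spots=range(1, 6))
spot = garage.request_spot("WOB-AV-1")  # triggered by the driver's smartphone swipe
print(f"assigned spot {spot}")          # the car now drives itself to spot 1
```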

Road to ITS 2021: Autonomous trucks and more

Hamburg and VW have formed a strategic partnership, so there are more projects in the pipeline in the run-up to 2021, when the Intelligent Transport Systems World Congress (ITS) takes place in the city. Soon there will also be autonomous trucks operating around the Port of Hamburg. VW wants to use trucks from its subsidiaries MAN and Scania to foster the development of autonomous driving. Nissan and Renault are taking a similar road to push the development of autonomous driving technology.

About the author:

David Fluhr is a journalist and the owner of the digital magazine "Autonomes Fahren & Co". He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

When Driverless Car Demonstrations Are Less Than They Seem

This report was inspired by the many and varied demonstrations of self-driving vehicle technology over the past few years and the widening gulf between the appearance of capability and the reality. The aim is to inform non-specialists about some of the different methods used to enhance the apparent driving proficiency of prototype driverless vehicles.
Self-driving vehicles form an understanding of where they are and where they want to go using advanced versions of contemporary mapping and navigation systems — mature technology. This includes dynamic route planning that changes course based on traffic conditions and road closures. Ideal paths derived from mapping are the foundation stone of nearly all (if not every) self-driving system. The disparity in capability between projects lies in how the car copes with differences between the ideal route and the actual environment. The best systems recognise objects and create an understanding of their real-time situation, together with predictions of how the scene might unfold. Lesser systems do not have this ability, or are capable only in simpler scenarios. This inadequacy can be disguised by the design of the demonstration (not that anyone would do such a thing). To explain the background clearly, this report covers the following areas:
A beginner’s guide to object recognition — a brief overview of what a self-driving artificial intelligence (AI) tries to identify in its surroundings and why.
An introduction to scene understanding and prediction — an overview of how the artificial intelligence can use its understanding of the local environment to make driving decisions.
An overview of different demonstration events; relative difficulty and how to spot fakes — four complexity levels:
• The parking lot demonstration
• The closed course demonstration
• The carefully selected on-road demonstration
• The high-confidence on-road Level 4 demonstration
This includes examples of how a demonstration can be simplified to make the vehicle appear more capable, and some ways that you can investigate further. The issue is that nearly all demonstrations appear sensational, so it is important to bring greater objectivity to the near-certain euphoria felt on exiting the vehicle.

The only conclusion is buyer beware — look carefully behind the curtain. Very few people have travelled in a driverless vehicle and the experience remains impressive, even in circumstances where it is heavily staged. This report simply aims to assist objectivity in the face of thrilling and often seemingly compelling technology demonstrations.

The full report, including all insights and graphics, is available for download.

About Ad Punctum
Ad Punctum is a consulting and research firm founded by an ex-automotive OEM insider. We bring focused interest, an eye for the story and love of detail to research. Intellectual curiosity is at the centre of all that we do and helping companies understand their business environment better is a task that we take very seriously.
About The Author
Thomas Ridge is the founder and managing director of Ad Punctum, based in London. You may contact him by email at tridge@adpunctum.co.uk.
For Further Contact
If you would like to discuss this report, please contact the author.

Rio Tinto: Where Autonomous Driving is already Reality

Mine operator Rio Tinto runs a fleet of autonomous trucks that transport mining materials, increasing efficiency and lowering costs. It has already been 10 years since the first self-driving dump truck carried overburden to a nearby train station. Back in 2008 the tests were so successful that the company decided to expand its fleet. Over the years the number of autonomous trucks steadily increased – today more than 80 autonomous vehicles are deployed at the Pilbara mines in Australia.

AHS – Autonomous Haulage System

The dump trucks were originally produced by the construction machinery manufacturers Komatsu and Caterpillar. They run what is called the AHS, which stands for Autonomous Haulage System, and are controlled from the site's operations center. Each truck is equipped with a GPS transmitter so it can be located at any time.
Unlike conventional dump trucks, the autonomous ones are in permanent use. That makes a difference of about 700 hours per year, roughly 15% of total runtime. They need no vacation and cannot be distracted like humans behind the wheel. Rio Tinto representatives also stress the safety aspect: in the entire past decade, not a single injury was reported.

New Milestones

Last year the autonomous trucks carried a quarter of the total overburden and dumped it into trains. The trains are automated too – once loaded, they automatically start their journey through the Australian Outback.
This year Rio Tinto reached the next milestone: the autonomous trucks have now carried one billion tons of material through the site. Because of this success, Rio Tinto plans to deploy additional trucks. The goal is to have 140 autonomous trucks driving through the Pilbara mines by the end of 2019.

About the author:

David Fluhr is a journalist and the owner of the digital magazine "Autonomes Fahren & Co". He reports regularly on trends and technologies in the fields of Autonomous Driving, HMI, Telematics and Robotics. Link to his site: http://www.autonomes-fahren.de

American Center for Mobility Gives Automakers a Safer Venue for AV Testing

Autonomous vehicles were given a boost this spring when the American Center for Mobility opened in Michigan. Located at the historic former site of the Willow Run Bomber Plant in Ypsilanti Township, ACM is hoping to be the premier destination for AV testing.
“We built ACM on a collaborative approach, working with industry, government and academia on a local, regional, national and even international level,” said John Maddox, president and CEO of the American Center for Mobility. He spoke to attendees at ACM’s ribbon cutting ceremony, which brought together a number of political supporters and auto industry execs.
Michigan Governor Rick Snyder referred to ACM as “another huge step forward” for the state as it strives to maintain its leadership as the auto capital of the world. “[Mobility] does three things,” said Snyder. “It’s going to bring us a safer world in terms of saving lives and quality of life. It’s going to create opportunities for people – people that may be economically disadvantaged, have disabilities and other challenges in their lives. It will provide options to their lives they have not seen in the past.” Snyder added that as mobility evolves it will also bring a new level of efficiency to the infrastructure. “This is a place to be on the forefront of improving lives, of creating opportunities for our citizens in this state, but also the entire world,” Snyder continued.
Lieutenant Governor Brian Calley concurred, adding, “This is going to make such a big difference for our infrastructure, for our safety, but especially mobility for people that don’t have the same types of opportunities that many of the rest of us have.” Calley praised the way corporations, associations, state representatives and others came together to build ACM from the ground up. “It’s so special, so important,” Calley added. “It’s going to have such a profound impact on the entire world and it’s happening right here.”
U.S. Congresswoman Debbie Dingell, one of many staunch supporters of AV technology, expressed the importance of building a place where self-driving cars can be tested and validated. “One of the things that has surprised me is the public resistance to autonomous vehicles,” said Dingell. “Let’s be honest, the Uber accident [in March] has made people concerned. That’s why we need this test site.”
Kevin Dallas, corporate vice president, artificial intelligence and intelligent cloud business development at Microsoft, also joined the stage to discuss how the company will serve ACM as its exclusive data and cloud provider. “We see it as an opportunity to invest deeply in the first safe environment where we can test, simulate and validate connected autonomous vehicles,” said Dallas. “And then accelerate the delivery of applications and services around autonomous systems. We’re taking that very seriously.”
After the ceremony, William “Andy” Freels, President of Hyundai America Technical Center, Inc. (HATCI), took a moment to share his thoughts on ACM. “We became founding members at the end of last year,” said Freels. “We are literally about 15 minutes from this facility. It’s a real investment in our local R&D facility here. Initially we will start using ACM for sensor development and sensor fusion testing. Connectivity is obviously a very important part.”
While ACM is designed to serve many areas of autonomous car development, Freels thinks the primary benefits will come from testing the potential interactivity and communication between cars (V2V) and infrastructure (V2I).
“Like never before, vehicles are going to need to work together to communicate [with each other] and the infrastructure,” Freels added. “That’s really quite different from the way it has been done in the past, where we could do something completely independently. I think that’s a key point of this facility – being able to collaborate with the industry, as well as the government and the academia side of it.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

What Happens When Autonomous Vehicles Turn Deadly?

Consumers and auto execs alike were horrified by the news that a self-driving Uber vehicle had hit and killed a pedestrian. The incident prompted Uber to ground its fleet of self-driving cars while the National Transportation Safety Board (NTSB) and the National Highway Traffic Safety Administration (NHTSA) reviewed the accident to determine who was at fault.

Uber is only one part of the growing autonomous vehicle sector, but the accident sent shockwaves throughout the entire industry. It’s the kind of incident that could thwart plans for AV deployment, attract a new level of scrutiny from lawmakers, and erode consumer confidence in a vehicle’s ability to drive itself.

Many in the auto industry wouldn’t even respond to a request for comment, but Nicolas de Cremiers, head of marketing at Navya, shared his reaction to what happened last March.

“As with any sector, human error is a possibility,” said de Cremiers, whose company produces autonomous shuttles and cabs. “It is crucial that we, as suppliers of autonomous mobility solutions, come together with communities and municipalities to begin taking steps towards creating safety standards and comprehensive measures for the upcoming Autonomous Vehicle Era in smart cities.”

de Cremiers remained optimistic about the future of AVs, adding, “In working towards a more fluid and sustainable future, by improving traffic flow and reducing congestion in urban centers, we will ultimately upgrade the quality of life while raising safety standards for a world in perpetual motion.”

As far as regulations are concerned, Danny Atsmon, CEO of Cognata, a startup specializing in AV simulations, said there needs to be some “common procedures” before these vehicles are publicly deployed.

“It’s not a bad idea to have some commonality and standards among the different AV providers,” said Atsmon. “I do believe that after this incident, there are high chances that it will lead to some regulations.”

Gil Dotan, CEO of Guardian Optical Technologies, said it is the industry’s responsibility to “make sure we learn the most and make our tech smarter and more robust.”

“This will push carmakers and tech providers to be more cautious and responsible,” said Dotan, whose company is developing sensing tech for inside the cabin. “This has precedents in other industries, like aviation and space travel, where unfortunate events have occurred. The last thing we should take out of this is to stop our efforts.”

Dotan is among those who see the greater good in what AVs could achieve by eventually reducing the number of fatal car accidents. Atsmon agrees, but he said the incident is a reminder that AVs “still have years of development and a long validation process before it can be released on the road.”

Where does this leave Uber, the company at the center of it all? Jeffrey Tumlin, principal at Nelson\Nygaard, a transportation planning consultancy, said the video released by the Tempe Police Department is “remarkably damning.”

“Yes, the victim crossed a high-speed road – in the dark, on a curve,” said Tumlin. “But all the tech on the vehicle did nothing to slow the car or alert the human observer. While I still believe that AV tech can result in huge safety benefits, whatever tech was out there on the roads should not have yet been cleared for human trials.”

About the author:

Louis Bedigian is an experienced journalist and contributor to various automotive trade publications. He is a dynamic writer, editor and communications specialist with expertise in the areas of journalism, promotional copy, PR, research and social networking.

Codeplay Software Connecting AI to Silicon in Automotive

Charles Macfarlane
VP Marketing, Codeplay Software

Our Interview Partner:

Charles Macfarlane is VP Marketing at Codeplay Software, where he has led marketing, sales and business development for the past 4 years. Charles also engages with the leading global processor and semiconductor companies to provide leading software solutions. Before Codeplay, Charles held positions as a chip designer, applications engineer, product manager and marketer at major companies such as Broadcom and NXP, working on multimedia products for imaging, video and graphics that shipped in successful devices from Nokia, Samsung, Sony Ericsson and Raspberry Pi. Charles holds an honours degree in Electronic Systems and Microprocessor Engineering from Glasgow University.
Codeplay is internationally recognized for expertise in heterogeneous systems and has many years of experience with open standards software such as OpenCL™, SYCL™ and Vulkan™ for complex processor architectures. Codeplay is enabling advanced vision processing and machine learning applications using ComputeAorta, an implementation of OpenCL for heterogeneous processing, and ComputeCpp™, a product based on the SYCL open standard for single-source programming using completely standard C++.
Codeplay is based in Edinburgh, Scotland, with over 60 highly skilled software engineers, and has earned a reputation as one of the global leaders in compute processing systems.

we.CONECT: What are the challenges facing your industry at the moment?

Charles Macfarlane: Moore's law is slowing down; CPUs have been stuck at around 3 GHz for many years. New heterogeneous processors can provide massive performance for artificial intelligence (AI), but only through specialist programming techniques. For example, most AI uses "graph programming" methods that allow individual AI operations to be combined, reducing the bandwidth used and maximizing processing throughput.
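
As a toy illustration of why fusing graph operations saves bandwidth, consider computing y = relu(a·x + b) elementwise. Evaluated as three separate operations, every intermediate result is written to and read back from memory; a graph compiler fuses them into one pass. This is a hypothetical sketch of the concept, not Codeplay code.

```python
# Toy illustration of graph-operation fusion. Unfused passes materialize
# intermediate arrays (3x the memory traffic); the fused loop touches each
# element once. Hypothetical sketch, not Codeplay code.

def unfused(xs, a, b):
    t1 = [a * x for x in xs]          # pass 1: write temporaries
    t2 = [t + b for t in t1]          # pass 2: read t1, write t2
    return [max(t, 0.0) for t in t2]  # pass 3: read t2, write result

def fused(xs, a, b):
    # One pass, no intermediates: what a graph compiler aims to generate.
    return [max(a * x + b, 0.0) for x in xs]

xs = [-1.0, 0.5, 2.0]
assert unfused(xs, 2.0, 1.0) == fused(xs, 2.0, 1.0)  # same math, less bandwidth
```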

AI and machine learning usage in almost all market segments has created a hunger for specialist processors and therefore a demand for skilled engineering resources. Product development therefore faces the following barriers:
– Availability of familiar development frameworks and languages
– Availability of engineers with existing relevant skills
– Support during a product’s lifetime (especially in automotive)
– Avoiding lock-in implementations
– Avoiding legal and commercial issues
– Benefiting from mature and proven standards used in other markets
– Tracking the latest processor architectures
– Allowing application development and hardware processor solutions to evolve independently

we.CONECT: How do your solutions address this?

Charles Macfarlane: Codeplay implements solutions based on established and widely adopted open standards. Codeplay works closely with The Khronos Group, an industry consortium focused on the creation of open-standard, royalty-free application programming interfaces (APIs). Applications can now be developed using standard high-level C++ and deployed across heterogeneous processor systems without the need for specialized knowledge of the underlying system. Our solutions also help connect AI to silicon by using OpenCL. An example of this is our work on TensorFlow, Google's popular AI framework: Codeplay's SYCL implementation can be used to execute TensorFlow applications on any OpenCL-enabled hardware (see the sketch after the list below).

Codeplay enables this by providing the following frameworks:
– ComputeAorta, an OpenCL open standards based solution for new specialized processors, making complex programmable devices easier to develop for by using well known programming standards, and
– ComputeCpp, a SYCL implementation enabling applications to be developed using standard high-level C++ and deployed across heterogeneous processor systems without the need for specialized knowledge or skills for the underlying system.
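
To make the TensorFlow point concrete, here is a minimal sketch of pinning a computation to a SYCL device. It assumes a TensorFlow build with Codeplay's experimental SYCL/ComputeCpp support; the device string and the TF 1.x session API reflect that era and may differ in your build.

```python
# Minimal sketch: running a TensorFlow op on OpenCL hardware through SYCL.
# Assumes a TensorFlow build with Codeplay's experimental SYCL support;
# the '/device:SYCL:0' name comes from that build and may differ.
import tensorflow as tf

with tf.device('/device:SYCL:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)        # executed by the SYCL backend when available

with tf.Session() as sess:     # TF 1.x API, current at the time of this interview
    print(sess.run(c))
```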

we.CONECT: What sets you apart from your competitors?

Charles Macfarlane: Reasons Codeplay is a leader and the first supplier considered for tough systems:
– Most Supported Platforms
– Working with the right customers driving the AI market
– Products already available and implemented
– Safest for product-ready implementation
– Fastest performance
– Based on widely adopted and understood standards
– Easiest to integrate

we.CONECT: You have recently partnered up with Renesas, could you tell us more about this?

Charles Macfarlane: Automotive is experiencing huge growth in intelligent vision processing for Advanced Driver Assist Systems (ADAS) and ultimately into fully autonomous vehicles. Safety is a major driver for enabling cars with the latest AI innovations allowing cars to avoid accidents and save lives. Renesas is a leading global supplier of advanced system processors for cars and trucks, with their second-generation R-Car series enabling automotive firms to successfully implement a full range of emerging smart-car strategies. Codeplay’s open standards-based technology will be included in future cars so that Renesas’ R-Car solutions can interpret the surroundings and safely take control to avoid accidents or aid with driving functions.

we.CONECT: How do you see your industry changing in the next few years?

Charles Macfarlane: Artificial intelligence, in many forms, is already making an appearance in our lives, from voice devices to image recognition on our phones. We are incredibly early in the creation and adoption of smart devices, with greater intelligence coming to handheld devices, the home, the car, industry, agriculture and medicine – artificial intelligence can impact all parts of our lives in a very positive way.

So in the coming year we will see more specialised processor systems built for AI processing, rather than re-purposed existing solutions. In addition, the adoption of powerful AI frameworks such as TensorFlow will empower programmers to build highly intelligent systems. These two advances in 2018 will bring greater-than-Moore's-law returns, with huge steps forward in user experience. Codeplay has ensured that ComputeAorta and ComputeCpp can address all market domains and processor types, providing open-standards platforms for software developers. Codeplay's extensive work on TensorFlow ensures programmers can benefit from the most popular processor platforms.