In tech right now the hot topic is cars, specifically driverless ones. Google is at it. Audi is at it. Tesla too.

Behind the scenes there's a lot of work going on to progress the industry. We spoke to Danny Shapiro, Nvidia senior director of automotive, to get an idea of what to expect and when.

What he details delves into cloud-based learning, trust issues, and the advance of computing power. The car of the future sure sounds like a technological treat, so hold onto your Hasselhoff perms and read on for an in-depth overview of the cars of the future.

How far away do you think we are from driverless cars being an on-the-road reality?

They're already on the road today [referring to Audi's testing in the US].

As a consumer reality?

For Audi's piloted driving all these computers were in the car just a year ago [shows image of a fully loaded boot]. That's now shrunk down to a single board. There are camera inputs, other sensor inputs, and it has an Nvidia Tegra K1 [processor] running everything. This is able to handle traffic-jam assist, which means autonomous driving on the highway, lane keeping, and adaptive cruise control to maintain a safe distance.

That has been tested on the road: it was the car that drove from Palo Alto to CES in Las Vegas [the 2015 Consumer Electronics Show]. And it's been announced it will go into production in a couple of years.

So is 2017 a reality?

They haven't explicitly given a launch date, so I don't want to speak on behalf of Audi. But I would say it's going to be between 2017 and early 2019, that's kind of the range. I think they are probably planning on under-promising and over-delivering, rather than putting an aggressive target out there and slipping it.

But then you also have Tesla using Nvidia technology. They're going to be bringing out some new features soon.

What new features can we expect?

Tesla has added new sensors to the vehicles, ones that currently aren't being used. Via updates they can turn on these sensors.

But don't expect full autonomy right away. It's going to arrive in different modes: we're going to see traffic jam assist; we're going to see highway pilot; we're going to see self-parking, where you can get out of the vehicle and it will park itself in the garage.

Isn't it all a legal minefield?

Part of this is governed by laws; part of it is governed by the lawyers at these auto makers and what level of risk they're willing to take.

I don't know about the UK explicitly, but in the US the law states that there must be a driver behind the wheel. And the driver is responsible. Now that doesn't prevent a driver from suing somebody if he turns on his traffic jam assist and then gets into an accident.

I think what we're going to see is that these systems are incredibly safe and the rate of accidents, injuries and deaths is going to decline. Insurance will very quickly adapt to people who have these features - just like you get adjusted insurance for airbags, or ABS, or whatever other technology in your car makes it safer.

Does that change the face of the industry?

Once we have cars not colliding any more it's going to really change the insurance industry, it's going to change the auto repair industry, it's going to change hospital emergency rooms.

So is this technology going to drive down the scale of the automotive industry?

It's possible. But we're seeing a rise in car share in general, and seeing a lot of new business models such as Uber.

So how will driverless cars work?

Nvidia Drive PX is designed to take up to 12 camera inputs. We can process data coming in from radar, lidar, ultrasonic.

These sensors - you could call them smart sensors - give you a 3D signature. Lidar gives you this nice detailed point cloud. Video coming from the cameras provides a bunch of pixel colours - and the camera is kind of a dumb sensor, because all it is is a screen full of pixels every 30th of a second, with no inherent information in it, right? So you have to do a lot of processing; complex computer vision algorithms. And that's what the GPU [graphics processing unit] is great at, which is why we're using the GPU to handle all of this.
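
As a very rough sketch of that division of labour (all shapes and function names below are hypothetical, not Nvidia Drive PX's actual API): the camera delivers nothing but pixels, the lidar delivers geometry directly, and the GPU-heavy work is turning those pixels into objects and fusing everything into one environment map.

```python
import numpy as np

# One RGB camera frame: just an H x W grid of colour values, refreshed about
# 30 times a second, with no object information in it yet.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)

# One lidar sweep: already 3D geometry, a point cloud of (x, y, z) samples.
point_cloud = np.random.rand(50_000, 3).astype(np.float32)

def detect_objects(rgb_frame: np.ndarray) -> list:
    # Stand-in for the GPU-accelerated vision stage that turns raw pixels
    # into detections (vehicles, pedestrians, lane markings, and so on).
    return []

def build_environment_map(objects: list, cloud: np.ndarray) -> dict:
    # Fuse camera detections with lidar geometry into one picture of the scene.
    return {"objects": objects, "lidar_points": len(cloud)}

env_map = build_environment_map(detect_objects(frame), point_cloud)
```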

We can run a variety of different types of algorithms using our deep learning methodology to identify the enormous vocabulary of objects and situations that are happening. A benefit of course, compared to a human, is that we have more cameras, so we can have a full 360-degree view of what's happening around the car, and it's never looking away at a smartphone. It's monitoring 30 times a second. As a result of that we can build a very good environment map of the situation.

Different auto makers are then building the application layer on top of that: how they want to handle situations, how they react, how aggressively they want their autonomous car to drive, how sharply it brakes and steers, etcetera.

You think that's going to take a long period of time to get right?

It will be an iterative process. But again I think that's the nice thing about the design of Nvidia Drive PX: from the get-go the system is architected to play ball. There's all this training for the deep learning network that's going to happen in the cloud.

Once you've fed it hours and hours and hours and hours of video in all different driving situations and locations; once it processes that and builds this neural network, it's basically modelling how the brain thinks. It's not hard-coding everything in, it's real-time analysis. The system can get smarter and smarter.

So I think what's going to happen is there'll be a lot of development going on and these cars will be deployed on the road, and I can even envision a situation where all this hardware is going in your car and you're driving around, and it's basically shadowing [your driving]; analysing the situation and saying what it would do, but you're still driving the car. It's going to compare whatever you do to what it's going to do and learn from that as well.
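
A rough illustration of that "shadowing" idea follows; all names and thresholds here are hypothetical, not Nvidia's actual software. The planner runs alongside the human driver, its output is never applied, and any significant disagreement is logged as a candidate training example.

```python
from dataclasses import dataclass

@dataclass
class Controls:
    steering: float   # steering angle, positive = left
    throttle: float   # 0.0 to 1.0
    brake: float      # 0.0 to 1.0

def plan_controls(environment_map) -> Controls:
    # Stub for the autonomous planner; a real system would decide based on
    # the fused camera/lidar/radar environment map.
    return Controls(steering=0.0, throttle=0.3, brake=0.0)

def shadow_step(environment_map, driver: Controls, log: list,
                threshold: float = 0.2) -> Controls:
    """Compare what the planner would have done with what the human did."""
    planned = plan_controls(environment_map)
    disagreement = (abs(planned.steering - driver.steering)
                    + abs(planned.throttle - driver.throttle)
                    + abs(planned.brake - driver.brake))
    if disagreement > threshold:
        # Record the scene and both decisions for later upload and training.
        log.append((environment_map, driver, planned))
    return driver   # the human's controls are always the ones applied

disagreements = []
shadow_step({"objects": []}, Controls(0.1, 0.5, 0.0), disagreements)
```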

Are you saying it's independent learning per car, or shared information?

All that information is going to go up to the cloud. Then a single update will come down and update all the cars. So it's a shared learning; think of crowdsourcing if you will.
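
A minimal sketch of that shared-learning loop, purely illustrative (the training step is stubbed out and none of these names are Nvidia's): every car uploads its logged disagreements, the cloud retrains one shared model on the combined data, and a single update goes back down to the whole fleet.

```python
def retrain_shared_model(dataset):
    # Stub for the cloud-side deep-learning training run.
    return {"model_version": len(dataset)}

def cloud_training_round(fleet_logs):
    # Pool every car's logged samples, then produce one update for all cars.
    combined = [sample for car_log in fleet_logs for sample in car_log]
    return retrain_shared_model(combined)

update = cloud_training_round([["car-A sample"], ["car-B sample 1", "car-B sample 2"]])
```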

Supposedly 'the car of the future will be the most powerful computer you own'. Just how powerful are we talking?

The new Tegra X1, which is the processor unveiled at CES, is a 1-teraflop processor. So that's one trillion floating point operations per second.

In the year 2000 ASCI Red was the fastest supercomputer out there. Again, a 1-teraflop machine. It took up 1,600 square feet. It was a supercomputer the size of a three-bedroom house. And it consumed 500,000W of power. And it needed another 500,000W of air conditioning to keep it cool.

That exact same level of performance is in Tegra X1.

But that's still not enough power. Nvidia Drive PX has two of those [Tegra X1]. And we're still being asked by our customers for more.
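
For scale, here is the back-of-the-envelope comparison those figures imply. The ASCI Red numbers are the ones quoted above; the Tegra X1's roughly 10W power draw is an assumption for illustration, not a figure from the interview.

```python
# Performance-per-watt for two 1-teraflop systems.
TFLOPS = 1e12                          # one trillion floating-point ops per second

asci_red_watts = 500_000 + 500_000     # compute plus air conditioning (quoted above)
tegra_x1_watts = 10                    # assumed mobile-class power envelope

print(f"ASCI Red: {TFLOPS / asci_red_watts:,.0f} FLOPS per watt")
print(f"Tegra X1: {TFLOPS / tegra_x1_watts:,.0f} FLOPS per watt")
print(f"Efficiency gap: roughly {asci_red_watts / tegra_x1_watts:,.0f}x")
```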

So is it a Hasselhoff Knight Rider future for us all?

I think that voice is going to be a big part of it. I think that people [right now] look at voice in cars and go "it's crap". I ask for something and I either get "I do not understand" or the wrong thing coming back. It's a very limited vocabulary. Again, that all comes down to processing.

Being able to filter the sound in the car, to get a large vocabulary and to do language processing requires a huge amount of compute cycles. And that's why Siri and Google Now and everything goes to the cloud: those systems are actually pretty robust, but they have to go to the cloud.

What we're advocating is that you can't guarantee you're going to have connectivity in the car. So by putting the processing horsepower into the car you can do natural language processing and really improve the user experience.

And do you think the public are on board with the idea of autonomous, learning vehicles?

Once we have the car responding and understanding human conversation I think people are going to feel more confident to let the car drive itself.

One of the things, really, is to build the trust. That's a critical thing. So the car can be doing incredible things but for the customer or driver who's in there, they may be feeling a level of stress because they're not sure.

You know, does the car see the pedestrian or not? By modifying the user interface to indicate that the car sees that pedestrian - whether an icon on the screen or whatever - that's going to help the driver to relax and build trust.

It'll be an iterative process and people will very quickly learn to trust the vehicles and recognise how much safer they are. It'll make for better drivers, no distracted drivers.