Ride in NVIDIA’s Self-Driving Car

Today, in a special edition of DRIVE Labs, we’re taking you on an autonomous drive, and we’re going to show you the pieces of software we’re building, running together in the car, enabling the vehicle to drive itself. Our pilot is Dennis. I’m your copilot. Let’s go.

We are now on the road, and we’ll be engaging autonomy once we get on the highway. But before we do that, I want to show you our perception functionality already in action in the car. Perception is basically what enables the car to see: we take in raw sensor data and translate it into a semantic understanding of the world, of the scene that we’re in.
So take a look at that happening on our front camera. We have DriveNet detecting obstacles: those are the bounding boxes around the cars. We have WaitNet detecting the intersection, the yellow box around everything. WaitNet is also detecting traffic lights and traffic signs, and LightNet is classifying the traffic light state, correctly, as red. We also have sign classification going on using SignNet. At the same time, DriveNet is detecting pedestrians, in the cyan bounding boxes on the far side of the intersection. We also have OpenRoadNet tracing out the free space around obstacles in the scene. And on top of that, we have object tracking from frame to frame; you see the track IDs on top of each bounding box. We also have our camera-based DNN distance estimation running, so you see the distance in meters displayed at the bottom of each box. ClearSightNet is also running in the background, assessing whether and how well the cameras can see, in our four-camera surround perception setup on our embedded AGX platform.

All of this rich perception functionality is what our planning and control software are going to use to execute the autonomous driving maneuvers that you’re about to see.
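To make the data flow concrete, here is a minimal Python sketch of how the per-frame outputs of detection, tracking, and distance estimation might be gathered into one obstacle list for downstream planning. Every name and type here is an illustrative assumption, not the actual DRIVE software interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    box: tuple   # (x, y, w, h) bounding box in image pixels
    label: str   # e.g. "car" or "pedestrian"

@dataclass
class TrackedObstacle:
    track_id: int        # stable ID maintained frame to frame
    detection: Detection
    distance_m: float    # camera-based DNN distance estimate

def fuse_frame(detections: List[Detection],
               track_ids: List[int],
               distances_m: List[float]) -> List[TrackedObstacle]:
    """Zip the per-frame outputs of the detection, tracking, and
    distance-estimation stages into one list a planner could consume."""
    return [TrackedObstacle(t, d, z)
            for d, t, z in zip(detections, track_ids, distances_m)]

# Example frame: two cars, tracked as IDs 7 and 12, at 23 m and 41 m.
frame = fuse_frame(
    [Detection((310, 220, 80, 60), "car"), Detection((520, 230, 60, 45), "car")],
    [7, 12],
    [23.0, 41.0],
)
```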
We’re now getting onto the highway on-ramp and entering the coverage area of the high-definition map that we’re going to use today for the car to create a route plan that we’re going to follow. Basically, the car will localize itself onto the map and create a lane plan that tells us when it needs to stay in the lane, when it needs to make a lane change to stay on the route, and when it needs to take a highway interchange.
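As a rough picture of what such a lane plan could look like, here is a hedged Python sketch: an ordered list of lane-level actions keyed to distance along the route. The structure, action names, and distances are all assumptions for illustration, not the real map or planner format.

```python
from enum import Enum, auto

class LaneAction(Enum):
    KEEP = auto()          # stay in the current lane
    CHANGE_LEFT = auto()   # lane change to stay on the route
    CHANGE_RIGHT = auto()
    TAKE_SPLIT = auto()    # follow a lane split at an interchange

# Hypothetical lane plan: (action, trigger point in meters along the route).
lane_plan = [
    (LaneAction.KEEP, 0.0),
    (LaneAction.CHANGE_RIGHT, 850.0),  # move into the exit lane
    (LaneAction.TAKE_SPLIT, 1400.0),   # take the highway interchange
]

def current_action(plan, s_route_m):
    """Return the most recent action whose trigger point has been passed."""
    action = LaneAction.KEEP
    for a, s in plan:
        if s_route_m >= s:
            action = a
    return action

print(current_action(lane_plan, 900.0))  # -> LaneAction.CHANGE_RIGHT
```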
The second thing that’s about to happen is that we’re going to transition out of human-driven mode, driven by Dennis, into autonomous machine driving mode, where the car is going to drive us. Taking a look at the top right of our screen, we see adaptive cruise control (ACC) and Lane Keep (LK). When they’re both off, Dennis is driving. When they come on, the car will be driving us.

So here we go. Taking a look at the screen: Lane Keep is now on. ACC is now on. We’re driving fully autonomously. Dennis’s hands are off the wheel, but staying close for safety reasons, and we are officially starting our autonomous drive.

Okay, we are now in full autonomy. The car is keeping us in the lane. Let’s take a look at how that is happening. That thick green center path that you see is the Path Perception Ensemble, from DRIVE Labs episode one.
It is computing not just the center path and the edges of our lane, but also the center paths and edges of the left and right adjacent lanes, and we visualize that with different colors: green is our ego lane, left adjacent is red, and right adjacent is blue.
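Here is a minimal sketch, in Python, of the kind of per-lane structure such an ensemble output might carry, including the confidence value that drives the on-screen coloring. The types and fields are illustrative assumptions, not the actual software.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in vehicle coordinates, meters

@dataclass
class LaneEstimate:
    center: List[Point]      # center path polyline
    left_edge: List[Point]   # left lane edge polyline
    right_edge: List[Point]  # right lane edge polyline
    confidence: float        # 0..1; low confidence shows as red on screen

# Visualization colors from the video: ego lane green,
# left adjacent red, right adjacent blue.
LANE_COLORS = {"ego": "green", "left": "red", "right": "blue"}
```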
Next, we need to determine which of the obstacles belongs in which of these different lanes. The way we do that: we have the bounding box detections from DriveNet, and we have the free space boundary detections from OpenRoadNet. Where those two meet is what is called the object fence, and that fence marks off where the object is in space. We combine this object fence information with lane geometry information from the Path Perception Ensemble, and this now enables us to do obstacle-to-lane assignment. Each car’s fence takes on the color of its assigned lane.
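One way to sketch that assignment step: reduce each object fence to a lateral offset from the ego path and test it against the lane edges. This simplified Python example stands in for the real geometric test; the lane widths and offsets are made up.

```python
def assign_lane(fence_x_m, lane_boundaries):
    """Assign an obstacle to a lane from its fence point.

    fence_x_m: lateral offset of the object fence in meters
               (negative = left of the ego path, positive = right).
    lane_boundaries: lane name -> (left edge, right edge) lateral
                     offsets at the obstacle's longitudinal distance.
    Returns the lane name, or None if the fence is outside all lanes.
    """
    for lane, (left, right) in lane_boundaries.items():
        if left <= fence_x_m <= right:
            return lane
    return None

# Example with a standard 3.7 m lane width:
boundaries = {
    "left":  (-5.55, -1.85),
    "ego":   (-1.85,  1.85),
    "right": ( 1.85,  5.55),
}
print(assign_lane(2.4, boundaries))  # -> "right"
```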
We are now approaching our first autonomous maneuver. The car is letting us know that, based on our route plan, we need to make a lane change to the right. Here we go. The car is performing a surround radar and camera lane change safety check, and we are now moving from Lane Keep mode into Speed Adaptation, in order to figure out the speed profile to get into the next lane, and then into Lane Change mode, moving from the center path of the current lane onto the center path of the target lane. And we have now completed that lane change.
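The mode sequence narrated here (Lane Keep, then Speed Adaptation, then Lane Change, then back to Lane Keep) reads naturally as a small state machine. A minimal Python sketch, with the transition conditions guessed from the narration rather than taken from the actual software:

```python
from enum import Enum, auto

class Mode(Enum):
    LANE_KEEP = auto()
    SPEED_ADAPTATION = auto()  # find the speed profile for the gap
    LANE_CHANGE = auto()       # steer onto the target lane's center path

def step(mode, change_requested, safety_check_ok,
         speed_profile_ready, in_target_lane):
    """One tick of a simplified lane-change mode machine."""
    if mode is Mode.LANE_KEEP and change_requested and safety_check_ok:
        return Mode.SPEED_ADAPTATION
    if mode is Mode.SPEED_ADAPTATION and speed_profile_ready:
        return Mode.LANE_CHANGE
    if mode is Mode.LANE_CHANGE and in_target_lane:
        return Mode.LANE_KEEP
    return mode
```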
Okay, we’re now getting ready for our second set of autonomous driving maneuvers: going straight into the highway interchange onto 280. Now, although we know this is coming up based on localization to the HD map, we will not be using any clues from the map to actually navigate this maneuver. We are handling this using the Path Perception Ensemble only. The Lane Handling mode on the screen is Split, because this is a lane split interchange, and now the challenge is going to be for the Path Perception Ensemble to maintain confidence throughout this interchange, because it has both high curvature and high grade. But take a look at the Path Perception Ensemble: it’s still green, meaning it has high confidence that it’s navigating this difficult, curved, graded highway interchange correctly.
We are now coming up on our next set of autonomous driving maneuvers, to get onto highway 87. The first thing that we’re going to need to do is another lane change to the right to get into the correct exit lane, then handle another lane split highway interchange, followed by another lane change under time pressure.

So here we go. First lane change. You see Lane Handling mode go into Speed Adaptation, finding the lateral path into the next lane, and the Ensemble going from red to green as it lands in the target lane, gaining confidence that it has found the lane.
We have just handled another lane merge, and we are going to have a little bit of grade profile change in the road coming up. Right there: this is why it’s important to have calibration continuously running in the car.

We see the Lane Handling mode move into Split mode. The car needs to correctly take that lane split to the right so as not to unintentionally exit the route. The Path Perception Ensemble is now navigating another high curvature interchange. We see the center path staying green, and we are now moving right into that third maneuver.
This is a lane change under time pressure. We don’t have a lot of time here to move from the right lane into the next adjacent left lane, in order not to incorrectly exit from our planned route. So here we go. We’re switching from Lane Keep mode into Speed Adaptation and into Lane Change mode, landing in the center of the target lane to complete that set of maneuvers.
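A lane change under time pressure boils down to a feasibility check: is there enough road left to finish the maneuver before the route would be missed? A hedged back-of-the-envelope sketch in Python; the maneuver time and safety margin are illustrative numbers, not production values.

```python
def lane_change_feasible(dist_to_exit_m, speed_mps,
                         maneuver_time_s=6.0, margin_m=50.0):
    """Check whether a lane change can finish before the exit.

    Distance consumed is roughly speed times maneuver duration,
    plus a safety margin.
    """
    needed_m = speed_mps * maneuver_time_s + margin_m
    return dist_to_exit_m >= needed_m

# At 28 m/s (about 100 km/h) with 300 m to the exit:
print(lane_change_feasible(300.0, 28.0))  # -> True (218 m needed)
```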
And we are now going to complete the rest of our autonomous route and head back to the garage. And we’re back. We hope you enjoyed our autonomous drive today and enjoyed seeing how our software is enabling the car to drive itself. For any questions, reach out to us through the comments section. Check out our other DRIVE Labs videos, and we’ll see you next time.