Let’s find some lane lines: Udacity Self-Driving Car Engineer, Course Work

Alexander Stadnikov
Dec 5, 2020

Overview

Recognizing lane lines on the road is one of the essential tasks that human drivers perform well. We can do it because evolution has gifted us excellent sensors. Autonomous systems are only at the beginning of their epoch, and it's a non-trivial task for any robot to read and interpret data about the world around it. Computer Vision tries to close the gap between us, humans, and robots.

The goal of the project is to recognize lanes on the road with some limitations:

  • The recognition isn’t real-time.
  • The weather conditions are good: it's a sunny day.
  • The car is moving along a straight line on the highway.
  • Lines are visible.
  • The traffic isn’t dense.

The project is part of the Udacity Become a Self-Driving Car Engineer program.

The project setup

The project consists of a notebook and assets.

There are two types of assets: images and videos. Images are test samples (actually, frames from a video stream) stored in JPEG format with dimensions 960x540 pixels. Video files come in two forms:

  1. Two MPEG-4 files with dimensions 960x540 pixels
  2. One MPEG-4 file with dimensions 1280x720 pixels

The idea is to implement a pipeline. It will be done in two stages:

  • Recognize lanes on images
  • Recognize lanes on videos

A video stream is just a sequence of frames, so the solution for images will be scaled up to work with videos.

I’ll use Python and OpenCV to implement the pipeline. In the code snippets below, I avoid boilerplate code and focus on the most interesting parts. Also, for the post, I prefer to use more readable code; the implementation might differ.

The code is available on GitHub.

Pipeline

The pipeline consists of the following steps:

  • Apply Grayscale Transform
  • Apply Gaussian Blur
  • Detect Canny Edges
  • Filter the uninteresting region out
  • Apply Hough Transform to detect line segments
  • Extrapolate lane lines from line segments
  • Stabilize lane lines
  • Overlay the extrapolated lane lines on the original image

Each step is explained below.
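
Before diving into the details, here is a condensed sketch of how these steps might chain together. The helpers region_of_interest and overlay_lane_lines are placeholders for the code discussed below and in the repository; the parameter values are the ones used later in the post:

import cv2
import numpy as np

def pipeline(img):
    # Steps 1-2: grayscale, then blur to suppress noise
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Step 3: detect edges
    edges = cv2.Canny(blur, 50, 150)
    # Step 4: keep only the region in front of the car
    region = np.array([[(0, 539), (450, 325), (490, 325), (959, 539)]], dtype=np.int32)
    masked = region_of_interest(edges, region)
    # Step 5: detect line segments
    lines = cv2.HoughLinesP(masked, 1, np.pi / 180, 15, np.array([]),
                            minLineLength=20, maxLineGap=10)
    # Steps 6-8: extrapolate, stabilize, and overlay
    return overlay_lane_lines(img, lines)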

Lanes recognition on images

Consider the following original image:

Test image solidWhiteCurve.jpg

Apply Grayscale Transform

The point is to recognize white and yellow lines. These colors have high contrast with the road when the image is converted to grayscale.

We need to load the image and convert it into grayscale:

import matplotlib.image as mpimg
import cv2
import numpy as np

# mpimg.imread loads the image as an RGB array
img = mpimg.imread('./test_images/solidWhiteCurve.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
Grayscale of the original image

Apply Gaussian Blur

The grayscale image still contains a lot of potential noise. It's possible to reduce it with the Gaussian Blur technique. The blur must not be too aggressive: a blur kernel size of 5 is enough for the project.

kernel_size = 5
blur = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)
After the Gaussian blur is applied

Detect Canny Edges

At this step, we recognize lines on the image with the Canny Edge detector. The detector helps to get a set of edges. Edges are just borders between contrasting areas.

The Canny detector uses two parameters: the low threshold and the high threshold. All edges with contrast below the low threshold are filtered out. All edges above the high threshold are accepted as edges immediately. The remaining edges, between the low and high thresholds, are kept or discarded depending on their connectivity to already accepted edges. The recommended high-to-low threshold ratio is 2:1 or 3:1.

low_threshold, high_threshold = 50, 150
# Run the detector on the blurred image from the previous step
edges = cv2.Canny(blur, low_threshold, high_threshold)
All detected edges

Consider only the interesting region

The outcome of the previous stage contains too many edges. Many of them are not interesting for the project. It's possible to discard all such edges if we consider only a specific region in front of the car:

# Vertices of the trapezoid in front of the car
region = [(0, 539), (450, 325), (490, 325), (959, 539)]
vertices = np.array([region], dtype=np.int32)
# the helper region_of_interest is available on GitHub
masked = region_of_interest(edges, vertices)
Now we have less data to analyze
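
For reference, here is a minimal sketch of what such a helper can look like; it masks out everything outside the polygon (the actual implementation is in the repository):

def region_of_interest(img, vertices):
    # Start from a black mask of the same shape as the input
    mask = np.zeros_like(img)
    # Fill the polygon defined by `vertices` with white
    # (assuming a single-channel input such as the Canny edges image)
    cv2.fillPoly(mask, vertices, 255)
    # Keep only the pixels that fall inside the polygon
    return cv2.bitwise_and(img, mask)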

Hough Transform

Hough Transform helps us extract shapes from the image. The trickiest part of this step is finding the correct parameters. Once such parameters are found, the operation provides line segments that form almost straight lines:

rho = 1              # distance resolution of the accumulator, in pixels
theta = np.pi / 180  # angular resolution of the accumulator, in radians
threshold = 15       # minimum number of votes a line needs
min_line_length = 20
max_line_gap = 10
lines = cv2.HoughLinesP(masked, rho, theta, threshold, np.array([]),
                        minLineLength=min_line_length, maxLineGap=max_line_gap)
Highlighted detected edges

Extrapolation

The code for this and the following stages is a bit long for the post. Please check it in the repository.

At this stage, it's possible to extrapolate these lines. To do it properly, we need to split the segments into left and right groups. How do we determine whether a segment belongs to the left or the right group?

With the coordinates of a segment, it's possible to calculate its slope. It's easy to make a mistake here: on the image, the Y-axis goes from top to bottom.

  • For the left line, the slope is negative: X increases while Y decreases.
  • For the right line, the slope is positive: both X and Y increase.

Segments with a small absolute slope can be ignored because they're almost horizontal and aren't valuable.

Edges split into the left and right groups

Now, it’s time to extrapolate all these segments in both groups and overlay them onto the original image.
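
A minimal sketch of this idea, assuming lines is the output of cv2.HoughLinesP from the previous step (the y_top and min_slope values are illustrative; the repository version adds more checks plus the stabilization discussed later):

def extrapolate_and_overlay(img, lines, y_top=325, min_slope=0.3):
    left, right = [], []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x1 == x2:
                continue  # skip vertical segments to avoid division by zero
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) < min_slope:
                continue  # skip almost horizontal segments
            (left if slope < 0 else right).append((x1, y1, x2, y2))

    overlay = np.zeros_like(img)
    y_bottom = img.shape[0] - 1
    for group in (left, right):
        if not group:
            continue
        xs = [x for x1, _, x2, _ in group for x in (x1, x2)]
        ys = [y for _, y1, _, y2 in group for y in (y1, y2)]
        # Fit x = m*y + b, so the line can be evaluated at any height
        m, b = np.polyfit(ys, xs, 1)
        p1 = (int(m * y_bottom + b), y_bottom)
        p2 = (int(m * y_top + b), int(y_top))
        cv2.line(overlay, p1, p2, (255, 0, 0), 10)

    # Blend the drawn lines with the original image
    return cv2.addWeighted(img, 0.8, overlay, 1.0, 0.0)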

Small bonus: video without extrapolation

We will discuss the stabilization later.

Lanes recognition on videos

First attempt

Let's run the pipeline on the provided video files. The result is far from the expected one. The lines shake. Moreover, in the last video, there's a short segment with very bright asphalt where the previously used parameters detect the lines incorrectly. The video below is a short compilation of the results. Look at the strange deviation at the end of the clip. The lines even cross!
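
Applying the image pipeline frame by frame is straightforward. A minimal sketch with moviepy, assuming pipeline is the image function sketched earlier (the file names are illustrative):

from moviepy.editor import VideoFileClip

clip = VideoFileClip('test_videos/solidWhiteRight.mp4')
# fl_image applies the function to every frame (an RGB numpy array)
processed = clip.fl_image(pipeline)
processed.write_videofile('test_videos_output/solidWhiteRight.mp4', audio=False)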

Stabilization

Let's assume we keep the coefficients of both extrapolated lines for some number of frames. Then we can calculate their means. If the newly extrapolated coefficients differ significantly from these means, we use the means from the previous frames instead.
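
A minimal sketch of this idea: a ring buffer keeps the last few coefficient pairs per line and falls back to the running mean when a new fit deviates too much (the history size and deviation threshold are illustrative):

from collections import deque
import numpy as np

class LineStabilizer:
    def __init__(self, history=10, max_deviation=0.1):
        # Each entry is a (slope, intercept) pair from one frame
        self.history = deque(maxlen=history)
        self.max_deviation = max_deviation

    def update(self, slope, intercept):
        if self.history:
            mean_slope, mean_intercept = np.mean(self.history, axis=0)
            # A fit that deviates too much is replaced by the mean
            # of the previous frames
            if abs(slope - mean_slope) > self.max_deviation:
                return mean_slope, mean_intercept
        self.history.append((slope, intercept))
        mean_slope, mean_intercept = np.mean(self.history, axis=0)
        return mean_slope, mean_intercept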

The resulting code contains the class LaneLinesDetector, which combines all the stages, including the stabilization.

The video is a bit long but contains all three samples, with the most complicated one at the end.

Improvements

The provided workaround with stabilization is enough for the current project. Unfortunately, it doesn't work well on curvier roads. In the next projects, this technique must be replaced with more sophisticated algorithms from Machine Learning.

Other possible improvements:

  1. Getting images from an infrared camera
  2. Usage of Random Sample Consensus (RANSAC) and Kalman Filter
  3. Extrapolation to curves instead of straight lines (see the sketch below)
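
For the third item, the linear np.polyfit call from the extrapolation sketch could be replaced with a second-order fit. A rough illustration, reusing the xs, ys, y_top, y_bottom, and overlay names from that sketch:

def draw_curve(overlay, xs, ys, y_top, y_bottom):
    # Fit x = a*y^2 + b*y + c instead of a straight line
    coeffs = np.polyfit(ys, xs, 2)
    ys_dense = np.linspace(y_top, y_bottom, num=50)
    xs_dense = np.polyval(coeffs, ys_dense)
    points = np.int32(np.column_stack([xs_dense, ys_dense]))
    cv2.polylines(overlay, [points], isClosed=False,
                  color=(255, 0, 0), thickness=10)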
