Advanced Lane Lines Finding Part I: Camera calibration and image (un)distortion + Bonus

Source code for the article

Alexander Stadnikov
4 min read · Jan 2, 2021

In the previous article, Let’s find some lane lines, we addressed a simple scenario: nice weather and an almost empty, straight highway. In reality, the environment is more complicated. Let’s solve a more complex problem step by step.

Before we start any serious analysis of a video, we need to be sure the video is accurate. We humans have wonderful vision out of the box, perfectly tuned by nature. What about “artificial eyes”, cameras?

The first camera ever invented was the camera obscura. The principal difference between the camera obscura and modern cameras is the lens. Lenses create an interesting effect called optical distortion. The camera obscura doesn’t have this problem, but using one in robotics… well, good luck, and please let me know if you manage it.

When I was a child, I really enjoyed observing optical distortions. Ok, let’s be honest, we all did. For example:

Non-uniform distortion.

There are two types of distortion: radial and tangential. Radial distortion makes straight lines look curved. Tangential distortion tilts the image; it appears when the lens isn’t perfectly parallel to the image plane.
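
If you’re curious what those distortion coefficients actually are, here is a minimal sketch of the model OpenCV uses (my own illustration, not code from this project): three radial coefficients k1, k2, k3 and two tangential coefficients p1, p2, applied to a point in normalized camera coordinates.

def distort_point(x, y, k1, k2, k3, p1, p2):
    # Radial part: grows with the distance from the image center
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # Tangential part: models a lens that isn't parallel to the sensor
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d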

Wait, but how does this affect autonomous systems? If you need to turn the wheels to follow the lane, you have to calculate the lane’s curvature. If your image isn’t accurate, you can’t calculate it precisely.

Most people don’t care about this problem because their smartphone cameras are already calibrated. In robotics it’s crucial: you take a camera from vendor A, install it on a chassis developed by vendor B, and you must calibrate the camera yourself.

Too many words… Let’s do something useful.

To calibrate a camera, you take pictures of a known object and compare the actual result with the expected one. It’s easier to do this with special images called calibration patterns. The simplest is the chessboard calibration pattern.

An example of the Chessboard Pattern

You need to take several pictures of the pattern from different angles and distances; about 20 pictures is enough. You need something like this:

Many Chessboards

Now we must process these images with OpenCV. OpenCV has a handy function, findChessboardCorners, which takes a grayscale image and the expected pattern size and returns the positions of the detected corners. Feeding those detections, together with their expected real-world positions, into calibrateCamera gives us the distortion coefficients and the intrinsic camera matrix.

Let’s use, for example, a chessboard with nine by six inner corners (findChessboardCorners counts inner corners, not squares):

w, h = 9, 6
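
Before wiring up the full pipeline, it helps to sanity-check the detection on a single image (the file name below is just a placeholder) and draw what OpenCV found:

import cv2 as cv

img = cv.imread('./camera_calibration/example.jpg')  # placeholder file name
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
found, corners = cv.findChessboardCorners(gray, (w, h), None)
if found:
    # Draw the detected corners on top of the original image for inspection
    cv.drawChessboardCorners(img, (w, h), corners, found)
    cv.imwrite('corners_check.jpg', img)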

We need to create the list of expected corner positions. OpenCV works in 3D space, but we don’t need the z-coordinate: the pattern is flat, so z = 0 everywhere.

import numpy as np

obj_coords = np.zeros((w * h, 3), np.float32)
obj_coords[:, :2] = np.mgrid[0:w, 0:h].T.reshape(-1, 2)
# obj_coords: (0, 0, 0), (1, 0, 0), ..., (8, 5, 0)

obj_points = []  # 3D real-world points
img_points = []  # 2D image points

Let’s say we stored all our pictures in a folder camera_calibration as JPG files.

import glob
import cv2 as cv

gray = None
for file in glob.glob('./camera_calibration/*.jpg'):
    img = cv.imread(file)
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    res, corners = cv.findChessboardCorners(gray, (w, h), None)
    if not res:
        print(f"Unable to find chessboard corners on {file}")
        continue
    img_points.append(corners)
    obj_points.append(obj_coords)

shape = gray.shape[::-1]
_, mtx, dist, _, _ = cv.calibrateCamera(obj_points, img_points, shape, None, None)
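
A quick way to judge the calibration quality (my addition, not part of the original pipeline) is the reprojection error. If you also keep the per-image rotations and translations that calibrateCamera returns, you can reproject the object points and compare them with the detected corners:

# Re-run the calibration, this time keeping rvecs and tvecs as well
rms, mtx, dist, rvecs, tvecs = cv.calibrateCamera(obj_points, img_points, shape, None, None)

total_error = 0
for i in range(len(obj_points)):
    projected, _ = cv.projectPoints(obj_points[i], rvecs[i], tvecs[i], mtx, dist)
    total_error += cv.norm(img_points[i], projected, cv.NORM_L2) / len(projected)

print(f"RMS error reported by OpenCV: {rms:.3f}")
print(f"Mean reprojection error: {total_error / len(obj_points):.3f} px")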

Now we can undistort any image captured by this camera with a single undistort call:

original_img = cv.imread('orig.jpg')
undistorted_img = cv.undistort(original_img, mtx, dist, None, mtx)
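
Undistortion pushes some pixels outside the original frame, so the corrected image usually has black regions near the borders. If you want to crop them away, OpenCV can compute a refined camera matrix together with a valid region of interest (a sketch, reusing mtx, dist and original_img from above):

h_img, w_img = original_img.shape[:2]
# alpha=1 keeps all source pixels; alpha=0 would instead zoom in so no black borders remain
new_mtx, roi = cv.getOptimalNewCameraMatrix(mtx, dist, (w_img, h_img), 1)
undistorted = cv.undistort(original_img, mtx, dist, None, new_mtx)
x, y, w_roi, h_roi = roi
cropped = undistorted[y:y + h_roi, x:x + w_roi]
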
Now we’re ready for the next steps.

Bonus

The chessboard has its drawbacks. It’s very sensitive to corner visibility: if OpenCV misses any corner, you lose the whole picture. The problem is straightforward: all the corners look the same and are nameless. What if we had meta-information about each corner? We could interpolate some of the missing data, couldn’t we?

The solution is called the ChArUco pattern. It combines ArUco markers (small binary squares that look a bit like QR codes) with the chessboard:

A nice explanation from the OpenCV documentation.

Now it’s possible to calculate the distortion coefficients and the camera matrix more robustly. I won’t present the code in the article, but it’s available here.

Also, it’s not necessary to take individual pictures. You can record a video and process it frame by frame. Even better, you can use a robot that moves the pattern around your camera and records the video. Just automate it!
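
With OpenCV, reading a video frame by frame is only a few lines (a sketch; the file name is a placeholder, and each frame goes through the same corner detection as the still images):

import cv2 as cv

cap = cv.VideoCapture('calibration_video.mp4')  # placeholder file name
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    # ... detect the pattern in `gray` exactly as with the still images ...
cap.release()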

I calibrated my smartphone’s camera and prepared a video. In the video, I overlaid some additional information using the obtained distortion coefficients and camera matrix. The video isn’t smoothed, since that’s not the point of the demonstration, but it illustrates the result quite well:

