Requirement: write a pipeline of processing steps that identifies and draws the lane lines on a few test images. Once you can reliably identify the lines in an image, you can paste your code into the provided block to run on a video stream.
Start code: https://github.com/udacity/CarND-LaneLines-P1
1. hsv vs gray
In this project, especially in the challenge video, HSV is much easier to work with: a grayscale image doesn't separate light yellow from the background well, since grayscale thresholding only captures intensity differences against a threshold, while HSV isolates the yellow hue directly. That said, if a video had other yellow or white noise lines in the region of interest, the same problem would reappear, for example when driving on a road covered in white snow.
2. average slope vs linear regression
The Hough transform returns a variable number of small line segments with various positions and slopes. At first I tried to statistically reject outliers assuming a normal distribution and then average the rest, but the Hough transform sometimes returns so few segments that there aren't enough samples for a reasonable result. Eventually I used linear regression (polyfit) in the draw-line function. The averaging approach could still work if you use a more tolerant Hough transform threshold so that there are always more candidate segments.
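The regression step can be sketched like this: gather all segment endpoints and fit a single line with `np.polyfit`. Fitting x as a function of y is an assumption of this sketch (it avoids infinite slopes for near-vertical lane lines), not necessarily the original code's choice:

```python
import numpy as np

def fit_lane(segments):
    """Fit one line x = m*y + b through Hough segment endpoints.

    segments: iterable of (x1, y1, x2, y2) tuples from the Hough step.
    """
    xs, ys = [], []
    for x1, y1, x2, y2 in segments:
        xs += [x1, x2]
        ys += [y1, y2]
    m, b = np.polyfit(ys, xs, 1)  # degree-1 least-squares fit
    return m, b
```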
3. weights in linear regression points
Segments from the Hough transform have different lengths, yet the function returns just two endpoints per segment. Longer segments should carry more weight than short ones in the regression. So, based on a segment's length, I generate more points for longer segments in the x, y input arrays of the regression, giving them more weight, so that a small noise segment cannot pull the slope too far.
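One way to sketch the point-replication idea (repeat counts proportional to length are an assumption here; `np.polyfit`'s `w=` weight argument would be an alternative):

```python
import numpy as np

def weighted_fit(segments):
    """Fit x = m*y + b, repeating each segment's endpoints in
    proportion to its pixel length so longer segments dominate."""
    xs, ys = [], []
    for x1, y1, x2, y2 in segments:
        length = np.hypot(x2 - x1, y2 - y1)
        reps = max(1, int(length))  # repeat count grows with length
        xs += [x1, x2] * reps
        ys += [y1, y2] * reps
    return np.polyfit(ys, xs, 1)
```

With this weighting, a one-pixel noise segment contributes two points while a 40-pixel lane segment contributes eighty, so the fitted slope stays close to the true lane.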
4. Parameter selection process
We have to choose many parameters throughout this project. Instead of experimenting endlessly on gut feeling, it pays to build functions/pipelines that visually render the image at each intermediate step. For example, show in the video the segments selected by the Hough transform, look for segments that you think should not have been selected, then adjust parameters accordingly.
5. Temporal smoothing
Results from direct per-frame calculation tend to be jittery. Averaging with the previous N frames' line values gives a smoother result; it is really just a low-pass filter.
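The N-frame average can be sketched with a bounded deque; the window size of 5 here is an illustrative choice, not necessarily what the original code used:

```python
from collections import deque
import numpy as np

class LaneSmoother:
    """Moving average over the last N frames' (slope, intercept),
    acting as a simple low-pass filter on the detected line."""

    def __init__(self, n=5):
        self.history = deque(maxlen=n)  # old frames fall off automatically

    def update(self, slope, intercept):
        self.history.append((slope, intercept))
        return tuple(np.mean(self.history, axis=0))
```

An exponential moving average (`smoothed = a * new + (1 - a) * smoothed`) would be a lighter-weight alternative with similar effect.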
The results look good on these videos, but I think this is still far from usable. The code would easily break at night, in rain or snow, or when my car is so close to other cars that the lanes aren't even visible from the camera. On a different note, the efficiency of the algorithm/implementation also matters because of the real-time constraints of driving.
Comparing this with how a human detects a lane: I think a human carries a mental model of what a lane line looks like, long thin yellow or white lines with cars driving between them, so if the lines aren't visible, the line formed by the cars ahead can be used to deduce them. That already involves complex feature recognition of cars, lines formed from car contours, and the ability to reason across these different elements.