In this AI project I built something that can recognise a path. It will turn something like this:

Into something like this:

The red line is the detected left border and the yellow is its line of best fit; the green line is the detected right border and the purple is its line of best fit.
The way this works is as follows. First, the program takes a small cluster of pixels from the centre of the image using the PIL/Pillow library. Then it goes through the image pixel by pixel, row by row; the path runs roughly along the y-axis, so its edges lie along the x-axis. For each pixel in a row, it checks whether the RGB (red, green, blue) colour values are within a tolerance of about 20 of the sampled centre pixels. The first pixel where this condition becomes true is the left edge of the path, and the first pixel after that where it becomes false again is the right edge. Once every row has been processed, it takes all of the edge points and constructs a line of best fit for each edge.
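For concreteness, here is a rough sketch in Python of that row-by-row scan. The filename, the tolerance of 20, and the size of the centre sample are illustrative assumptions rather than the project's exact values, and the real script may differ in detail.

```python
from PIL import Image
import numpy as np

TOLERANCE = 20          # how close a pixel must be to the centre samples (assumed value)
SAMPLE_HALF_WIDTH = 5   # half-width of the sample cluster taken from the centre (assumed)

img = Image.open("training/example.jpg").convert("RGB")  # hypothetical filename
pixels = img.load()
width, height = img.size

# Take a small cluster of pixels from the centre of the image as "path" samples.
cx, cy = width // 2, height // 2
samples = [pixels[x, y]
           for x in range(cx - SAMPLE_HALF_WIDTH, cx + SAMPLE_HALF_WIDTH)
           for y in range(cy - SAMPLE_HALF_WIDTH, cy + SAMPLE_HALF_WIDTH)]

def is_path(rgb):
    """True if the pixel's RGB values are within TOLERANCE of any sampled centre pixel."""
    return any(all(abs(c - s) <= TOLERANCE for c, s in zip(rgb, sample))
               for sample in samples)

left_edges, right_edges = [], []

# Scan each row: the first matching pixel is the left edge of the path,
# and the first non-matching pixel after that is the right edge.
for y in range(height):
    left = right = None
    for x in range(width):
        if left is None and is_path(pixels[x, y]):
            left = x
        elif left is not None and not is_path(pixels[x, y]):
            right = x
            break
    if left is not None and right is not None:
        left_edges.append((left, y))
        right_edges.append((right, y))

# Fit a straight line (x as a function of y) through each set of edge points.
left_x, left_y = zip(*left_edges)
right_x, right_y = zip(*right_edges)
left_fit = np.polyfit(left_y, left_x, 1)      # [gradient, intercept]
right_fit = np.polyfit(right_y, right_x, 1)
print("left best fit:", left_fit, "right best fit:", right_fit)
```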
You can experiment with different images by changing the filename on line four of the script to any image in the training folder.
Hopefully, one day, this could be used in a robot to help it steer itself. For example, a delivery bot that travels along streets could use it to decide whether to turn left or right to stay on the path by comparing the gradients of the two best-fit lines. For now, however, it is just a Python program. Perhaps for good!
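As a small illustration of that idea, a steering decision could compare the gradients of the two best-fit lines. The function below is purely hypothetical: the threshold is an invented value, and the mapping from gradient sign to turn direction depends on the image coordinate convention used when fitting.

```python
def steering_hint(left_fit, right_fit, threshold=0.1):
    """Suggest a direction from the average gradient of the two edge fits.

    left_fit and right_fit are assumed to be [gradient, intercept] pairs
    from np.polyfit (x as a function of y), as in the sketch above.
    """
    average_gradient = (left_fit[0] + right_fit[0]) / 2
    if average_gradient > threshold:
        return "turn right"   # sign convention is an assumption
    if average_gradient < -threshold:
        return "turn left"
    return "keep straight"
```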