The international hand sign for "Look, it's steering by itself, but I'm pretty sure this will go south any second"
Yeah, yeah, I know. We don't need another article about "how to train some DNN on a bite-sized Kaggle dataset, and then never do anything with it", right?
That’s not what this is about. We’ll put this in a real car.
Here are the steps:
- Build all the hardware
- Hook it up to a real car
- Collect our own dataset
- Do all the analysis, cleaning and preprocessing
- Ok… train a Neural Net. Briefly.
- Deploy it to our car
- See what it does when it’s in charge of the steering wheel on the road.
Cool? Cool. Let’s go!
-> Part 1 – Hardware build log
Hardware: Nvidia Jetson Nano, Intel RealSense D435, some generic 7″ HDMI monitor, CAN USB interface, Arduino Nano clone, u-blox NEO-6M GPS receiver, DRV8825 stepper driver, some stepper I stole from an old 3D printer
Software: Ubuntu Linux, Docker, Python, Arduino C, Jupyter Notebooks, Keras/TensorFlow 2, TensorRT, OpenCV, Matplotlib, pandas etc… shoulders of giants or something.
I encourage you to do everything mentioned, except for the last part. Hooking up an actuator to a car’s steering wheel is not safe. We’ll build and test it in a manner that is as safe as reasonably possible within the scope of a fun project, but that’s still far from safe. Let alone street legal.
I can’t stress this enough.
We will only test this in scenarios where even the worst possible behaviour will not endanger anyone. But the whole thing is still irresponsible. Ok? Ok.
Another disclaimer, concerning IP.
While I came up with the fundamental approach on my own, I'm by no means the first one to try something like this, nor is my attempt the most successful one.
I would not have started to actually work on this idea if Nvidia and Comma.ai hadn't shown that this is not a completely hopeless and naive thing to try, and details of this project are definitely inspired by their work.