This project represents the integration of embedded systems with advanced artificial intelligence, specifically focusing on autonomous driving capabilities using a Raspberry Pi. The core of the system relies on a dual-AI pipeline: a custom Convolutional Neural Network (CNN) for steering angle prediction and YOLOv5 for real-time object detection.
The pipeline operates by capturing real-time video feed from a camera module, processing the frames through two parallel AI models, and translating the predictions into physical motor control actions to navigate the environment.
First of all, for training, I manually built a driving road out of paper and collected images for the training set.
↳ These are the raw images, before preprocessing, used for training the driving model.
↳ These are the images labelled manually for training YOLO.
I paid attention to maintaining a balanced image distribution to avoid biasing the driving model.
The system uses OpenCV to capture video frames in real time. To maintain a high frame rate and keep the control loop responsive, the pipeline runs capture and inference on separate threads using Python's threading module.
The steering logic is driven by a custom PyTorch Convolutional Neural Network, inspired by the NVIDIA End-to-End Autonomous Driving model.
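Since the project's exact layer configuration isn't shown, here is a sketch following the published NVIDIA PilotNet architecture, adapted to the 1×66×200 grayscale input described below; treat it as an assumption, not the project's code.

```python
# PilotNet-style steering regressor (assumed layout, adapted to
# single-channel 66x200 input). Comments show feature-map sizes.
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 24, 5, stride=2), nn.ELU(),   # -> 24 x 31 x 98
            nn.Conv2d(24, 36, 5, stride=2), nn.ELU(),  # -> 36 x 14 x 47
            nn.Conv2d(36, 48, 5, stride=2), nn.ELU(),  # -> 48 x  5 x 22
            nn.Conv2d(48, 64, 3), nn.ELU(),            # -> 64 x  3 x 20
            nn.Conv2d(64, 64, 3), nn.ELU(),            # -> 64 x  1 x 18
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ELU(),
            nn.Linear(100, 50), nn.ELU(),
            nn.Linear(50, 10), nn.ELU(),
            nn.Linear(10, 1),  # single steering-angle output
        )

    def forward(self, x):
        return self.regressor(self.features(x))
```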
Each frame is resized to 200×66 pixels and converted to grayscale. An inverse binary threshold (cv2.THRESH_BINARY_INV) is then applied to highlight lane features, and the result is normalized with transforms.Normalize before being fed to the network.
While the CNN handles where the car should turn, a YOLOv5 model acts as the car's "eyes" for understanding its environment.
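Loading a custom-trained YOLOv5 model is typically done through torch.hub's standard ultralytics/yolov5 entry point; the weights filename, confidence cutoff, and helper below are assumptions sketched for illustration.

```python
# Hedged sketch: load custom YOLOv5 weights and filter detections for
# the control logic. 'best.pt' and conf=0.5 are assumed values.
import torch

def load_detector(weights="best.pt"):
    # Standard ultralytics/yolov5 hub API; downloads the repo on first use.
    return torch.hub.load("ultralytics/yolov5", "custom", path=weights)

def detections_above(results_df, conf=0.5):
    """Take a YOLOv5 results.pandas().xyxy[0] DataFrame and return
    (class_name, confidence) pairs above the confidence cutoff."""
    keep = results_df[results_df["confidence"] >= conf]
    return list(zip(keep["name"], keep["confidence"]))
```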
The model is trained to detect three custom classes: crossroad, emptyBottle, and laneBottle.

Based on the combined inferences from the CNN and YOLOv5, the system calculates the final speed and steering angle. Using the RPi.GPIO library, PWM (Pulse Width Modulation) signals are dispatched to the left and right motors. Duty cycles are adjusted to execute physical maneuvers smoothly, completing the end-to-end autonomous driving loop.
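The steering-to-duty-cycle mapping can be sketched as a pure function, with the actual RPi.GPIO wiring shown in comments (it only runs on the Pi). The pin numbers, PWM frequency, and mapping formula are assumptions for illustration.

```python
# Sketch: map a steering prediction to left/right PWM duty cycles for a
# differential-drive chassis. Base speed and the linear mapping are
# assumed, not taken from the project's control code.

def wheel_duty_cycles(steering, base_speed=60.0):
    """Map steering in [-1, 1] (negative = left) to (left_duty,
    right_duty) percentages clamped to [0, 100]."""
    left = base_speed * (1.0 + steering)   # speed up outer wheel
    right = base_speed * (1.0 - steering)  # slow down inner wheel
    clamp = lambda d: max(0.0, min(100.0, d))
    return clamp(left), clamp(right)

# On the Raspberry Pi itself (not runnable on a desktop), the duty
# cycles would be applied roughly like this with RPi.GPIO:
#   import RPi.GPIO as GPIO
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup(18, GPIO.OUT)              # pin number is an assumption
#   left_pwm = GPIO.PWM(18, 1000)         # 1 kHz PWM (assumed)
#   left_pwm.start(0)
#   left_pwm.ChangeDutyCycle(wheel_duty_cycles(0.2)[0])
```

Steering by duty-cycle differential (rather than a steering servo) is what lets a two-motor chassis turn smoothly while still moving forward.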
This project successfully bridges the gap between high-level AI algorithms and low-level hardware control. By decoupling the steering prediction (CNN) from the object detection (YOLO) using multi-threading, the embedded system achieves real-time autonomous navigation, speed adaptation, and dynamic obstacle avoidance.