TRAFFIC SIGN DETECTION FOR VISUALLY IMPAIRED
Abstract
Traffic signs are essential elements of road safety, providing warnings, directions, and regulations to drivers and pedestrians. Because these signs are primarily visual, they are inaccessible to blind and visually impaired individuals, who are consequently exposed to significant risks while navigating roads. This project proposes a deep learning–based assistive system that detects and interprets traffic signs in real time. The system uses convolutional neural network (CNN) object detectors such as YOLO (You Only Look Once) to recognize traffic signs in live video feeds. Once a sign is detected, the result is converted to speech by a text-to-speech (TTS) engine, providing immediate auditory feedback to the user. The framework is designed for deployment on lightweight, portable platforms such as smartphones and the Raspberry Pi, keeping it affordable and practical. By combining deep learning with accessible technologies, the system improves navigation safety and independence for visually impaired people.
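The detect-then-announce pipeline described above can be sketched as a small piece of glue logic. The `SignAnnouncer` class below is a hypothetical illustration, not the paper's implementation: the `detect` and `speak` callables are assumed stand-ins for a real detector (e.g. a YOLO model run on each video frame) and a real TTS engine (e.g. pyttsx3 on a Raspberry Pi). The class also deduplicates announcements so a sign that stays in view is spoken only once.

```python
from typing import Callable, Iterable, List, Set


class SignAnnouncer:
    """Bridges a frame-level sign detector and a text-to-speech engine.

    `detect` and `speak` are injected callables (assumed interfaces, not
    from the paper): `detect` maps a video frame to the labels of the
    traffic signs it contains, and `speak` voices one phrase.
    """

    def __init__(self,
                 detect: Callable[[object], Iterable[str]],
                 speak: Callable[[str], None]) -> None:
        self.detect = detect
        self.speak = speak
        self._visible: Set[str] = set()  # signs seen in the previous frame

    def process_frame(self, frame: object) -> List[str]:
        labels = set(self.detect(frame))
        # Announce only signs that newly entered the view, so the user
        # is not spammed while the same sign remains on screen.
        new = sorted(labels - self._visible)
        for label in new:
            self.speak(f"Caution: {label} ahead")
        self._visible = labels
        return new
```

In a deployed build, `detect` would wrap the object detection model and `speak` the TTS engine; keeping them as parameters lets the same announcement logic run on a smartphone or a Raspberry Pi regardless of which detector or speech backend is available.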

International Journal of Science Engineering and Advance Technology is licensed under a Creative Commons Attribution 3.0 Unported License, based on a work at IJSEAT. Permissions beyond the scope of this license may be available at http://creativecommons.org/licenses/by/3.0/deed.en_GB.


