ASSISTIVE OBJECT DETECTION FOR VISUALLY IMPAIRED USING DEEP LEARNING
Abstract
Vision loss affects over 200 million people worldwide, making everyday tasks challenging and reducing independence. To address this, we propose an Android-based assistive application that leverages deep learning for real-time object detection with auditory feedback. The system integrates You Only Look Once (YOLO) for fast detection and the Single Shot Detector (SSD) for accurate recognition on mobile devices. Using the TensorFlow Lite APIs, the models are optimized to run efficiently on smartphones, enabling complex machine learning tasks in resource-constrained environments. Once objects are identified, the TextToSpeech API provides immediate audio feedback, allowing visually impaired users to understand their surroundings. This combination of deep learning and auditory assistance enhances safety, mobility, and independence in daily life.
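To make the pipeline concrete, the sketch below shows one plausible post-processing step: converting raw detector output (label, confidence score, normalized bounding box) into short sentences that could be handed to a text-to-speech engine. The detector itself (YOLO/SSD running under TensorFlow Lite) is not shown, and the function names, the 0.5 confidence threshold, and the left/ahead/right wording are illustrative assumptions rather than the paper's actual implementation.

```python
# Hypothetical post-processing sketch for an assistive detection app:
# turn detector output into spoken phrases. All names and thresholds
# here are assumptions for illustration, not the authors' code.

CONF_THRESHOLD = 0.5  # discard low-confidence detections


def horizontal_position(x_center: float) -> str:
    """Map a normalized box center (0..1) to a coarse spoken direction."""
    if x_center < 1 / 3:
        return "on your left"
    if x_center > 2 / 3:
        return "on your right"
    return "ahead"


def detections_to_speech(detections):
    """detections: list of (label, confidence, (x_min, y_min, x_max, y_max)),
    with coordinates normalized to [0, 1]. Returns one phrase per kept box,
    ready to be passed to a text-to-speech API."""
    phrases = []
    for label, conf, (x0, _y0, x1, _y1) in detections:
        if conf < CONF_THRESHOLD:
            continue  # too uncertain to announce
        phrases.append(f"{label} {horizontal_position((x0 + x1) / 2)}")
    return phrases


if __name__ == "__main__":
    dets = [
        ("chair", 0.91, (0.05, 0.4, 0.25, 0.9)),
        ("person", 0.78, (0.45, 0.1, 0.60, 0.95)),
        ("dog", 0.30, (0.70, 0.5, 0.90, 0.8)),  # below threshold, dropped
    ]
    print(detections_to_speech(dets))  # → ['chair on your left', 'person ahead']
```

On Android, each returned phrase would then be queued on the platform's TextToSpeech engine; keeping this mapping as a small pure function makes the thresholds and wording easy to test independently of the model.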
Copyright © 2013, All rights reserved. | ijseat.com

International Journal of Science Engineering and Advance Technology is licensed under a Creative Commons Attribution 3.0 Unported License. Based on a work at IJSEat. Permissions beyond the scope of this license may be available at http://creativecommons.org/licenses/by/3.0/deed.en_GB.


