ANY TYPE OF OBJECT DETECTION USING DEEP LEARNING
ID: 2853

Abstract: This project presents a real-time object detection and voice assistance system using deep learning and computer vision techniques. The system uses the YOLOv3 (You Only Look Once, version 3) algorithm, implemented through OpenCV's DNN module, to detect objects in live video streams captured by a webcam. Once objects are identified, their labels are converted into speech using the Google Text-to-Speech (gTTS) engine, providing audio feedback to the user. The primary goal of the system is to assist users, especially visually impaired individuals, by enabling them to understand their surroundings through real-time audio descriptions. The system integrates object detection, image processing, and speech synthesis into a unified pipeline, ensuring efficient and interactive performance. Additionally, multi-threading is used to handle audio playback without interrupting the video processing stream. The solution can be applied in areas such as assistive technology, surveillance, smart environments, and robotics, offering an intelligent and user-friendly interface for real-world object recognition. In this paper, we propose a system that combines real-time object detection using the YOLOv3 algorithm with audio feedback to assist visually impaired individuals in locating and identifying objects in their surroundings. YOLOv3 is a state-of-the-art object detection algorithm that has been used in numerous studies across various applications, and audio feedback has likewise been examined in previous research as a useful tool for assisting visually impaired individuals.
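The pipeline described in the abstract has two stages that can be sketched in code: turning raw YOLOv3 output rows into a short list of object labels, and handing those labels to a background thread so speech playback never blocks the video loop. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: `select_labels` and `start_speaker` are hypothetical helper names, the YOLOv3 output layout `[cx, cy, w, h, objectness, class scores...]` is the standard Darknet convention, and the speaker thread records labels into a list as a stand-in for the gTTS synthesis and audio playback the paper uses.

```python
import queue
import threading


def select_labels(rows, class_names, conf_threshold=0.5):
    """Map raw YOLOv3 output rows to a deduplicated list of object labels.

    Each row is assumed to follow the Darknet layout:
    [cx, cy, w, h, objectness, class_0_score, ..., class_N_score].
    A label is kept when objectness * best class score clears the threshold.
    """
    spoken = []
    for row in rows:
        objectness = row[4]
        scores = row[5:]
        best = max(range(len(scores)), key=lambda i: scores[i])
        confidence = objectness * scores[best]
        if confidence >= conf_threshold:
            label = class_names[best]
            if label not in spoken:  # speak each object name only once per frame
                spoken.append(label)
    return spoken


def start_speaker(spoken_log):
    """Start a daemon thread that consumes labels from a queue.

    The abstract notes that audio playback runs on its own thread so it never
    interrupts video processing. Here the worker appends to `spoken_log` as a
    stand-in for gTTS synthesis + playback.
    """
    q = queue.Queue()

    def worker():
        while True:
            label = q.get()
            spoken_log.append(label)  # real system: gTTS(label).save(...) + play
            q.task_done()

    threading.Thread(target=worker, daemon=True).start()
    return q


if __name__ == "__main__":
    names = ["person", "bicycle", "car"]
    detections = [
        [0.5, 0.5, 0.2, 0.4, 0.9, 0.05, 0.02, 0.93],  # confident "car"
        [0.3, 0.6, 0.1, 0.2, 0.8, 0.91, 0.04, 0.05],  # confident "person"
        [0.7, 0.2, 0.1, 0.1, 0.3, 0.40, 0.30, 0.30],  # too weak, dropped
    ]
    log = []
    speech_queue = start_speaker(log)
    for label in select_labels(detections, names):
        speech_queue.put(label)   # video loop continues immediately
    speech_queue.join()           # wait only here, for demonstration
    print(log)                    # → ['car', 'person']
```

The queue decouples producer (video loop) and consumer (speaker) so a slow text-to-speech call delays only the audio, not frame capture, which matches the multi-threading rationale stated in the abstract.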
Published: 24-4-1-2026
Issue: Vol. 26 No. 4-1 (2026)
Pages: 888-897
Section: Articles
License: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
How to Cite: A. LAKSHMI SIRISHA, Dr. M. VEERA KUMARI, "ANY TYPE OF OBJECT DETECTION USING DEEP LEARNING", International Journal of Engineering Sciences and Advanced Technology, 26(4-1), 2026, pp. 888-897, ISSN: 2250-3676.