Vision substitution with object detection and vibrotactile stimulus
Type
Conference paper
Publication date
2019
Journal
VISIGRAPP 2019 - Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications
Citations (Scopus)
2
Authors
Ribani R.
Marengoni M.
Abstract
Copyright © 2019 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.
The present work proposes a system that implements sensory substitution of vision through a wearable item with vibration motors positioned on the back of the user. In addition to the developed hardware, the proposal includes a system that uses deep learning techniques to detect and classify objects in controlled environments. The hardware comprises a simple HD camera, a pair of Arduinos, 9 cylindrical DC motors and a Raspberry Pi (responsible for the image processing and for translating the signals to the Arduinos). In a first trial of image classification and localization, the ResNet-50 model pre-trained on the ImageNet database was tested. We then implemented a Single Shot Detector with a MobileNetV2 backbone to perform real-time detection on the Raspberry Pi, sending the detected object's class and location to the motors as defined vibration patterns.
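The mapping from a detected object's location to one of the 9 motors could be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes the 9 motors form a 3×3 grid on the user's back and that the detector outputs bounding boxes in normalized [0, 1] coordinates, as SSD-style detectors typically do.

```python
def bbox_to_motor(xmin, ymin, xmax, ymax, rows=3, cols=3):
    """Map a normalized bounding box to a motor index in a rows x cols grid.

    The 3x3 grid layout is an assumption based on the 9 motors described
    in the paper; the actual motor arrangement and signaling protocol are
    not specified in the abstract.
    """
    # Centre of the detected object's bounding box.
    cx = (xmin + xmax) / 2.0
    cy = (ymin + ymax) / 2.0
    # Quantize the centre onto the motor grid, clamping to valid cells.
    col = min(int(cx * cols), cols - 1)
    row = min(int(cy * rows), rows - 1)
    # Row-major motor index: 0 = top-left, rows*cols - 1 = bottom-right.
    return row * cols + col
```

A detection loop on the Raspberry Pi would call this per detected object and forward the resulting index (together with a class-specific vibration pattern) to the Arduinos over serial.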
Scopus subjects
Controlled environment, Learning techniques, Object class, Real-time detection, Sensory substitution, Single shots, Vibration motor, Vibrotactile