
Proceedings Paper

Conception of a touchless human machine interaction system for operating rooms using deep learning
Author(s): Florian Pereme; Jesus Zegarra Flores; Mirko Scavazzin; Franck Valentini; Jean-Pierre Radoux

Paper Abstract

Touchless human-machine interaction (HMI) is important in sterile environments, especially in operating rooms (OR). Surgeons need to interact with images from CT scanners, X-rays, ultrasound, etc. Contamination problems may arise if surgeons have to touch a keyboard or mouse. To reduce contamination and to let the surgeon be more autonomous during the operation, several projects have been developed within the Medic@ team since 2011. To recognize the hand and its gestures, two main projects, Gesture Tool Box and K2A, both based on the Kinect device (with a depth camera), were prototyped. Hand gesture detection was done by segmentation and hand descriptors on RGB images, but always with a dependency on the depth camera (Kinect) for detecting the hand. Moreover, this approach does not allow the system to adapt to a new gesture requested by the end user: if a new gesture is requested, a new algorithm must be programmed and tested. Thanks to the evolution of NVIDIA cards, which reduced the processing time of CNN algorithms, the most recent approach explored was deep learning. In the Gesture Tool Box project, hand gesture detection was analyzed using a CNN (VGG16, pre-trained) and transfer learning. The results were very promising, showing 85% accuracy for the detection of 10 different gestures from LSF (French Sign Language); it was also possible to create a user interface giving the end user the autonomy to add his own gestures and to run the transfer learning automatically. However, we still had problems with the real-time recognition delay (0.8 s) and the dependency on the Kinect device. In this article, a new architecture is proposed, in which we want to use standard cameras and to reduce the delay of hand and gesture detection.
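The transfer-learning approach described above (a frozen pre-trained CNN backbone plus a small classifier head retrained for new gestures) can be illustrated with a minimal NumPy sketch. This is not the paper's code: the paper uses VGG16, whereas here a fixed random projection stands in for the frozen backbone, and the synthetic "gesture" data, dimensions, and learning rate are all illustrative. Only the principle is shown: backbone weights stay fixed, and only the new softmax head is trained, which is why a user-added gesture does not require reprogramming the feature extractor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained backbone (VGG16 in the paper):
# a fixed random projection from "image" pixels to a feature vector.
# In transfer learning, these weights are NOT updated.
D_IN, D_FEAT, N_CLASSES = 64, 32, 10   # 10 LSF gestures in the paper
W_backbone = rng.normal(size=(D_IN, D_FEAT))

def extract_features(x):
    """Frozen feature extractor: only the head below is trained."""
    return np.maximum(x @ W_backbone, 0.0)   # ReLU features

# Synthetic "gesture" data: one Gaussian blob per class.
centers = rng.normal(size=(N_CLASSES, D_IN))
X = np.concatenate([c + 0.1 * rng.normal(size=(50, D_IN)) for c in centers])
y = np.repeat(np.arange(N_CLASSES), 50)

# Features are computed once (the backbone is frozen), then normalized.
F = extract_features(X)
F /= F.std() + 1e-8

# Trainable head: a single softmax layer on the frozen features.
W_head = np.zeros((D_FEAT, N_CLASSES))
b_head = np.zeros(N_CLASSES)

for _ in range(300):                          # plain gradient descent
    logits = F @ W_head + b_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0         # softmax cross-entropy gradient
    W_head -= 0.01 * F.T @ grad / len(y)
    b_head -= 0.01 * grad.mean(axis=0)

acc = (np.argmax(F @ W_head + b_head, axis=1) == y).mean()
print(f"training accuracy of the new head: {acc:.2f}")
```

Because only the small head is optimized, adding a new gesture class amounts to collecting a few examples and rerunning this short training loop, which is what makes the "automatic transfer learning" user interface described above feasible.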
The state of the art shows that YOLOv2, using the Darknet framework, is a good option, with faster recognition time than other CNNs. We have implemented YOLOv2 for the detection of the hand and signs, with good results in gesture detection and a gesture recognition time of 0.10 s in laboratory conditions. Future work will include reducing the errors of our model, recognizing intuitive and standard gestures, and running tests in real conditions.
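A detector such as YOLOv2 emits many overlapping candidate boxes per frame; the standard post-processing is confidence thresholding followed by non-maximum suppression (NMS). The NumPy sketch below illustrates only that step, not the paper's implementation: the boxes, scores, and thresholds are made-up values chosen for the example.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one box against an array of boxes.
    Boxes are [x1, y1, x2, y2] in pixels."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    """Keep the highest-scoring box, drop overlapping duplicates, repeat."""
    keep_mask = scores >= conf_thresh          # confidence thresholding
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(scores)[::-1]           # best score first
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return boxes[kept], scores[kept]

# Three candidate detections of a hand: two near-duplicates of the same
# hand, and one low-confidence box elsewhere (illustrative values).
boxes = np.array([[100, 100, 200, 200],
                  [105, 102, 205, 198],
                  [300, 300, 350, 350]], dtype=float)
scores = np.array([0.9, 0.8, 0.3])
final_boxes, final_scores = nms(boxes, scores)
print(final_boxes, final_scores)
```

The low-confidence box is removed by the threshold, and the weaker of the two near-duplicates is suppressed by the IoU test, leaving one detection per hand; this post-processing is cheap compared with the network forward pass, which is why the 0.10 s figure is dominated by the CNN itself.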

Paper Details

Date Published: 24 May 2018
PDF: 6 pages
Proc. SPIE 10679, Optics, Photonics, and Digital Technologies for Imaging Applications V, 106790R (24 May 2018); doi: 10.1117/12.2319141
Author Affiliations:
Florian Pereme, Altran Technologies (France)
Jesus Zegarra Flores, Altran Technologies (France)
Mirko Scavazzin, Altran Technologies (France)
Franck Valentini, Altran Technologies (France)
Jean-Pierre Radoux, Altran Technologies (France)

Published in SPIE Proceedings Vol. 10679:
Optics, Photonics, and Digital Technologies for Imaging Applications V
Peter Schelkens; Touradj Ebrahimi; Gabriel Cristóbal, Editor(s)

© SPIE.