During my fourth year of college I took an image processing class. The challenge was to develop a device (a working model) that used the techniques learned in class to help meet a need of a person with a disability.
After discussing it with my group of three, I decided to build a model of a wheelchair that could be steered with the eyes and that, using a camera system, could detect obstacles and their distance from the wheelchair.
Since we wanted to work with people paralyzed from the neck down, our first problem was tracking the movement of the eyes and converting it into instructions for the model.
For this, we used the Tobii Eye Tracker, a device that fulfills exactly this function. It also has a well-documented API with examples showing how to determine which part of the screen is being observed. Since this API is focused on game development, we used Unity and C# for this part of the project.
Through the API we segmented the screen into five zones: a window in the middle of the screen, the "rest zone," where no action is performed, and four regions at the corners of the screen used to rotate the model left or right and to move it forward or backward.
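The zone logic can be sketched as follows. This is a simplified Python illustration (the project implemented it in C# inside Unity); the exact rest-zone size and the mapping of each corner to a command are assumptions for the example, not the project's actual values.

```python
def gaze_to_command(x, y, rest_half_w=0.15, rest_half_h=0.15):
    """Map a normalized gaze point (0..1 on each axis, origin at the
    top-left of the screen) to one of five commands:
    'rest', 'forward', 'backward', 'left', 'right'.

    A centered window is the rest zone; the four corner regions each
    trigger a movement command (which corner maps to which command
    is an illustrative assumption).
    """
    dx = x - 0.5
    dy = y - 0.5
    # Inside the central rest window: no action is performed.
    if abs(dx) <= rest_half_w and abs(dy) <= rest_half_h:
        return "rest"
    # Top corners: drive forward/backward; bottom corners: rotate.
    if dy < 0:
        return "forward" if dx < 0 else "backward"
    return "left" if dx < 0 else "right"
```

In the real system the `(x, y)` point would come from the Tobii API's gaze stream each frame, and the resulting command would be forwarded to the robot.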
A calibration routine was created to adapt the Tobii tracker to the screen being used.
For this part of the project, Python was used because of how easy it makes real-time image processing. First, labels of two colors were placed on the car to mark the front and the rear. Obstacles were then identified through edge and centroid detection, which simplified calculating the distance between the car and each obstacle and predicting a possible collision.
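Once the blobs are segmented, the distance and collision check reduce to simple geometry. The sketch below assumes the pixels of each detected object have already been extracted (e.g. with an image processing library); the function names and the pixel threshold are illustrative assumptions.

```python
import math

def centroid(points):
    """Centroid of a blob given as a list of (x, y) pixel coordinates,
    e.g. the pixels of an obstacle found by edge detection."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def distance(a, b):
    """Euclidean distance between two centroids, in pixels."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def collision_likely(car_centroid, obstacle_centroid, threshold_px=50):
    """Flag a possible collision when the obstacle centroid is closer
    than a pixel threshold (the threshold value is an assumption)."""
    return distance(car_centroid, obstacle_centroid) < threshold_px
```

In the real pipeline this check would run on every camera frame, with the car's centroid taken from its colored labels and each obstacle's centroid from its detected edges.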
To simulate the wheelchair we used a Pololu 3pi Robot, which was a great fit: it has a wireless module and its motors are easy to control through serial communication. From Python, a serial port was opened to send the commands to the motors and generate the movement.
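The serial side can be sketched with pyserial. Everything protocol-specific here is an assumption made for illustration: the single-letter command bytes are not the 3pi firmware's actual protocol, and the port name and baud rate would depend on the wireless module used.

```python
# Hypothetical one-byte command protocol (illustrative, not the 3pi's real one).
COMMAND_BYTES = {
    "forward": b"F",
    "backward": b"B",
    "left": b"L",
    "right": b"R",
    "rest": b"S",  # stop
}

def encode_command(cmd):
    """Translate a high-level command into the byte sent over serial."""
    return COMMAND_BYTES[cmd]

def send_command(cmd, port="/dev/ttyUSB0", baud=115200):
    """Open the serial port and send one command byte.

    Requires pyserial (pip install pyserial); the port name and baud
    rate are placeholder assumptions.
    """
    import serial
    with serial.Serial(port, baud, timeout=1) as conn:
        conn.write(encode_command(cmd))
```

Tying the pieces together, each frame would map the gaze point to a command and forward it to the robot with `send_command`.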