This past semester I’ve had the opportunity to work on a really neat robotics research project.
I was adopted into the software team, which was working on breaking the existing programs into multiple ROS nodes. The goal was to build a solid base of code for motor control and other low-level tasks, so that somebody who wanted to program Jimmy in the future wouldn’t need to know all the details of his servos.
We then moved on to the interaction space, to showcase the low-level work we had done. I soldered up an IMU, attached it to an Arduino, and plugged it into Jimmy’s computer. The Arduino ran a script that posted three-dimensional tilt data from the IMU over the serial port. From there, a ROS node grabbed that data and published it so that any other ROS node could listen to the IMU and move servos accordingly.
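The serial-to-ROS bridge described above can be sketched in Python. The wire format here (one comma-separated roll,pitch,yaw line per reading), the topic name, and the parsing helper are my own assumptions for illustration, not the project's actual code:

```python
# Hypothetical sketch of the serial side of the IMU bridge.
# Assumption: the Arduino prints one "roll,pitch,yaw" line per reading.

def parse_tilt_line(line):
    """Parse a 'roll,pitch,yaw' serial line into a tuple of floats.

    Returns None for malformed lines so the reading loop can skip
    partial or garbled serial output instead of crashing.
    """
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None
    try:
        return tuple(float(p) for p in parts)
    except ValueError:
        return None

# In the actual node, a loop roughly like this (using pyserial and
# rospy, both assumed here) would publish each parsed reading:
#
#   ser = serial.Serial("/dev/ttyACM0", 9600)
#   pub = rospy.Publisher("imu_tilt", Vector3, queue_size=10)
#   while not rospy.is_shutdown():
#       tilt = parse_tilt_line(ser.readline().decode())
#       if tilt is not None:
#           pub.publish(Vector3(*tilt))
```

Keeping the parsing separate from the serial and ROS plumbing makes it easy to test against recorded serial logs without the hardware attached.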
I implemented face detection in a similar manner. Jimmy has two cameras for eyes. We used one to record a live stream of the room and analyzed the incoming frames for faces using an OpenCV Haar cascade classifier. The coordinates of the center of the uppermost face it detects are broadcast over another ROS topic. From there, a subscriber controls the head and neck servos, using the face coordinates as feedback to position the head so that it appears to the person that Jimmy is looking at them. This face tracking is a very striking demonstration of how human-like Jimmy can appear.
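The core of that tracking loop can be sketched in a few lines of Python. OpenCV's Haar cascade detector returns (x, y, w, h) rectangles; the helper names, the proportional gain, and the sign conventions below are my own assumptions, not the project's actual implementation:

```python
def uppermost_face_center(faces):
    """Given (x, y, w, h) rectangles from a Haar cascade detector,
    return the pixel center of the uppermost face (smallest y),
    or None if no faces were detected."""
    if not faces:
        return None
    x, y, w, h = min(faces, key=lambda f: f[1])
    return (x + w // 2, y + h // 2)

def head_correction(face_center, frame_size, gain=0.05):
    """Proportional feedback step: how far to nudge the pan/tilt
    servos so the detected face drifts toward the image center.
    The gain value and axis signs are illustrative assumptions."""
    cx, cy = face_center
    fw, fh = frame_size
    return (gain * (fw // 2 - cx), gain * (fh // 2 - cy))
```

Each frame, the detector's rectangles feed `uppermost_face_center`, and the resulting offset from the image center drives a small servo adjustment; repeating this every frame is what makes the head appear to follow the person.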