This blog started two years ago with a good old Raspberry Pi Model B. Since then, the Model B+ and the Pi 2 have been released, Raspbian has evolved a lot, and the Raspicam is now part of the "normal" environment.
I still receive a lot of technical questions on this blog, especially about compilation difficulties. I guess this blog is now deprecated with the new OS versions, and following these steps could be difficult, especially for beginners or people who just want a ready-to-go solution. In that case, I recommend the UV4L driver (here) or this Spanish site that is apparently much easier to use (I didn't test them).
Have fun !
Summary
This painting was exhibited in Berlin during the Pictoplasma 2014 Festival. The eyes follow visitors (if there are several, the target changes randomly). It's powered by a Raspberry Pi, C++ code and OpenCV (using face detection).
Plywood structure with servo and animatronics by Tordal
Once painted… nice eyes, isn't it?
During Pictoplasma @Berlin, May 2014
Technical Stuff – Face Recognition
The software runs on a Raspberry Pi and is strongly inspired by my previous "OpenCV and Pi camera" posts.
For face detection, the software is very simple: it only detects faces, without recognizing people, so it's fast.

face_cascade.detectMultiScale(imageMat, faces, 1.1, 3, CV_HAAR_SCALE_IMAGE, Size(80,80));
g_nVisitors = faces.size(); // number of visitors found
Once faces are detected, instructions are sent to the servos to move the eyes. Actually, it looks like a physical version of XEyes, for good old X11 geeks 😉
The code implements a lot of funny behaviors:
if there is a single visitor, follow him
if there are many visitors, switch randomly between them
if a new visitor appears, focus on him
if nobody is there, search for visitors with funny eye moves
sometimes, take a nap and close the eyelids.
…
The tricky part was remembering which visitor is where, frame after frame, and, when a face is lost for a few seconds, remembering that this visitor was already here (it's not a new one, so there's no need to focus on him/her).
The code detects faces and moves the servos (H/V)
Technical Stuff – Servo Control
The eyes are controlled by 2 servos (horizontal and vertical moves); the eyelid is controlled by a third servo. Thanks to ServoBlaster, controlling servos from the Raspberry Pi is really easy. To move a servo from the shell (for testing), just type (0 = first servo, % = position between 0 and 100):

echo 0=50% > /dev/servoblaster
echo 0=10% > /dev/servoblaster
and in C++ (the fflush is important!):

FILE * servoDev;
servoDev = fopen("/dev/servoblaster", "w");
fprintf(servoDev, "%d=%d%%\n", ServoId, ServoPosition); // will be in a function
fflush(servoDev); // will be in a function
fclose(servoDev);
I used :
Servo 0 (Horizontal Move) = GPIO 4 = Pin 7
Servo 1 (Vertical Move) = GPIO 17 = Pin 11
Servo 2 (Eyelid Move) = GPIO 18 = Pin 12
The GND pin is connected to the servo power ground
Acknowledgements
Thank you Bard (from 3753% Tordal) for giving me the opportunity to work on such a fun project, and for showing our work during the Pictoplasma Festival in Berlin! (Next: Mexico…)
Thank you Richard for your wonderful and easy-to-work-with ServoBlaster Daemon.
And thank you Nat for your patience during all these Sundays spent in front of my Raspberry… 🙂
Thank you Dennis, who provided us with a collection of 40 male faces, 10 pics each (100×100 pixels).
He wrote in his comment: "Now I get much much better results. I thought I'd share the pics and csv file with you (a csv with that many pics is quite some work). Oh, the csv file has only 50 pics, the facerec software could not handle 400 of them. It works nicely with 50. You have to edit the cpp file; I changed the line in "others" to display a text "unidentified other male person". Enjoy!"
“I have OpenCV code using cvCreateCameraCapture and cvQueryFrame. Can I use it with the Raspberry Pi Camera? Well yes, read on to know how.” …
Based on information from this blog, Emil Valkov wrote an API, very similar to OpenCV's, to use the Raspberry Pi camera directly through raspiCamCvCreateCameraCapture, raspiCamCvQueryFrame and RaspiCamCvCapture (instead of cvCreateCameraCapture, cvQueryFrame and CvCapture).
espeak works fine, but the voice is creepy (especially in French!). This post explains how to use the Google voice API to get a wonderfully clear female voice.
First, create speech.sh. It contains a function say() which calls translate.google.com with the parameter $* (the arguments of your shell call).
Except for some new #include statements and some global variables, all the modifications are in the callback function video_buffer_callback.
For face detection, grayscale pictures are required. Thus, once we get the I420 frame, we don't need to extract the color information. This is great news since, as we saw in the last post, that step takes a lot of CPU!
Keep it simple: we drop the pu and pv channels, keep only the "py" IplImage (the gray channel), and convert it to a Mat object.
Face detection is done by the detectMultiScale function. This call consumes most of the CPU in the loop, so it's important to optimize it.
Let's use the LBP cascade (Local Binary Patterns) instead of the Haar cascade file (haarcascade_frontalface_alt.xml): modify the fn_haar variable to point to lbpcascade_frontalface.xml. Response time is much faster, but it's less accurate; sometimes (you can see an example in the video) the software gives wrong detections.
Let's also increase the minimum search rectangle: Size(80,80) instead of Size(60,60) as the last parameter of the call.
I read on the "blog de remi" about a way to optimize this function using an alternative home-made function, smartDetect. Unfortunately, I didn't notice any improvement, so I removed it (perhaps I made a mistake or misused it?).
Results
With a 320×240 frame, I get between 8 and 17 FPS with almost no lag (17 FPS when there is no face to detect and analyse; 8 FPS when a face is analysed at each loop).
With a 640×480 frame, I'm around 4-5 FPS with a small lag (about 1 s).
Conclusions
For me, these results are very good for such an affordable computer as the Raspberry Pi. Of course, for real-time use like an RC robot or vehicle it's too slow (you need to detect an obstacle quickly), unless you build an RPCS (Raspberry Pi Controlled Snail) ;-).
But for most other uses, like home automation or education, it's fine.
Anyway, it's far better than my USB webcam: with it, I was not even able to do face detection at 640×480!
Download the source code here (http://raufast.org/download/camcv_vid1.cpp). It's really quick & dirty code, so don't be offended by its disregard of state-of-the-art C++ coding rules!
At this stage, you should be able to detect Mona Lisa, in case she rings at your door tonight 😉
Congratulations, you've got your new Raspberry Pi camera! Isn't it cute?
But after a first try, you discover that it's not a USB webcam. 😦 Thus, OpenCV doesn't work natively (forget cvCaptureFromCAM, for example, and all the wonderful apps you've thought up!).
However, some nice apps (such as raspivid or raspistill) control the Pi camera using MMAL functions.
The idea is to modify the source code of such an app and use the camera's buffer memory to feed OpenCV image objects. Pretty easy (said like that).
It’s quite easy to install your new Pi Camera. Installation procedure is very well described on raspberrypi fondation website here : http://www.raspberrypi.org/archives/3890
Unfortunatly, cases are not today designed for the camera cable and your new toy. Hard to say, but I did a hole in my nice white plastic case for cable path.
Once your camera is installed, test it with this command (it shows the preview for 10 seconds; the -t argument is in milliseconds):
raspistill -t 10000
At this stage, you should do a backup, because nobody knows where the next meteor will fall.
The MMAL library and the raspivid/raspistill source code are found in the userland folder (on GitHub, here). First of all, we need to compile the whole package before doing anything else with OpenCV.
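For reference, the build boils down to cloning the repository and running its build script (the helper script name and install path reflect the userland repo at the time of writing and may have changed since):

```shell
# Fetch and build the Raspberry Pi userland sources (MMAL, raspivid, raspistill).
git clone https://github.com/raspberrypi/userland.git
cd userland
./buildme    # compiles the whole package and installs under /opt/vc
```

This takes a while on the Pi itself; once it finishes, the MMAL headers and libraries are available for the OpenCV modifications described above.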