2 years ago…


This blog started two years ago with a good old Raspberry Pi Model B. Since then, the Model B+ and the Pi 2 have been released, Raspbian has evolved a lot, and the Raspicam is now part of the “normal” environment.
I still receive a lot of technical questions on this blog, especially about compilation problems. I guess this blog is now outdated with the new OS versions, and following these steps can be difficult, especially for beginners or people who just want a ready-to-go solution. In that case, I recommend the UV4L driver (here) or this Spanish site, which is apparently much easier to use (I didn’t test them).
Have fun!

Art, Design & Raspberry Pi


Summary 
This painting was exhibited in Berlin during the Pictoplasma 2014 festival.
The eyes follow visitors (if there are several, they switch randomly).
It’s powered by a Raspberry Pi, C++ code and OpenCV (using face detection).

Software written by Pierre Raufast (France)
Animatronics, design and build by 3753% Tordal (Norway)
More info: tordal.no, raufast.org, thinkrpi.wordpress.com

Pictures


CAD by Tordal


Plywood structure with servo and animatronics by Tordal


Once painted… nice eyes, aren’t they?


During Pictoplasma @Berlin, May 2014


Technical Stuff – Face Recognition

The software runs on a Raspberry Pi and is strongly inspired by my previous “OpenCV and Pi camera” posts.
The face detection part is very simple: it only detects faces, without recognizing people. It’s fast.
face_cascade.detectMultiScale(imageMat, faces, 1.1,3,CV_HAAR_SCALE_IMAGE,Size(80,80));
g_nVisitors = faces.size(); // number of visitors found
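For readers who want to reproduce it, here is a minimal, self-contained sketch of the setup around these two lines (the cascade path, function and variable names are my assumptions, not the exhibition code):

#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

using namespace cv;

CascadeClassifier face_cascade;   // loaded once at startup
int g_nVisitors = 0;

void detectVisitors(const Mat &grayFrame)
{
    std::vector<Rect> faces;
    // scale factor 1.1, 3 neighbours, ignore faces smaller than 80x80 pixels
    face_cascade.detectMultiScale(grayFrame, faces, 1.1, 3,
                                  CV_HAAR_SCALE_IMAGE, Size(80, 80));
    g_nVisitors = faces.size();   // number of visitors found
}

int main()
{
    // the cascade file name is an assumption; use the one installed with OpenCV
    face_cascade.load("haarcascade_frontalface_alt.xml");
    // ... grab a frame, convert it to gray, then call detectVisitors(gray) in a loop ...
    return 0;
}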

Once faces are detected, instructions are sent to the servos to move the eyes. It looks like a physical version of XEyes, for good old X11 geeks 😉
The code implements a lot of funny behaviors:

  • if there is a single visitor, follow him
  • if there are several visitors, switch randomly between them
  • if a new visitor appears, focus on him
  • if nobody is there, search for visitors with funny eye moves
  • sometimes, take a nap and close the eyelids.

The tricky part was remembering which visitor is where, frame after frame, and, when a face is lost for a few seconds, remembering that this visitor was already here (it’s not a new one, so there is no need to focus on him/her).
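The painting’s exact tracking code isn’t published, but a minimal sketch of the idea (nearest-neighbour matching with a grace period; all names and thresholds below are my assumptions) could look like this:

#include <opencv2/core/core.hpp>
#include <cmath>
#include <vector>

using namespace cv;

struct Visitor {
    Point center;       // last known position of the face
    int   framesLost;   // how many frames since we last saw it
};

const int MAX_LOST = 50;   // grace period in frames before a visitor is forgotten (assumption)
const int MAX_DIST = 60;   // max pixel distance to still be "the same" visitor (assumption)

// Match each detected face to the closest remembered visitor; a face that
// reappears within the grace period is not treated as a newcomer.
void updateVisitors(std::vector<Visitor> &known, const std::vector<Rect> &faces)
{
    for (size_t i = 0; i < known.size(); i++) known[i].framesLost++;

    for (size_t f = 0; f < faces.size(); f++) {
        Point c(faces[f].x + faces[f].width / 2, faces[f].y + faces[f].height / 2);
        int best = -1, bestDist = MAX_DIST;
        for (size_t i = 0; i < known.size(); i++) {
            double dx = known[i].center.x - c.x, dy = known[i].center.y - c.y;
            int d = (int)std::sqrt(dx * dx + dy * dy);
            if (d < bestDist) { bestDist = d; best = (int)i; }
        }
        if (best >= 0) {               // a visitor we already know, just moved
            known[best].center = c;
            known[best].framesLost = 0;
        } else {                       // a genuinely new visitor: focus on him/her
            Visitor v; v.center = c; v.framesLost = 0;
            known.push_back(v);
        }
    }

    // forget visitors lost for longer than the grace period
    for (size_t i = 0; i < known.size(); )
        if (known[i].framesLost > MAX_LOST) known.erase(known.begin() + i);
        else i++;
}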

The code detects faces and moves the servos (H/V)

Technical Stuff – Servo Control

The eyes are controlled by 2 servos (horizontal and vertical moves). The eyelid is controlled by a third servo. Thanks to ServoBlaster, controlling servos with the Raspberry Pi is really easy. To move a servo from the shell (for testing), just type (0 = first servo, % = position between 0 and 100):
echo 0=50% >/dev/servoblaster
echo 0=10% >/dev/servoblaster

and in C++ (fflush is important!):
FILE * servoDev;
servoDev = fopen("/dev/servoblaster", "w");
fprintf(servoDev, "%d=%d%%\n", ServoId, ServoPosition); // will be in a function
fflush(servoDev); // will be in a function
fclose(servoDev);
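Since the comments above say this will end up in a function, here is one possible wrapper (a sketch; the function name is mine, not ServoBlaster’s nor the project’s):

#include <stdio.h>

// Send "<id>=<position>%" to the ServoBlaster device.
// Returns 0 on success, -1 if /dev/servoblaster cannot be opened.
int setServo(int servoId, int positionPercent)
{
    FILE *servoDev = fopen("/dev/servoblaster", "w");
    if (servoDev == NULL) return -1;
    fprintf(servoDev, "%d=%d%%\n", servoId, positionPercent);
    fflush(servoDev);   // make sure the command is written immediately
    fclose(servoDev);
    return 0;
}

// example: center the eyes horizontally, then look up a little
// setServo(0, 50);   // horizontal servo
// setServo(1, 30);   // vertical servo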

I used:

  • Servo 0 (Horizontal Move) = GPIO 4 = Pin 7
  • Servo 1 (Vertical Move) = GPIO 17 = Pin 11
  • Servo 2 (Eyelid Move) = GPIO 18 = Pin 12
  • a GND pin of the Raspberry Pi is connected to the servo power ground


 

Acknowledgements

Thank you Bard (from 3753% Tordal) for giving me the opportunity to work on such a fun project, and for showing our work during the Pictoplasma Festival in Berlin! (Next: Mexico…)
Thank you Richard for your wonderful and easy-to-work-with ServoBlaster daemon.
And thank you Nat for your patience during all these Sundays spent in front of my Raspberry… 🙂

 

40 “anonymous” pictures for better recognition

Thank you Dennis, who provided us with a collection of 40 male subjects, 10 pictures each (100×100 pixels).
He wrote in his comment: “Now I get much much better results. I thought I’d share the pics and csv file with you (a csv with that many pics is quite some work). Oh, the csv file has only 50 pics, the facerec software could not handle 400 of them. It works nicely with 50. You have to edit the cpp file; I changed the line in “others” and display the text “unidentified other male person”. Enjoy!”

The zip file can be downloaded here: http://raufast.org/download/100×100.zip
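If you use the standard OpenCV facerec sample, the CSV usually lists one image path and one numeric label per line, separated by a semicolon. I assume Dennis’ file follows this format (the paths below are made up, adapt them to where you unzipped the pictures); a reader in the same style as the sample could look like this:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

using namespace cv;

// reads lines such as:
//   /home/pi/faces/male01/1.jpg;0
//   /home/pi/faces/male02/1.jpg;1
static void read_csv(const std::string &filename,
                     std::vector<Mat> &images, std::vector<int> &labels)
{
    std::ifstream file(filename.c_str());
    std::string line, path, label;
    while (std::getline(file, line)) {
        std::stringstream ss(line);
        std::getline(ss, path, ';');
        std::getline(ss, label);
        if (!path.empty() && !label.empty()) {
            images.push_back(imread(path, 0));        // load as grayscale
            labels.push_back(atoi(label.c_str()));
        }
    }
}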



Use it directly with a library?

“I have OpenCV code using cvCreateCameraCapture and cvQueryFrame. Can I use it with the Raspberry Pi Camera? Well yes, read on to know how.” …

Based on information from this blog, Emil Valkov wrote an API, very similar to OpenCV’s, to use the Raspberry Pi camera directly through raspiCamCvCreateCameraCapture, raspiCamCvQueryFrame and RaspiCamCvCapture (instead of cvCreateCameraCapture, cvQueryFrame and CvCapture).

All the information is on his blog (https://robidouille.wordpress.com/2013/10/19/raspberry-pi-camera-with-opencv/).
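A typical capture loop with this API might look like the sketch below (based on the three function names above; the header name, raspiCamCvReleaseCapture and the exact signatures are my assumptions, so check Emil’s repository for the real ones):

#include <opencv2/highgui/highgui.hpp>
#include "RaspiCamCV.h"   // header from Emil Valkov's library (name assumed)

int main()
{
    // open camera 0, exactly like cvCreateCameraCapture(0)
    RaspiCamCvCapture *capture = raspiCamCvCreateCameraCapture(0);

    while (1) {
        IplImage *image = raspiCamCvQueryFrame(capture);   // grab one frame
        cvShowImage("RaspiCam", image);
        if (cvWaitKey(10) == 27) break;                    // press ESC to quit
    }

    raspiCamCvReleaseCapture(&capture);   // name assumed, mirroring cvReleaseCapture
    cvDestroyAllWindows();
    return 0;
}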

Thank you Emil for this nice and useful API; it will help many developers write simpler code 🙂

 

Change the voice of your magic mirror


espeak works fine but the voice is creepy (especially in French!). This post explains how to use the Google voice API to get a wonderful, clear female voice.

First, create speech.sh. It contains a function say() which calls translate.google.com with the parameter $* (the parameters passed to your shell script):

#!/bin/bash
say() { local IFS=+; /usr/bin/mplayer -ao alsa -really-quiet -noconsolecontrols "http://translate.google.com/translate_tts?hl=en&sl=en&tl=fr&ie=UTF-8&oe=UTF-8&multires=1&otf=1&ssel=3&tsel=3&sc=1&text=$*" 2>/dev/null; }
say $*

Notice that this URL will speak French. If you want English, just change “tl=fr” to “tl=en” (German: tl=de, Spanish: tl=es, etc.).

Change the attributes of your .sh file to allow execution and try it. Notice that this URL only accepts sentences of fewer than 100 characters.

chmod u+x ./speech.sh
./speech.sh "Bonjour. Je veux manger du fromage."

Now, you can use it in your C file.

void speak(char *sLine)
{
    char sCmd[255];
    sprintf(sCmd, "./speech.sh %s", sLine);

    // if (TRACE) printf("[i] say : %s", sLine);
    system(sCmd);
}

To read a whole text (length > 100 characters), read it line by line, for example wrapped in a function (the function name below is just for illustration):

int speakTextFile(char *fileName)
{
    char sToBeRead[100];
    FILE *f = fopen(fileName, "r");
    if (f != NULL)
    {
        // read the whole text, line by line
        while (!feof(f))
        {
            if (fgets(sToBeRead, 100, f) != NULL)
            {
                speak(sToBeRead);
            }
        }
        fclose(f);
        return 1;
    }
    return 0;
}

At the end of this post, your wife/girlfriend may not like the new voice of your magic mirror… 😉


OpenCV&Pi Cam – Step 7: Face recognition

This step is easy: we reuse the source code from the previous step 6 and add the OpenCV face recognition processing from step 6 of “Magic Mirror”.

Watch this video to see result (http://www.youtube.com/watch?v=yzYIxNgDZu4).


Source code modification

Except for some new #include statements and some global variables, all the modifications are in the callback function video_buffer_callback.
For face detection, grayscale pictures are required. Thus, once we get the I420 frame, we don’t need to extract the color information. This is great news, since we saw in the last post that this step takes a lot of CPU!
To keep it simple: we forget the pu and pv channels, we only keep the “py” IplImage (the gray channel) and convert it to a Mat object.

Face detection is done by the detectMultiScale function. This call uses most of the CPU time of each loop, so it’s important to optimize it (a minimal sketch of the resulting detection code follows the list below):

  • Use the LBP cascade (Local Binary Patterns) instead of the Haar cascade file (haarcascade_frontalface_alt.xml): modify the fn_haar variable to point to lbpcascade_frontalface.xml. Response time is much faster but less accurate; sometimes (you can see an example in the video) the software gives wrong predictions.
  • Increase the minimum rectangle size to search for: Size(80,80) instead of Size(60,60) as the last parameter of the call.
  • I read on the “blog de remi” about a way to optimize this function using an alternative home-made function, smartDetect. Unfortunately, I didn’t notice any improvement, so I removed it (perhaps I made a mistake or misused it?).
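As mentioned above, here is a minimal sketch of what the modified callback does with the Y plane (the function and variable names are mine; the real code is in camcv_vid1.cpp, linked below):

#include <opencv2/core/core.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <vector>

// loaded once at startup from lbpcascade_frontalface.xml
static cv::CascadeClassifier face_cascade;

// Hypothetical extract of video_buffer_callback: the I420 frame starts with
// the Y (luma) plane, which is already the grayscale image we need, so the
// pu/pv colour channels are simply ignored.
static void detectFacesInYPlane(unsigned char *yPlane, int width, int height)
{
    // wrap the raw luma buffer in a Mat header: no pixel copy, no colour conversion
    cv::Mat gray(height, width, CV_8UC1, yPlane);

    std::vector<cv::Rect> faces;
    // LBP cascade + 80x80 minimum size keep the CPU cost of each loop low
    face_cascade.detectMultiScale(gray, faces, 1.1, 3,
                                  CV_HAAR_SCALE_IMAGE, cv::Size(80, 80));
    // ... then draw rectangles or run the recognizer on "faces" ...
}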

Results

  • With a 320×240 frame, I get between 8 and 17 FPS with almost no lag (17 FPS = no face to detect and analyse, 8 FPS = a face analysed at each loop)
  • With a 640×480 frame, I get around 4-5 FPS with a small lag (1 s)

Conclusions

For me, these results are very good for such an affordable computer as the Raspberry Pi. Of course, for real-time use like an RC robot or vehicle it’s too slow (you need to detect obstacles quickly, unless you build an RPCS, Raspberry Pi Controlled Snail ;-).
But for most other uses, like home automation or education, it’s fine.
Anyway, it’s far better than my USB webcam: with it, I wasn’t even able to do face recognition in 640×480!

Download the source code here (http://raufast.org/download/camcv_vid1.cpp). It’s really quick & dirty code, so don’t be offended that it doesn’t follow state-of-the-art C++ coding rules!


At this stage, you should be able to detect Mona Lisa, in case she rings at your door tonight 😉

OpenCV and Pi Camera Board!

Congratulations, you’ve got your new Raspberry Pi camera! Isn’t it cute?
But after a first try, you discover that it’s not a USB webcam 😦 so OpenCV doesn’t work with it natively (forget cvCaptureFromCAM, for example, and all the wonderful apps you’ve thought up!).

However, some nice apps (such as raspivid or raspistill) control the Pi camera using MMAL functions.

The idea is to modify the source code of such an app and use the camera’s buffer memory to feed OpenCV image objects. Pretty easy (said like that).

This can be done in 7 steps, because of 7:

Picture taken with the Pi cam and displayed with OpenCV!

Enjoy!

OpenCV&Pi Cam – Step 1: Install

It’s quite easy to install your new Pi camera. The installation procedure is very well described on the Raspberry Pi Foundation website here: http://www.raspberrypi.org/archives/3890

Unfortunately, today’s cases are not designed for the camera cable and your new toy. Hard to admit, but I made a hole in my nice white plastic case for the cable path.
Once your camera is installed, test it with this command (it shows the preview for 10 seconds):

raspistill -t 10000 


At this stage, you should do a backup, because nobody knows where the next meteor will fall.

OpenCV&Pi Cam – Step 2: Compilation

The MMAL library and the raspivid/raspistill source code are found in the userland repository (on GitHub, here). First of all, we need to compile the whole package before doing anything else with OpenCV.

  1. Get the source code (zip file) here: https://github.com/raspberrypi/userland
  2. Unzip the file and copy the directory under /opt/vc
  3. Go to /opt/vc and type: sed -i 's/if (DEFINED CMAKE_TOOLCHAIN_FILE)/if (NOT DEFINED CMAKE_TOOLCHAIN_FILE)/g' makefiles/cmake/arm-linux.cmake
  4. Create a build directory and compile (it takes a while):

sudo mkdir build
cd build
sudo cmake -DCMAKE_BUILD_TYPE=Release ..
sudo make
sudo make install

The binaries should be under /opt/vc/bin.

Go to /opt/vc/bin and test one of them by typing: ./raspistill -t 3000

At this stage, you should be able to modify this software to include OpenCV calls. Congratulations! Now, all the next steps are a piece of cake…