In this step, we will learn how to display video from the camera board in an OpenCV window (and not in the native GPU preview window).
At the end of this step, you should be able to capture frames from your camera board and use them directly with OpenCV. Enjoy: creativity will be your only limit (and perhaps the CPU, a little).
This tutorial is based on this file: http://raufast.org/download/camcv_vid0.c. Download it and read the explanations below (don’t forget to update CMakeLists.txt). I ran into many technical difficulties while writing it; thanks to Matthieu Tardivon (a brilliant student) for his precious hints and help. Much appreciated.
We start from raspivid.c (the camera app), but we need to remove all the lines that are not related to capturing frames.
We delete:
– all lines related to the preview component,
– all lines related to the encoder component,
– all lines related to command-line parsing and picture info.
We change:
– add the callback directly to the video port (line 286),
– create the buffer pool (to get/send frames) and attach it to the video port (line 320),
– change the format encoding to ENCODING_I420 instead of OPAQUE (line 268).
Result: the callback is called at the expected rate (around 30 fps) during capture; that is the rate without any OpenCV processing.
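For reference, here is a rough sketch (not the full file) of what those three changes look like in code. The names (state, callback_data, video_pool) follow the raspivid.c / camcv_vid0.c conventions but may differ slightly in your version, so treat it as illustrative rather than a drop-in patch.

MMAL_PORT_T *video_port = camera->output[MMAL_CAMERA_VIDEO_PORT];

/* ask the GPU for raw I420 frames instead of the opaque format */
video_port->format->encoding = MMAL_ENCODING_I420;
mmal_port_format_commit(video_port);

/* create a pool of buffers on the video port (to get/send frames) */
state->video_pool = mmal_port_pool_create(video_port,
                                          video_port->buffer_num,
                                          video_port->buffer_size);

/* attach our callback directly to the video port */
video_port->userdata = (struct MMAL_PORT_USERDATA_T *)&callback_data;
mmal_port_enable(video_port, video_buffer_callback);

/* hand every buffer of the pool to the port so the callback starts firing */
int num = mmal_queue_length(state->video_pool->queue);
for (int q = 0; q < num; q++)
   mmal_port_send_buffer(video_port, mmal_queue_get(state->video_pool->queue));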
The buffer variable contains the raw YUV (I420) frame, which needs to be converted to RGB before it can be used with OpenCV.
To do that, you need to understand the I420 format: read some cryptic pages like http://en.wikipedia.org/wiki/YUV and http://www.fourcc.org/yuv.php.
I wrote a few lines to convert the picture in the callback function (line 141):
– read the buffer and copy it, part by part, into 3 different IplImages: the Y component first (full size), then U (half size), then V (half size),
– merge the 3 IplImages (Y, U, V) into one (line 170),
– convert it to the right color space (RGB) (line 171),
– and display it!
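Here is a condensed sketch of that conversion inside video_buffer_callback(), assuming w×h I420 frames. py, pu, pv, image, dstImage and graymode follow the names used in camcv_vid0.c, while pu_big and pv_big (the chroma planes resized to full resolution) are named here only for illustration:

mmal_buffer_header_mem_lock(buffer);

/* I420 layout: w*h bytes of Y, then w*h/4 bytes of U, then w*h/4 bytes of V */
memcpy(py->imageData, buffer->data,                 w * h);      /* Y (full size) */
memcpy(pu->imageData, buffer->data + w * h,         w * h / 4);  /* U (half size) */
memcpy(pv->imageData, buffer->data + w * h * 5 / 4, w * h / 4);  /* V (half size) */

mmal_buffer_header_mem_unlock(buffer);

if (graymode)
{
   cvShowImage("camcvWin", py);          /* Y plane only: roughly double the FPS */
}
else
{
   cvResize(pu, pu_big, CV_INTER_NN);    /* upscale chroma planes to full size */
   cvResize(pv, pv_big, CV_INTER_NN);
   cvMerge(py, pu_big, pv_big, NULL, image);     /* one 3-channel YUV image */
   cvCvtColor(image, dstImage, CV_YCrCb2RGB);    /* convert to RGB (slow) */
   cvShowImage("camcvWin", dstImage);
}
cvWaitKey(1);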
Warning: cvMerge and cvCvtColor are slow functions. If you want to increase the frame rate, you can stick to the gray picture (the Y channel only); you’ll roughly double your FPS that way (parameter graymode=1, line 124). Line 118 sets the timeout variable: it is the capture duration in milliseconds.
- 320×240 color: FPS = 27.2
- 320×240 gray: FPS = 28.6
- 640×480 color: FPS = 8
- 640×480 gray: FPS = 17
At this stage, you should be able to use your camera board with OpenCV. The frame rate is still not perfect (no HD possible), but it will be enough to play with face recognition at a far better rate than our old USB webcam. That’s what we’ll see in step 7.
Enjoy !
Loving the info on this website; you have done an outstanding job on the content.
Very good job. I’m waiting for the next step.
Can you use ENCODING_RGB24 instead of ENCODING_I420 so that the GPU does the conversion to RGB instead of OpenCV having to do it?
@Kevin, did you use the GPU? Any improvement?
I found that there are a few other encodings that work, but they are planar and have 2×2 downsampled red and blue planes. These formats would still need to be converted by OpenCV in a similar way to the I420 format.
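For anyone who wants to try the GPU-side RGB conversion discussed here, the change would sit where the video port format is committed. This is an untested, hedged sketch (I have not verified that the firmware accepts RGB24 on the camera's video port, and buffers grow to 3 bytes per pixel):

MMAL_ES_FORMAT_T *format = video_port->format;
format->encoding = MMAL_ENCODING_RGB24;   /* instead of MMAL_ENCODING_I420 */
if (mmal_port_format_commit(video_port) != MMAL_SUCCESS)
   vcos_log_error("RGB24 not accepted on the video port");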
Excellent! I like how the video streams just as well remotely this way.
Sorry for the naive question, but how should I edit the CMakeLists.txt file? Also, how can I confirm that it works, and which file am I executing? Thanks, I am almost at the end.
I replaced camcv.c with camcv_vid0.c
and got this error:
pi@raspberrypi ~/camcv $ ./camcv
mmal: main: Failed to create preview component
mmal: Failed to run camera app. Please check for firmware updates
Solved, the same way Gabriel said in the Step 5 comments. I just changed the line
else if (!raspipreview_create(&state.preview_parameters))
in main() to
else if ( (status = raspipreview_create(&state.preview_parameters)) != MMAL_SUCCESS)
Wow!! Great!! Works for me…
Does anyone know how I can create a video file using MMAL? Thanks…
I’m using OpenCV, and when I use VideoWriter to make a video file it’s necessary to define a CV_FOURCC code. But the RPi doesn’t have codecs such as MJPG, H264, etc.
The error is at this source line:
…
record("filename.avi", CV_FOURCC('M','J','P','G'), fps, True);
…
Someone?
Thanks!
Sorry, correcting:
record("filename.avi", CV_FOURCC('M','J','P','G'), fps, Size(width, height), True);
I’m still getting this message:
(camcvWin:11274): Gtk-WARNING **: cannot open display: :0
So basically the video doesn’t want to display. The thing is, I have a monitor connected and raspivid works just fine. Any ideas? 😦
To get rid of this error, you need to have X forwarding on. Either do an ssh -X on Linux (a virtual machine in my case) or some really complicated thing via PuTTY that I never managed to get to work. Just having a screen attached to the Pi through HDMI won’t help, since the program tries to show the image on the host that ran it (your PC, through SSH), not on the Pi’s screen. 😦
I had the same problem. When I run the program directly from the Raspberry Pi terminal (without SSH), it prints the OpenGL message; the software runs but it can’t create the window and display the images.
I have this working, running the Pi from an ssh -X bash shell: the window shows up on the Linux workstation I connect to the Pi from. Basically, the error message you’re getting is telling you that the software cannot connect to an X11 display.
If you’re running on the Pi and you have X running, the display you’re using can be found by typing
echo $DISPLAY
at a command prompt.
This is also true within an SSH shell, if and only if you forward the X display setting from your local machine to the software running on the Pi. This also has to be allowed by system policy on the workstation, i.e. if it’s disabled in /etc/ssh you’ll need root to enable it.
In either case, if you get nothing back from the echo $DISPLAY command, then X client software will fail with a message like the one you’re getting.
Furthermore, I just found a situation where I was working within a screen session on the Pi and the X11 forward across the SSH tunnel was lost between connections to the screen session; I had to stop and restart screen to re-establish the tunnelled DISPLAY connection.
Hope this helps.
There is another possibility: it’s giving you a display number in the error message (":0"), which means that it might have a DISPLAY setting and just not be able to connect to it, e.g. due to insufficient permissions. You need to establish the value of the DISPLAY environment variable in order to debug this much more accurately, I think.
Hello,
I’ve tested your code. Great job!
I have only one small problem. When the capture is stopped (after the "error" label), the camera is still working and the video_buffer_callback function is still being called, so I get the following error:
"mmal: buffer null".
I know the problem exists because video_buffer_callback is still being called. So, to have a program without bugs, I would like to stop the program cleanly. The interesting thing is that the stopping method is exactly the same as in the raspivid.c code, but with the Broadcom code I get no error…
Am I the only one having this error? Has anyone found a solution?
Best regards
Well, it’s best practice to leave an identified bug in your code; at least you know where it is… 🙂
For your question, I think the error comes from the video_buffer_callback function.
The test if (buffer->length) is only done at the beginning, and it is not repeated outside the first block (if (pData)).
Try adding a check before lines like mmal_buffer_header_mem_unlock(buffer);
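A sketch of what that guard could look like, with the whole conversion skipped when the buffer is empty. PORT_USERDATA and pData->pool follow the raspivid.c-style naming; adjust them to the struct actually used in your file:

static void video_buffer_callback(MMAL_PORT_T *port, MMAL_BUFFER_HEADER_T *buffer)
{
   PORT_USERDATA *pData = (PORT_USERDATA *)port->userdata;

   if (pData != NULL && buffer->length > 0)      /* skip empty buffers entirely */
   {
      mmal_buffer_header_mem_lock(buffer);
      /* ... copy the Y/U/V planes and run the OpenCV code here ... */
      mmal_buffer_header_mem_unlock(buffer);
   }

   mmal_buffer_header_release(buffer);           /* always hand the buffer back */

   if (port->is_enabled && pData != NULL)        /* and send a fresh one to the port */
   {
      MMAL_BUFFER_HEADER_T *new_buffer = mmal_queue_get(pData->pool->queue);
      if (new_buffer != NULL)
         mmal_port_send_buffer(port, new_buffer);
   }
}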
Hi everyone!
First thanks to Pierre for his great job!
Second, I’m getting only 3 fps with the default code (I only changed the "else if (!raspipreview_create(&state.preview_parameters))" line). I’m viewing the output over an Ethernet cable.
Hi! Yes, same here. imshow() is the cause of it: we are pasting each frame manually onto the screen, and most likely over X forwarding as well. If you remove imshow(), you get around 29 fps!
What I’m trying to say is that if you are doing real image processing (running in the background, then writing to a file or maybe even a stream), the performance should be quite good, since you don’t always need to show the images on screen. For example, if you are tracking something in particular and only want an image to appear once you see that object. 🙂
Try removing imshow() and adding a few simple pixel modifications with:
for (/* all pixels */) gray.at<cv::Vec3b>(row, col)[0] = 42;
You’ll see that the performance is way better than 2 fps. More like 20? It depends on the current CPU usage.
Hi void/henrique
I have the same problem henrique had initially: I am getting approximately 2 fps. I tried to find and reduce calls to the imshow() function, but I couldn’t find that function called anywhere in the camcv_vid0.c file! Is it my mistake, or has the .c file been updated since?
Please help.
The performance could be improved if we could write directly to the framebuffer like raspivid does originally (the video appears when we attach a screen to the Pi through HDMI, without running an X server), but I’m too inexperienced with Linux, C++ and the camera firmware/buffers to pull that off. The source code is just too complicated. Yes, I tried reading it… 😦
Yes Void!
I commented out imshow() and the rate is 29 fps for me too. As you said, I just need the image to be processed by OpenCV, so imshow() is not necessary. I’m switching from C++ to C to see if there are further improvements; at least the compile time drops from 35 s to 11 s.
Could you please post an example .c file with the changes you mention above?
Download the source code from this page. Open the file and search for:
else if (!raspipreview_create(&state.preview_parameters))
…and change to…
else if ( (status = raspipreview_create(&state.preview_parameters)) != MMAL_SUCCESS)
Compile and enjoy it!
Excellent! Thank you, this works.
As QUENAZ said, it’s just one line. One piece of advice: you will get much better performance if you minimise calls to imshow(). So yes, your performance will improve a lot if you don’t display that many test windows…
My bad, I realized camcv_vid0.c already has the changes mentioned. Speeds are quite inconsistent though: with 160×120 I get between 15 and 29.5 fps (no changes, just running the program a few times).
Try changing the CPU clock frequency (700 MHz to 1000 MHz), but you MUST use a heatsink on the CPU and GPU. With 320×240 I got 28.8 fps (continuously) at 1000 MHz.
Hi,
I appreciate your work. I built camcv_vid0.c, but I got the following error:
mmal: main: Failed to create preview component
mmal: Failed to run camera app. Please check for firmware updates
1382592532 seconds for 0 frames : FPS = 0.000000
Hi everyone ! Here is a link you should look at:
http://www.linux-projects.org/modules/sections/index.php?op=viewarticle&artid=16
(See Example 4: opencv)
Don’t get me wrong, Pierre’s code is great and it helped me out a lot when I started out with OpenCV and the Pi cam, but if you are looking for a "cleaner" way to do this, then the driver above is the way to go. It lets you use the standard frame capture method with OpenCV, the same as with a USB webcam. 🙂
Is there any OpenCV built-in function that supports analog cameras?
cvCapture() and VideoCapture() are both OpenCV built-in functions, but they don’t support analog cameras, only USB cameras. The third-party videoInput library is available and works well with analog cameras, but I want to use an OpenCV built-in function, so please tell me if such a function exists.
Hi Pierre,
thank you so much for your great tutorial on the Pi camera. Your work was very useful for my private photo-camera project. I changed some lines of code and integrated it into my Qt project so that I can start and stop the video capture arbitrarily.
However, during my own work I found two memory leaks coming from the MMAL framework. The leak grows as components are created and destroyed: when you create and destroy the camera component, mmal_component_destroy() leaves 128 bytes of allocated memory behind, and the preview component leaves another 176 bytes. Not much, but not nice either.
I tested this by getting the memory info with malloc_stats() before calling mmal_component_create(), then calling malloc_stats(), mmal_component_destroy() and malloc_stats() afterwards, in that order.
Every time the camera component was destroyed, 128 bytes were not released, and every time the preview component was destroyed, 176 bytes stayed unreferenced on the heap. As the code does not need the preview component, I cut it out completely. I got rid of:
>>>>>>>
else if ( (status = raspipreview_create(&state.preview_parameters)) != MMAL_SUCCESS)
{
vcos_log_error("%s: Failed to create preview component", __func__);
destroy_camera_component(&state);
}
else
<<<<<<
>>>>>>>
raspipreview_destroy(&state.preview_parameters);
<<<<<<
It was necessary to set the variable status to MMAL_SUCCESS to prevent the function raspicamcontrol_check_configuration() from issuing "Failed to run camera app. Please check for firmware updates" when closing the camera.
I just wanted to share my experience with the MMAL framework with other readers.
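Roughly, the relevant part of main() then looks like the following sketch of Sascha's change (not his exact code). The key points are that the raspipreview_create()/raspipreview_destroy() calls are gone and status is forced to MMAL_SUCCESS:

if ((status = create_camera_component(&state)) != MMAL_SUCCESS)
{
   vcos_log_error("%s: Failed to create camera component", __func__);
}
else
{
   /* no preview component any more */
   status = MMAL_SUCCESS;   /* keeps raspicamcontrol_check_configuration() quiet on exit */

   camera_video_port = state.camera_component->output[MMAL_CAMERA_VIDEO_PORT];
   /* ... pool creation, callback attachment and capture as before ... */
}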
Thank you Sascha, it’s great news and valuable for all readers!
Hi Pierre,
Very nice tutorial. I had no problem running your code on my Raspberry Pi.
After having problems with a too-low frame rate, I searched for ways to save computation time.
First of all, I tried to avoid the memcpy, since I did not see the benefit of it.
So I simply changed
memcpy(py->imageData,buffer->data,w*h); // read Y
to
py->imageData = buffer->data;
in the hope of not losing time copying the data.
Against all my expectations, the frame rate got even lower than it already was.
Do you have any idea why this is slower, even though I do not copy the buffer?
Thanks for your help.
Tabea
Hi Pierre,
Thanks for your tutorial; now I can finally get OpenCV running on my Pi. But I am a newbie in programming and I can’t totally understand what the code is doing. Can you explain in more detail how this video capture process works and how you managed to get the frame from the buffer?
The video frames are cropped. I set the size to 1280×960 and it gives me a cropped image of that size. If I take a still picture of the same size, the resulting image is not cropped. Is there a way to get uncropped frames (as when you take pictures)?
Hello Pierre.
I used your camcv_vid0.c and didn’t change anything except the if-statement in main() “else if ( (status = raspipreview_create(&state.preview_parameters)) != MMAL_SUCCESS)”.
When I run ./camcv I get no errors but the following output:
–> init done
–> opengl support available
–> 5 seconds for 0 frames : FPS = 0,000000
The red light of the RaspiCam is on, but no camcvWin window appears.
I use OpenCV 2.4.5, and until step 6 everything worked fine.
I’ve been searching for a solution for several hours but haven’t found one. Can anyone help?
Thanks
Hello guys,
I am facing the same issue: no display appears, whereas the program seems to run fine.
Thanks
I have the same problem. The program says:
init done
opengl support available
and that’s all, nothing more happens.
Same problem here. Does anyone know how to solve it?
So what exactly are we supposed to be doing in this step? Sorry for such a newbie question, I’m new to this stuff. From what I understand, we just put the camcv_vid0.c file in the project directory and modify CMakeLists.txt to include camcv_vid0.c? Any help would be appreciated.
I usually just gave the files from the older tutorials a new name that reflected what they did, and named the current tutorial’s file camcv.c. Worked fine for me every time.
How can I edit the CMakeLists.txt file?
Please tell me how to execute these steps, because I’m a newbie.
Why am I only getting 1.090909 FPS?
How to compile? Which directory? Can you give more information…
Any idea how to record video from here? I tried using the OpenCV VideoWriter, but the frame rate dropped from 30 to 13, which is really not nice. I have also tried writing the buffer directly to a file, but it does not play in any video player; the file did indeed grow, but it won’t play.
I tried to record a video using the OpenCV VideoWriter (FOURCC), without success. Could you please give me a solution for that, or source code to successfully record an OpenCV video on Linux? Thanks, Vince!
When I use 160×120 as the video size, I get a video with half of it shaded green. How can I fix this?
You should capture the frame at 320×240 and then use OpenCV’s resize function.
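With the C API used in this tutorial, that could look like the following sketch (dstImage standing in for the 320×240 RGB frame produced in the callback):

IplImage *small = cvCreateImage(cvSize(160, 120), IPL_DEPTH_8U, 3);
cvResize(dstImage, small, CV_INTER_LINEAR);   /* downscale instead of capturing at 160x120 */
cvShowImage("camcvWin", small);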
I couldn’t get it to work at 160×120.
Great tutorial! I have a question about using this code with the Raspberry Pi’s built in image effects.
Does anyone think it would be possible for the code to be modified such that OpenCV could receive the video feed from the camera after the nice image effects had been applied? I know that the Pi does the effects using the GPU, and since I can’t really use the GPU for anything else with what I’m working on, I could get the nice edge finding from the “emboss” effect essentially for free.
I tried doing a couple of things to the code, but I don’t really know what I’m doing:
Changed:
vcos_status = vcos_semaphore_create(&callback_data.complete_semaphore, "RaspiStill -sem", 0);
to:
vcos_status = vcos_semaphore_create(&callback_data.complete_semaphore, "RaspiStill -sem -ifx emboss", 0);
Also added to create_camera_component(…):
raspicamcontrol_set_imageFX(camera, MMAL_PARAM_IMAGEFX_EMBOSS);
I’m not sure if this is even possible due to how this setup works, but I’d very much appreciate any help I could get.
Double post, sorry.
I solved my own problem!
For anyone interested, this is how you can get the RaspiCam’s built in filters to work with the code from these tutorials.
Just add the following line above “raspicamcontrol_set_all_parameters(…)” in the create_camera_component(…) function, so that it looks like this:
state->camera_parameters.imageEffect = MMAL_PARAM_IMAGEFX_[desired_effect];
raspicamcontrol_set_all_parameters(…);
The names for all of the parameters can be found from lines 32-84 of RaspiCamControl.h. This should work for setting other parameters as well, just replace “.imageEffect” with the desired member from the RASPI_CAMERA_PARAMETERS struct defined on line 113 of the same file.
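For example, to apply the emboss effect this way (any other MMAL_PARAM_IMAGEFX_* value works the same), the two lines inside create_camera_component() would read:

state->camera_parameters.imageEffect = MMAL_PARAM_IMAGEFX_EMBOSS;
raspicamcontrol_set_all_parameters(camera, &state->camera_parameters);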
Interesting, thanks for this post. I too am looking to leverage the GPU for some things. I noticed that there is a '-x' parameter to raspivid which causes the software to "output vectors". I tried it and got a binary file which I didn’t investigate further… any idea what these "vectors" are, and specifically, are they related to object detection and tracking? Apologies if this is answered elsewhere, but I have yet to see it mentioned… thanks again.
… answering my own question about the vectors, I just found this:
http://picamera.readthedocs.org/en/release-1.6/recipes2.html#recording-motion-vector-data
What should I edit in the CMakeLists.txt? Please help me out.
It’s urgent. Thanks in advance.
Change every instance of 'camcv' in CMakeLists.txt to 'camcv_vid0', and likewise change 'camcv.c' to 'camcv_vid0.c'.
hi,
I made it this far and I am very glad. I have two questions:
1. I get this message:
mmal: mmal_vc_port_parameter_set: failed to set port parameter 64:0:EINVAL
mmal: Argument is invalid
Does anyone know why?
2. Though I get the OpenCV window with video, it only lasts a few seconds:
11 seconds for 170 frames : FPS = 15.454545
How can I make it run continuously?
Thanks for the tutorial!
Koke
The file compiles, but when I execute it I get this message and no display:
“init done
opengl support available”
and the RPi camera LED is on.
Hello Pierre,
I work for a nature conservation NGO and am contemplating methods for monitoring wildlife and feral animals. Do you think your system would be able to differentiate between (for example) foxes, dogs, cats, possums, quolls etc. that are enticed to investigate a monitoring station (a stake in the ground with a camera attached at approximately eye height)? Possums look quite similar to quolls. And the camera would need to capture in the infrared, as these species are primarily nocturnal.
Thanks
Hello Craig,
No, this software only recognizes human faces, because it uses a "Haar cascade" XML definition file (a kind of template of a human face) dedicated to human faces. That said, your project could work if you find (or build) an XML file defining what a dog, cat or possum face looks like.
The link http://raufast.org/download/camcv_vid0.c gives me "Forbidden:
You don’t have permission to access /download/camcv_vid0.c on this server."
Can someone re-upload the file, please?
Hi Pierre,
I used your camcv_vid0.c and it works, but no display appears.
There is only this message:
init done
opengl support available
but then nothing more happens. The camera LED shows a red light, so it seems to be working.
I don’t use SSH; I’m using my Pi directly over HDMI.
Any suggestions?
same issue
same here
Same problem. Has anyone found a solution?
Hello!
It took me a very long time to get all the linker dependencies set up.
You can find my solution (my CMakeLists.txt) below.
But I have a question.
I’m trying to get the raw pixel data out of the callback buffer (buffer->data) in MMAL_ENCODING_RGB16 mode.
Where can I get information about the data layout?
I want to display the video stream on a CBerry display, which is natively RGB16.
A simple buffer copy won’t work.
—
cmake_minimum_required(VERSION 2.8)
project( nightVision )
find_package( OpenCV REQUIRED )
SET(COMPILE_DEFINITIONS -Werror)
include_directories(/opt/vc/include/host_applications/linux/libs/bcm_host/include)
include_directories(/opt/vc/include/interface/vcos)
include_directories(/opt/vc/include)
include_directories(/opt/vc/include/interface/vcos/pthreads)
include_directories(/opt/vc/include/interface/vmcs_host/linux)
add_executable(nightVision RaspiCamControl.c RaspiCLI.c RaspiPreview.c nightVision.c tft.c RAIO8870.c)
target_link_libraries (nightVision
/opt/vc/lib/libmmal_core.so
/opt/vc/lib/libmmal_util.so
/opt/vc/lib/libmmal_vc_client.so
/opt/vc/lib/libvcos.so
/opt/vc/lib/libbcm_host.so
pthread
m
bcm2835
${OpenCV_LIBS}
)
—
Hello,
Can the camera board do blob recognition and some thresholding at a better FPS than using a camera attached to the main Raspberry Pi? If it can’t, what does this limitation mean?
It would be great to split the video processing and computational work from the applications that use the data on the main board, which would require splitting the workload between the normal board and the camera board. With the current FPS of the camera board, it seems logical to stay with a single-board solution for machine vision?
Thanks in advance
In other words, at this point what would be faster for blob detection: a Raspberry Pi with the camera module and OpenCV, or a Raspberry Pi coupled with a video board?
Hi,
Thanks for the code to get an image from the camera through the callback.
At this point my Raspberry can detect blobs in a picture and output the center of each blob and other data… at 1920×1080 at 23 fps!
With the new 4-core RPi I will send one quarter of the picture to each core to reach 60 fps at full HD. I think this is possible because, without the blob detection, I get 60 fps in the callback.
Thanksss
Great !
I need some help… I managed to get all 5 steps working, but on this step it still captures a single picture like in step 5, so there is no video.
I changed "camcv.c" into "camcv_vid0.c", changed it in CMakeLists.txt as well, and when I run ./camcv it captures a picture and saves it as "foobar.bmp".
Can anyone help, or tell me what I forgot to change?
http://www.linux-projects.org/modules/sections/index.php?op=viewarticle&artid=16
And read my comment above where I give this same link.
There is an error when I do:
pi@raspberrypi ~ $ g++ -lopencv_core -lopencv_highgui -L/usr/lib/uv4l/uv4lext/armv6l -luv4lext -Wl,-rpath,'/usr/lib/uv4l/uv4lext/armv6l' opencv_test.cpp -o opencv_test
it says:
/usr/bin/ld: cannot find -luv4lext
collect2: ld returned 1 exit status
any ideas?
It’s because I have a problem installing UV4L: when it comes to adding the following line to the file /etc/apt/sources.list:
deb http://www.linux-projects.org/listing/uv4l_repo/raspbian/ wheezy main
it says that I have no write permission.
I know it’s very late, but have you tried logging in as root first? It’s always a good idea to go root when installing things like drivers. I’m not a Linux expert, so I’m sorry I can’t be of much help, but root has write permission everywhere, so that should at least get you past that particular error.
First you run:
cmake .
Then you run:
make
Finally you run:
./camcv_vid0
Everything should be ok afterwards 🙂
Yeah, I also vaguely remember having used cmake at some point, lol. I can’t remember exactly, as it was 2 years ago…
I did all the steps, but it only shows "init done, opengl support available." It does not display anything. Does anyone have an idea how to solve this? I am using OpenCV 2.4.9. Displaying an image in step 5 worked perfectly, by the way.
Hi,
Can you post the complete camcv.c and CMakeLists.txt now? ^^
I have this problem:
CMakeFiles/camcv_vid.dir/camcv_vid.c.o: in function "cvPointFrom32f":
camcv_vid.c:(.text+0x7ac): undefined reference to "cvRound"
camcv_vid.c:(.text+0x7c4): undefined reference to "cvRound"
CMakeFiles/camcv_vid.dir/camcv_vid.c.o: in function "cvReadInt":
camcv_vid.c:(.text+0x1360): undefined reference to "cvRound"
CMakeFiles/camcv_vid.dir/camcv_vid.c.o: in function "cvEllipseBox":
camcv_vid.c:(.text+0x167c): undefined reference to "cvRound"
camcv_vid.c:(.text+0x169c): undefined reference to "cvRound"
collect2: error: ld returned 1 exit status
CMakeFiles/camcv_vid.dir/build.make:215: recipe for target "camcv_vid" failed
make[2]: *** [camcv_vid] Error 1
CMakeFiles/Makefile2:60: recipe for target "CMakeFiles/camcv_vid.dir/all" failed
make[1]: *** [CMakeFiles/camcv_vid.dir/all] Error 2
Makefile:76: recipe for target "all" failed
make: *** [all] Error 2
Can someone help me?