Hi guys,
I need some help on how I could program a Fanuc robot to accept a vision offset value when the vision system is from a third party like Cognex instead of iRVision.
Thanks in advance!
Check out this thread: https://www.robot-forum.com/robotforum/fan…77106/#msg77106
I was able to get the robot and Cognex system communicating using those notes and the Cognex documentation. But we had to use the CIO-Micro I/O for the project, because the Cognex tools are limited when you use Ethernet/IP to communicate with the Fanuc robot. It will send the X, Y, and angle of a position very easily, but I needed more options for that particular project.
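For what it's worth, if you route the result through a PC instead of discrete I/O, the receiving side only has to split and convert the fields the camera sends. A minimal Python sketch, assuming an ASCII "X,Y,Angle" comma-separated output string (the format is an assumption for illustration, not a Cognex default — configure the camera's formatted output to match whatever you pick):

```python
def parse_vision_result(msg: str):
    """Parse an assumed 'X,Y,Angle' ASCII result string into floats.

    msg is one result line as received from the camera, e.g. '12.5,-3.0,45.0'.
    Raises ValueError if the string does not contain exactly three numbers.
    """
    x, y, angle = (float(field) for field in msg.strip().split(","))
    return x, y, angle

# Example: an offset string as it might arrive over a socket
x, y, rz = parse_vision_result("12.5,-3.0,45.0\n")
```

From there the PC can forward the three values to robot registers (e.g. over the robot's Ethernet interface) for the offset math.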
Has anyone used a third-party camera (Cognex, DVT, etc.) or sent a position from a PC to a Fanuc robot for visual line tracking? Is it possible to synchronize the third-party camera offset with the track sensor in the Fanuc, so you know when the upstream picture was taken and therefore where the product is now? I was thinking of possibly using a photoeye to both trigger the camera and start the conveyor tracking, but since there could be many parts on the conveyor at a time, this doesn't seem like the best option.
Thanks.
There is no reason that it should not work.
Run a SET TRIG command on the line tracking, then run your vision. If you find something, go get it; if not, start over.
I have found you should run your SET TRIG before RUN FIND (in iRVision) for better results. If your vision process takes more than 500 ms, you will run into problems.
Where would you apply the offset found by the third-party camera (a user frame, an offset pick position, etc.)? And if there are multiple parts between the camera and the robot, how would you know which part in the conveyor tracking queue goes with which offset by the time the part reaches the robot?
I hope that makes sense.
When I did this using a Cognex camera, I created a FIFO stack on the robot. I had a background task that would push data onto the stack and move the push pointer as each new vision snap occurred. I also had a pop pointer that allowed me to always pop the proper position data; this was called whenever I needed new vision data. It was a simple 10-position register setup for each of the X, Y, Z (from an external sensor), and R values. It may have been cleaner to use PRs, but I was in a hurry and didn't know better, so I used individual registers for the data.
I applied the offset data to the user frame for the X and Y positions. I applied the Rz data to the tool frame in R.
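To illustrate just the queue logic (not the TP code itself), here is a rough Python sketch of that kind of fixed-size register FIFO. The 10-slot size and the shift-down pop come from the description above; the class and method names are my own for illustration:

```python
class RegisterFifo:
    """Fixed-size FIFO modeled on a bank of numeric registers.

    push() stores a new vision value at the next free slot and advances
    the count (the push pointer); pop() returns the oldest value and
    shifts the remaining entries down one slot, zeroing the last.
    """

    def __init__(self, size=10):
        self.slots = [0.0] * size   # stand-ins for R[321]..R[330]
        self.count = 0              # stand-in for the index register

    def push(self, value):
        # Drop the reading if the queue is already full.
        if self.count < len(self.slots):
            self.slots[self.count] = value
            self.count += 1

    def pop(self):
        value = self.slots[0]
        # Shift everything down one slot and zero the freed slot.
        self.slots = self.slots[1:] + [0.0]
        if self.count > 0:
            self.count -= 1
        return value
```

In the real cell you would keep one queue per value (X, Y, Z, R) and push to all of them on each camera snap, so a single pop always yields a matched set of offsets for the oldest part in the tracking queue.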
Can I get the background task code you used to push data onto the stack, or any information on how to do it? Thanks.
This is actually a RUN task, not BGLogic. R[321]-R[325] are the height values I was reading in from the sensor (an analog light curtain).
R[320:HeightReg]=0 ;
R[321:CurHeight0]=R[281:SensorMin] ;
R[322:CurHeight1]=0 ;
R[323:CurHeight2]=0 ;
R[324:CurHeight3]=0 ;
R[325:CurHeight4]=0 ;
R[319:HeightIndex]=0 ;
 ;
LBL[1] ;
IF R[299:STOP_TASKS]=1,JMP LBL[999] ;
IF (F[18:CamSnapped]=ON AND F[15:VisPop]=OFF),JMP LBL[200] ;
IF (F[7:LC Trigger]=ON),JMP LBL[100] ;
 ;
JMP LBL[1] ;
 ;
LBL[100:init read] ;
R[283:LargestReading]=R[281:SensorMin] ;
LBL[110:read height] ;
IF (F[7:LC Trigger]=OFF),JMP LBL[190] ;
IF (R[282:SensorReading]>R[283:LargestReading]),R[283:LargestReading]=(R[282:SensorReading]) ;
JMP LBL[110] ;
 ;
LBL[190:end read] ;
R[320:HeightReg]=321+R[319:HeightIndex] ;
R[R[320]]=R[283:LargestReading] ;
IF (R[319:HeightIndex]<4),R[319:HeightIndex]=(R[319:HeightIndex]+1) ;
JMP LBL[1] ;
 ;
LBL[200:Pop Off Que] ;
R[321:CurHeight0]=R[322:CurHeight1] ;
R[322:CurHeight1]=R[323:CurHeight2] ;
R[323:CurHeight2]=R[324:CurHeight3] ;
R[324:CurHeight3]=R[325:CurHeight4] ;
R[325:CurHeight4]=0 ;
 ;
R[319:HeightIndex]=R[319:HeightIndex]-1 ;
IF (R[319:HeightIndex]<0),R[319:HeightIndex]=(0) ;
WAIT R[287] ;
F[15:VisPop]=(ON) ;
JMP LBL[1] ;
LBL[999] ;
Hopefully this will help you a bit. It may not be the best way, but it was a quick way to make it work.
If you want to change the order of removal of data, set up another pointer (I used R[319] for the first one). You can write with one and read with the other (R[318], for instance).
Increment the push and pop counters independently and you can retrieve data FIFO or FILO. If you need more help, let me know.
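The two-pointer idea above can be sketched in Python like this: write with one index, read with the other, and choose FIFO or LIFO retrieval by which end you read from. The register associations in the comments follow the suggestion above; everything else (names, size) is assumed for illustration:

```python
class TwoPointerQueue:
    """Fixed register bank with independent write (push) and read (pop)
    pointers: pop_fifo() reads the oldest unread slot, pop_lifo() reads
    the newest one, without any shifting of data."""

    def __init__(self, size=10):
        self.slots = [0.0] * size
        self.write = 0   # next free slot (the R[319]-style pointer)
        self.read = 0    # oldest unread slot (the R[318]-style pointer)

    def push(self, value):
        # Drop the value if the bank is full.
        if self.write < len(self.slots):
            self.slots[self.write] = value
            self.write += 1

    def pop_fifo(self):
        # Oldest-first: read at the read pointer and advance it.
        value = self.slots[self.read]
        self.read += 1
        return value

    def pop_lifo(self):
        # Newest-first: back the write pointer up and read there.
        self.write -= 1
        return self.slots[self.write]
```

A production version would also wrap the pointers (a ring buffer) and guard against underflow, but this shows why two independent pointers let the same register bank serve both retrieval orders.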
How does your program work with line tracking? Don't you need to keep track of the encoder count too? And how do you trigger the line tracking?