Hello, we are doing a job that has an external (Cognex) camera. It takes a picture of the part and is then supposed to generate my offsets based on the position of the parts. How do I calibrate the Cognex 3D camera to the Fanuc robot so that I can receive the data? I know how to do it in iRVision, but the customer wanted to use the Cognex software instead. Thanks!
Calibrate a Cognex camera to a Fanuc robot
-
DG -
September 4, 2019 at 6:58 PM -
Thread is Unresolved
-
-
Hi
Note: I'm assuming a lot of things here.
Place a sheet of paper (let's say 11 by 17) in the FOV.
Align the vision to the sheet and find three corners (origin, X, Y). The dimensions are known.
Teach a user frame on the same three points with the robot.
At this point the vision coordinates match the robot user frame coordinates.
Teach the pick point based on the user frame.
The values you obtain are the values you have to enter in your vision coordinates.
At this point you have X and Y from the common origin.
The same principle applies to rotation.
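Since the camera's 0,0 and the robot user frame are taught on the same three points, the camera result drops straight into a position register. A minimal Python sketch of that mapping (the function and parameter names are mine, not a Fanuc or Cognex API):

```python
def camera_to_pick_point(cam_x_mm, cam_y_mm, cam_rot_deg,
                         pick_z_mm, pick_w_deg, pick_p_deg):
    """Build a user-frame pick point (X, Y, Z, W, P, R) from a camera
    result, assuming the camera's 0,0 and the robot user frame origin
    were taught on the same three points of the sheet."""
    # X and Y come straight from the camera because the frames coincide.
    # Z, W, P are constant (the plane of the sheet); the part's
    # measured rotation goes into R.
    return (cam_x_mm, cam_y_mm, pick_z_mm, pick_w_deg, pick_p_deg, cam_rot_deg)

# Part found 100 mm from the origin in X and Y, rotated 15 degrees:
print(camera_to_pick_point(100.0, 100.0, 15.0, pick_z_mm=-50.0,
                           pick_w_deg=180.0, pick_p_deg=0.0))
```

Z, W, and P stay constant because everything lives on the plane of the sheet; only X, Y, and R come from the vision result.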
-
Depends on a lot of factors.
What kind of Cognex -- InSight, VisionPro, other? Is it robot-mounted, or fixed? Is the 3D multi-shot, stereo, etc?
There's also more than one way to do this.
Cognex camera calibration is usually carried out using a printable "grid" that you can generate from the Cognex software (usually InSight Explorer). If you use a grid with the "fiducial" option, you'll get a grid that has a "center" point (the "T" intersection of the fiducial segments) and the X and Y axes shown. So, for a fixed-overhead camera, one way would be to tape that grid down below the camera, calibrate the camera, then create a UFrame base in the robot that is aligned to the grid; probably by putting a pointer tool on the end effector (with a taught TCP) and touching the Origin, X+, and XY+ points on the grid. That would calibrate the camera and robot to the same coordinate system.
For a robot-carried camera, you might create a TCP aligned with the camera's internal coordinate system. Create a TCP with your best guess as to the camera's alignment. Put a piece of paper with a good circular dot below the camera (at the correct focus height), and begin moving the camera in Tool X and Y, checking where the camera finds the dot after the robot moves. Use trigonometry to calculate the Z-axis angle error from the difference between how the robot moved and what the camera observed. Rotate the TCP around its Z axis to correct the error and repeat, until the dot only moves in the X axis of the camera when the robot jogs in Tool X, ditto for Y. At that point, you'll have a decent TCP whose XYZ axes are at least parallel to the camera XYZ.
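The trigonometry in that align-and-repeat loop boils down to comparing the direction the robot was commanded to move with the direction the camera saw the dot move. A rough sketch, with my own names, assuming both motions are expressed in consistent units:

```python
import math

def z_angle_error_deg(jog_dx, jog_dy, seen_dx, seen_dy):
    """Angle (deg) to rotate the guessed TCP about its Z axis so the
    camera's observed motion lines up with the commanded tool jog.
    jog_*:  commanded motion in the guessed tool frame.
    seen_*: motion of the dot as measured by the camera."""
    commanded = math.atan2(jog_dy, jog_dx)
    observed = math.atan2(seen_dy, seen_dx)
    return math.degrees(observed - commanded)

# Jogged +10 mm in tool X, but the camera saw the dot drift in Y
# as well -> the guessed TCP is rotated about Z by roughly 5.7 deg:
err = z_angle_error_deg(10.0, 0.0, 9.95, 1.0)
```

When the error converges to ~0, a jog in Tool X produces a purely X-direction shift in the image, which is the stopping condition described above.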
Then there's the issue of converting from pixels to mm. Usually the Cognex internal calibration tools are used for this (usually using the grid), but it's possible to perform the conversion on the robot side using the "jog and compare" method.
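The robot-side "jog and compare" conversion is just a ratio: jog a known distance, measure the pixel shift of a fixed feature. A tiny sketch (names are mine):

```python
def mm_per_pixel(robot_move_mm, pixel_shift):
    """Scale factor from a known robot jog and the resulting
    image-space shift of a fixed feature."""
    return robot_move_mm / pixel_shift

# Robot jogged 50 mm; the dot moved 400 px in the image:
scale = mm_per_pixel(50.0, 400.0)   # 0.125 mm per pixel
```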
-
Ok, I will try that and see if that works. I am assuming that I will have to set up multiple user frames for this project because I already have one set for the table where I have to pick the part off of. Then I have to lift the part and display it to the Cognex camera to generate my offsets before assembly. Would that be an issue? I'm afraid that setting one user frame at the table height and then setting a different user frame in the air where I am displaying the parts would ultimately create some sort of internal conflict for the robot...or am I just over-thinking this?
-
Having multiple user frames shouldn't make any issues -- robots are designed to handle lots of them. It's the relationships that can trip you up -- it's entirely possible to "stack" frames atop each other to near-infinity, but you have to really keep track of the relationships.
(disclaimer: I'm pretty weak on Fanucs, but the underlying principles are pretty much brand-agnostic)
I can't really "see" your application -- it sounds like a fixed camera, but the robot needs to move the part in front of it? That's more complicated. If you're trying to use the camera to adjust for pick errors after the pick, you're probably going to want to use tool offsets, rather than base/uframe offsets.
You'd have to create an algorithm to adjust your TCP by the offsets from the Cognex. How exactly that would work depends a lot on how everything is set up, but at the end of it, if you introduce a deliberate error to the part position on the gripper, you should be able to run a move-measure-correct loop and have the robot bring the vision errors down to near-0.
I suspect Cognex already has some sample code, or general guidelines, for how to calibrate a situation like this. My off-the-cuff thought, if I was doing this from scratch, would be to create a TCP that is co-located and aligned with the reference frame of the Cognex when the robot is at the vision position, with zero offsets applied (and, of course, the Cognex should be configured to return an error frame of near-0 when the robot is at this position). At that point, something like WorkingTCP=GoldenTCP*CognexError or WorkingTCP=GoldenTCP*Inverse(CognexError) should be close to what you want. Basically, it should be a 6DOF frame multiplication. And, of course, you want to keep a permanent copy of your "golden" TCP to go back to at the start of every new cycle.
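That last step is plain homogeneous-transform algebra. A pure-Python sketch restricted to translation plus rotation about Z (a full 6DOF version would add the W and P rotations; all names and numbers here are made up for illustration, and whether you need the error or its inverse depends on which side the Cognex reports it from):

```python
import math

def frame(x, y, z, rz_deg):
    """4x4 homogeneous transform: translation plus rotation about Z,
    enough to illustrate the idea."""
    c, s = math.cos(math.radians(rz_deg)), math.sin(math.radians(rz_deg))
    return [[c, -s, 0, x],
            [s,  c, 0, y],
            [0,  0, 1, z],
            [0,  0, 0, 1]]

def multiply(a, b):
    """Compose two 4x4 transforms (a applied first, then b in a's frame)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

golden_tcp = frame(0.0, 0.0, 150.0, 0.0)     # permanent "golden" copy
cognex_error = frame(2.0, -1.5, 0.0, 3.0)    # measured part error
working_tcp = multiply(golden_tcp, cognex_error)
```

With a zero error frame the working TCP collapses back to the golden one, which is exactly the near-0 condition the camera should report at the reference position.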
-
I did exactly the same as you said.
L P[2:Pick Point] 1500 mm/sec FINE Tool Offset PR[60: Offset Data]
I can pick up the part without any problem if it moves in X, moves in Y, or rotates. But if the part rotates AND moves in X or Y, it cannot pick it up.
Any suggestions?
-
First of all, don't use Tool Offset; use just Offset.
Put the part in the FOV, take a picture, and look at the value of PR[60].
Move the part and check the value again. Is PR[60] changing?
How are you passing the values from the camera to the robot?
-
Yes, the PR[60] value changes.
The camera transfers data to the PLC, and from the PLC to the robot.
-
HI
I made a mistake explaining.
When you say L P[2:Pick Point] 1500 mm/sec FINE Tool Offset PR[60: Offset Data], this is wrong. It should be:
L PR[60:Pick Point] 1500 mm/sec FINE
The values from your camera go directly into your point. You will get X, Y, and rotation; the other axes are constant.
Do the values make any sense ?
If you follow what I wrote in the first post, do this:
Put the part 10 cm from the corner in the X direction and 10 cm from the corner in the Y direction.
If you take a picture now, you should get a 100 mm offset in X and in Y from your PLC; if not, there's something wrong with your setup.
Please understand the concept that the sheet of paper represents a plane. You take a picture of that plane and you know the dimensions. In your Cognex you have to convert the 8 1/2 by 11 inches into pixels, and then you will know how many pixels one inch is.
Also, on the same sheet of paper, teach a user frame using the same origin and coordinates you defined in the vision. If you put the robot in the user coordinate frame and jog it to the origin, you should be at 0,0, which is the 0,0 of your vision.
11 in x 25.4 mm/in = 279.4 mm. This means if you go to the corner of your sheet you should have 279.4 mm in one of the position coordinates, AND you should also be able to measure the equivalent of 279.4 mm in pixels.
Same for 8 1/2 inches.
Once you get this, you are set: any part on that sheet of paper should be found by your vision with numbers that you can easily measure from the origin.
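That sanity check is simple arithmetic; here is a sketch in Python (the 1100 px figure is an assumed measurement, not from the thread):

```python
MM_PER_INCH = 25.4

def sheet_mm(width_in, height_in):
    """Convert the calibration sheet's size from inches to mm."""
    return (width_in * MM_PER_INCH, height_in * MM_PER_INCH)

w_mm, h_mm = sheet_mm(8.5, 11.0)   # (215.9, 279.4) mm

# If the 11 in edge measures, say, 1100 px in the image,
# the scale works out to about 3.94 px per mm:
px_per_mm = 1100 / h_mm
```

If the pixel measurement and the known sheet dimension don't agree through one consistent scale factor on both edges, the calibration (or the grid placement) is off.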