Yaskawa and Cognex Calibration Problem

  • Hi all,


    I'm currently working with a Yaskawa MotoMini with a YRC1000micro controller and Cognex In-Sight Explorer. The project I have to create is a palletizing program that places 1-inch cubes on a pallet with 12 grid spaces. The robot picks up the cubes from a conveyor in the same location every time. The problem starts when the robot decides where to place the cube on the pallet. I know it's normal for cameras to have distortion (or fisheye effect), but in my case, the further the grid space is from the center of the camera's FOV, the worse the place position gets.
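
    To illustrate what I mean, here's a quick Python sketch (the coefficient is made up, it's just the standard radial distortion model) showing how the displacement grows with distance from the image center:

    ```python
    import numpy as np

    # First term of the Brown-Conrady radial model:
    # x_distorted = x * (1 + k1 * r^2), so the displacement is k1 * r^3.
    k1 = -0.15  # hypothetical barrel-distortion coefficient

    # Normalized distances from the image center (0 = center, 1 = FOV edge)
    r = np.linspace(0.0, 1.0, 6)
    error = np.abs(k1) * r**3  # grows with the cube of the radius

    for ri, ei in zip(r, error):
        print(f"r = {ri:.1f} -> displacement = {ei:.4f} (normalized units)")
    ```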

    In In-Sight Explorer I've used Calibration Grid -> TransformImage -> Pattern. I tried this both in EasyBuilder and in Spreadsheets. The camera detects the grid center point perfectly every single time, and the same is true in MotoSight 2D. But the ACTUAL place position during execution of the program is off, as described above.


    The In-Sight and camera software version is 6.1.3. The camera is an IS7802C.


    Is there something that I'm missing? What other tools or functions can I use to fix the distortion and place-position problem? I've searched the forum but did not find a solution.


    Thanks in advance.

  • A couple of ideas come to mind.


    1) The robot calibration could be off.


    2) Camera calibration. What calibration did you use? Scale is the worst. Using a grid with the checkerboard or dots is much better, and using one with the fiducial is better still. The grid calibration helps eliminate the lens distortion (fish-eye) and compensates for a camera not mounted perpendicular, amongst other things.
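
    If it helps to see what a grid calibration is doing conceptually, here's a rough OpenCV sketch (not what In-Sight runs internally, just the same idea) of estimating lens distortion from checkerboard images. The folder name and grid geometry are assumptions:

    ```python
    import cv2
    import glob
    import numpy as np

    # Inner-corner count and square size are assumptions for illustration;
    # use your actual grid's numbers.
    cols, rows = 9, 6
    square_mm = 10.0

    # Known 3D positions of the corners on the flat grid (Z = 0)
    objp = np.zeros((rows * cols, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square_mm

    obj_points, img_points, size = [], [], None
    for path in glob.glob("calib_images/*.png"):  # hypothetical folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, (cols, rows))
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Solves for the camera matrix and the distortion coefficients
    # (k1, k2, p1, p2, k3); a good grid calibration lets the software
    # undistort and map pixels to real-world units.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, size, None, None)
    print("RMS reprojection error (px):", rms)
    print("distortion coefficients:", dist.ravel())
    ```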


    Vision is great when it works. When it doesn't, it can be a nightmare because you don't always know what went wrong.


    I would start from scratch:


    Good calibration on the robot.

    Good TCP.

    Good positions taught in the robot job.

    Good user frame taught.

    Good camera calibration.

    Good training of the part.



    As I write this I just thought of something else. Are you finding the exact center of the cube, or only using a PatMax or Pattern tool to find the cube? Using only the PatMax or Pattern tool will return the center of the tool, not necessarily the center of the cube.

    If you use the edge tools to find the 4 edges, you can find the corners. Lines can then be projected between opposite corners, and their intersection is the mid-point, which you report as the center of the cube (see the sketch below). I'm not sure where you are using vision in this cell, the pick or the place.

    One problem with this method: if the part can move far enough in the field of view that the camera sees a bottom edge of the part instead of the 4 top edges, it can give a false center.
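
    Here's a rough Python sketch of that corner-diagonal idea (my own math, not a Cognex tool) that intersects the two diagonals of the four found corners:

    ```python
    import numpy as np

    def diagonal_center(corners):
        """Center of a quadrilateral as the intersection of its diagonals.

        corners: four (x, y) points ordered around the quad, e.g. the
        intersections of the 4 fitted edge lines.
        """
        p1, p2, p3, p4 = [np.asarray(c, dtype=float) for c in corners]
        # Diagonal 1: p1 + t*(p3 - p1); diagonal 2: p2 + s*(p4 - p2).
        d1, d2 = p3 - p1, p4 - p2
        # Solve p1 + t*d1 = p2 + s*d2 as a 2x2 linear system for (t, s).
        A = np.column_stack((d1, -d2))
        t, _ = np.linalg.solve(A, p2 - p1)
        return p1 + t * d1

    # Hypothetical corner pixels from four edge-line intersections:
    print(diagonal_center([(100, 100), (200, 104), (198, 202), (98, 198)]))
    ```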


  • Thanks for the answer,


    I've been using the grid with fiducial in all programs, but it still wasn't working as intended.


    I used PatMax and Pattern, but I'll try the edge-finding tools instead. I guess in my case they should work just fine.


    Also, I've tried this on two identical cameras that we have here, but the issue is the same.

    Here's something I would check, then. Take whatever grid spacing you are using and enter that number into the X element of a position variable. In a job, write the line IMOV Pxxx V=xxx UF#(xx). Pxxx is the position variable you typed the grid spacing into, V= is something slow, and UF#(xx) is the user frame used for calibration.


    Take the robot to the origin of the user frame used for calibration. Run through the job multiple times with Interlock + Test Start. See if the robot:


    A) Followed the user frame accurately.

    B) Moved the correct distance each time.


    If B doesn't work correctly, the accuracy of the robot may be in question; a quick way to check the logged moves is sketched below.
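
    If you record the TCP position from the pendant after each IMOV, a short Python check of the per-move distances makes any inconsistency obvious (the readouts below are hypothetical):

    ```python
    import numpy as np

    expected_step_mm = 25.4  # 1-inch grid spacing

    # Hypothetical TCP X/Y/Z readouts (mm) recorded after each IMOV
    positions = np.array([
        [0.0,   0.0, 0.0],
        [25.5,  0.1, 0.0],
        [50.8,  0.1, 0.1],
        [76.0,  0.2, 0.1],
    ])

    # Distance actually traveled between consecutive readouts
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    for i, s in enumerate(steps, 1):
        print(f"move {i}: {s:.2f} mm (error {s - expected_step_mm:+.2f} mm)")
    ```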


  • Hey Kawaki,

    I have had this issue before when I was testing a 7802 with an ABB robot. We found a couple of ways to get around this problem.


    1. Place your fiducial grid on top of the cubes. As long as you are only performing operations on one level, it will work fine.


    2. Change out the camera's lens for a telecentric lens. It keeps the view straight down and takes the perspective angle out of the target object.


    I used #1 for my project. We did not try #2, but I figure it would have worked if our project had gone further.

  • Hey Kawaki,


    Avoid pattern recognition and rely more on edge recognition; it's faster and more accurate. Also, as many have said before, calibration is very important. When running the calibration operation, always check that a proper green mark appears at every square intersection; lighting is absolutely crucial for making all of the green crosses visible. Then do what 95devils said: perform short, medium and long measuring/positioning routines and check that the robot moves accurately. If this fails, no real operation will succeed.

    Cheers!
