FANUC LR Mate 200iD iRVision

  • Hi,

    I am working with a FANUC LR Mate 200iD educational cell (E122350). The vision-based pick and place task is working, but I need to know one thing; please see the attached doc file for pictures.

    During each "snap and find" operation, why does the camera see a different target orientation, even though the target stays in the same position for every "snap and find"? This causes unnecessary variation in the robot tool's movement when picking an object from the same place. How can I resolve this orientation issue?

    For the vision-based pick and place task I created a new UFrame and UTool. Could these new frames be causing the problem?


    Furthermore, when I used the camera and GPM locator tool settings predefined by FANUC and performed multiple "snap and find" operations on a target, the green boundary around the target stayed the same for every operation. Why is this boundary not static with my own camera settings?

    obj_orientation.docx


  • The problem is that the camera doesn't see exactly the same thing on every snap; a pixel's difference here or there can change the reported orientation.

    It is difficult to tell from the images you provided, but a symmetric object will not give good rotation control: a circle looks the same at every angle, so the locator can report almost any rotation with equal confidence. There should be a check for this in your vision process setup as you teach the GPM locator. I'm away from my ROBOGUIDE at the moment, but I can upload a picture when I get a chance.


    I believe you can take the vision offset and strip out the rotation if picking at different rotations is a problem, but it has been a while since I've had to do something similar.
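
    To illustrate what I mean by stripping the rotation, here is a minimal sketch in plain Python (not FANUC TP); the (X, Y, Z, W, P, R) offset layout and the numbers are assumptions for illustration only. The idea is to keep the translation part of the offset and zero the rotation components before applying it, so a spurious angle from a symmetric part never reaches the pick move.

    ```python
    # Minimal sketch (plain Python, not FANUC TP) of "stripping the rotation"
    # from a vision offset before applying it. The (X, Y, Z, W, P, R) layout
    # and the numbers below are assumptions for illustration only.

    def strip_rotation(offset):
        """Return a copy of the offset with the rotation components zeroed,
        so the pick move keeps the taught tool orientation."""
        x, y, z, w, p, r = offset
        return (x, y, z, 0.0, 0.0, 0.0)

    # Example: a snap on a round part reports a spurious 37-degree rotation.
    found = (12.4, -3.1, 0.0, 0.0, 0.0, 37.0)   # hypothetical offset values
    print(strip_rotation(found))                 # (12.4, -3.1, 0.0, 0.0, 0.0, 0.0)
    ```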

  • You do have an orientation option under the DOF (degrees of freedom) you will allow, and if it is set like pictured, it's going to accept any angle. I would try turning that off; it will likely give you steadier results. I cannot test here, as I don't have a robot with vision installed, just simulation. Good luck!

  • You're only looking at a single round hole. You haven't given the vision process anything to reference the angle from. If you can, include part of one of the holes to either side and look at part of those edges; that gives the process a reference point to base the orientation on. Until you have something to reference the circle against, it can't correctly interpret the orientation.

  • Thanks a lot,

    I fixed the issue. I'm not sure whether this is the correct approach, but I unchecked orientation in the GPM tool.


    Also, can I use the same UTool with multiple UFrames? I mean, can the same UTool be used while teaching multiple UFrames, like one UFrame for camera calibration and another for placing the part on the table?
