Fanuc Vision Accuracy Help

  • Hello All,

    Let me start by saying I am here in this debug process currently: 😭😡

    I am attempting to use two LR-Mates to assemble two parts, one flexible and one rigid, with matching icons that must align to one another within a tolerance of 0.25 mm in XY, plus rotational alignment of the outside profiles. Each robot has a foam EOAT that shifts the part upon picking, so I take a second snap with a tool offset; this helps account for the flexible part rotating, but not consistently enough to avoid calling in an adjustment offset.

    I currently have a vision process with two parent tools established: one looking at the icon for XY accuracy, the other looking at the outside profile of the part for rotational accuracy. I have two model IDs taught, each with its own offset data taught to the vision. In my programming I am sending the GET_OFFSET and found-position (FOUND_POS) data for my XY/icon tool to VR[2] and VR[3], and the FOUND_POS data for the rotation/outside-profile tool to VR[4]. I then extract the received VR data from all components into their respective position registers, where I call a matrix and inverse to compile the pieces of the offset data I need, which is applied to the final application position. The flexible part has an adhesive backing that sticks to the rigid part during application, so I set up a shared user frame between the two robots for this handshake.
    The above process holds fantastic rotation but compromises my XY icon alignment accuracy each time. I have also exhausted the adjustment offset feature, which did help but unfortunately did not resolve the issue.
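    For reference, the extraction portion of my logic looks roughly like the sketch below. The process and register names are simplified placeholders, not my exact program:

      ! Icon process: pull both the calculated offset and the raw found position ;
      VISION RUN_FIND 'ICON_XY' ;
      VISION GET_OFFSET 'ICON_XY' VR[2] JMP LBL[99] ;
      PR[2]=VR[2].OFFSET ;
      PR[3]=VR[2].FOUND_POS[1] ;
      ! Profile process: found position only, for rotation ;
      VISION RUN_FIND 'PROFILE_ROT' ;
      VISION GET_OFFSET 'PROFILE_ROT' VR[4] JMP LBL[99] ;
      PR[4]=VR[4].FOUND_POS[1] ;

    The matrix/inverse composition that builds the final applied offset happens after this, as described above.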


    In the past I had a vision process set up where I had one parent tool looking at my XY icon and a child tool looking at the overall rotation of the outside profile, to which I then applied a positional adjustment tool for rotation/angle. In that process I was able to achieve the XY assembly to the needed tolerance, but the outside profiles were rotated off of one another and I had to reject those parts.

    I am wondering if it would be possible to do a combination of the two vision processes listed above: extract the XY data from the second vision process listed, and take an average of the two sets of rotational data from each process to help keep the one outside profile within the other. Or is there a way to weight, or take a percentage of, how much of the rotational offset data I should use?
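    Concretely, I am picturing something like the sketch below, where the blend weight and register numbers are placeholders I made up (and I know I would need to watch for angle wrap-around near ±180 degrees):

      ! R[1] = rotation currently in the applied offset (icon data) ;
      R[1]=PR[2,6] ;
      ! R[2] = rotation from the outside-profile data ;
      R[2]=PR[4,6] ;
      ! R[3] = blend weight: 0 = all icon, 1 = all profile ;
      R[3]=0.7 ;
      R[4]=1-R[3] ;
      R[5]=R[2]*R[3] ;
      R[6]=R[1]*R[4] ;
      R[7]=R[5]+R[6] ;
      ! Write the blended rotation back into the offset that gets applied ;
      PR[2,6]=R[7] ;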
    I'm not sure if anyone has had success extracting vision data in the manner that I am and manipulating it mathematically, but I'm happy to include my programs for reference if anyone needs them to help with this issue.

    If there are any other suggestions or recommendations someone can offer me I would sincerely appreciate it!

    Thank you for your time,

    Maria.

  • Hi Maria

    Welcome to the robot forum


    First of all, it seems to me that you know exactly what you are doing.


    If there is a lot of rotation, let's say 180 degrees, we use two model IDs per part.

    We learned that a 165-degree rotation is "different" from -15 degrees. The error produced in the tool rotation is minimal because one of the reference positions is now only 15 degrees away from your taught point instead of 165 degrees.
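    On the TP side you can read back which model matched and branch on it; something like this, with made-up register and label numbers:

      ! VR[1].MODEL_ID reports which taught model was found ;
      R[5]=VR[1].MODEL_ID ;
      IF R[5]=2,JMP LBL[20] ;
      ! ... handle the 0-degree model here ... ;
      JMP LBL[30] ;
      LBL[20] ;
      ! ... handle the 180-degree model here ... ;
      LBL[30] ;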


    In your third paragraph ("In the past ...") you described a method that I used in the past. I did something similar, but taking two pictures: I would use the first shot to find the part, then position the camera better based on the part found, and then take my second shot.
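    In TP the two-shot approach looks roughly like this; the process names, registers, and snap position are just examples:

      ! First shot: rough find anywhere in the field of view ;
      VISION RUN_FIND 'ROUGH_FIND' ;
      VISION GET_OFFSET 'ROUGH_FIND' VR[1] JMP LBL[99] ;
      PR[20]=VR[1].OFFSET ;
      ! Re-center the camera over the part using the rough offset ;
      L P[1] 500mm/sec FINE Offset,PR[20] ;
      ! Second shot: the part is now close to the taught reference, so the find is more accurate ;
      VISION RUN_FIND 'FINE_FIND' ;
      VISION GET_OFFSET 'FINE_FIND' VR[2] JMP LBL[99] ;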


    As far as the positional adjustment goes, well, we have to live with that because of the fish-eye effect of the lens. You might be able to write an equation to do the compensation, or just assume that if you are past a certain distance, add/subtract a value.
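    The crude version can be as simple as this; the threshold and the correction value here are made up and would have to come from your own measurements:

      ! R[10] = found X component of the offset ;
      R[10]=PR[2,1] ;
      ! A large X offset means the part sat far from the lens center; trim a fixed amount ;
      IF R[10]<=75,JMP LBL[10] ;
      PR[2,1]=PR[2,1]-0.05 ;
      LBL[10] ;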


    For your fourth paragraph, I would create two model IDs. They really helped on some parts.

    Retired but still helping

  • Fabian,

    Thank you so much for the quick and detailed response. I will be sure to try the suggestions listed above and see if they improve the overall accuracy.

    Maria.
