Shifting the User Frame Using Vision

• Hi everyone,


I would like to open a discussion on how to shift the user frame to apply a camera-detected offset to the robot.


Main problem: the robot is inaccurate when locating a product.

Goal: to locate the product with a camera without adding VOFFSET,VR[x] to the program (a typical Fanuc 2D vision program uses a line like L P[1] 100mm/sec FINE VOFFSET,VR[1]).


So far, I have implemented two approaches in the program:

1. First approach

VISION RUN_FIND 'VISION1'

VISION GET_OFFSET 'VISION1' VR[1] JMP LBL[99]

PR[20]=VR[1].OFFSET (get the offset from vision)

PR[30]=UFRAME[2] (static user frame that I set up at the beginning)

CALL MATRIX(20,30,40)

UFRAME[3]=PR[40] (user frame from the matrix result)


    UFRAME_NUM=3

    UTOOL_NUM=1

    L P[1] 100mm/sec FINE

    L P[2] 100mm/sec FINE

    L P[3] 100mm/sec FINE


Even with this logic, the picking is still inaccurate when I apply it on the robot.
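For reference, the intended computation here (assuming FANUC's MATRIX utility multiplies the two position registers as 4x4 homogeneous transforms) is:

UFRAME[3] = UFRAME[2] * OFFSET

i.e. the static frame composed with the found-part offset, with the offset expressed in UFRAME[2] coordinates. This is only a sketch of the intent: if MATRIX expects its arguments in the opposite order, or if the vision process reports the offset relative to a different frame than UFRAME[2], the resulting UFRAME[3] will be shifted or rotated incorrectly, which would produce exactly this kind of inaccuracy.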


2. Second approach

VISION RUN_FIND 'VISION1'

VISION GET_OFFSET 'VISION1' VR[1] JMP LBL[99]

PR[20]=VR[1].OFFSET

CALL INVERSE(20,21)


L P[1] 100mm/sec FINE Offset,PR[21]

L P[2] 100mm/sec FINE Offset,PR[21]

L P[3] 100mm/sec FINE Offset,PR[21]


This logic is actually good enough when the part only moves in the X or Y direction, but as soon as the part rotates and the R component changes, the accuracy is really bad.
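A plausible explanation for the rotation problem (a sketch of the geometry, not something from the FANUC manuals): VOFFSET applies the vision result as a rigid transform, moving each taught position p to

p' = R * p + t

whereas a component-wise Offset,PR only adds the translation to the position and the W/P/R angles to the orientation,

p' = p + t

so the taught positions never swing around the part's rotation center. The resulting position error grows roughly as d * theta, where d is the distance from the offset's reference point to the taught point and theta is the part rotation. That would match seeing good accuracy for pure X/Y moves and poor accuracy as soon as R changes.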


Has anyone built a similar application that achieves accurate picking without using VOFFSET,VR in the program, perhaps by using a PR offset or by shifting the user frame? Or do you have any insight into whether there is a mistake in my logic?



PS: I cannot use VOFFSET,VR in the program for a specific reason.

  • Hi

Your PS says that you can't use VOFFSET, but in both of your examples you are using it: indirectly, but still using it.

I don't understand your process. You are using vision, but you cannot use the solution that Fanuc offers you.

What about sending the results to a PLC using a different camera (not iRVision), or even sending the VOFFSET values to the PLC (although that may run against your specific reason)?

    Retired but still helping

  • Those methods should all produce the same result. If accuracy is an issue then it is probably not related to how you structure those commands.


You could try redoing the vision calibration. Verify that the Z height is accurate (see the note below on why this matters).

    Improve robot mastering.
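On the Z height: for a 2D camera the pixel-to-millimeter scale is proportional to the camera-to-part distance (a rough sketch, assuming a simple pinhole model), so a height error dZ scales every measured offset by roughly (Z + dZ) / Z. For example, a 100 mm offset measured with a 10 mm height error at a 500 mm standoff is off by about 100 * 10 / 500 = 2 mm, which is easily enough to spoil a pick.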

  • Hey,


If you're receiving the offset values straight from the vision system in GIs and you're looking for a way to program it without VOFFSET, here's a simple way.


//Check beforehand whether you have to convert unsigned results into signed values (a sketch of one way to do this is at the end of this post)


R[1: X OFFSET] = GI[1]

R[2: Y OFFSET] = GI[2]

R[3: Z OFFSET] = GI[3]

R[4: W OFFSET] = GI[4]

R[5: P OFFSET] = GI[5]

R[6: R OFFSET] = GI[6]


// Check here that each R[x] value is within the tolerance you need; otherwise, abort


    !If all offset values are within tolerance

PR[1, 1] = R[1: X OFFSET]

PR[1, 2] = R[2: Y OFFSET]

PR[1, 3] = R[3: Z OFFSET]

PR[1, 4] = R[4: W OFFSET]

PR[1, 5] = R[5: P OFFSET]

PR[1, 6] = R[6: R OFFSET]


    !Pick position with offset

L P[1] 100mm/sec FINE Offset,PR[1]


Additionally, look into whether you need the $OFFSET_CART system variable to be true, as this affects how offsets are applied and depends on how your specific vision system provides its offset results.
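Regarding the sign-conversion comment at the top, here is one possible sketch in TP, assuming 16-bit group inputs carrying two's-complement values scaled by 100 (both the bit width and the scale factor are assumptions; check what your vision system actually sends):

R[1:X OFFSET]=GI[1]
IF R[1:X OFFSET]<=32767,JMP LBL[10]
!Value is negative: undo the two's complement
R[1:X OFFSET]=R[1:X OFFSET]-65536
LBL[10]
!Convert hundredths of a millimeter to millimeters
R[1:X OFFSET]=R[1:X OFFSET]/100
!Repeat the same pattern for GI[2]..GI[6]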


    Hope that helps.

  • Dear All


I have solved this problem; here are some of my conclusions.


1. My system is a mirroring setup: two robots facing each other over the same workpiece. Only Robot 1 has a camera, and it needs to share the offset with Robot 2. The problem is that we don't have any software option for Robot 1 to share the offset with Robot 2.

For this problem I use GI and GO to share the vision register between the robots.


2. The main problem in my question was accuracy, especially in the R orientation. After some discussion and trials, I found that the robot applies the vision offset to the user frame according to the World orientation of the robot, so I had misunderstood the relationship between the user frame and the World frame. I highly recommend making the user frame of the calibration grid match the robot's World frame.
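To make conclusion 2 concrete (my own sketch of the geometry): if the camera reports an offset t in the calibration-grid user frame but the robot applies it with a World orientation, any rotation mismatch R_mis between the two frames turns the applied offset into

t_applied = R_mis * t

so the X and Y components leak into each other and a found rotation is applied about the wrong axis. Making the calibration grid's user frame orientation match the robot's World frame makes R_mis the identity, so the reported and applied offsets agree.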

  • Hey,


I am not sure whether I understand your questions fully, but I'll still attempt to answer some of them. The vision systems I have worked with are for a different application, so someone else might correct me if I'm wrong.


It's common to use one robot as the measuring robot and then pass the results off to the other robots in the cell that need the offsets as well. For applications like this, you set up a common user frame so that when the measuring robot obtains (and transfers) the vision results, they are applicable to all the robots. To do this you just need to establish repeatable points that can be reached by all the robots and teach a user frame with the 3-point method (or whatever method you prefer). This ensures that all the robots move in the same direction and with the same rotation, accurately.


A common base is also much more precise, because it's easier to teach a base that's common to all robots than to mount the robots so that their World coordinates sit at exactly the same angles.


Once you have a common base, you can just transfer the offsets through GIs and GOs and apply them accordingly, as in the sketch below.
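As a sketch of the sending side on the measuring robot (assuming the same 16-bit, two's-complement, x100 encoding as in the earlier post's example; the register numbers and scaling are illustrative assumptions, not anything FANUC mandates):

PR[20]=VR[1].OFFSET
!Scale to hundredths of a millimeter
R[10:SEND X]=PR[20,1]*100
IF R[10:SEND X]>=0,JMP LBL[20]
!Negative value: encode as two's complement
R[10:SEND X]=R[10:SEND X]+65536
LBL[20]
GO[1]=R[10:SEND X]
!Repeat for PR[20,2]..PR[20,6] on GO[2]..GO[6]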


I'm not sure which vision system you're using, but normally you create a user frame for your application (separate from the calibration frame), and this is the user frame the vision system uses as a reference. This is a pretty vital step.


    Hope that helps.
