Teaching a new UserFrame. How to get UserFrame into a PR?

  • We use Fanuc 3D Multi View vision to locate a car’s door, hood, and deck lid panels as it sits on a skid. We apply the HEM-flange seal on these panels. It has been hard to get a consistently good HEM on the Deck lid. We do a 3-view find on the rear surface of the trunk, but sometimes the HEM up along the back window of the Deck lid was poor quality.


    It was decided the Deck lid lies in two planes, a horizontal plane and a vertical plane. We were only doing our vision on the vertical plane, and perhaps this is why the offsets for the horizontal plane were off and we were getting poor-quality HEMs.


    So - I am trying to teach a User Frame for the horizontal plane of the Deck lid. I have a 10” pointer I screw into the Robot End Effector. I tried the 3-point method, and when I finished I found that when I jogged the Robot in the newly taught USER frame, the Robot moved in the opposite directions from before. If I moved in a +X direction the Robot moved in a –X direction. If I moved in what should be a –Y direction the Robot moved in a +Y direction. The User Frame is set up so that +X points toward the front of the vehicle and +Y points toward the driver’s side of the car.
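The symptom of X and Y both jogging backwards usually means the frame came out rotated 180° about Z, which the 3-point geometry can produce if the X-direction point was taught toward the rear (or the points got mixed up). Here is a minimal sketch of the math behind a 3-point frame (not FANUC's internal code, just the standard right-hand-rule construction) using made-up coordinates where world +X is the front of the car and +Y the driver's side:

```python
import numpy as np

def frame_from_3_points(origin, x_point, y_point):
    """Build a right-handed frame the way the 3-point method does:
    X along origin->x_point, Z = X cross (origin->y_point), Y = Z cross X."""
    x = x_point - origin
    x = x / np.linalg.norm(x)
    z = np.cross(x, y_point - origin)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])  # columns are the frame's X, Y, Z axes

# Illustrative world directions: +X = front of car, +Y = driver side.
O  = np.array([0.0,   0.0, 0.0])
Xf = np.array([100.0, 0.0, 0.0])    # X-direction point toward the FRONT
Yd = np.array([0.0, 100.0, 0.0])    # Y-direction point toward the DRIVER side

good = frame_from_3_points(O, Xf, Yd)     # identity: jogs match world

# If the X point is taught toward the REAR and the Y point ends up on the
# PASSENGER side, the frame comes out rotated 180 deg about Z - so +X and
# +Y jogs both move the robot the "wrong" way, while Z still looks normal:
Xr = np.array([-100.0, 0.0, 0.0])
Yp = np.array([0.0, -100.0, 0.0])
flipped = frame_from_3_points(O, Xr, Yp)
```

If only Y (and Z) had been reversed, that would instead point at the Y-direction point having been taught on the wrong side of the X axis.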


    Also, when I finish teaching the new User Frame correctly (I hope that happens) - how do I put it in a Position Register? That is how we call our User Frames: User Frame 1 = PR[129]. How do I get my newly taught User Frame into the Position Register?
    Thanks in advance.

  • The attachment depicts the 3 points I chose to use for teaching my new UserFrame for the Deck Lid. I do not know if I should have taught the +Y from the Origin or if the way I did it is acceptable. Any comments appreciated.


    Also, I tried typing the new UserFrame values directly into the Position Register - is this the proper way to do it? And I chose to save as Cartesian. Does this sound proper?

  • +Y should be taught out from the origin. Remember to use the Right-Hand Rule. You can store a uframe into a PR by using: PR[x] = UFRAME[x]. It should be in Cartesian representation, and you may need to set the system variable $PR_CARTREP = 1.
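For reference, the Cartesian representation mentioned above stores orientation as W, P, R angles. My understanding (worth verifying against your controller's manual) is that FANUC treats these as fixed-angle rotations about X, Y, Z, i.e. the rotation matrix is Rz(R)·Ry(P)·Rx(W). A small sketch of extracting W, P, R from a rotation matrix under that assumption:

```python
import math

def matrix_to_wpr(r):
    """Extract FANUC-style W, P, R (degrees) from a 3x3 rotation matrix,
    assuming the convention R = Rz(R) @ Ry(P) @ Rx(W) (fixed XYZ angles).
    This convention is an assumption - check your controller's manual."""
    w = math.degrees(math.atan2(r[2][1], r[2][2]))                 # roll about X
    p = math.degrees(math.atan2(-r[2][0],
                                math.hypot(r[2][1], r[2][2])))    # pitch about Y
    rr = math.degrees(math.atan2(r[1][0], r[0][0]))                # yaw about Z
    return w, p, rr

# Identity orientation -> all zeros
print(matrix_to_wpr(((1, 0, 0), (0, 1, 0), (0, 0, 1))))  # (0.0, 0.0, 0.0)

# 90 deg rotation about Z -> R = 90, W = P = 0
print(matrix_to_wpr(((0, -1, 0), (1, 0, 0), (0, 0, 1))))
```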


    Sent from my VS985 4G using Tapatalk


  • As you can see, the car’s Deck lid is not a perfect, flat, horizontal plane. When I taught the 3-point User Frame I tried to touch the center of the back edge of the Deck lid for my Origin (it is physically the highest point on the trunk). For +X I tried to touch the center of the front edge of the Deck lid (slightly lower than the Origin), and for +Y (my lowest taught point) I tried to touch the driver’s side of the rear of the trunk. I did not pick these three points scientifically - they just seemed the easiest for me. If I remember correctly, after I finished teaching my User Frame the robot jogged in the proper directions, but because my +Y point was lower than the Origin and the +X point, my plane was slightly skewed. When I moved in a –Y direction the robot moved slightly in +Z instead of in a perfect horizontal line. My question is: should I just teach a User Frame that is in a perfect horizontal plane? Could I touch the highest point for the Origin, with +X and +Y then taught at the same +Z dimension? We are using this User Frame in conjunction with our Fanuc 3D Multi View software to get offsets so we can HEM-flange seal the edges of the Deck Lid.
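The skew described above falls straight out of the 3-point geometry: a +Y point taught lower than the origin tilts the whole frame, so a -Y jog picks up a +Z component. A short sketch with made-up numbers (a driver-side point 10 mm lower than the origin) shows the size of the effect:

```python
import math
import numpy as np

def frame_from_3_points(origin, x_point, y_point):
    """Right-hand-rule frame from three taught points, as in the 3-point method."""
    x = x_point - origin
    x = x / np.linalg.norm(x)
    z = np.cross(x, y_point - origin)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])

# Illustrative numbers: +Y point taught 10 mm lower than the origin.
O  = np.array([0.0, 0.0, 0.0])
Xp = np.array([100.0, 0.0, 0.0])
Yp = np.array([0.0, 100.0, -10.0])   # driver-side point, 10 mm low

f = frame_from_3_points(O, Xp, Yp)
y_axis = f[:, 1]                     # world direction of a +Y jog in this frame
tilt_deg = math.degrees(math.asin(-y_axis[2]))
# A -Y jog climbs in world +Z by sin(tilt) per unit of travel:
print(round(tilt_deg, 1))            # about 5.7 degrees for 10 mm over 100 mm
```

So yes - touching three points at the same world Z (as suggested in the question) would keep the frame's XY plane horizontal and eliminate that parasitic Z motion.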


    As you can see I do not fully understand what I am doing (but I am trying). :uglyhammer2: I think we use the User Frame as a reference and we teach our points to that job. After that we reference this original job to get our offsets for every job after. I looked at the Fanuc Manual example and it does not tell me a lot. Any comments or suggestions are welcomed. :help:

  • Honestly, there is not much value gained from teaching a user frame this way, because you have no way to re-teach it and get exactly the same results. There are two main purposes of the User Frame: 1) all of your points are relative to a coordinate system, so if the robot or fixture ever moves you can recover the program without re-teaching all of your points, and 2) it helps align your tool to the fixture. In your case there is no way to repeat what you are doing, and you are not using it to align your tool because you are working with a curved surface.


    Here is some food for thought:


    1. You should always have a user frame, and it should be meaningful and repeatable. In your case that may mean teaching it off reference points on the skid, or having a reference car assembly with fixed points bolted to it that you can teach and re-teach the user frame from if anything happens. If the user frame isn't repeatable, then it is more or less worthless.


    2. How did you calibrate the vision system? Not sure how old your robot is, but if you have a newer robot and are not using the robot-generated grid calibration, you are missing out on a huge benefit. The robot-generated grid will calibrate the camera very accurately and also generates a user frame that you can use in your application. Then you don't need the 3-point method - you get your user frame automatically, although it is relative to the vision system, not the car.

  • Thank you HawkME for your clarifying and enlightening answer! You opened my eyes and I have a better understanding of what User Frames are and their purpose.


    In response to, “How did you calibrate the vision system?” - it was calibrated by Fanuc when installed about 6 years ago. We have permanent calibration grids mounted in the cells. I have recently run the calibration check and some of the checks fail. I believe that, over time, robot-mounted cameras and lasers get bumped and slightly moved - running two shifts, different people interacting with robots, stuff happens. When I ask if this should be part of our PM, checked and tuned in on a scheduled basis, I am told we are running 6 different model cars, and there are degrade paths too, and it would be a massive effort (“nightmare” I think was the word) to maintain it all. As long as we are running and spitting out cars (numbers, numbers, numbers) management is happy. If you mention working a weekend on this kind of stuff - you will get laughed out of the office with a size 12 stuck up you know where. :icon_eek:

  • This is unfortunate. Having an accurate camera calibration is critical to overall accuracy. It may take some work to do it the right way, but it would be easier to maintain in the long run.



  • Thanks again - I agree with you and will continue to make the argument of taking the time, getting the proper expert help, and doing and maintaining this stuff correctly. Hopefully if we take these steps it will help our quality and repeatability.
