# Position Transform

• I'm trying to figure out how to convert a Cartesian position from one frame to another.

I have an application that uses a robot-mounted camera to locate a product on a tray. The tray is a calculated user frame, so the UFrame can change from tray to tray. The vision process is set up to use world as the offset frame. The robot is told where to go on the tray by a parent system, and it needs to report the found position back to that parent. My issue is converting the found position from world to the tray UFrame. I tried taking the inverse of the UFrame and matrix-multiplying it against the world found position, but I'm not having any luck.

Does anyone know of a solution for this? I need a nudge in the right direction, but I'm not opposed to just being told how to do it either. Any help is appreciated!

Here's the current logic.

The part is located in UF2 but needs to be picked in UF3.

```
PR[228]=VR[1].OFFSET    (found pos from vis proc)
PR[230]=UF[3]           (UF3 changes based on cart geometry)
PR[230],13=0            (extended rail 7th axis)
CALL INVERSE(230,230)
PR[233]=UF[2]           (static, never changes)
PR[233],13=0
CALL INVERSE(233,233)
PR[234]=PR[230]-PR[233]
CALL MATRIX(234,228,60,160)
```

I'm off in the X axis and not sure why.

PS: The reason for inspecting this in world is an error in the original setup. I have since changed the vision process offset frame to a static frame that doesn't change.
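As a side note on the math in the logic above (a minimal numpy sketch, not FANUC TP; all frame values are hypothetical): subtracting two frames' position registers component-wise, as in `PR[234]=PR[230]-PR[233]`, is not the same as composing their transforms unless both frames happen to share the same orientation.

```python
import numpy as np

def rz(deg):
    """4x4 homogeneous transform: rotation about Z by deg degrees."""
    a = np.radians(deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    return T

def trans(x, y, z):
    """4x4 homogeneous transform: pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T_uf3 = trans(500, 0, 0) @ rz(90)   # hypothetical UF3 in world, rotated 90 deg
T_uf2 = trans(100, 0, 0)            # hypothetical UF2 in world, no rotation

# Proper composition: where UF2's origin lands when expressed in UF3
T_correct = np.linalg.inv(T_uf3) @ T_uf2
print(np.round(T_correct[:3, 3], 3))        # [0, 400, 0]

# Component-wise "difference" of the two frames' origins
print(T_uf2[:3, 3] - T_uf3[:3, 3])          # [-400, 0, 0] -- not the same
```

Because UF3 is rotated relative to world, the correct result ends up on a different axis entirely, which is consistent with seeing an unexplained error in X.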

• You say the vision offset is in world, then later you state the part is located in UF[2]. Which is it?

In your vision process it should be set to Found Position mode in world frame, not a fixed-frame offset. Then make sure to change your PR[228] to be FOUND_POS, not OFFSET.

If you are using found position relative to world frame, and you need to transform that to UF[3] I believe your code would look like this:

Code
```
PR[228]=VR[1].FOUND_POS[1]
PR[230]=UF[3]
PR[230],13=0
CALL INVERSE(230,230)
CALL MATRIX(230,228,60,160)
```
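For reference, the INVERSE-then-MATRIX pair above amounts to `p_in_UF3 = inv(T_UF3) * p_found_world`. A minimal numpy sketch of that math (not FANUC TP; the frame numbers are made up, and I'm assuming FANUC's fixed-axis X-Y-Z rotation order for W, P, R):

```python
import numpy as np

def xyzwpr_to_matrix(x, y, z, w, p, r):
    """Build a 4x4 homogeneous transform from X,Y,Z,W,P,R (degrees),
    assuming fixed-axis rotations about X (W), Y (P), Z (R) in that order."""
    w, p, r = np.radians([w, p, r])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(w), -np.sin(w)],
                   [0, np.sin(w),  np.cos(w)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(r), -np.sin(r), 0],
                   [np.sin(r),  np.cos(r), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx    # fixed-axis XYZ = Rz * Ry * Rx
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical UF3 expressed in world coordinates
T_world_uf3 = xyzwpr_to_matrix(500, 200, 0, 0, 0, 90)

# Hypothetical found position from vision, in world coordinates
T_world_found = xyzwpr_to_matrix(650, 250, 10, 0, 0, 90)

# Found part expressed in UF3: inv(T_world_uf3) @ T_world_found
T_uf3_found = np.linalg.inv(T_world_uf3) @ T_world_found
print(np.round(T_uf3_found[:3, 3], 3))   # part position in UF3 coordinates
```

With these example numbers the part lands at (50, -150, 10) in UF3, even though the world coordinates look nothing like that, which is the whole point of the inverse.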
• \$MNUserFrame

I think you could add the current user frame, then subtract the desired user frame. I haven't ever done this, but I think that would work. Let me know if it does....

• You say the vision offset is in world, then later you state the part is located in UF[2]. Which is it?

Sorry, it's a work in progress. It was world, but I changed it to UF2. World was giving me fits; the robot is upside down on a rail and rotated 90° from the UFs.

I had OFFSET only due to being lazy; it is a 2D found-pos process, so it still spits out the found position as an offset.

\$MNUserFrame

I think you could add the current user frame, then subtract the desired user frame. I haven't ever done this, but I think that would work. Let me know if it does....

That's what I'm doing: taking the difference between the two frames and then multiplying against the found position.

• Just chain-multiply the inverse of UF3 by UF2, and finally by the vision PR offset.

The result is a PR offset in UF3 coordinates.
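That chain written out with homogeneous transforms, as a minimal numpy sketch (hypothetical pure-translation frames to keep it short; real frames would carry rotations too):

```python
import numpy as np

def trans(x, y, z):
    """Pure-translation 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T_uf3 = trans(500, 0, 0)    # hypothetical UF3 in world
T_uf2 = trans(100, 50, 0)   # hypothetical UF2 in world
T_off = trans(10, 5, 0)     # hypothetical vision offset in UF2

# Chain: inverse of UF3, times UF2, times the vision offset
result = np.linalg.inv(T_uf3) @ T_uf2 @ T_off
print(result[:3, 3])        # part position expressed in UF3
```

With these numbers the part sits at (110, 55, 0) in world but (-390, 55, 0) in UF3.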

• I can confirm that HawkME's code gives the correct position coordinates. Make sure the configs match between the solution and the rest of the positions, though. When I did it, MATRIX gave the config NDB; I don't know if there are system variables that can change this or not.

• So just to update: it turns out the math was/is correct. The X-axis error was due to inconsistencies in the locating. I was using GEDIT to find circles, but the vision process cannot find the true center automatically. The origin was drifting and not repeating from snap to snap. This was confirmed with FANUC as well. For whatever reason, GEDIT processes differently than a "taught" image with actual contrast lines.

• Another thing: make sure you set the correct Z height of the part. I have forgotten this setting plenty of times, and it always causes some pretty bad error in the positions, which gets worse the farther away the parts are from the original taught position. It is especially annoying because you may have to reteach the part if you moved it from the original taught point (pro tip: don't do that).