KUKA base transformation not accurate enough

  • Hi there!


    I work with a KUKA KR 16 R2010 and I am having an issue with setting a base.


    My task is to devise an algorithm that takes two sets of input points – one with the points expressed in the coordinate system of the robot, and one with the same points expressed in the global coordinate system – and outputs a transformation between the two coordinate systems. This transformation would be defined by a set of XYZ-ABC values and would be used to ‘teach’ the robot a base that enables it to move to points expressed in the global coordinate system. I managed to do that, but the problem is that the distance between the points the robot is instructed to go to and the points it actually reaches (the average Euclidean distance, which I will refer to as the ‘error’) is too large. The steps of my process are the following:

    • I set the global coordinate system in a total station
    • I move the robot to some arbitrary points and store their coordinates in the robot’s coordinate system
    • I measure the position of each point with the total station and store its coordinates in the global coordinate system
    • using Ordinary Least Squares regression, I obtain a translation matrix and a rotation matrix
    • I compute the angles of rotation A, B, C with the algorithm shown after this list
    • After I obtain the angles A, B, C, I use them to compute a quaternion by following these steps (first image from attachment, or https://doc.rc-cube.com/v21.07/en/pose_format_kuka.html )
    • I use the resulting quaternion to compute a new rotation matrix using this formula (second image from attachment, or https://doc.rc-visard.com/v21.…ernion-to-rotation-matrix)
    • To test the transformation, I apply it to the points expressed in the global coordinate system and compare the result (now expressed in the robot coordinate system) with the points that were not transformed (which were originally expressed in the robot’s coordinate system, stored when the robot was moved, not measured with the total station).


    sb = R(3, 1);
    cb = sqrt(1 - sb^2);
    ca = R(1, 1);
    sa = -R(2, 1);
    cc = R(3, 3);
    sc = -R(3, 2);

    A = atan2(sa, ca) * -180 / pi;
    B = atan2(sb, cb) * -180 / pi;
    C = atan2(sc, cc) * -180 / pi;

    where R is the rotation matrix



    If the next steps show that the transformation is accurate enough, these A, B and C would be passed to the robot (manually, via the HMI) together with the translation found at step 4 to teach it the base that would enable it to go to points expressed in the global coordinate system.
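    For reference, a minimal MATLAB sketch of this kind of fit. It uses the SVD-based Kabsch / orthogonal Procrustes solution (rather than plain OLS), so the returned rotation is guaranteed to be orthonormal; the function name and the N-by-3 (one point per row) layout are assumptions for the example, not my actual script.

    % fit_rigid_transform.m - sketch only, not the exact script described above.
    % Finds R (3x3, orthonormal) and t (3x1) such that p_robot = R * p_global + t
    % in the least-squares sense, using the Kabsch / orthogonal Procrustes solution.
    % P_global, P_robot are N-by-3, one point per row, same order, in mm.
    function [R, t] = fit_rigid_transform(P_global, P_robot)
        cg = mean(P_global, 1);                    % centroid of each point set
        cr = mean(P_robot, 1);
        H  = (P_global - cg)' * (P_robot - cr);    % 3x3 cross-covariance
        [U, ~, V] = svd(H);
        D  = diag([1 1 sign(det(V * U'))]);        % guard against a reflection
        R  = V * D * U';
        t  = cr' - R * cg';
    end

    With R and t in hand, the A, B, C extraction shown above (or rotm2eul(R, 'ZYX'), if the Robotics System Toolbox is available; note it returns radians) gives the angles to enter on the HMI together with t.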





    The error is calculated as the average Euclidean distance between corresponding points (third image from attachment). If the transform is good, they should be very close.



    Right now, the average error that I get is 4.85 mm, and I would like to have an error of 2 mm or less.
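    In MATLAB terms, the check is simply (variable names are mine; R and t are the fitted rotation and translation, vecnorm needs R2017b or newer):

    % average Euclidean distance between the mapped global points and the
    % points recorded by the robot (both N-by-3, in mm)
    P_mapped = (R * P_global' + t)';               % global -> robot frame
    err      = mean(vecnorm(P_robot - P_mapped, 2, 2))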



    Notes:

    • From what I’ve seen, what I call ‘robot coordinate system’ is usually referred to as ‘world coordinate system’
    • I mastered the robot, which made the error decrease from 5.67 mm to 4.85 mm
    • I use 6 points; measuring more did not help, and we are looking for a solution that would work with only 4 points
    • The total station measures in a left-handed coordinate system; I compensate for this by multiplying the y-coordinates by -1 before computing the transformation
    • I assume that once the robot is given a transformation XYZ-ABC, it computes the quaternion and the rotation matrix the same way I do
    • One thing I forgot to mention is that I don't work with KRL. I use a MATLAB script which I will later use to create a TcCOM object, and that's where the transformation will take place.


    If you have any suggestions or know a resource that I could use to solve my issue, I would greatly appreciate it.



    Thank you!


  • Right now, the average error that I get is 4.85 mm, and I would like to have an error of 2 mm or less.


    ....


    I mastered the robot, which made the error decrease from 5.67 mm to 4.85 mm


    first things first - the prerequisite is a system that is measured correctly.

    you have to:

    1. have the robot mastered correctly

    2. have the pointer tool measured correctly. if you do not have a tool that is precise enough (any kind of gripper, spot welding gun, etc. does not count), then add one, even if it is only temporary. you cannot get precision using a tool that looks like a boxing glove...

    3. verify that the TCP for the pointer tool is correct - jog the robot in space at max speed in T1, using only rotations A, B, C; if all is good, the tip of the tool must stay on the same point while the robot moves around it. if you cannot get this to work, your system needs to be measured again.


    then you can use that pointer tool to guide the robot to three other points in space that define the base. one of those three points must be the origin of the new base. (if that is not the case, you have no choice but to use indirect base measurement, and that requires 4 points and more work.)


    next you just need to use one of the base calculation examples already shared on this forum, or check the math in KUEWEG.SRC.


    other users and i have posted code that has been used in many applications, and the error is negligible. even there, the biggest factor is not the math but the human error when positioning the robot to measure the three points, and that is on the order of 0.1 mm or so.


    here it is again

  • With the angles A, B and C you can also calculate the rotation matrix R.

    You should get exactly the same values as when using quaternions to calculate the rotation matrix.


    If the values are different, then you did something wrong.
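    A quick MATLAB check of that statement could look like this (only a sketch; it assumes the KUKA convention R = Rz(A)*Ry(B)*Rx(C) and the standard ZYX angle-to-quaternion formulas, which is how I read the rc-cube page linked above):

    A = 30; B = -15; C = 70;                       % some test angles, in degrees
    a = deg2rad(A); b = deg2rad(B); c = deg2rad(C);

    % route 1: Euler angles directly (Z-Y-X order)
    Rz = [cos(a) -sin(a) 0; sin(a) cos(a) 0; 0 0 1];
    Ry = [cos(b) 0 sin(b); 0 1 0; -sin(b) 0 cos(b)];
    Rx = [1 0 0; 0 cos(c) -sin(c); 0 sin(c) cos(c)];
    R_euler = Rz * Ry * Rx;

    % route 2: angles -> quaternion -> rotation matrix
    w = cos(c/2)*cos(b/2)*cos(a/2) + sin(c/2)*sin(b/2)*sin(a/2);
    x = sin(c/2)*cos(b/2)*cos(a/2) - cos(c/2)*sin(b/2)*sin(a/2);
    y = cos(c/2)*sin(b/2)*cos(a/2) + sin(c/2)*cos(b/2)*sin(a/2);
    z = cos(c/2)*cos(b/2)*sin(a/2) - sin(c/2)*sin(b/2)*cos(a/2);
    R_quat = [1-2*(y^2+z^2)  2*(x*y-w*z)    2*(x*z+w*y); ...
              2*(x*y+w*z)    1-2*(x^2+z^2)  2*(y*z-w*x); ...
              2*(x*z-w*y)    2*(y*z+w*x)    1-2*(x^2+y^2)];

    max(abs(R_euler(:) - R_quat(:)))               % should be around 1e-16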


    The zip file posted by panic mode also contains the proper way to calculate the angles (MAT_TO_RPY).

    In this case you also have to use the ARCTAN2 from the zip file.


    Did you get the minus signs for free? You are using a lot of them.

    Are they all set properly?

  • To check your results you may also use the built-in function:

    Calibration - Base - Numeric input

    The idea behind it:

    • You have four test points with given values coming from a CAD system
    • Move the robot to each test point
    • Get the robot values
    • Enter the CAD values
    • At the end, KUKA will calculate the base values and the average error


    (screenshot attached: numeric-input base calculation result)


    As you can see there, the rotation matrix Ez(A), Ey(B), Ex(C) gives the same result.

    Taking these values, p1, p2, p3 and p4 should match.

  • The error is calculated as the average Euclidean distance between corresponding points (third image from attachment). If the transform is good, they should be very close.

    What's the residual when you perform the transformation?

    Right now, the average error that I get is 4.85 mm, and I would like to have an error of 2 mm or less.

    4.85mm... over what distance? In what directions? Relative to what? Are you using anti-backlash techniques? How far is the robot being stretched? How rigid is your EOAT? How good is your TCP? How much is the TCP orientation changing across your test sample? How large is your test sample -- how many points across what spatial volume?


    Every one of these can contribute to accuracy errors, and isolating and testing for each one is required to get good performance. Robots are not accurate devices -- they are built for precision/repeatability, but sacrifice accuracy to achieve their expansive motion envelopes and low price point compared to CNC machines of comparable working volume.

    Download: Move, Measure, Correct eBook
    In this eBook, David McMillan shares how to get the most accuracy out of your robot, gleaned from his decades of experience.
    info.appliedmfg.com

  • With the angles A, B and C you can also calculate the rotation matrix R.

    You should get exactly the same values as when using quaternions to calculate the rotation matrix.

    You mean calculate the Euler angles? Yes, I checked, and that returns the same values. One thing I forgot to mention is that I don't work with KRL, so I can't use the zip files from panic mode. I use a MATLAB script which I will later use to create a TcCOM object, and that's where the transformation will take place.

  • the provided code is a shared, working example. how can it not be useful, especially for someone aiming to write code? also, it is plain text and easy to translate to any other language. and it is commented too...

    in other words - a perfect job for a student...


    vectors are simply declared as FRAME but only the X, Y, Z values are used (A, B, C are ignored)


    VEK_LAENGE = vector length (magnitude), in MATLAB that is norm(A)

    KREUZ_PROD = cross product of vectors, in MATLAB that is cross(A,B)

    SK_PROD = dot product of vectors, in MATLAB that is dot(A,B)

    NORM_VEK = normalize vector, in MATLAB that is A/norm(A)

    ATAN_EPS = a constant of sufficiently small value; the value here is chosen for 32-bit computation, in MATLAB you could also make it smaller
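    putting those together, a rough MATLAB sketch of the same 3-point construction could look like this (my translation only, not a copy of KUEWEG.SRC; names are made up. P1 = origin of the new base, P2 = point on its +X axis, P3 = point in its XY plane on the +Y side, all measured in WORLD):

    % sketch of the 3-point base calculation in MATLAB (not KUEWEG.SRC itself)
    % P1, P2, P3 are 1x3 vectors in mm, measured in WORLD
    function base = calc_base_sketch(P1, P2, P3)
        Xax = (P2 - P1) / norm(P2 - P1);            % unit X axis
        Zax = cross(Xax, P3 - P1);                  % Z is perpendicular to the XY plane
        Zax = Zax / norm(Zax);
        Yax = cross(Zax, Xax);                      % completes a right-handed frame
        R   = [Xax' Yax' Zax'];                     % columns = base axes in WORLD

        base.X = P1(1);  base.Y = P1(2);  base.Z = P1(3);
        base.A = atan2d( R(2,1), R(1,1));                      % rotation about Z
        base.B = atan2d(-R(3,1), sqrt(R(1,1)^2 + R(2,1)^2));   % rotation about Y
        base.C = atan2d( R(3,2), R(3,3));                      % rotation about X
    end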


    How to normalize vector to unit length - www.mathworks.com

    Cross product - MATLAB cross

    Dot product - MATLAB dot

    Array Indexing - MATLAB & Simulink


  • Yes, you are right, I should be able to use it. I did try to understand it, but I got stuck at the point where a function is called but its output is neither stored in a variable nor does any global variable seem to be modified (the line NORM_VEK (P_X[], 3)). A colleague explained what that and other parts of the code do, so I think I got it.

    This being said, I still have a question.

    I compute the transformation between two coordinate systems using points expressed in both of them - the world points (expressed in the robot coordinate system) and the global points (the same points in space, but expressed in the global coordinate system, measured with the total station).

    In your algorithm, you only use one set of points - the coordinates of the points in only one coordinate system. Is it correct to assume that the three input points for your algorithm are considered to be exactly at the origin, on the X axis, and on the Y axis of the new coordinate system?

    If that is true, it will be difficult for me to use your algorithm, since I cannot measure the origin of the world coordinate system with the total station because that point is inside the robot. I also cannot measure the origin of the global coordinate system with the robot because the robot cannot reach that point.

    Maybe my assumptions are wrong, but I hope I have expressed them clearly.

    Thank you

  • KRL uses "IN" and "OUT" transfer types but that is just the usual ByVal and ByRef


    yes, the code mimics what KSS does when measuring a base:

    1st point is origin

    2nd point is on the X axis

    3rd point is on the XY plane to define positive Y axis


    I am not sure what you mean by measuring WORLD... you do not measure WORLD, it is predefined and there is no need for the robot tool to reach the WORLD origin. this is why a base is practically always measured using the 3-point method.


    that is the favored method - all one needs to do is tell the robot which tool and base are being used (the base in this case is $nullframe or WORLD), then jog the robot to three points in space that line up with the NEW base that is being measured, then pass the values of those three points to calc_base() to get the FRAME result.


    since the result is obtained by measuring points in WORLD, the computed FRAME is a child frame of WORLD.


    but... it is also possible to measure a frame using the 4-point method (in KSS this is called the "indirect" method). this could be used when one or more of the 3 points are not accessible. in that case one would enter the known values of the 4 points (one data set), then jog the robot with a measured pointer tool to each of the points to get the second data set, and then calculate... you seem to favor this method, but your results are not satisfactory, so you asked for something that is already proven. did i miss anything?


  • Thank you for your reply.


    By 'measure WORLD' I mean measuring using the total station, which is necessary for the algorithm I described.

    I cannot use the transformation done in KSS because our use case requires this step to be automated, which is why I am using MATLAB. I will still try the KSS 3-point calibration to see if the error comes from something other than my algorithm.

    One question about your algorithm: how do you deal with the translation? The only outputs are the A, B, C angles, but when I set a base in the KUKA HMI, for example, I also have to specify a translation (X, Y, Z).


    Thank you!

  • did you even read the code?


    line 28 puts ABC values into NEW_BASE

    lines 30,31,32 put XYZ values into NEW_BASE

    then NEW_BASE is returned


    line 19 is redundant since cross product is done on two already normalized vectors. but... it does not hurt...



  • Thank you, I don't know how I missed that :smiling_face:


    I am confident I implemented the algorithm exactly as it is in your file, but I still get an average error of 9.584 mm. Moreover, I used the 3-point calibration method available in the KUKA HMI, and with that I get an error of 9.481 mm. With the indirect method (4 points), I get an average error of 16.071 mm.


    Do you have any ideas/suggestions?


    Thank you for your help so far.

  • What's the residual when you perform the transformation?

    4.85mm... over what distance? In what directions? Relative to what? Are you using anti-backlash techniques? How far is the robot being stretched? How rigid is your EOAT? How good is your TCP? How much is the TCP orientation changing across your test sample? How large is your test sample -- how many points across what spatial volume?


    Every one of these can contribute to accuracy errors, and isolating and testing for each one is required to get good performance. Robots are not accurate devices -- they are built for precision/repeatability, but sacrifice accuracy to achieve their expansive motion envelopes and low price point compared to CNC machines of comparable working volume.

    https://info.appliedmfg.com/mmc-ebook

    I'm not sure what you mean by 'residual', given that I am not comparing my measurements with predicted values or a mean, but with the actual true values. To answer your questions:

    - my error is just the average distance in space (so sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)), so not in a specific direction

    - the distance is relative to where the points SHOULD be. I work with the assumption that the robot has a negligible error when it stores the coordinates of its tool. I compare these coordinates (x, y, z) to the coordinates measured with the total station (I take these by pointing the total station at the tool tip - a reflector, in my case a Leica GMP101)

    - I don't know what anti-backlash techniques are

    - the 6 points measured are never farther from the robot's base than 1.7 m on any axis

    - I calibrated the TCP recently, but I don't remember what the error for that was

    - it is changing a lot, and I prefer it that way because I want to have an error of less than 2 mm for any orientation (or sequence of changes in orientation)

    - I measure 6 points; I will add the two sets here, in case anyone would like to try finding an XYZ-ABC base that maps one onto the other:


    XYZ_GLOBAL = [720.9, -1608.7, 1140.1; 316.4, -1599.9, 1649.1; 444.3, -1284.2, 509.8; 626.6, -1041.0, 956.1; 632.7, -538.3, 1349.3; 220.2, -519.3, 1241.7];


    XYZABC_ROBOT = [542.573792, -1015.0, 1139.8335; 944.124817, -1015.0, 1652.27429; 823.826416, -689.0, 512.620239; 645.906067, -447.0, 959.713867; 645.90625, 53.4078636, 1356.29163; 1058.31787, 67.6461945, 1251.17578];
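    (For anyone trying this in MATLAB: the two matrices above can be fed straight into a rigid fit such as the fit_rigid_transform sketch earlier in the thread; note that, depending on how the global data was exported, the left-handed Y axis may still need to be mirrored first.)

    % sketch of checking the fit with the data above (fit_rigid_transform is
    % the hypothetical helper sketched earlier in the thread)
    [R, t]   = fit_rigid_transform(XYZ_GLOBAL, XYZABC_ROBOT);
    P_mapped = (R * XYZ_GLOBAL' + t)';
    residual = mean(vecnorm(XYZABC_ROBOT - P_mapped, 2, 2))   % average error in mm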

  • your errors are excessive so you need to


    a) confirm that the correct MADA is used - post a picture of the robot nameplate and a screenshot of the menu Help > Info > Robot


    b) confirm that there is no backlash or play in the robot arm, robot mounting, or tooling. the robot has to be rock solid and anchored to a solid foundation - not sitting on the wobbly pallet it was delivered on or something rickety like a picnic table. it has to be solid, with no play. the same goes for the tooling: it needs to be mounted firmly, with a dowel pin to prevent slip, and be rock solid. what type of pointer is used? is it solid?


    c) go back to message #2 and state exactly how each step was verified. for example, how exactly was the robot mastered... using an EMD? is your EMD ok? what do you get for the axis error when doing a check mastering? please don't tell me you aligned the marks and used a dial, or reference mastering. what type of pointer is used? are you SURE the correct tool is selected? did you do the TCP check as suggested?


  • how accurate are your XYZ_GLOBAL values?


    XYZ_GLOBAL:

    720.9, -1608.7, 1140.1

    316.4, -1599.9, 1649.1


    XYZABC_ROBOT:

    542.573792, -1015.0, 1139.8335

    944.124817, -1015.0, 1652.27429


    XYZ_GLOBAL: Y-values differ by approx. 9 mm

    XYZABC_ROBOT: Y-values differ by 0 mm

    i thought the same, and since i was busy, only now did a simple difference check. it looks like the two data sets are within about 2 mm, which means they match up pretty nicely, although they are obviously in two different frames, hence the sloped Z values.


    But...


    This still does not explain the huge error when using the 3-point method on the smartPAD, which is why i question whether the robot is set up (or used) correctly.




  • I'm not sure what you mean by 'residual', given that I am not comparing my measurements with predicted values or a mean, but with the actual true values. To answer your questions:

    It can vary, but the most basic is checking the 3D distance between each pair of points in the reference set against the same distances in the "found" set.
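    In MATLAB terms, a rough version of that check against the data posted above (a sketch; XYZ_GLOBAL and XYZABC_ROBOT are the two 6-point sets) would be:

    % compare every inter-point distance in one set with the corresponding
    % distance in the other; a rigid transform cannot change these, so any
    % mismatch is measurement or robot error rather than math error
    N = size(XYZ_GLOBAL, 1);
    mismatch = zeros(N);
    for i = 1:N
        for j = 1:N
            d_g = norm(XYZ_GLOBAL(i,:)   - XYZ_GLOBAL(j,:));
            d_r = norm(XYZABC_ROBOT(i,:) - XYZABC_ROBOT(j,:));
            mismatch(i,j) = abs(d_g - d_r);
        end
    end
    max(mismatch(:))    % worst pairwise mismatch, in mm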

    I work with the assumption that the robot has a negligible error when it stores the coordinates of its tool.

    Bad assumption. Especially when making large changes to TCP orientation. TCP orientation changes kill accuracy much faster than moving the robot through space.

    - I don't know what anti-backlash techniques are

    That's part of what that hyperlink was for. Short version: move to each checking point with the same TCP orientation, using the same approach direction, from at least 10 mm out. "Hunting" back and forth over a target point is the best way to inject all of the robot's backlash issues into your measurement.

    - it is changing a lot, and I prefer it that way because I want to have an error of less than 2 mm for any orientation (or sequence of changes in orientation)

    See above.
