Calculate TCP using centre of sphere from 4 points

  • Hello,


    I am trying to calculate a TCP using 4 points. I am using Cartesian data. These four points are all defined using the same reference point.
    I know how to calculate the centre of a sphere. I don't understand how to use this information to calculate a TCP.


    Currently I align the first point in a Rx 180 Ry 0 Rz 0 orientation. This way I can just take the first point minus the centre of the sphere, which gives me the TCP. I would like the first position to be in a random orientation, just like the other 3 points.


    How can I use the orientations of the 4 points to calculate a TCP using the centre of the sphere?

  • Thank you for your answer, SkyeFire. However, I already know how to calculate the center of a sphere. I am looking to calculate the TCP using the center of the sphere and 4 points on its surface. I don't understand how I can calculate the X, Y, Z values of the TCP. I already know how to get the X, Y, Z values of the center of the sphere.

  • Some years ago this question was posted. I answered and provided a code snippet from Staubli's VAL3 language to do this precise calculation. I suggested that the algorithm could be used in the user's language-of-choice.


    The most recent upgrade to Robot-Forum seems to have deleted any posts from that period.
    I no longer have that code snippet.
    But I found that code snippet on the Staubli Robotics Customer Support Portal (registration is required). Perhaps you can look there.

    In the VAL3 code, they are using an inverse sphere. Below is the code I mentioned. If I understand correctly, then =! means that trSphere is the opposite of itself. trSphere is a Cartesian value (x, y, z, Rx, Ry, Rz) of a point on the sphere. What would be the opposite of a Cartesian value like this?
    They also mention an inverse sphere. There are no calculations in the file for an inverse sphere. Is there a function in the Staubli controller that does this, or does this happen somewhere else?


    //Inverse sphere * center
    for l_nI=0 to 3
      trTempTool[l_nI]=!trSphere[l_nI]*trCenter
    endFor

  • Sorry, I am going to be lazy and claim "I don't know." When I first saw that code some years ago, it made my head hurt. It appears to be an algorithm for solving four equations with four unknowns. Someone over at Staubli is a fanatical programmer. I'm too old for that stuff. :icon_wink:


    The prefix tr indicates that the variable is a coordinate transform, or the TRSF data type of the VAL3 language.
    The ! operator is the inverse transform operator.


    So that code statement is taking the transform of the center location (of the sphere, I assume?) and making it relative ("*" operator) to the inverse of the transform of the Sphere location, then assigning the result to trTempTool[index]. Hope this helps.


    You should also be able to download a VAL3 Language Reference Manual to help with the code.
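

    To make that concrete, here is a rough numpy sketch (not VAL3) of what that one statement does if you treat each TRSF as a 4x4 homogeneous transform. The names trSphere, trCenter and trTempTool come from the Staubli code; the helper names hom/inv and all the pose numbers are invented for illustration.


    import numpy as np

    # A TRSF behaves like a 4x4 homogeneous transform: rotation R plus position p.
    def hom(R, p):
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = p
        return T

    def inv(T):
        # the '!' operator: inverse of a homogeneous transform
        R, p = T[:3, :3], T[:3, 3]
        return hom(R.T, -R.T @ p)

    # invented example poses; the rotation is given directly as a matrix here
    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    trSphere = hom(R, np.array([812.4, -105.2, 633.0]))   # a flange pose on the sphere
    trCenter = hom(R, np.array([800.0, -100.0, 350.0]))   # the sphere centre

    # trTempTool = !trSphere * trCenter  (the '*' is ordinary matrix multiplication)
    trTempTool = inv(trSphere) @ trCenter
    print(trTempTool[:3, 3])   # the centre expressed in the flange's own frame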

  • From what I've been able to piece together, the key appears to be finding the intersection of spheres. Technically, this could be done with 3 spheres, but that would make the algorithm fail for even slight amounts of measurement error. So the general technique used by most robot brands uses 4 points.


    Even without a TCP set, the robot always knows where the "zero tool" (usually the center of the wrist tool-mounting flange, sometimes the intersection of A4/A5/A6) is. So if you mount an unknown physical tool to the flange, and touch the tip of that tool to a fixed point in space from multiple orientations, Tool Zero will have been located at four points in space that can be used to define a sphere. The center of that sphere gives the location of your fixed point, relative to the robot base. After that, it's a simple matter of picking one of the four Tool Zero locations, and calculating the 6DOF transform from that location to the fixed point.
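

    For anyone who wants to try it, fitting that sphere boils down to a small linear system: subtracting the sphere equation of the first point from the other three cancels the radius and leaves three linear equations in the centre coordinates. A minimal numpy sketch (the point values are invented and sphere_center is just my own helper name):


    import numpy as np

    def sphere_center(points):
        # subtract the first point's sphere equation from the other three:
        # 2*(p_i - p_0) . c = |p_i|^2 - |p_0|^2, for i = 1..3
        p = np.asarray(points, dtype=float)
        A = 2.0 * (p[1:] - p[0])                        # 3x3 matrix
        b = np.sum(p[1:]**2, axis=1) - np.sum(p[0]**2)  # 3-vector
        return np.linalg.solve(A, b)

    # four Tool Zero (flange) positions touching the same fixed tip -- invented numbers
    pts = [[812.4, -105.2, 633.0],
           [795.1,  -80.7, 640.3],
           [820.9, -120.4, 618.8],
           [780.0,  -95.0, 622.1]]
    print(sphere_center(pts))   # location of the fixed point in robot-base coordinates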


    Kawasakis, long ago, used to have a "shortcut" method that almost no one ever used, that skipped over the 4-point sequence entirely. Basically, it required touching the center of the A6 flange to a fixed point in space (the demo used a pointer set up on a tripod), then mounting the tool to the robot and touching the tool tip to that same fixed point. Of course, since no one wanted to keep taking heavy tooling on/off the robot to do this, this method was rarely used. But it did exactly the same thing as the last part of the method I describe above, just skipping the entire "4-point-sphere" step.


    The basic math appears to be fairly simple. The real "trick" here is dealing with the small amounts of error in each measurement, and averaging them out into an acceptable figure of merit.

  • Quote

    Kawasakis, long ago, used to have a "shortcut" method that almost no one ever used, that skipped over the 4-point sequence entirely.


    Well, those of us with Staubli experience used that method exclusively. Stubby arms were never big enough to have to fight with heavy EOATs. But the method required an understanding of coordinate transform math. Which, at last count, only about 17 people on the planet actually understood. :icon_wink:


    The 4-point method is fine for many things. But it is subject to positioning errors and calculation rounding errors.


    I found a procedure sheet that I made years ago documenting the method using the V+ language, which is very similar to the old VAL language and the current Kawasaki AS language. With good, sharp, young man's eyes, an accurately made pointer tool, and practice, one could produce a very accurate (a couple of microns) tool transform in a short time.


    TEACHING A TOOL TRANSFORM IN V+


    1. Attach a workcell reference pointer to the workcell in a convenient location.
    a. Not necessary to be "square & orthogonal" to workcell fixtures
    b. Attach in a convenient location
    c. Provide for robot motion and access
    d. Think ahead... place it in the cell so as to achieve the tool orientation required for the arbitrary robot tool to be taught.
    2. Attach a robot reference pointer of known dimension to the robot (reftool).
    3. Invoke the robot reference pointer reftool:


    TOOL reftool


    4. Teach the location of the cell reference pointer using TOOL reftool.
    a. Maintain the orientation necessary to define the X-Y-Z directions correctly on the arbitrary tool.
    b. Align the robot reference pointer to the cell reference pointer, workcell fixtures, or any other convenient datum surface.


    HERE cellref


    5. Remove the robot reference pointer and replace with arbitrary workcell tool.
    6. Invoke the null tool:


    TOOL NULL


    7. Position the arbitrary workcell tool to the cell reference pointer.
    a. Align the arbitrary workcell tool to maintain desired X-Y-Z orientation.
    b. This step may take extensive experimentation to perform with high accuracy.
    8. Teach the tool transform of the arbitrary workcell tool:


    DO SET arb_tool = INVERSE(HERE):cellref


    9. Invoke the new arbitrary workcell tool transform:


    TOOL arb_tool


    10. Test the new arbitrary workcell tool by performing rotations about Joint4 and Joint5 in Tool Mode motion.


    END OF PROCEDURE

  • Yep, that looks about right. I'm guessing that the ":" is a cross-multiplication of two 6DOF matrices (X,Y,Z,Rx,Ry,Rz), just like in KUKA KRL.


    So, put your unknown TCP physically at a known location in space, activate the Null Tool (mounting flange TCP), then perform a matrix inversion on the "current position" of the Null Tool, then cross-multiply the inversion result by the reference position. That should result in the 6DOF delta between the Null Tool and the unknown TCP, which also just happens :zwink: to be the transform from the mounting flange to the unknown TCP (which is no longer unknown). Stick that transform into your active tool register, and voila!
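

    In numpy terms (not V+ or KRL), step 8 of the procedure above works out to roughly the following, assuming the ":" really is transform composition as guessed. The names cellref, here and arb_tool mirror the V+ procedure; the hom/inv helpers and all numbers are made up for illustration.


    import numpy as np

    def hom(R, p):
        # 4x4 homogeneous transform from a rotation matrix and a position
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = p
        return T

    def inv(T):
        # INVERSE() of a homogeneous transform
        R, p = T[:3, :3], T[:3, 3]
        return hom(R.T, -R.T @ p)

    # 'cellref': pose of the workcell reference pointer, recorded with TOOL reftool
    cellref = hom(np.eye(3), np.array([800.0, -100.0, 350.0]))

    # 'HERE' with TOOL NULL active: flange pose while the unknown tool tip
    # touches that same reference pointer
    R_here = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])
    here = hom(R_here, np.array([812.4, -105.2, 633.0]))

    # arb_tool = INVERSE(HERE):cellref  -- the flange-to-tool transform
    arb_tool = inv(here) @ cellref
    print(arb_tool[:3, 3])   # X, Y, Z of the new TCP, in flange coordinates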


    For hyper-precise TCPs, I had an iterative process (and support programs) using a laser tracker and a target mounted at the TCP to tweak the TCP down to minimal error. What I generally found was that it was possible to get very good XYZ, but tuning the orientations tended to cross-talk each other -- that is, you could tune in Rx near perfectly, but after you tuned in Ry, you'd find that Rx had drifted again. In the end, it turned into an exercise in trying to average out the error as evenly as possible.


    The bigger problem was that even the hyper-precise TCP would start drifting as the robot moved, especially in orientation changes. So we ended up needing to create an "average 6DOF" position relative to the entire working volume we needed the high accuracy in, and had to do our TCP tuning in that pose. It worked, but if the requirements had been any tighter, we probably would have needed to create multiple TCPs, tuned in different poses, and switch between them based on what general "pose zone" we were working in.

  • I figured it out. What was mentioned above is correct. The tr means that the value is a transform and the ! means that the transform is inverted. The transform that is used here is a homogeneous transformation matrix. I attached an example.


    After I figured out how the transform worked, I needed to know what values the matrix consisted of. I attached a sketch where I explain which transforms fill the matrix. The most important thing in this sketch is that the matrices for trSphere and trCenter use the same orientation. This means that the rotation part of both matrices is the same before trSphere is inverted. The last column of each matrix needs to be filled with the location of a point on the sphere for trSphere, and with the location of the reference point for trCenter.


    Finally, trSphere is inverted and multiplied with trCenter. This gives back a matrix, trTempTool. The last column of this matrix gives the temporary TCP. In the Staubli code this is done four times. The 4 calculated values for the TCP are then added up and divided by 4. This gives the average of the 4 temporary TCPs. This average is the final TCP.


    I attached a Matlab file with the calculator I made for determining the TCP.


    //Inverse sphere * center
    for l_nI=0 to 3
      trTempTool[l_nI]=!trSphere[l_nI]*trCenter
    endFor
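

    For reference, here is roughly what that loop does in numpy, under the assumptions described above (trCenter uses the same orientation as trSphere[i], and the four results are averaged). The names trSphere, trCenter and trTempTool come from the VAL3 code; the helpers hom/inv/rpy_deg, the Rz*Ry*Rx Euler convention, and all numbers are my own invented test data, constructed so that the four poses really do touch the same point.


    import numpy as np

    def hom(R, p):
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = p
        return T

    def inv(T):
        # VAL3's '!' operator: inverse of a homogeneous transform
        R, p = T[:3, :3], T[:3, 3]
        return hom(R.T, -R.T @ p)

    def rpy_deg(rx, ry, rz):
        # rotation matrix from Rx, Ry, Rz in degrees; Rz*Ry*Rx order is an assumption
        rx, ry, rz = np.radians([rx, ry, rz])
        cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
        Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    # ---- fake a consistent measurement set (all numbers invented) ----
    tcp_true = np.array([11.0, -4.0, 197.5])     # the offset we pretend to look for
    tip = np.array([800.0, -100.0, 350.0])       # fixed point in base coordinates
    flange_rot = [rpy_deg(175, 12, -30), rpy_deg(160, -20, 45),
                  rpy_deg(-170, 5, 120), rpy_deg(150, 30, -90)]
    flange_pos = [tip - R @ tcp_true for R in flange_rot]   # flange lands the tip on 'tip'

    # ---- the calculation itself ----
    # centre of the sphere through the four flange positions
    P = np.array(flange_pos)
    A = 2.0 * (P[1:] - P[0])
    b = np.sum(P[1:]**2, axis=1) - np.sum(P[0]**2)
    center = np.linalg.solve(A, b)

    # trTempTool[i] = !trSphere[i] * trCenter, with matching orientations
    tcp_candidates = []
    for R, p in zip(flange_rot, flange_pos):
        trSphere = hom(R, p)
        trCenter = hom(R, center)
        trTempTool = inv(trSphere) @ trCenter
        tcp_candidates.append(trTempTool[:3, 3])

    # with real measurements the four candidates only agree up to measurement error,
    # hence the averaging
    tcp = np.mean(tcp_candidates, axis=0)
    print(tcp)   # -> approximately tcp_true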


    Thank you for helping me solve this problem.

  • This thread was extremely helpful in calibrating a robot-held external camera system to a known target. I used the 4-point sphere method to get a TCP. Iterating a few times got me a very accurate TCP.


    The robot in question was a Fanuc. I converted the Staubli code from VAL3 to Karel. It is attached. It is fairly hacked together, but it works. There is a short description of how to use it at the top of the .kl file.


    Oddly enough, I was testing this method alongside Fanuc's 4-point TCP creation method, and on the first iteration Fanuc's method created a much more accurate TCP (about 40 mm better, on a TCP with about a 600 mm standoff from the faceplate). Once I did a few more iterations, the error died down and it is looking good now.


    I'm curious how Fanuc does their 4-point method.

  • A while ago I was developing my own math library, including extensive matrix manipulation. Using it I was able to solve a system of N linear equations with N unknowns. But I also wanted to create more compact code for this problem specifically, so I would not need to deploy the library. So far I never got around to it, but using the sphere equation and Cramer's rule should get the result in a straightforward fashion, without iterations.


    For example:

    http://www.ambrsoft.com/TrigoCalc/Sphere/Spher3D_.htm


    I do not understand why robot software does not expose functions for things like this, or for computing a frame from three points, as a standard function set... but then, where would all the fun be?
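

    Something like this, for the Cramer's rule version. It is the same 3x3 system as in the sketch further up the thread, just with the determinant ratios written out; sphere_center_cramer is my own helper name and the test points are invented.


    import numpy as np

    def sphere_center_cramer(points):
        # subtract the first point's sphere equation from the other three to get
        # a 3x3 linear system A*c = b, then solve it with Cramer's rule
        p = np.asarray(points, dtype=float)
        A = 2.0 * (p[1:] - p[0])
        b = np.sum(p[1:]**2, axis=1) - np.sum(p[0]**2)
        detA = np.linalg.det(A)
        c = np.empty(3)
        for i in range(3):
            Ai = A.copy()
            Ai[:, i] = b                    # replace column i with the right-hand side
            c[i] = np.linalg.det(Ai) / detA
        return c

    # simple test: these four points lie on a sphere centred at (0.5, 0.5, 0.5)
    pts = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
    print(sphere_center_cramer(pts))


    The same system could of course be handed to any linear solver instead of writing out the determinants.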

