Robotforum | Industrial Robots Community

 Robot absolute accuracy

Author Topic:  Robot absolute accuracy  (Read 2660 times)


July 26, 2017, 01:10:08 PM


Hi guys!

I'd like to measure a robot's absolute accuracy, but I'm stuck at one step: I don't know how to measure the robot base frame.

Why is this important? The measured points are in the sensor's own coordinate frame, while the robot coordinates are naturally
in the base frame. So somehow I have to transform every point into the other coordinate system. That is my question: how
can I get the transformation matrix? Or am I going about this completely wrong?

I've read a lot of literature on the topic, but all of it is theoretical and doesn't detail this part of the measurement.

Thanks for the answers!

Linkback: https://www.robot-forum.com/robotforum/index.php?topic=23916.0


July 26, 2017, 02:04:33 PM
Reply #1


Global Moderator
Well, it depends on what brand of robot you're using, and on what degree of precision you require.

Cartesian position and orientation are generally represented by a combination of XYZ coordinates, and a set of Euler angles or Quaternions.  Normally, to perform mathematical transforms on these positions, you convert them into a specific type of 4x4 matrix -- Google "NOAP Vectors" for a starting point. 

The good news here is that only the conversion is brand-specific.  Once the conversion is done, the matrices are identical for ABB, KUKA, Fanuc, etc, and can be worked on using identical means.  This is the best book I've found covering this subject: 

The transform between two different position/orientation frames is generally obtained by multiplying the two 4x4 matrices (with one of them inverted).  Keep in mind that matrix multiplication is non-commutative -- reversing the order gives different results.  inv(A)*B will generate the position/orientation of B, treating A as the frame of reference; inv(B)*A will generate A's position with B as the frame of reference.  These calculations are all relative.
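To make that concrete, here is a minimal Python/NumPy sketch of building 4x4 homogeneous matrices from XYZ plus Euler angles and taking the relative transform. I've assumed a KUKA-style Z-Y-X (A, B, C) Euler convention for illustration -- check your controller's manual, since the convention is exactly the brand-specific part:

```python
import numpy as np

def pose_to_matrix(x, y, z, a, b, c):
    """Build a 4x4 homogeneous transform from XYZ (mm) and Z-Y-X Euler
    angles A, B, C in degrees (KUKA-style convention assumed here --
    other brands use different conventions and units)."""
    a, b, c = np.radians([a, b, c])
    Rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
    Ry = np.array([[ np.cos(b), 0, np.sin(b)],
                   [ 0,         1, 0        ],
                   [-np.sin(b), 0, np.cos(b)]])
    Rx = np.array([[1, 0,          0         ],
                   [0, np.cos(c), -np.sin(c)],
                   [0, np.sin(c),  np.cos(c)]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # combined rotation
    T[:3, 3] = [x, y, z]       # translation
    return T

# Pose of frame B expressed in frame A: T_AB = inv(T_A) @ T_B.
# Reversing the order, inv(T_B) @ T_A, gives A expressed in B instead.
T_A = pose_to_matrix(100, 0, 0, 90, 0, 0)
T_B = pose_to_matrix(100, 200, 0, 90, 0, 0)
T_AB = np.linalg.inv(T_A) @ T_B
```

With both frames sharing the same orientation, T_AB comes out as a pure translation of the offset between them, expressed in A's (rotated) axes.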

Now, you didn't provide any details about your sensor, so I'll have to speak generally.  But you need a good way to measure several points in space, in both coordinate systems (robot and sensor).  This usually requires some kind of metrology.  Three points is the absolute minimum, but I strongly suggest using more.  Once you have two matching "point clouds" in both reference frames, you can perform a best-fit between them to establish the transforms between the two reference frames.
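The best-fit between the two matched point clouds can be sketched with the classic Kabsch/SVD rigid-fit algorithm -- a minimal Python example of the idea; real metrology software does essentially this plus outlier rejection and error reporting:

```python
import numpy as np

def best_fit_transform(P, Q):
    """Rigid best fit (Kabsch/SVD): find R, t minimizing ||R@p + t - q||
    over all matched pairs.  P, Q are (N, 3) arrays of the same points
    measured in the two reference frames (N >= 3, more is better)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Once you have R and t, mapping any sensor-frame point into the robot frame (or vice versa) is a single matrix-vector operation, and the residuals of the fit give you a first figure of merit for your measurement quality.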

July 26, 2017, 05:22:26 PM
Reply #2


My problem is not mathematical; the transformations between the reference frames are clear to me.
What I don't know is the measurement method. I use a laser tracker with a precision of 0.05 mm.

Let me describe it more practically. I have two reference frames: the sensor's, called {S}, and the robot base, called {B}.
When I measure in the workspace, I read the data from the sensor and from the robot controller too, so I
have both the theoretical (i.e. computed) and the real (measured) position.
The only problem is that I can't compare the two points, because the computed one comes from the controller in {B}, while the
measured one comes from the sensor in {S}.
In the practice it looks like:
P1B = [1600 0 320]
P1S = [5478.485 2184.974 216.324]

So I have to find a transformation matrix between the coordinate frames {B} and {S}. Unfortunately I can't measure the robot base
directly, because it's somewhere inside the robot.
A coordinate system can be defined by an origin and two perpendicular vectors. So my idea was to rotate the first joint, measure lots of points, and fit a circle onto them. That gives me the center of the circle and a normal vector pointing upward. Then I do the same with the second joint. (I'm speaking about a 6-DOF robot like the PUMA 560.)
Now I have two perpendicular vectors (which are not exactly 90° apart -- that's another problem) and an origin point. This new common frame is known from {B} and from {S}, so I can transform and compare the computed and measured points. The sad part is that I think this method is not accurate enough. And here comes my second question mark: how can I validate my measurement?  :icon_confused:
In the end the differences are around 2-3 mm, but I don't know how real this result is.
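The circle-fitting step I described could be sketched like this (a minimal NumPy example, my own illustration: a plane fit via SVD, then an algebraic least-squares circle fit in that plane; all points are in the tracker frame):

```python
import numpy as np

def joint_axis_from_points(pts):
    """Fit a circle to tracker points recorded while rotating one joint.
    Returns (center, normal): a point on the joint axis and its direction.
    pts: (N, 3) array of tracker measurements, N >= 3."""
    pts = np.asarray(pts, float)
    centroid = pts.mean(axis=0)
    # Plane of the arc: normal = direction of smallest variance.
    _, _, Vt = np.linalg.svd(pts - centroid)
    u, v, normal = Vt[0], Vt[1], Vt[2]
    # 2-D coordinates in the plane, then fit the circle algebraically:
    # x^2 + y^2 = 2*a*x + 2*b*y + c  is linear in (a, b, c).
    x = (pts - centroid) @ u
    y = (pts - centroid) @ v
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    center = centroid + a * u + b * v
    return center, normal
```

Doing this once for J1 and once for J2 gives two axis lines; their (near-)intersection is the candidate base origin. The residuals of the plane and circle fits are also a cheap sanity check on the tracker data.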

July 26, 2017, 08:04:42 PM
Reply #3


Global Moderator
Well, you're running into one of the root issues with using articulated robots for applications dependent upon accuracy, as opposed to precision/repeatability.

Articulated robots, compared to (for example) CNC machines, trade accuracy and rigidity for lower cost and a much larger working envelope.  So you need to understand before starting that any attempt to calibrate a robot with a laser tracker will have a certain degree of error.

There are, however, ways to reduce that error somewhat.  I'll get to that in a moment.  Your concept of generating the origin of the robot base frame by intersecting the axes of the Axis-1 and Axis-2 motion arcs is good, although generally the origin of the robot's "Base 0" is not located exactly at that intersection, but at some distance from it (usually along the axis of Axis 1).  That distance can generally be obtained from the CAD model of the robot.  That generally works well enough, but any as-built tolerance errors in the robot's construction will be the limiting factor on how well it works.

The method I've generally used is to build a grid of points shared by the robot and the sensor, to do a best-fit calculation on.  To minimize the error in the measurement, I use the following best-practice techniques:
1.  Properly configured payload data in the robot.  Most robots attempt to compensate for gravity effects of the payload (given the robot lacks rigidity), so making this data accurate (mass and CG) is important.
2.  Limited volume.  I make the volume I'm attempting to calibrate as small as possible for the process I am trying to make accurate.
3.  Limited orientation changes.  Ideally, every measurement point should use the same orientation of the TCP.  Orientation changes throw off the robot's accuracy faster than almost anything else.
4.  Avoid positions with different gravity effects as much as possible.  For example, positions with Axis 2 forward of vertical have different gravity effects on the backlash ("lost motion") of Axis 2, compared to positions that have Axis 2 behind vertical (this also depends on payload, and on Axes 3-6, so it becomes potentially quite complex).
5.  Minimize axis backlash.  Since every axis has some degree of "lost motion" which cannot be reduced, minimizing the effects of that backlash is the next best thing.  General practice for this is to perform an anti-backlash move at each measurement location.  This generally consists of arriving at the "nominal" measurement position, then performing a small anti-backlash motion on all 6 axes -- for example, rotate all axes +0.1deg, then -0.1deg.  This does not make the robot more accurate, but it reduces the randomness of the backlash effects by "biasing" the backlash in a consistent direction at each measurement position.
6.  Thermal stability.  This usually consists of making the measurements fairly quickly, so that temperature changes over time (either ambient temperature, or simply the waste heat of the robot servo motors) have minimal effect.  Also, avoid generating excessive waste heat in the robot as much as practical -- use low speeds and accelerations.
7.  Position holding.  Most robots perform energy-saving operations when not in motion, by shutting down power to the motors and engaging the internal motor brakes.  On some robots, in some poses, this "handover" from "servo holding" to "brake holding" can cause the robot to sag or twitch very slightly.  Best practice is to keep the servos energized after reaching each measurement point (and performing the anti-backlash move) while taking the laser tracker measurement.  That way, the measurements are all taken in a consistent context.
8.  Target calibration.  This relates back to #3.  The dimensional relationship between where your sensor target is mounted to the robot, and where the robot thinks it is mounted, will directly drive errors in your comparative measurement.  With common TCP orientation at every measurement point, this error will be very nearly static in Cartesian space.  However, if you have any orientation changes, the error will rapidly become highly parametric and grow on an exponential curve.  So bringing your sensor, and your robot, into agreement on where the target is mounted can range from important (no TCP orientation changes) to critical (substantial orientation changes).  Exactly how to do this calibration would make a good PhD thesis, but I've gotten reasonably good results (for industrial applications) by performing the calibration in the same limited volume (see Rule #2), and performing iterative motions of the TCP while correcting the TCP values in the robot until the robot and sensor agree on how the target is moving.

It's not always possible to follow all of these rules, but you need to be aware that, for every rule you bend or break, your measurement error increases.  Breaking two rules doesn't double your error, but more likely quadruples it (or more).  I would recommend setting up some practice tests in circumstances where you can obey all the rules, then experiment with breaking one rule at a time to get a reasonable figure of merit for how each rule contributes to accuracy (or lack thereof).


July 27, 2017, 12:32:24 PM
Reply #4


Thank you for your exhaustive answer.

You said:
"The method I've generally used is to build a grid of points shared by the robot and the sensor, to do a best-fit calculation on."
How should I imagine that? Are there well-known points/reflectors in space whose positions are known with respect to frames {B} and {S}?

July 27, 2017, 03:54:50 PM
Reply #5


Global Moderator
Well... it's a similar problem to that solved in geographical surveying (before GPS and other similar navigational aids).  If a surveyor has only a map, and a theodolite, they must first establish the location of the theodolite on the map.  This is done by measuring several known landmarks (often mountain peaks) with the theodolite.  This generates the coordinates of the landmarks, relative to the theodolite's internal reference frame.  The landmarks' positions relative to the reference frame of the map is known because the landmarks have been surveyed previously.
Now, the surveyor has two data sets:  the locations of the landmarks in the map reference frame, and the locations of the same landmarks in the theodolite's internal reference frame.  Performing a best-fit algorithm between these two data sets generates the relative transform between the theodolite and map reference frame origins, which allows the surveyor to mark the map with the position of the theodolite.  At that point, it will be possible for the surveyor to measure new landmarks and add them to the map.
With laser trackers, the process is often called "alignment," or sometimes "bucking in":
https://youtu.be/WaetM5PU-9Y
https://www.cmsc.org/stuff/contentmgr/files/0/2bdcf766d9d5daf6e892c46153c591d3/misc/cmsc2011_thur_gh_0800_leica.pdf
All metrology software provides some means for performing this function, although it is also possible to perform the mathematics oneself.

Let us assume that a laser tracker target has been attached to the robot tool faceplate at a known location with high precision.  The robot is programmed to move the target to ten locations in space relative to the robot's Base reference frame, halting at each location.  When the robot arrives at each position, the laser tracker measures the position of the target.
This generates two data sets for the same locations in space:  Positions 1-10, measured relative to the robot Base, and the same positions, measured relative to the tracker base.  A 6-DOF best fit between the two data sets will generate the relative relationship between the two bases.

If, by chance, both the tracker base and robot base had identical orientations, then the transform between the two would be a simple matter of subtraction along the three Cartesian axes (and averaging out the noise).  But since this is extremely unlikely, let us look at an extremely simplistic method of finding the orientation differences.

Select two points along the robot base's X axis and measure them with the tracker.  This will generate two non-parallel lines.  To a first approximation, one could treat this as a two-dimensional problem in the XY plane, and establish the relative rotations between the two bases' Z axes.  Repeating this process for points along the Y and Z axes of the robot base would allow one to find all three relative rotations to a reasonable degree of accuracy.  The best-fit algorithm simply performs the location and orientation calculations in a single operation, and (depending on the sophistication of the algorithm) averages out the random factors.
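As a sketch of that two-point idea in the XY plane (the numbers below are made up for illustration; only the first point borrows the tracker coordinates quoted earlier in the thread):

```python
import math

# Two points commanded along the robot base's X axis, as measured by the
# tracker.  Projected onto the XY plane, the angle of the measured X
# direction gives the relative rotation about Z, to a first approximation.
p1 = (5478.485, 2184.974)   # tracker XY of first point on robot X axis
p2 = (5378.485, 2284.974)   # tracker XY of second point (hypothetical)

dx, dy = p2[0] - p1[0], p2[1] - p1[1]
theta_z = math.degrees(math.atan2(dy, dx))  # robot X axis angle in tracker XY
```

Repeating the same atan2 construction in the other two planes gives all three relative rotations; the best-fit algorithm just does this, plus the translation, in one averaged operation.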

An empirical process for this is sometimes used in industry when precise metrology is either not available, or not required.  This consists of iteratively adjusting the robot's base or TCP frame and checking the resulting motions against the external sensor until the two match to a sufficient degree of accuracy.  Imagine a simple 2-D vision camera, mounted to the robot end effector.  One would move the robot until a precise target was visible in the camera's field of view.  Assuming that the camera's Z axis and the TCP Z axis are reasonably parallel, one would begin jogging the robot along the TCP X and Y axes, comparing the robot's motion to the camera's measured changes in target position in the FOV.  The programmer would examine the differences between the camera-measured motion and the robot motion, and iteratively make adjustments to the TCP Z axis rotation until minimal error is achieved.
(as an aside, it would also be necessary to create a conversion between measurement units -- the camera most likely measures in pixels, while the robot likely moves in mm.  As such, it is necessary to create a mm/pixel scaling factor, as well as adjusting the TCP to align its X&Y axes with the camera X&Y axes).
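The scaling-factor part of that aside can be sketched in a few lines (hypothetical numbers; jog the robot a known distance along TCP X and read off how far the target moved in the image):

```python
import math

# Known robot jog along TCP X, and the resulting target displacement
# observed in the camera image (pixels).  Numbers are made up.
jog_mm = 10.0
du, dv = 38.0, 12.0   # pixel displacement of the target in the image

scale = jog_mm / math.hypot(du, dv)        # mm per pixel
angle = math.degrees(math.atan2(dv, du))   # camera X vs. TCP X rotation
```

The same jog repeated along TCP Y provides a cross-check on both the scale and the angle (and exposes any shear from a non-parallel camera Z axis).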

In 3-D or 6-DOF applications, the same process can be used.  Generally one picks an axis to begin with, and iteratively adjusts the Base orientation until that axis is well-aligned between the two reference frames (sensor base and robot base).  Then the Base is rotated iteratively around that axis until all three axes are parallel.  Often, this step has side effects upon the first axis, so it is necessary to go back to the first axis and fine-tune it again.

Due to the time-consuming nature of this task, most industrial robots include a simple means of performing this alignment to a rough degree, which is generally sufficient for most industrial applications.  It is called by many different names, but is usually a simple menu-driven function, and usually limited to only 4 points.  The process consists of moving the robot TCP to these four points, one at a time, and recording both the robot location and the location in the other reference frame (either metrology, or CAD data -- for instance, the TCP might be a pointer with a sharp tip, carefully touched to alignment points on a work piece, with the location of the alignment points being known in the CAD file).  Once all 4 points have been recorded, the robot will generate a Base within the robot that will be aligned with the other reference frame.  This will be a rough alignment, since the robot is inherently inaccurate, but is generally sufficient for most industrial applications.


August 03, 2017, 11:31:51 AM
Reply #6


First of all, sorry for the late answer.

I think your method's weakest point is the alignment, just like in my case.

"Select two points along the robot base's X axis and measure them with the tracker.  This will generate two non-parallel lines. [...] Repeating this process for points along the Y and Z axes of the robot base would allow one to find all three relative rotations [...]"
The three vectors will not be orthogonal, because the angles between them are not exactly 90°. I don't know how this will affect the measurement accuracy.

Another observation: your base-calibration method depends on the robot's dynamic parameters, like bending. So when you move the TCP to the known locations, the controller will already report a false position, and the fitting will give a bad result. But I don't know how bad it is; maybe the error is negligible.
I must try it out and compare the results of my base-calibration method and yours. Sounds easy, but how do I compare them? How can I know which result is better? Well, it's a hard task  :icon_smile:

August 03, 2017, 04:33:11 PM
Reply #7


Global Moderator
As I explained, these issues are the reason that robot accuracy is a difficult issue, and the various "best practices" are used as much as possible to mitigate the problem. 

If large-volume spatial accuracy is required, then about the only way to achieve it is to build a high-resolution map of the robot positional error throughout the 6-DOF volume and generate correctors with an interpolation.

The SARCA appliance from New River Kinematics is a solid off-the-shelf solution for doing this.  Alternatively, it is possible to create your own solution, but it will be rather mathematically intensive.

August 08, 2017, 09:43:12 AM
Reply #8


Hi, I am curious about your experiment's results.
Your method is correct, but you only need to rigidly attach a magnetic sphere mount ("hockey puck") on link 2
and make all measurements with that reference: moving only J1, track the sphere to find J1's axis of rotation;
then move only J2 to find its axis of rotation. The intersection is the WORLD frame origin.
You can repeat this with different postures to verify the result, for example with J2 inclined.

You also need to fix three pucks to the base, so you can find the position in relation to the robot base or riser.

You can then move the tracker, calibrate your measurements against the 3 base references,
and repeat the measurements to find the world origin.

August 12, 2017, 11:21:54 PM
Reply #9


Global Moderator
That method works to measure the physical aspect of the robot.  But unfortunately, the robot's internal model (how it thinks it is built) may differ from the actual physical-world construction.  I've generally found that measuring the arcs of the axes only works to a certain level of accuracy.  Beyond that, I've needed to create a volumetric "point cloud" and build a sort of "reference table" with weighted-sum algorithms across the calibrated volume.
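That "reference table with weighted sums" could be sketched as an inverse-distance-weighted lookup over the calibration grid -- my own illustration of the idea, not the actual implementation:

```python
import numpy as np

def idw_correction(p, grid_pts, grid_err, k=4, eps=1e-9):
    """Estimate the positional error at commanded point p by
    inverse-distance weighting the k nearest calibration points.
    grid_pts: (N, 3) commanded positions from the calibration run;
    grid_err: (N, 3) measured-minus-commanded error at each point."""
    p = np.asarray(p, float)
    d = np.linalg.norm(grid_pts - p, axis=1)   # distance to each grid point
    idx = np.argsort(d)[:k]                    # k nearest neighbors
    w = 1.0 / (d[idx] + eps)                   # closer points weigh more
    w /= w.sum()
    return w @ grid_err[idx]                   # weighted-sum error estimate

# A corrected target would then be: p_cmd = p_desired - idw_correction(...)
```

Subtracting the estimated error from the commanded target is the simplest way to apply the corrector; more sophisticated schemes interpolate orientation error as well.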

