 # Orientation interpolation (KUKA doesn't use slerp)

• Hi all!

I have a welding KUKA KR8 R2100 and I am generating the motions outside the robot. From my experiments I understand that KUKA doesn't use SLERP for orientation interpolation in LIN motions. It's something similar, but over a long, complex 90 deg rotation there is up to an 8 deg difference.

Of course I can generate more intermediate points and minimize the error. But I'm curious which orientation interpolation algorithm KUKA uses, and why it is not slerp.

• ... It's something similar, but over a long, complex 90 deg rotation there is up to an 8 deg difference.
...

Can you describe how you measured that? And what do you mean by "long complex 90 deg rotation"? Describe the motion a bit more clearly.

Is the TCP correct?

Is the robot mastered correct?

• Can you describe how you measured that? And what do you mean by "long complex 90 deg rotation"? Describe the motion a bit more clearly.

Is the TCP correct?

Is the robot mastered correct?

Do you think that KUKA should use slerp? Maybe I have been mistaken...
But I found only this about XYZ in the docs:

"In the case of a linear motion, the KR C... calculates a straight line from the current position (the last point programmed in the program) to the position specified in the motion command."

And there is no information about the kind of orientation interpolation. But I only have the public KUKA docs, not the confidential ones, and there is no info about it there.

I use quaternions and slerp from a ROS package (a robotics library) and also checked it against https://en.wikipedia.org/wiki/Slerp. The slerp itself is correct.
The robot is mastered. The TCP is correct.
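For reference, a minimal C++ sketch of slerp as defined on the Wikipedia page (a simplified illustration; the actual ROS library implementation may differ in details such as the lerp fallback threshold):

```cpp
#include <cmath>

struct Quat { double w, x, y, z; };

// Spherical linear interpolation between unit quaternions q0 and q1,
// with t in [0, 1]. Standard formula from the Wikipedia article.
Quat slerp(Quat q0, Quat q1, double t) {
    double dot = q0.w*q1.w + q0.x*q1.x + q0.y*q1.y + q0.z*q1.z;
    if (dot < 0.0) {                       // take the shorter arc
        q1 = {-q1.w, -q1.x, -q1.y, -q1.z};
        dot = -dot;
    }
    if (dot > 0.9995) {                    // nearly parallel: fall back to lerp
        Quat r{q0.w + t*(q1.w - q0.w), q0.x + t*(q1.x - q0.x),
               q0.y + t*(q1.y - q0.y), q0.z + t*(q1.z - q0.z)};
        double n = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
        return {r.w/n, r.x/n, r.y/n, r.z/n};
    }
    double theta = std::acos(dot);
    double s0 = std::sin((1.0 - t)*theta) / std::sin(theta);
    double s1 = std::sin(t*theta) / std::sin(theta);
    return {s0*q0.w + s1*q1.w, s0*q0.x + s1*q1.x,
            s0*q0.y + s1*q1.y, s0*q0.z + s1*q1.z};
}
```

Halfway through a 90 deg rotation, slerp gives exactly the 45 deg rotation about the same axis, which is the reference the robot's midpoint pose is being compared against.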

I send the robot from a start pose to a goal pose (only 2 poses) with a LIN motion. The distance between them is ~3 m and ~90 deg (a complex rotation of the A+B+C angles). No smoothing. Velocity is 150 mm/sec.
I read the robot's current pose at roughly the midpoint of the motion to the goal pose. I also calculated the whole trajectory outside the robot (in a C++ program). The XYZ linearity is excellent, but the orientation interpolation is similar to slerp yet not identical (the max difference in my case is 8 degrees).


• I send the robot from a start pose to a goal pose (only 2 poses) with a LIN motion. The distance between them is ~3 m and ~90 deg (a complex rotation of the A+B+C angles). No smoothing. Velocity is 150 mm/sec.
I read the robot's current pose at roughly the midpoint of the motion to the goal pose. I also calculated the whole trajectory outside the robot (in a C++ program). The XYZ linearity is excellent, but the orientation interpolation is similar to slerp yet not identical (the max difference in my case is 8 degrees).

First, post actual program code.

Second, what exactly are you trying to measure? If a LIN motion is programmed for a 90deg rotation, then it will be rotated by 90deg at the end of the motion. Are you measuring actual motion in free space, or just looking at the position monitor on the pendant?

What "midpoint"? Are you claiming that the rotation distribution is not even across the linear motion, or that the TCP does not rotate 90deg in total?

• I am not sure I understand...

You are trying to use it as a "known" path segment? If the position or path is important, why not use an actual, not approximated, path segment?

As the name suggests, an approximated motion is ... approximated.

or am i missing something?


• I think that Fubini gave the answer. But I need to understand how to write the formulas for it.

midpoint - I read the current coordinates from the robot at roughly the middle of the path between the 2 points.

I can't post the actual program code because we generate the trajectory outside the robot. We check collisions, limits, singularities, etc. with a small step and send the trajectory to the robot, which receives it via EthernetKRL. We generate the path with a small step (1 mm or 0.1 deg) but send only the significant key points to the robot, because there are a lot of points. The KRL code is like this:

Code

```
...
SREAD(bytes[], state, offset, "%f %f %f %f %f %f", joints[i].A1, joints[i].A2, joints[i].A3, joints[i].A4, joints[i].A5, joints[i].A6)
...
LIN FORWARD(joints[i], error_id)
```

Everything works well; we weld huge beams. But our generated trajectory isn't quite identical to the trajectory on the KUKA because of a different mathematical law of changing orientation.

A workaround is to send more points with a small step to the robot, but that takes more time.

The question is:

The robot moves with a LIN motion from pose P1 (X=500, Y=500, Z=500, A=20, B=40, C=60) to pose P2 (X=600, Y=600, Z=600, A=60, B=80, C=100) without smoothing. What is the law of changing orientation (ABC angles)? It isn't slerp (spherical linear interpolation).
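To make the comparison concrete: KUKA's A, B, C angles are ZYX Euler angles, i.e. the rotation matrix is Rz(A)·Ry(B)·Rx(C). A small C++ sketch of converting ABC to a quaternion, so poses like P1 and P2 can be compared against a slerp reference (the struct and helper name are mine, for illustration only):

```cpp
#include <cmath>

struct Quat { double w, x, y, z; };

// KUKA ABC (ZYX Euler, radians) to a unit quaternion:
// the product qz(A) * qy(B) * qx(C), expanded term by term.
Quat fromABC(double A, double B, double C) {
    double ca = std::cos(A/2), sa = std::sin(A/2);
    double cb = std::cos(B/2), sb = std::sin(B/2);
    double cc = std::cos(C/2), sc = std::sin(C/2);
    return {
        ca*cb*cc + sa*sb*sc,   // w
        ca*cb*sc - sa*sb*cc,   // x
        ca*sb*cc + sa*cb*sc,   // y
        sa*cb*cc - ca*sb*sc    // z
    };
}
```

With this, one can convert the poses measured on the robot and the slerp-interpolated poses to quaternions and take the angle of the relative rotation as the error metric.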

• The positions in between are defined by componentwise, phase-synchronous trapezoidal velocity profiles, with acceleration and deceleration defined by $ACC.ORI1 and $ACC.ORI2, the velocity in the constant phase by $VEL.ORI1 and $VEL.ORI2, and the result filtered by $FILTER in joint space to get rid of the discontinuities at the phase transitions. You also have to take into account the phase synchronization with the velocity profile of the Cartesian components defined by $VEL.CP and $ACC.CP, and maybe the same for additional axes if any. If you also need blending it gets a bit more complicated.

Fubini
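A rough C++ sketch of what a single component of such a trapezoidal velocity profile looks like (hypothetical parameters; this ignores the $FILTER smoothing and the phase synchronization between components that Fubini describes, so it is an illustration of the idea, not KUKA's actual planner):

```cpp
#include <algorithm>
#include <cmath>

// Distance covered at time t by a trapezoidal velocity profile:
// ramp up at 'acc' to 'vmax', cruise, ramp down at 'acc' to rest,
// covering 'dist' in total. Falls back to a triangular profile
// when 'dist' is too short for vmax to be reached.
double trapezoid(double t, double dist, double vmax, double acc) {
    double tAcc = vmax / acc;              // duration of each ramp
    if (acc * tAcc * tAcc > dist) {        // triangle: vmax never reached
        tAcc = std::sqrt(dist / acc);
        vmax = acc * tAcc;
    }
    double tCruise = (dist - acc * tAcc * tAcc) / vmax;
    double tTotal = 2.0 * tAcc + tCruise;
    t = std::clamp(t, 0.0, tTotal);
    if (t < tAcc)                          // accelerating
        return 0.5 * acc * t * t;
    if (t < tAcc + tCruise)                // cruising
        return 0.5 * acc * tAcc * tAcc + vmax * (t - tAcc);
    double td = tTotal - t;                // decelerating (mirror of ramp-up)
    return dist - 0.5 * acc * td * td;
}
```

Because each angle component gets its own profile of this shape (stretched in time so all components start and finish together), the resulting orientation path in general differs from slerp, which rotates about a single fixed axis at a rate linear in the path parameter.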