Posts by micahstuh

KUKA has a built-in function (at least I thought it was built in):

Even if you aren't writing this code for KRL, it's pretty easy to read and convert to whatever language you are working with. It says that T[,] is an output, but that's only because KUKA forces more complicated variables to be declared as outputs; it's actually being used as an input in this scenario. One last note: a lot of other programming languages use radians by default, but KRL does not, so keep that in mind if porting this over to a different language.

    I have a robot working with a KUKA 3-axis rotary weld table. I've managed to program the 2nd and 3rd axis on the rotary table to do uncoordinated motion when facing the operator such that the operator can hit a jog +/- button to adjust their table to make loading/unloading easier. I've done this via use of:

    • $ASYNC_AXIS - Sets either E2 or E3 as asynchronous for the duration of the current program
    • $ASYNC_AX#_P - Jog the asynchronous axis positive
    • $ASYNC_AX#_M - Jog the asynchronous axis negative
    • $ZUST_ASYNC - Enable jogging of the asynchronous axis

My problem now is with the velocity of the async, uncoordinated axis. Its jog velocity is taken from the teach pendant's jog override (0-100%). Is there a variable that can modify this velocity, the way $OV_PRO modifies the run velocity?

    If not, is there perhaps a way to force the jog velocity to 100% on boot-up?

    KR 16 L6

    KRC4 Extended

    KSS 8.3.14

The only thing that depends on the angle values is the converter used to go from angles to a rotation matrix. In my Python code, the rot_to_mat function is specifically for KUKA's Tait-Bryan ZYX convention.

    Everything else is universal. All of the math above uses only XYZ coordinates and rotation matrices.
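For reference, the angles-to-matrix conversion for KUKA's convention (A about Z, then B about Y, then C about X, angles in degrees) can be sketched as below. The function name abc_to_mat is mine for illustration, not the attached rot_to_mat:

```python
import numpy as np

def abc_to_mat(a_deg, b_deg, c_deg):
    """Build a 3x3 rotation matrix from KUKA A, B, C angles
    (Tait-Bryan ZYX: A about Z, then B about Y, then C about X).
    Angles are in degrees, since KRL works in degrees, not radians."""
    a, b, c = np.radians([a_deg, b_deg, c_deg])
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    ry = np.array([[ np.cos(b), 0.0, np.sin(b)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(b), 0.0, np.cos(b)]])
    rx = np.array([[1.0, 0.0,       0.0      ],
                   [0.0, np.cos(c), -np.sin(c)],
                   [0.0, np.sin(c),  np.cos(c)]])
    # Compose in ZYX order, matching KUKA's A, B, C convention
    return rz @ ry @ rx
```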

I am sorry if this is not a good fit for this forum, but I have solved the problem. It's a bit long-winded, but I've tested this and attached the code, and it works quite well. In fact, I think it's a bit more elegant than solving by sphere fitting.

A two-point TCP definitely isn't possible. As panic mode suggested, executing this approach with only two samples results in an infinite number of solutions, all falling on a line.

    A TCP can be generated given only three samples. Each sample must contain the A6 flange position and rotation in order to derive the TCP. This is done by solving a system of linear equations, not by fitting points to a sphere, which would require four samples.

    Here is the math:


    P = teaching center point

    T = XYZ transform

    Givens (three of each, one per sample)

    O = sample coordinate (A6 flange XYZ position at given sample)

    R = rotation matrix of sample origin (A6 flange rotation matrix at given sample)

The 3x3 rotation matrix R is in this form:

    [xx xy xz]

    [yx yy yz]

    [zx zy zz]

    A point P can be transformed by T in reference to an existing coordinate system with XYZ O and rotation matrix R:

    Rxx*Tx + Rxy*Ty + Rxz*Tz + Ox = Px

    Ryx*Tx + Ryy*Ty + Ryz*Tz + Oy = Py

    Rzx*Tx + Rzy*Ty + Rzz*Tz + Oz = Pz
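In matrix form, those three scalar equations are just P = R*T + O. A minimal numpy illustration (the values are arbitrary, chosen only for demonstration):

```python
import numpy as np

# Sketch: transforming a tool offset T through a flange pose (R, O).
# R is the flange rotation matrix, O its XYZ position, and T the offset
# expressed in the flange frame; P is the resulting point.
R = np.eye(3)                    # flange aligned with world, for illustration
O = np.array([100.0, 0.0, 50.0])
T = np.array([0.0, 0.0, 25.0])

P = R @ T + O                    # same as the three scalar equations above
```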

    In this case, the three samples O, although in different positions and orientations,

    should result in the same position P when transformed by an XYZ offset T:

    R1xx*Tx + R1xy*Ty + R1xz*Tz + O1x = R2xx*Tx + R2xy*Ty + R2xz*Tz + O2x = R3xx*Tx + R3xy*Ty + R3xz*Tz + O3x

    R1yx*Tx + R1yy*Ty + R1yz*Tz + O1y = R2yx*Tx + R2yy*Ty + R2yz*Tz + O2y = R3yx*Tx + R3yy*Ty + R3yz*Tz + O3y

    R1zx*Tx + R1zy*Ty + R1zz*Tz + O1z = R2zx*Tx + R2zy*Ty + R2zz*Tz + O2z = R3zx*Tx + R3zy*Ty + R3zz*Tz + O3z

P is no longer a factor once the samples are equated. Since all three expressions are equal, the sum of the first and third equals twice the second; use that to reduce to terms of O and separate the T components:

    R1xx*Tx + R1xy*Ty + R1xz*Tz + O1x + R3xx*Tx + R3xy*Ty + R3xz*Tz + O3x = 2*R2xx*Tx + 2*R2xy*Ty + 2*R2xz*Tz + 2*O2x

    R1yx*Tx + R1yy*Ty + R1yz*Tz + O1y + R3yx*Tx + R3yy*Ty + R3yz*Tz + O3y = 2*R2yx*Tx + 2*R2yy*Ty + 2*R2yz*Tz + 2*O2y

    R1zx*Tx + R1zy*Ty + R1zz*Tz + O1z + R3zx*Tx + R3zy*Ty + R3zz*Tz + O3z = 2*R2zx*Tx + 2*R2zy*Ty + 2*R2zz*Tz + 2*O2z

    R1xx*Tx + R1xy*Ty + R1xz*Tz + R3xx*Tx + R3xy*Ty + R3xz*Tz - 2*R2xx*Tx - 2*R2xy*Ty - 2*R2xz*Tz = 2*O2x - O3x - O1x

    R1yx*Tx + R1yy*Ty + R1yz*Tz + R3yx*Tx + R3yy*Ty + R3yz*Tz - 2*R2yx*Tx - 2*R2yy*Ty - 2*R2yz*Tz = 2*O2y - O3y - O1y

    R1zx*Tx + R1zy*Ty + R1zz*Tz + R3zx*Tx + R3zy*Ty + R3zz*Tz - 2*R2zx*Tx - 2*R2zy*Ty - 2*R2zz*Tz = 2*O2z - O3z - O1z

    Tx(R1xx + R3xx - 2*R2xx) + Ty(R1xy + R3xy - 2*R2xy) + Tz(R1xz + R3xz - 2*R2xz) = 2*O2x - O3x - O1x

    Tx(R1yx + R3yx - 2*R2yx) + Ty(R1yy + R3yy - 2*R2yy) + Tz(R1yz + R3yz - 2*R2yz) = 2*O2y - O3y - O1y

    Tx(R1zx + R3zx - 2*R2zx) + Ty(R1zy + R3zy - 2*R2zy) + Tz(R1zz + R3zz - 2*R2zz) = 2*O2z - O3z - O1z

    Now form a system of equations A * x = B.

    [R1xx - 2*R2xx + R3xx, R1xy - 2*R2xy + R3xy, R1xz - 2*R2xz + R3xz]*[Tx] = [-O1x + 2*O2x - O3x]

    [R1yx - 2*R2yx + R3yx, R1yy - 2*R2yy + R3yy, R1yz - 2*R2yz + R3yz]*[Ty] = [-O1y + 2*O2y - O3y]

    [R1zx - 2*R2zx + R3zx, R1zy - 2*R2zy + R3zy, R1zz - 2*R2zz + R3zz]*[Tz] = [-O1z + 2*O2z - O3z]

If working from Python, you can use numpy.linalg.solve, or you can otherwise write a Gaussian elimination function with partial pivoting. We are solving for x, which will return the X, Y, Z transform.
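The final system above maps directly onto a few lines of numpy. This is a sketch of my own (solve_tcp and its sample layout are names I'm choosing here, not from the attached file):

```python
import numpy as np

def solve_tcp(samples):
    """Solve for the XYZ tool offset T from three flange samples.
    Each sample is (R, O): a 3x3 flange rotation matrix and a length-3
    flange XYZ position. Implements the A * x = B system derived above."""
    (r1, o1), (r2, o2), (r3, o3) = [
        (np.asarray(r, dtype=float), np.asarray(o, dtype=float))
        for r, o in samples
    ]
    # Row i, column j of A is R1[i,j] - 2*R2[i,j] + R3[i,j]
    a = r1 - 2.0 * r2 + r3
    # Right-hand side: -O1 + 2*O2 - O3
    b = -o1 + 2.0 * o2 - o3
    return np.linalg.solve(a, b)
```

Note that the three sample orientations must be genuinely different, otherwise the matrix A becomes singular (the two-sample "line of solutions" problem in disguise).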

    I have attached the solution in Python.

I think you're right. Even though two FRAMEs don't fully constrain a sphere, there should be only one reasonably sized sphere that results in the same XYZ transform from frame to sphere center for both frames.

    I'm going to think about this but hopefully someone has a neat algorithm for solving this on hand.

Here's some more C#, but it should be pretty easy to read.

point has 3 elements (XYZ), origin has 6 elements (XYZABC), and the return value has 3 elements (XYZ).

    GetRange is taking a subset of a 1D array.

    MatNeg is negating a matrix.

    MatToRot is getting A,B,C values from a rotation matrix.

    Here is some code I wrote for a C# application. It may not be super helpful given the language you're working in but here's hoping.

Yeah, forgive me. I understand the value of investigating, but in most programming languages it's bad practice to have functions, structures, classes, etc. without exhaustive documentation. The system variables document is pretty good, but I saw that these structures didn't make the cut.

    I understand most of the parameters out of context, but some of these I can't figure out, such as:



    Can you help shed some light on precisely what each of these parameters do? Maybe even some units where applicable?

    I can't figure out the shortened names for many of the parameters in these STRUCs. Can someone give a quick definition to each, or point out a reference document that describes them?




If this error is happening on the teach pendant, it might have something to do with your $CIR_TYPE or $ORI_TYPE parameters. I would make sure each of them has a value assigned, and $ORI_TYPE should probably not be assigned the value #JOINT.

    I would also double check the start, end, and auxiliary coordinates for the circular motion and make sure they are all printing from RoboDK as expected.

As far as I am aware, *.SUB files cannot execute motion internally. They can call functions or put programs into run mode, but the program would still have to be run using the [deadman + go button] in T1.

So assuming you are in T1 and a *.SUB program is running, you COULD (though I wouldn't) have the *.SUB detect an external jog, then call a program, waiting for the user to run said program in order to make the external joints equal again.

I think it would be best to track axis velocity in the *.SUB. $VEL_AXIS_ACT[7] will track the velocity of E1 and $VEL_AXIS_ACT[8] will track the velocity of E2. You will have to write code in the SPS.SUB to track when one is moving.

If the external axis is being jogged, a flag is set. When the axis becomes still, the SPS.SUB calls a function that moves the other axis, AND the flag is reset.
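The flag scheme described above is just edge detection on "moving". Sketched in Python as language-neutral pseudologic (the threshold value and function names are mine, and each cycle would correspond to one pass of the submit-interpreter loop):

```python
# Latch a flag while either external axis is moving; once both are
# still again, fire the correction exactly once and reset the flag.
THRESHOLD = 0.01  # deadband on reported axis velocity; value is a guess

was_moving = False

def on_cycle(vel_e1, vel_e2, request_correction):
    """Called once per submit cycle with the two external-axis
    velocities ($VEL_AXIS_ACT[7] / [8] in KRL terms)."""
    global was_moving
    moving = abs(vel_e1) > THRESHOLD or abs(vel_e2) > THRESHOLD
    if moving:
        was_moving = True        # jog in progress: latch the flag
    elif was_moving:
        was_moving = False       # axis just came to rest: reset the flag...
        request_correction()     # ...and queue the re-alignment move
```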


Great catch. I meant Turn Bit 0. According to my System Integrator manual, bits 0-5 refer to each joint in sequence (A1: bit 0 | A2: bit 1 | A3: bit 2 | etc.), where 0 indicates a positive or zero joint angle and 1 indicates a negative joint angle.

Thus, if a mathematically changed E6POS results in a negative -> positive A1 position or vice versa WITHOUT updating Turn bit 0, the KSS will yell at you and halt the program. This is my understanding of how the robot works, at least.
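As a sanity check, the rule above can be sketched in Python (turn_bits is a hypothetical helper for illustration, not KRL):

```python
def turn_bits(joint_angles_deg):
    """Pack the Turn integer from six joint angles, following the rule
    described above: bit i is 1 when axis (i+1) has a negative angle,
    and 0 when the angle is zero or positive."""
    turn = 0
    for i, angle in enumerate(joint_angles_deg[:6]):
        if angle < 0:
            turn |= 1 << i
    return turn
```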

I've run into what Joe is experiencing a few times, where a mathematically indexed position crosses through that 0 degree mark on Axis 1. Luckily, only the first position in a program needs to be explicitly declared with E6POS or E6AXIS. After that, you COULD start using FRAMEs, since the KSS can interpret the next status & turn bits from the previous ON PATH position.

It's great to revert to FRAMEs when the code is using math to modify coordinates.

As hkaraarslan mentioned, you can force $OV_PRO to a set speed, but one issue with this is that you can no longer manually lower the override if you want to watch the robot work slower. Also, someone COULD shut off the submit interpreter, thus allowing the speed to be overridden to a higher, potentially unsafe and untested speed.

You can program the linear/circular speed limit using $VEL.

    $VEL is a structure with three components:

     CP: Path velocity in [m/s]
     ORI1: Swivel velocity in [°/s]
     ORI2: Rotational velocity in [°/s]

    so in your SPS.SUB user loop, you can place this to prevent the speed from being above your desired 200mm/s in #EX mode:

IF (($MODE_OP == #EX) OR ($MODE_OP == #AUT)) AND ($VEL.CP > 0.2) THEN

$VEL.CP = 0.2 ; $VEL.CP is in [m/s], so 0.2 m/s = 200 mm/s

ENDIF

Keep in mind only linear and circular moves are governed by $VEL.CP, and it is expressed in meters/second, not millimeters. This limit can also be defeated if the SPS.SUB is stopped.

    PTP moves are expressed in degrees/second. PTP moves do not refer to the $VEL variable for speed. They refer to $VEL_AXIS which has a component for A1-A6 expressing the percent speed limit from 0 to 100.

    A few moves here. My favorite solutions first:

1. If you aren't particular about the S&T of your edited position, change your edited position variable from an E6POS to a FRAME (X, Y, Z, A, B, C). It will then make the same S&T assumptions that a regular LIN move would make, while still executing a nice PTP motion.
    2. Do a logic check to see if you Y component is positive or negative before you do the move. Depending on your $BASE and $TOOL, this can be a clean indicator of whether the Status BIT0 will be positive or negative. From there, you can make bitwise changes to Status. ( IF myPos.Y >= 0 THEN... )
    3. AXIS and E6AXIS position structures also ignore S&T.

    WorkVisual is incompatible with KRC2.

    The robot system has to be KRC4 or newer.

    Instead, you can connect a keyboard, mouse, and monitor to the robot controller to make changes to the system.

    I have a KP3 rotary table in front of a KUKA robot, all running KSS 8.3.14.

I have configured the robot to follow each of the three external axes from WorkVisual, and I've taught the base frames and root points from the SmartPad. I taught E2 while E1 was at 0 degrees, and I taught E3 while E1 was at 180 degrees.

Now the robot only follows E2 or E3 correctly if E1 is at the position E2 or E3 was taught in. For example, when I jog E2 while the SmartPad has E2's base frame selected, the robot follows E2 as if E1 were still at 0 degrees, even when it's not. I would think the system would be a little smarter and essentially transform E2's base frame as E1 moves, kind of like with RoboTeam.

    Am I missing a step in chaining E2 and E3 to E1, or is this simply not a feature of these external kinematic systems?