Math behind the 3-point method for TCP

  • machinekoder
  • March 9, 2021 at 3:41 PM
  • Thread is Unresolved
  • machinekoder
    Trophies
    2
    Posts
    2
    • March 9, 2021 at 3:41 PM
    • #1

    Does anyone know how the math behind KUKA's (and other robot vendors') 3-point method works? I found a paper that describes the method; however, the referenced formulas seem incomplete, or at least I can't find a working solution using the linear equation solvers from SymPy and Mathematica.

    In theory, you only need 2 points to solve for the XYZ of the TCP if you use the rotations of the taught poses, but my results are bogus.

  • panic mode
    Reactions Received
    1,262
    Trophies
    11
    Posts
    13,030
    • March 9, 2021 at 3:55 PM
    • #2

    determining the TCP requires four points. the points are on a sphere and the center of the sphere is the TCP.

    this was discussed several times, recently here:

    RE: Calculating TCP


  • machinekoder
    Trophies
    2
    Posts
    2
    • March 10, 2021 at 3:12 PM
    • #3
    Quote from panic mode

    determining the TCP requires four points. the points are on a sphere and the center of the sphere is the TCP.

    this was discussed several times, recently here:

    RE: Calculating TCP

    Yes, I know. The 4-point method (aka center-of-sphere method) is well known. However, some manufacturers have a 3-point method to determine the tool offset. I was asking if someone knows the math behind this.

    Here is the solution for the 4-point method in Python, if anyone cares:

    Code
    def calc_sphere_center(wp1, wp2, wp3, wp4):
        # Center (u, v, w) of the sphere through the four waypoints (4-point method).
        x1, y1, z1 = wp1.x, wp1.y, wp1.z
        x2, y2, z2 = wp2.x, wp2.y, wp2.z
        x3, y3, z3 = wp3.x, wp3.y, wp3.z
        x4, y4, z4 = wp4.x, wp4.y, wp4.z

        # coordinate differences relative to waypoint 4
        a1, b1, c1 = x1 - x4, y1 - y4, z1 - z4
        a2, b2, c2 = x2 - x4, y2 - y4, z2 - z4
        a3, b3, c3 = x3 - x4, y3 - y4, z3 - z4

        # squared-norm differences relative to waypoint 4
        s1 = x1**2 - x4**2 + y1**2 - y4**2 + z1**2 - z4**2
        s2 = x2**2 - x4**2 + y2**2 - y4**2 + z2**2 - z4**2
        s3 = x3**2 - x4**2 + y3**2 - y4**2 + z3**2 - z4**2

        # common 2x2 determinants
        A = a1*b2 - a2*b1
        B = a1*b3 - a3*b1
        E2 = a1*c2 - a2*c1
        E3 = a1*c3 - a3*c1
        D2 = a1*s2 - a2*s1
        D3 = a1*s3 - a3*s1
        delta = A*E3 - B*E2

        w = (A*D3 - B*D2) / (2*delta)
        v = (D2*delta - E2*(A*D3 - B*D2)) / (2*A*delta)
        u = ((b1*E2 - c1*A)*(A*D3 - B*D2) + (A*s1 - b1*D2)*delta) / (2*a1*A*delta)

        return u, v, w
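
    A quick usage sketch (hypothetical, just to illustrate the call; it assumes the waypoints are simple objects with .x/.y/.z attributes):

    Code
    from collections import namedtuple

    Waypoint = namedtuple("Waypoint", "x y z")

    # four flange positions touching the same fixed tip, generated here on a
    # sphere around (100, 200, 50) purely as test data
    wp1 = Waypoint(101.0, 200.0, 50.0)
    wp2 = Waypoint(100.0, 201.0, 50.0)
    wp3 = Waypoint(100.0, 200.0, 51.0)
    wp4 = Waypoint(99.0, 200.0, 50.0)

    print(calc_sphere_center(wp1, wp2, wp3, wp4))  # expect (100.0, 200.0, 50.0)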

    Note that only the XYZ coordinates of our waypoints are used. However, since we also have the orientation, we should be able to use this info to get the sphere center with fewer waypoints. My math approaches so far have not led to a working result. Maybe someone here knows of a paper or technical description of the math behind the process.

  • micahstuh
    Reactions Received
    7
    Trophies
    3
    Posts
    31
    • March 10, 2021 at 4:20 PM
    • #4

    I think you're right. Even though two FRAMEs don't fully constrain a sphere, there should be only one reasonably sized sphere that results in the same XYZ transform from frame to sphere center for both frames.

    I'm going to think about this but hopefully someone has a neat algorithm for solving this on hand.

  • panic mode
    Reactions Received
    1,262
    Trophies
    11
    Posts
    13,030
    • March 10, 2021 at 6:27 PM
    • #5
    Quote

    Yes, I know. The 4-point method (aka center-of-sphere method) is well known. However, some manufacturers have a 3-point method...

    btw. this is a KUKA forum. why are you asking for support here if another robot brand is used? and why not mention the exact brand(s) so we can try to relate?

    the terminology may be different too; you did not explain what "3-point method" is or how it is calculated.

    when looking for an idea or principles to solve something, it makes little sense to just look at a dump of some complex code that represents a working solution. analysis requires understanding of the principles that lead to that solution. this also means that definitions are very important:


    if the "point" is considered to be just a set of translation coordinates in space (without orientation) then all we have are X,Y,Z values.

    two such points can only define line in space.

    three such points can define plane, etc.

    so to define TCP using this, one needs at minimum 4 points and this is what KUKA uses. in this case points are located on the sphere.


    if by "point" you mean frame or larger structure so that it contains XYZ as well as orientation, then number of such elements ("points") can be lower than 4...

    for example frame is representing translation and orientation (6DoF) and can be seen as over-constrained plane (rotation about normal vector is not important). so relaxed interpretation (plane) requires less data than contained in frame and that is ok since it means we do not need all DoF.

    using that definition:

    the intersection of two planes is a line (an infinite number of points).

    the intersection of three planes (could be) a point.

    another option is to consider the intersection of the three normals (find the minimum-distance point, since they will not necessarily intersect).

    and that is how i would start considering this...
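
    a minimal sketch of that last idea (an illustration only, assuming each taught pose gives a point p on the flange and the direction d of the tool axis): find the point with the minimum total distance to all of those lines.

    Code
    import numpy as np

    def closest_point_to_lines(points, directions):
        # least-squares point x minimizing the summed squared distance to the
        # lines p_i + t*d_i: solve sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in zip(points, directions):
            d = np.asarray(d, dtype=float)
            d = d / np.linalg.norm(d)
            M = np.eye(3) - np.outer(d, d)
            A += M
            b += M @ np.asarray(p, dtype=float)
        return np.linalg.solve(A, b)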


  • micahstuh
    Reactions Received
    7
    Trophies
    3
    Posts
    31
    • March 22, 2021 at 8:00 PM
    • Best Answer
    • #6

    I am sorry if this is not a good fit for this forum, but I have solved the problem. It's a bit long-winded, but I've tested this and attached the code, and it works quite well. In fact, I think it's a bit more elegant than solving by sphere fitting.

    A two-point TCP definitely isn't possible. As panic mode suggested, executing this approach with only two samples results in an infinite number of solutions, all falling on a line.

    A TCP can be generated given only three samples. Each sample must contain the A6 flange position and rotation in order to derive the TCP. This is done by solving a system of linear equations, not by fitting points to a sphere, which would require four samples.

    Here is the math:

    Unknowns

    P = teaching center point

    T = XYZ transform

    Givens (three of each, one per sample)

    O = sample coordinate (A6 flange XYZ position at given sample)

    R = rotation matrix of sample origin (A6 flange rotation matrix at given sample)

    3x3 rotation matrix R is in this form:

    [xx xy xz]

    [yx yy yz]

    [zx zy zz]

    A point P can be transformed by T in reference to an existing coordinate system with XYZ O and rotation matrix R:

    Rxx*Tx + Rxy*Ty + Rxz*Tz + Ox = Px

    Ryx*Tx + Ryy*Ty + Ryz*Tz + Oy = Py

    Rzx*Tx + Rzy*Ty + Rzz*Tz + Oz = Pz

    In this case, the three samples O, although in different positions and orientations, should result in the same position P when transformed by an XYZ offset T:

    R1xx*Tx + R1xy*Ty + R1xz*Tz + O1x = R2xx*Tx + R2xy*Ty + R2xz*Tz + O2x = R3xx*Tx + R3xy*Ty + R3xz*Tz + O3x

    R1yx*Tx + R1yy*Ty + R1yz*Tz + O1y = R2yx*Tx + R2yy*Ty + R2yz*Tz + O2y = R3yx*Tx + R3yy*Ty + R3yz*Tz + O3y

    R1zx*Tx + R1zy*Ty + R1zz*Tz + O1z = R2zx*Tx + R2zy*Ty + R2zz*Tz + O2z = R3zx*Tx + R3zy*Ty + R3zz*Tz + O3z

    P is no longer a factor when equating the samples. Further reduce to put into terms of O, and separate the T components:

    R1xx*Tx + R1xy*Ty + R1xz*Tz + O1x + R3xx*Tx + R3xy*Ty + R3xz*Tz + O3x = 2*R2xx*Tx + 2*R2xy*Ty + 2*R2xz*Tz + 2*O2x

    R1yx*Tx + R1yy*Ty + R1yz*Tz + O1y + R3yx*Tx + R3yy*Ty + R3yz*Tz + O3y = 2*R2yx*Tx + 2*R2yy*Ty + 2*R2yz*Tz + 2*O2y

    R1zx*Tx + R1zy*Ty + R1zz*Tz + O1z + R3zx*Tx + R3zy*Ty + R3zz*Tz + O3z = 2*R2zx*Tx + 2*R2zy*Ty + 2*R2zz*Tz + 2*O2z

    R1xx*Tx + R1xy*Ty + R1xz*Tz + R3xx*Tx + R3xy*Ty + R3xz*Tz - 2*R2xx*Tx - 2*R2xy*Ty - 2*R2xz*Tz = 2*O2x - O3x - O1x

    R1yx*Tx + R1yy*Ty + R1yz*Tz + R3yx*Tx + R3yy*Ty + R3yz*Tz - 2*R2yx*Tx - 2*R2yy*Ty - 2*R2yz*Tz = 2*O2y - O3y - O1y

    R1zx*Tx + R1zy*Ty + R1zz*Tz + R3zx*Tx + R3zy*Ty + R3zz*Tz - 2*R2zx*Tx - 2*R2zy*Ty - 2*R2zz*Tz = 2*O2z - O3z - O1z

    Tx(R1xx + R3xx - 2*R2xx) + Ty(R1xy + R3xy - 2*R2xy) + Tz(R1xz + R3xz - 2*R2xz) = 2*O2x - O3x - O1x

    Tx(R1yx + R3yx - 2*R2yx) + Ty(R1yy + R3yy - 2*R2yy) + Tz(R1yz + R3yz - 2*R2yz) = 2*O2y - O3y - O1y

    Tx(R1zx + R3zx - 2*R2zx) + Ty(R1zy + R3zy - 2*R2zy) + Tz(R1zz + R3zz - 2*R2zz) = 2*O2z - O3z - O1z

    Now form a system of equations A * x = B.

    [R1xx - 2*R2xx + R3xx, R1xy - 2*R2xy + R3xy, R1xz - 2*R2xz + R3xz]*[Tx] = [-O1x + 2*O2x - O3x]

    [R1yx - 2*R2yx + R3yx, R1yy - 2*R2yy + R3yy, R1yz - 2*R2yz + R3yz]*[Ty] = [-O1y + 2*O2y - O3y]

    [R1zx - 2*R2zx + R3zx, R1zy - 2*R2zy + R3zy, R1zz - 2*R2zz + R3zz]*[Tz] = [-O1z + 2*O2z - O3z]

    If working from Python, you can use numpy.linalg.solve, or otherwise write a Gaussian elimination routine with partial pivoting. We are solving for x, which will return the X, Y, Z transform.

    I have attached the solution in Python.
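
    For illustration, here is a minimal numpy sketch of that final 3x3 solve (a sketch of the system above only, not the attached file):

    Code
    import numpy as np

    def three_point_tcp(o1, r1, o2, r2, o3, r3):
        # o_i: A6 flange XYZ at sample i, r_i: 3x3 flange rotation matrix at sample i
        # solve (R1 - 2*R2 + R3) * T = 2*O2 - O1 - O3 for the tool offset T
        o1, o2, o3 = (np.asarray(o, dtype=float) for o in (o1, o2, o3))
        r1, r2, r3 = (np.asarray(r, dtype=float) for r in (r1, r2, r3))
        A = r1 - 2.0 * r2 + r3
        b = 2.0 * o2 - o1 - o3
        return np.linalg.solve(A, b)  # (Tx, Ty, Tz) in flange coordinates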

    Files

    calculate3pointtcp.zip 1.62 kB – 105 Downloads
  • SkyeFire
    Reactions Received
    1,038
    Trophies
    12
    Posts
    9,373
    • March 23, 2021 at 3:45 PM
    • #7

    I'm guessing this solution assumes the KUKA Euler sequence of Rz-Ry'-Rx", at least for the conversion between robot coordinates and matrix (at the matrix level, all the Eulers are the same).

    How would you do this as a more general case?

  • micahstuh
    Reactions Received
    7
    Trophies
    3
    Posts
    31
    • March 23, 2021 at 4:04 PM
    • #8

    The only thing that depends on the angle values is the converter used to convert from angles to a rotation matrix. In my Python code, the rot_to_mat function is specifically for KUKA's Tait-Bryan ZYX convention.

    Everything else is universal. All of the math above uses only XYZ coordinates and rotation matrices.
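
    For reference, a minimal sketch of such a converter (an illustrative function, not the one from the attachment, assuming A, B, C in degrees applied as Rz(A) * Ry(B) * Rx(C)):

    Code
    import numpy as np

    def abc_to_matrix(a, b, c):
        # KUKA-style Tait-Bryan ZYX: rotate about Z by A, then Y by B, then X by C
        a, b, c = np.radians([a, b, c])
        rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
        ry = np.array([[ np.cos(b), 0.0, np.sin(b)],
                       [ 0.0,       1.0, 0.0      ],
                       [-np.sin(b), 0.0, np.cos(b)]])
        rx = np.array([[1.0, 0.0,        0.0      ],
                       [0.0, np.cos(c), -np.sin(c)],
                       [0.0, np.sin(c),  np.cos(c)]])
        return rz @ ry @ rx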

  • panic mode
    Reactions Received
    1,262
    Trophies
    11
    Posts
    13,030
    • March 23, 2021 at 4:11 PM
    • #9

    very nice and thanks for sharing.


    btw. about which forum to use... i would think it would have been better to have this topic in

    Robot Geometry, Linear Algebra, Forward and Inverse Kinematics

    rather than here, especially since it is not KUKA specific.

    we could move it there...


  • RoboUser123
    Trophies
    3
    Posts
    7
    • November 1, 2024 at 7:17 PM
    • #10

    Hello micahstuh,

    Thank you for your contribution. As a follow-up question, I'm wondering if you might know:

    In KUKA a 4-point TCP method is used, similar to the 3-point TCP method mentioned. I think the 4th point is used as a kind of redundant point, to get an error estimate of the accuracy of the TCP position.

    Are you familiar with this, and perhaps the involved mathematics?

  • panic mode
    Reactions Received
    1,262
    Trophies
    11
    Posts
    13,030
    • November 1, 2024 at 7:44 PM
    • #11

    don't see how it is redundant...

    two points can define a line in space

    three points can define a plane in space, and anything you do in a plane is only a 2D pattern.

    but to get the location of some unique point in 3D space (not just on the plane, but above or below the plane) we need at least one more point. therefore a general solution must require at least 4 points. and this works. here is a calculator to try out if a robot is not at hand:

    Sphere Solver - Calculator finds the center and radius of a sphere given four points in Cartesian coordinates

    so can someone point to a 3-point method operation (never mind the math behind it) - how does it work in practice? is there some other assumed constraint?


  • Hes
    Reactions Received
    39
    Trophies
    2
    Posts
    235
    • November 1, 2024 at 8:48 PM
    • #12

    Hang on... with 3 points I think it should be possible to calculate a tool frame, but you do not get any error estimate with only 3 points. With 4 you get the error of "fit" as well, as long as the poses are in different orientations, not parallel and so on.

    To get the location of a sphere, 4 points are needed, as 3 do not fully define its position. Holding a ball on 3 points still means the ball can be over or under the points if you visualize the points lying in a horizontal plane.

    But if you do not calculate a sphere (with an error of fit) but instead a single point, 3 poses logically seem doable; 2 poses would logically leave the possible tool frame as an infinite line. I cannot defend this all the way, but do a search for "three-point perspective pose estimation"; it is quite similar, differing only in which part you choose to solve. I know this is not exactly what was asked, but I'm assuming and extrapolating from something I could find quickly.

    From the attached picture, the assumption would be that S1 = S2 = S3, and to solve for that.

    The picture is taken from a paper called "Rotations, quaternions and pose estimation" by Filippo Senzani.

    Images

    • Screenshot_20241101_213728_Adobe Acrobat.jpg
      • 34.71 kB
      • 360 × 800
      • 22
  • SkyeFire
    Reactions Received
    1,038
    Trophies
    12
    Posts
    9,373
    • November 4, 2024 at 4:04 PM
    • #13
    Quote from panic mode

    so can someone point to a 3-point method operation (never mind the math behind it) - how does it work in practice? is there some other assumed constraint?

    My assumption has always been that the robot itself is the assumed constraint. AIUI, the 3pt and 4pt methods are trying to determine the center point of a sphere, with the sphere defined by the Tool 0 position at all 3/4 points. This is the fixed point that the TCP is being touched to. Then, from that point and the 3/4 Tool 0 positions, the measured TCP is reverse-engineered by finding the intersection of the spheres, with the disambiguation values of the robot pose used to throw out the redundant solutions.

  • Koppel
    Reactions Received
    19
    Trophies
    3
    Posts
    132
    • November 8, 2024 at 8:09 AM
    • #14

    I am just curious... The 3-point method micahstuh gave as a solution: does this assume that the 3 points/frames all intersect at the one correct point?

    Because in reality, when teaching the points, they would never be exact; the vectors would never intersect at the one and only correct point, and the equations would not solve.

    3 points on a plane always define a circle... 4 points in 3D space define a sphere. You can be very inaccurate but still get a result.

    But trying to find an intersection point could lead to an unsolvable situation.

    How many GPS satellites do you require as a minimum to get a location... it's 4, right?

  • SkyeFire
    Reactions Received
    1,038
    Trophies
    12
    Posts
    9,373
    • November 8, 2024 at 4:50 PM
    • #15
    Quote from Koppel

    I am just curious... The 3-point method micahstuh gave as a solution: does this assume that the 3 points/frames all intersect at the one correct point?

    It's been way too many years since I did Matrix Algebra, but IIRC these setups find the center of the uncertainty band. Whatever algorithm the robots are using definitely takes the uncertainty into account, as it generates the residual error.

    Quote from Koppel

    How many GPS satellites do you require as a minimum to get a location... it's 4, right?

    No, 3 will do for a 2D position. The 4th is for error reduction and to generate a 3D position. There are, I've been told, some tricks to get a low-accuracy 2D position with just 2 satellites, but that relies on assuming the Earth is a perfect sphere and you're standing on it.

  • panic mode
    Reactions Received
    1,262
    Trophies
    11
    Posts
    13,030
    • November 8, 2024 at 5:15 PM
    • #16

    determining the TCP means finding the values for the translation portion of a frame - which is only X, Y, Z. and to determine the values of three unknowns, at least three linearly independent equations are required.

    so going from first principles, and using KRL notation...
    the starting point for the system of equations can be written as
    P1:T = P2:T = P3:T

    but... one must keep in mind that only the X, Y, Z values of T are to be computed. the A, B, C rotation values of T can be anything - as long as they satisfy the equations. the rotation values A, B, C of the frames P1, P2, P3 do matter, since they are still factors in determining the XYZ of T. with this one can write those equations out and see what it takes to solve them.


  • Puck
    Reactions Received
    3
    Trophies
    2
    Posts
    17
    • November 14, 2024 at 9:08 AM
    • #17

  • Puck
    Reactions Received
    3
    Trophies
    2
    Posts
    17
    • November 14, 2024 at 9:12 AM
    • #18

    Theoretically, three frames would also be sufficient, but with four frames, you can also calculate and specify the quality of the 4-point method.
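
    One way to sketch that (an illustration only, not necessarily what KUKA actually does): stack the pairwise equations from all frames and solve them in a least-squares sense; the residual then serves as a quality figure.

    Code
    import numpy as np

    def tcp_from_frames(origins, rotations):
        # origins: flange XYZ positions, rotations: 3x3 flange rotation matrices (3+ samples)
        # stack R_i*T + O_i = R_j*T + O_j for consecutive pairs, solve by least squares
        rows, rhs = [], []
        for i in range(len(origins) - 1):
            rows.append(np.asarray(rotations[i], dtype=float) - np.asarray(rotations[i + 1], dtype=float))
            rhs.append(np.asarray(origins[i + 1], dtype=float) - np.asarray(origins[i], dtype=float))
        A = np.vstack(rows)
        b = np.concatenate(rhs)
        t, residual, _, _ = np.linalg.lstsq(A, b, rcond=None)
        return t, residual  # residual (sum of squared errors) as a rough quality figure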

  • panic mode
    Reactions Received
    1,262
    Trophies
    11
    Posts
    13,030
    • November 14, 2024 at 2:29 PM
    • #19

    the only thing is that this still uses 4 points, and for that case the problem has been solved. as the title of this topic suggests, the question is how to do it with 3 points... without r4 and R4


  • Puck
    Reactions Received
    3
    Trophies
    2
    Posts
    17
    • November 14, 2024 at 2:59 PM
    • #20

    Yes, that's right :winking_face: With the following system of linear equations, it also works with only three frames or "points".

    Naively, one could now assume that two frames would also be sufficient to determine the three TCP coordinates. There is a "very mathematical" answer as to why this does not work. If you try it with just two frames, you quickly realize that you cannot determine a unique TCP, but only a line in space on which the possible TCPs would lie. Only with a third frame can the TCP be determined.
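
    A quick numerical way to see that (just a sketch): with two frames the system reduces to (R1 - R2) * T = O2 - O1, and R1 - R2 = R1 * (I - R1^T * R2) is always singular, because every rotation keeps its own axis fixed, so the solutions form a line.

    Code
    import numpy as np

    def rot_z(deg):
        a = np.radians(deg)
        return np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0,        0.0,       1.0]])

    def rot_x(deg):
        a = np.radians(deg)
        return np.array([[1.0, 0.0,        0.0      ],
                         [0.0, np.cos(a), -np.sin(a)],
                         [0.0, np.sin(a),  np.cos(a)]])

    # two arbitrary flange orientations
    R1 = rot_z(30.0) @ rot_x(20.0)
    R2 = rot_z(-50.0) @ rot_x(70.0)

    # every rotation has eigenvalue 1 (its axis), so I - R1.T @ R2 is singular
    # and (R1 - R2) * T = O2 - O1 can never have a unique solution
    print(np.linalg.matrix_rank(R1 - R2))  # prints 2, not 3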
