Convert spatial coordinates into a specific robot programming language

  • Hello robot experts,


    I hope someone can explain to me, in general, how a set of coordinates that describes a path in space can be converted into coordinates for a specific industrial robot language.


    For example, with a KUKA KRC4 robot: I have seen YT videos in which Werner explains what the TCP is, how to calibrate the TCP, what the Home position and the World coordinate frame are, and so on.

    But suppose there is some other software tool that does the path planning, and as output we get a set of spatial coordinates as well as the reference coordinate frame with respect to which the coordinates are calculated.


    Now, imagine I need to program a real-world robot to move along this path. Since the base and TCP frames have to be calibrated, there will be a transformation matrix that changes all the spatial coordinates produced by the path planning software.


    So, is this done automatically in a KUKA robot (or in an industrial robot in general), or does the programmer need to know the TCP and base coordinate frames and apply the transformation in the path planning software to obtain new coordinates before entering the movement commands in the robot programming language?


    I hope you understand my question. In theory classes, and in many academic robotics books, there are chapters about path planning that can be used to learn the algorithms. Once these algorithms are applied and the coordinates are produced, what comes next? How do you enter this data into a real-world robot?


    As an example, I have attached a picture of the output of a path planning SW with coordinates and the reference frame.


    Thank you.

  • I hope you understand my question.

    Not really.

    "Path planning" is normally done in the robot system. But there is also a kind of 'path planning' that the robot user/programmer has to do: set the positions the robot has to move through, so that it can do its job and no collisions occur.

    The first one (done in the robot system) is calculating the robot axis positions during the movement to the various positions 'planned' by the robot user (since there are not only linear movements, but also circles, splines, ...).

    how to enter this data to a real world robot?

    That depends on the robot system. No universal answer to that question.

  • OK, if you do the path planning in another software, some things have to be clear:


    1. the coordinate origin differs between robot manufacturers: KUKA and ABB have it at the center of the base on axis 1; FANUC has it on axis 2 with axis 1 in the 0° position
    2. the path planning is always structured as point-to-point, linear, and circular movements
    3. each manufacturer has different fly-by zones, sometimes speed-dependent

    Once you have figured that out, you can generate movement instructions based on your path planning. This has to be done in each robot manufacturer's own language: ABB has RAPID, FANUC has KAREL, KUKA has KRL with inline forms, and so on.
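    To make that concrete, here is a minimal Python sketch (not a full postprocessor) that turns a list of planner XYZ points into KUKA-style LIN instructions. The constant A/B/C orientation and the exact KRL details (formatting, C_DIS approximation) are assumptions for illustration; check them against the controller documentation.

```python
# Minimal sketch: turn planned XYZ points into KUKA-style LIN instructions.
# The fixed A/B/C orientation and the exact KRL syntax (C_DIS approximation)
# are assumptions; check them against the controller documentation.

def points_to_krl(points, a=0.0, b=90.0, c=0.0):
    """points: iterable of (x, y, z) tuples in mm; returns KRL source lines."""
    lines = []
    for x, y, z in points:
        lines.append(
            f"LIN {{X {x:.2f}, Y {y:.2f}, Z {z:.2f}, "
            f"A {a:.2f}, B {b:.2f}, C {c:.2f}}} C_DIS"
        )
    return lines

for line in points_to_krl([(500.0, 0.0, 400.0), (500.0, 100.0, 400.0)]):
    print(line)
```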


    Also, don't forget the orientations. I see only points in your path, but no orientations.


    An example point for ABB looks like this:


    Code
    CONST robtarget pExample:=[[1648.54,-1438.01,1106.93],[0.709321,0.704869,-0.00352078,0.00333264],[-1,-2,0,0],[9E+09,9E+09,9E+09,9E+09,9E+09,9E+09]];

    with the first bracket [x,y,z] as the translation,

    the second bracket [q1,q2,q3,q4] as the rotation in quaternions,

    the third bracket [c1,c2,c3,c4] as the configuration, because robots can reach the same point in different poses (multiple solutions are possible),

    and the fourth bracket as the external axes [e1,e2,e3,e4,e5,e6].
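    If your path planning runs outside the controller, a small script can format such targets for you. A hypothetical Python sketch (the function name and the hard-coded configuration data are illustration-only assumptions; on a real robot the configuration depends on the pose):

```python
# Sketch: format an ABB RAPID robtarget declaration from a position (mm) and
# a unit quaternion. The configuration data and unused external axes are
# fixed placeholders here; on a real robot they depend on the pose.

def robtarget(name, xyz, quat, conf=(-1, -2, 0, 0)):
    x, y, z = xyz
    q1, q2, q3, q4 = quat
    ext = ",".join(["9E+09"] * 6)  # 9E+09 marks unused external axes
    return (f"CONST robtarget {name}:="
            f"[[{x},{y},{z}],[{q1},{q2},{q3},{q4}],"
            f"[{conf[0]},{conf[1]},{conf[2]},{conf[3]}],[{ext}]];")

print(robtarget("pExample", (1648.54, -1438.01, 1106.93),
                (0.709321, 0.704869, -0.00352078, 0.00333264)))
```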


  • As others have stated, every robot brand uses a different "notation" for representing a 6DOF (XYZ and Rx, Ry, Rz) point in space. The free version of RoboDK is pretty good for dabbling in this. Most robots use a specific Euler rotation convention, although ABB uses quaternions.


    So, for any 3rd-party software generating paths for robots, a postprocessor unique to that robot is required. This is similar to generating G-Code for CNC machines from CAD/CAM software.


    Now, if we limit the discussion to 3DOF (XYZ), it gets simpler -- you simply have to ensure your reference frames are aligned between the software and reality, and your units are correct. It's not hard to create a simplistic script to convert a series of XYZ points into a string of robot commands. But since one of the reasons for using a robot is to take advantage of its 6DOF flexibility, such simple scripts are of very limited utility.
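    As a sketch of that frame-alignment step: assuming the planner-to-base transform T_B_P is known (from calibration, or from measuring the planner frame in robot coordinates), re-expressing the points is one matrix multiplication. This is illustration only, not any vendor's API:

```python
import numpy as np

# Sketch: re-express planner points, given in the planner frame P, in the
# robot base frame B via a homogeneous transform T_B_P. How you obtain
# T_B_P (calibration, measuring the planner frame on the robot) is up to you.

def transform_points(T_B_P, points_P):
    """points_P: (N, 3) array-like in frame P; returns (N, 3) in frame B."""
    pts = np.asarray(points_P, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (homog @ T_B_P.T)[:, :3]

# Example: planner frame shifted 100 mm along the base X axis, no rotation.
T_B_P = np.eye(4)
T_B_P[0, 3] = 100.0
print(transform_points(T_B_P, [[0.0, 0.0, 0.0]]))  # the point lands at x = 100
```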


    This gets more complex in execution, because while the software can generate a series of points, each robot decides to a degree on its own how to move between those points. The output of your postprocessor can (and should) include parameters for velocity, approximation, acceleration, etc, but the robot still gets a vote. The simplest example is speed: if you command the robot to move from Point A to Point B at 2m/s, but the robot cannot physically achieve that speed, it'll simply do the best it can.

  • if you command the robot to move from Point A to Point B at 2m/s, but the robot cannot physically achieve that speed, it'll simply do the best it can.

    Have you worked with Motoman robots before? My experience with "excessive segment" errors is somewhat contrary to that statement. If I am wrong, sorry. I have no particular love for those robots.

  • Have you worked with Motoman robots before? My experience with "excessive segment" errors is somewhat contrary to that statement. If I am wrong, sorry. I have no particular love for those robots.

    I haven't had the "pleasure" :puke: , and user testimonials like this reassure me that I'm not missing anything.

  • What you can do is pick an offline programming (OLP) software, brand-specific or brand-agnostic, feed in the data from your path planning SW, and let the OLP confirm whether the path is feasible from a physical standpoint (reach, singularities, joint limits) and generate the actual robot program in a format "understandable" to the controller.

    You will have plenty of other "details" to worry about; you can skip the kinematics and post-processing aspects for a cost much lower than the time it would take to develop such a tool yourself.
    But obviously, if your time has no "value" (say it's a hobby), nothing is impossible and you can create such a tool.


    Jeremy

    RoboDK - Simulation and Offline programming software for industrial and collaborative robots.


  • Thank you for your replies.

    I'm studying an academic book about robotics. It can be found here, and it has its own wiki page.

    Modern Robotics - Northwestern Mechatronics Wiki

    In this book there is a chapter about path planning, and there are even examples such as the trapezoidal profile. In such cases the problem is solved by determining the coefficients of the polynomials used to describe the paths.

    In any case, the end result is coordinates on the path. The book I mentioned and the Coursera courses associated with it use Python or MATLAB as programming tools (for transformation matrix multiplication, etc.).

    So if I use Python to solve for coordinates on a specific path and then visualize them, I can save them.


    I have already played a bit with creating a KUKA KRC4 program that defines movements (linear and PTP), and I saw how this is done in Werner's YT videos.


    But the real question for me is the following. When I have the coordinates (only positions, since no path planning example in the academic books uses orientations for the points on the path) and the reference frame with respect to which these points are generated, how would I proceed, for example, with a KUKA KRC4 robot?


    The KUKA robot will have its own base and tool coordinates, so I wonder how to use this data to recalculate the coordinates I previously saved.


    I have no real practical experience with industrial robots; I know only what I could read in academic books on the subject of robotics.

  • If you know what the robot joint positions are (and you know the transform of each joint), you can calculate the resulting position at the end of the arm. This is called the forward transform (and it is the more straightforward of the two).


    To do the opposite, you need to use the inverse kinematic transform. With this you can calculate the joint angles when the position is known. Again, this requires knowing the robot architecture, because it has a direct effect on all the transforms.


    KSS has suitably named functions for this: forward() and inverse()
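    To see the idea without a full 6-axis model, here is a toy Python sketch on a planar 2-link arm. The link lengths are arbitrary assumptions, and a real robot has more joints and multiple inverse solutions; this only illustrates the forward/inverse relationship:

```python
import math

# Toy sketch of the two transforms on a planar 2-link arm (not a real 6-axis
# robot): forward() maps joint angles to the tool position, inverse() maps a
# position back to joint angles. Link lengths l1, l2 are arbitrary assumptions.

def forward(t1, t2, l1=0.4, l2=0.3):
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def inverse(x, y, l1=0.4, l2=0.3):
    # one of the two solutions; a real arm has several (cf. ABB's config data)
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

t1, t2 = inverse(0.5, 0.2)
print(forward(t1, t2))  # recovers (0.5, 0.2) up to rounding
```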


  • ... Now, if we limit the discussion to 3DOF (XYZ), it gets simpler -- you simply have to ensure your reference frames are aligned between the software and reality, and your units are correct. It's not hard to create a simplistic script to convert a series of XYZ points into a string of robot commands....

    The robot reference frames are stored in base_data[x], where x is the number of the base frame. So you have to align one of the base frames with your path planning frame, and you are done.

    The same goes for the tool frames; they are stored in the variable tool_data[x].

    But you always need an orientation for every position; you can set it to a constant value.

    Activating a tool or base frame is done with the commands

    Bas(#base, x)

    Bas(#tool, x), where x is the number of the base or tool.
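    If the path-planning frame is given as a 4x4 matrix in robot world coordinates, the X, Y, Z, A, B, C values for such a base entry can be extracted as below. This is a hedged sketch: it assumes KUKA's A, B, C are Z-Y'-X'' Euler angles in degrees, which you should verify against the controller documentation.

```python
import math
import numpy as np

# Sketch: extract X, Y, Z, A, B, C for a base entry from a 4x4 homogeneous
# transform of the path-planning frame expressed in robot world coordinates.
# Assumes KUKA's A, B, C are Z-Y'-X'' Euler angles in degrees; verify this
# convention against the controller documentation before relying on it.

def frame_to_xyzabc(T):
    x, y, z = T[:3, 3]
    a = math.atan2(T[1, 0], T[0, 0])
    b = math.atan2(-T[2, 0], math.hypot(T[0, 0], T[1, 0]))
    c = math.atan2(T[2, 1], T[2, 2])
    return [float(x), float(y), float(z),
            math.degrees(a), math.degrees(b), math.degrees(c)]

T = np.eye(4)
T[:3, 3] = (100.0, 200.0, 50.0)
print(frame_to_xyzabc(T))  # identity rotation -> A = B = C = 0
```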
