camera's coordinates to pick object

  • Hello everyone,
    KSS: V4.1.5, Robot: KR3, Controller: KRC3
    So far I am able to move the robot from the PLC (C#): the C# program sends the move position to the PLC, and the PLC sends it to the KUKA. Profibus is used between the PLC and the KUKA.


    This post relates to the image attached below.
    In short, I have a webcam attached to a stand. For now I am using a chessboard as the working space (pick-up zone). I want the robot to move from the position shown in the picture to pick the object (rectangular boxes in this case) when the object is touched on screen. For that, I am able to get X and Y in mm from the pixel coordinates (since the picture is 2D) when the object is touched.


    Now my difficulties are:


    1. How can I make the robot move to these X and Y coordinates? The coordinates (mm) of the working space are different from the robot's Cartesian coordinates. Do I have to apply some kind of offset on the C# (PLC) side to make the coordinates equal? If so, how? Or should I teach the points to the robot? Or do something with $BASE or $WORLD? :wallbash:


    2. Since I need to pick the box from the top, I am treating the working space as 2D, which means I only have two coordinates (X & Y in mm). How are these X & Y related to the robot's Cartesian XYZABC? Or is there a way to convert them to the joint coordinate system, in degrees?
    In short: how do I make the robot understand the camera's X & Y in mm so that it can move to that point? My problem is establishing the relation between the camera coordinates and the robot; it is not about the programming on the KUKA side or sending values to the robot.


    I hope my description is not confusing. :icon_rolleyes: If it is, please ask. :smiling_face:
    Thank you.


  • This is a typical issue in vision-based robotic systems.


    Basically, you have 3 different coordinate systems which must be aligned somehow: The camera, the "table" (your chessboard, in this case), and the robot.


    The camera's coordinate system consists of an XY grid of pixels, usually starting from one corner of the image (this depends on your vision software). The safe assumption is that you will never get this perfectly aligned with any other coordinate frame by physical adjustment alone. Also, you have to create a conversion between pixels and millimeters, since that changes whenever the height and/or zoom of the camera changes.


    The robot has its own internal coordinate system that is part of its physical construction: $WORLD. Again, it's safe to assume you can never make it perfectly aligned with either of the other two simply by physical adjustment. Fortunately, the robot comes pre-equipped to be taught other coordinate systems.


    Then there's the "table": your chessboard. Usually, the simple way is to treat one corner of the board as the origin of the coordinate system, and make X and Y aligned with the two edges that meet at that corner (which makes Z orthogonal to the surface of the board by default). Generally, the process is to mount your board in its ideal position, then teach the camera and the robot the board's coordinate system.


    For the camera, this means finding the location of that corner in the camera's pixel coordinate system, and some points along the X and Y edges, at a known mm distance from the origin. This will give you enough data to build a trig algorithm that can convert the location of an object at X and Y pixels from the camera image origin, into a location X and Y mm from the chessboard origin.
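
    For example, if you end up doing the conversion on the robot side, it could look something like this sketch (OriginPixX/Y, MmPerPix, and ThetaDeg are hypothetical names for the values you measure in this step, and the sign of the rotation term depends on your image axis convention):

    DECL REAL PixX, PixY             ; touched pixel from the camera
    DECL REAL BoardX, BoardY         ; result in mm, in the chessboard frame
    DECL REAL OriginPixX, OriginPixY ; pixel location of the board's origin corner
    DECL REAL MmPerPix               ; scale, from a known mm distance along an edge
    DECL REAL ThetaDeg               ; rotation of the board edges in the image
    DECL REAL Dx, Dy
    Dx = (PixX - OriginPixX) * MmPerPix
    Dy = (PixY - OriginPixY) * MmPerPix
    BoardX = Dx * COS(ThetaDeg) + Dy * SIN(ThetaDeg) ; KRL trig works in degrees
    BoardY = -Dx * SIN(ThetaDeg) + Dy * COS(ThetaDeg)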


    For the robot, it's quite simple. Temporarily mount a sharp-ish (pencil-sharp, not hypodermic-sharp) pointer firmly to the robot, and teach it as a TCP. Then, use the 3-Point Base function in the robot -- touch the pointer to the chessboard origin, then a point on the chessboard X axis, then a point in the chessboard's positive XY plane. That's it -- now the robot has a Base whose origin and orientation matches that of the chessboard, and you can program points in that base. This is just like the process of coming up with the conversion between the Camera and chessboard coordinate systems, except that the robot has all the math "canned" internally, while for the camera-to-chessboard, you'll need to create your own algorithm.
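
    For example (a sketch, assuming the pointer was stored as tool 1 and the chessboard base as base 1; the coordinates are placeholders), a program can then work directly in chessboard coordinates:

    $TOOL = TOOL_DATA[1] ; the pointer TCP taught above
    $BASE = BASE_DATA[1] ; the chessboard base taught above
    PTP {X 100, Y 50, Z 5, A 0, B 0, C 0} ; 100mm/50mm from the board corner, 5mm above it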


    Once you've taught the robot its relationship to the chessboard, and worked out the conversion formula between camera pixels and chessboard coordinates, that's about all there is to it. You can touch the pointer to the chessboard at certain points with the pointer Tool and the chessboard Base active, locate the pointer tip in the camera, convert from pixels to XY position, and check that against what the robot reports its position as being. Once you have the math correct, these should match up pretty closely (they'll never be perfect, but should be more than close enough).

  • Thank you SkyeFire :applaus: :smiling_face:
    I understand most of it.


    As shown in the attached image below: first, when I want to pick the object, I start the fixed webcam, which shows live video. Then I press the capture button, and it displays the image. The red dot in the image is always the origin of X and Y (0,0); I have marked X and Y in green letters. When I touch any part inside the captured frame, the box on the right shows me the coordinates in mm relative to that origin (0,0), for that fixed camera position [in this case I pressed the rectangular USB stick]. Since the preview frame and the captured image frame are fixed, I believe the conversion will stay valid as long as the camera height remains constant. I will also make sure the chessboard does not move. So in this case I don't have to do anything more with the camera coordinates and the table, right? I only have to take care of teaching the robot points, right?


    Now let's move on to teaching the robot TCP. Before teaching, I will make sure the height is adjusted, the chessboard fills the image (no floor visible), the camera is fixed, and the pixel-to-mm conversion is updated accordingly.



    Quote from SkyeFire:
    For the robot, it's quite simple. Temporarily mount a sharp-ish (pencil-sharp, not hypodermic-sharp) pointer firmly to the robot, and teach it as a TCP. Then, use the 3-Point Base function in the robot -- touch the pointer to the chessboard origin, then a point on the chessboard X axis, then a point in the chessboard's positive XY plane. That's it -- now the robot has a Base whose origin and orientation matches that of the chessboard, and you can program points in that base.


    Will you please show a small example of how to teach the TCP, so that when I send those X & Y from the PLC the robot will know them and move to those points? How do I use the 3-point base function? Or will you please explain step by step how to teach the points? I think for now my base is {0,0,0}. I have never taught a point to the robot, but I assume it is done by pressing Touch Up after a PTP is complete? But what will my PTP command be? And do I have to assume somewhere that $WORLD = $BASE, or something similar?


    Thank you.


  • TCP setup is detailed in the manual. Briefly: go under the SETUP or CONFIG menu (I don't have a KRC handy to check against); there should be a sub-menu for Tool & Base. You'll need to do Tool first, then Base.


    Setting up the Tool, you'll need to use the 4-point XYZ method. This will teach the XYZ position of the tool, but not do anything for the ABC rotations. That should be enough, if you're careful.


    4-Point TCP setup works by stepping through the menu prompts. You need to touch the tip of the pointer to the same point in space from 4 different orientations. Usually we attach a pointer to the table, and use its tip as the fixed spatial point. A pencil on the table, and a pencil on the robot, will serve, as long as their mountings are solid. If they move during your measurement, it will add so much error that you'll need to re-mount and start over.


    Once you have the pointer TCP set up correctly, you can use the 3-point Base setup method, under the same menu tree.

  • Thank you SkyeFire for the reply.


    I went under Setup > Measure > Tool > XYZ 4-Point, and then I saw something like picture 1 for tool 1. First I got the chance to select a tool, and I pressed Tool OK. Then it says "Line up tool from different direction", and under that I get 3 options: Move to pt, Point OK, Close. When I press Move to pt, it says "Target is not valid". When I press Point OK, it shows what you see in picture 2.
    Then there is a menu with two options, Repeat All and Repeat. But I am not able to move the robot or change the XYZ coordinates through this procedure. :wallbash:


    Will you please explain again how to teach the TCP with the 4-point XYZ method after this step? :waffen100:
    And will you tell me which manual explains teaching points? In Expert Programming it's just a one-page explanation, and it doesn't help.



    Thank you.


  • The Programming Manual for System Integrators explains this and just about everything else. It is the #1 manual to have. Even if it is for a different controller, the workflow is the same.


  • Thank you panic mode and SkyeFire again.
    Well, I can now jog the robot and do the tool calibration and base calibration. Not correctly yet, but at least I know the procedure now.
    Still a few questions:


    1. When doing the TCP 4-point XYZ calibration, do we move/teach the points with the jog keys set to the joint, tool, or base coordinate system? Say you teach by jogging in the tool coordinate system; in the comment above you mentioned the method does nothing with ABC. Then how is it possible to touch the tip from 4 different directions without rotating the axes?


    2. Same for the base: when we calibrate the base, should we jog in the base coordinate system, move the axes (joints), or use the world coordinate system? It will have a different effect, right?


    And let's summarize my setup once again. I have a camera. It captures an image, and in the image the pixel coordinates start at (0,0) and run positive in X and Y up to a maximum of 319x239 px, which is always the same because the image is always inside the frame and the frame is fixed. When I press inside the image, the pixel coordinates are converted into mm, giving X and Y in mm according to my conversion, with the camera height and zoom kept constant. (Please see the picture above.)


    Then we need to define the workspace, which is the chessboard. So if I calibrate the base by teaching the origin to be the same as the camera's origin, plus the X and Y points, this will make the camera's mm coordinates and the robot's mm coordinates the same, right? Meaning: if I touch a point in my picture, the X and Y shown will be the same as the X and Y shown by the robot's Cartesian coordinates when I move the robot to that point, right?
    And I didn't understand what the tool (TCP) calibration does in my case. When we calibrate the tool, the reference point can be anywhere inside the workspace (chessboard), right? And what about the height of the pencil above the surface, and the height of the tool mounted on A6? In the end I need to pick a rectangular object off the surface, so do I need to care about height when teaching the TCP? If not, how will the robot later move to that position (especially to that height) to pick the object when I touch it? Also, the tool mounted on A6 can be changed later, right? For the 4-point calibration I want to use a sharp tool, but later I need to use a different one to pick the object.



    Thank you. Many questions, but maybe some have similar answers. :sleeping_face:


  • 1. Misunderstanding. The 4-point XYZ method only sets the XYZ coordinates of the tool, not ABC. During calibration you have to change the orientation; in which coordinate system you jog does not matter. The robot calculates the coordinates correctly.


    2. The same as for 1. It does not matter in which coordinate system you jog the robot, but the correct tool must be selected at the beginning of the base calibration.

  • As Hermann says, which coordinate system you jog in is irrelevant -- the only requirement is that the physical location be correct when you set the point.


    Brief technical digression: the robot "knows" where the "Zero Tool" (the center of the Axis 6 mounting flange) is at all times, the same way that you know where your own hand is, even with your eyes closed -- that flange is a part of the robot, and its relationship to all the axes cannot be changed.


    However, the robot knows nothing about what you attach to the A6 flange, except for what you tell it (using $TOOL and/or the TOOL_DATA array). If you attach a tool (for example, a pencil) to the A6 flange, and touch the tip of the tool to the same point in space from 4 very different directions, the robot will inherently know the position of the Zero Tool for each of those four orientations. As a side effect, the Zero Tool will define four points on the surface of a sphere, centered on the fixed reference point you are touching the tool to. As long as that reference point is fixed, the robot does not need to know where it is: with 4 points on the surface of a sphere, the robot can mathematically reverse-engineer the XYZ dimensions of the tool, because there is only one tool offset that will work for all 4 Zero Tool positions. Of course, this process is slightly "noisy," so the robot will do its best to average out the error for each point. But if you are sloppy about touching the reference point at all 4 locations, you may generate more noise in the calculation than the algorithm can tolerate.
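
    In equation form (a sketch: R_i and o_i are the rotation and position of the flange at touch-up i, t is the unknown tool offset, and p is the fixed reference point), each touch-up asserts:

    o_i + R_i * t = p    (i = 1..4)

    Subtracting pairs of these equations eliminates the unknown p, leaving (R_i - R_j) * t = o_j - o_i, which the controller solves for t in a least-squares sense, averaging out the touch-up noise.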


    As for what the Tool does for you: in order to define a Base using the 3-point method, the robot must have an accurate tool TCP first. The Base is defined by where you move the Tool, so if the Tool is not defined, attempting to define the Base is useless.


    In defining either Tool or Base, having a precise physical tool is important -- using a flat-tipped pencil will give you greater "noise" in the accuracy of the measurements than using a sharp pencil.


    To relate the coordinate systems to each other, there are multiple methods, but all the frames must agree. If you set up the camera to use one corner of the chessboard as the origin, one edge as the X axis, and another edge as the Y axis, then you must teach the robot to use the same points and edges as the same origin and axes. That will get both the camera and robot using the "same map," so to speak.


    Creating the pixel-to-mm conversion will take an additional step, but it shouldn't be too hard. Then it will depend on where you perform the conversion while in operation -- will you do it in the computer that the camera is connected to, or inside the robot? It works either way. You can do the math anywhere along the chain, as long as the conversion formula is accurate and the conversion is performed before being assigned to a robot motion command.

  • THANK YOU Hermann and SkyeFire :dance2:
    Well, apparently it looks like I did the tool and base calibration correctly :smiling_face:. The X and Y in mm from the camera seem to be equal to the X and Y in mm of the robot (not 100%, but at least sufficient for now). :fine:



    Quote from SkyeFire:
    Creating the pixel-to-mm conversion will take an additional step, but it shouldn't be too hard. Then it will depend on where you perform the conversion while in operation -- will you do it in the computer that the camera is connected to, or inside the robot?


    Yes, I did it on the external computer, and I want to send the result to the robot (via the PLC) so that the robot can go pick up the object when I press inside the image -- with PTP input_fromCAMERA C_PTP.
    But I still have a few doubts... Below I have attached the final values of the tool and base calibration.


    1. Since I have X and Y values in mm to send to the robot, I should move the robot in Cartesian coordinates, right? (Example: input_fromCAMERA.X = XInput & input_fromCAMERA.Y = YInput, then PTP input_fromCAMERA C_PTP)


    2. I only get two values, X and Y (when I press inside the image), to move in Cartesian coordinates. What should I send for the values of Z, A, B, C? Should I send constant values for them? And Z should be 0, right? I want A6 to point vertically down when it is about to pick the object in the workspace (chessboard). How can I set up A6 so that it points vertically down when it is about to pick the object, wherever it moves on the chessboard?


    3. What do those values of XYZABC for the tool and XYZ for the base mean now?
    Thank you.


  • Well, if you do something like this:
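
    DECL E6POS PositionFromCamera
    $TOOL = TOOL_DATA[1]
    $BASE = BASE_DATA[1]          ; the common robot/vision base
    PTP StartPosition             ; a taught start position above the chessboard
    PositionFromCamera = $POS_ACT ; copy the current pose, including its ZABC
    PositionFromCamera.X = XInputFromCamera ; X value from the camera, in mm
    PositionFromCamera.Y = YInputFromCamera ; Y value from the camera, in mm
    PTP PositionFromCamera C_PTP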


    In this example, the camera only controls X and Y. When the robot moves to PositionFromCamera, it retains whatever ZABC values StartPosition had.


    In general, in KRL, if you give a LIN or PTP command a position (POS, E6POS, FRAME, AXIS, E6AXIS) that only has some members of the Structure variable filled in, the robot simply retains whatever the last position was for the members that are left undefined. So, in the above example, if StartPosition had a Z value of 50mm, then the robot would still be at 50mm in Z after moving to PositionFromCamera.


    In this application, you're not going to bother with B or C, beyond setting them correctly at StartPosition, simply b/c your vision system has no way to measure for corrections around those axes. You may end up making A adjustments, since your vision system can make measurements of rotations around the Z axis. But just getting X and Y working first is the way to start.


    The value of your Tool should be unchanging, once you've done the calibration. Your Base value should also be permanent, but you may find it useful to make adjustments to a copy on the fly. This example does the same thing the previous one does, but by changing $BASE, instead of making a point:
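
    DECL FRAME ShiftFrame
    $TOOL = TOOL_DATA[1]
    ShiftFrame = {X 0, Y 0, Z 0, A 0, B 0, C 0}
    ShiftFrame.X = XInputFromCamera   ; shift in X from the camera, in mm
    ShiftFrame.Y = YInputFromCamera   ; shift in Y from the camera, in mm
    $BASE = BASE_DATA[1] : ShiftFrame ; shift a working copy; BASE_DATA[1] stays untouched
    PTP PickupPosition C_PTP          ; the taught "perfect zero" pickup position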


    In this case, PickupPosition needs to be pre-programmed for the "zero position" pickup -- that is, the correct pickup position if/when the corrections from the camera are all 0 (or so tiny as to be practically 0). In this case, you teach a "perfect 0" position for the robot, and then shift that position around to follow the camera by shifting $BASE mathematically using the vision data. This shift works by taking BASE_DATA[1] as its reference origin (that's why it's on the left side of the Geometric Operator ":"), and applies the values in ShiftFrame before putting the shifted results into $BASE. It's important to ensure that BASE_DATA[1] never gets altered, as that's your permanent reference frame -- instead, you apply the shifts to the "working copy" in $BASE.

  • Thank you SkyeFire.
    I tried the first method:

    DECL E6POS PositionFromCamera
    $BASE = BASE_DATA[1] ; activate common robot/vision base
    $TOOL = TOOL_DATA[1]
    PositionFromCamera = $POS_ACT
    PositionFromCamera.X = XInputFromCamera ; get X value from camera
    PositionFromCamera.Y = YInputFromCamera ; get Y value from camera
    PTP PositionFromCamera C_PTP


    At first it worked, but later it didn't. Sometimes there is an error saying "work envelope exceeded", sometimes "Ax out of range", and sometimes "start movement inadmissible".


    I think this is due to a change in base and tool. When I go to the current tool and base option, I set current tool 1 and base 1. But later it changes automatically to 0, and when I open my .src program and check the current tool and base, it is not 1.
    Although we have done $BASE = BASE_DATA[1] and
    $TOOL = TOOL_DATA[1], it does not show me tool and base [1] inside my program when I try to execute the PTP motion. :wallbash:
    Even if I change it to 1, it automatically changes back to 0. :waffen100:
    In my declaration section I can see DECL FDAT fPositionFromCamera={TOOL_NO 1,BASE_NO 1,IPO_FRAME #BASE}, but I still cannot execute PTP PositionFromCamera C_PTP.


    So how do I get rid of this problem? How can I make my base and tool 1 inside my program, or before executing my PTP? :frowning_face:


    Thanks


  • You could read the messages that the robot displays and remedy whatever they say the problem is -- for example program speed, approximation, etc.


  • Hi panic mode, thank you for the reply. That's not quite what I meant, but I managed to get something working.
    I want to ask some more questions, though.


    1. How do I select a proper workspace? In the picture in my post above, I calibrated the tool and base using my chessboard as the workspace. Now I can move the robot to the origin and along the X axis, the Y axis, or to XY points. However, there are some points inside the chessboard that the robot cannot reach when moving in Cartesian coordinates (because of the A4 or A1 axis limits), yet when I jog the robot in joint coordinates it can reach any point inside the chessboard. The easiest solution I can think of right now is to change the workspace (shift the chessboard) and re-calibrate the base, but there might be another way? When I try to jog the robot along the base at X=0, Y=0, at some point it stops because A4 is beyond its limit.


    2. Why does the robot accept soft axis end limits ($SOFTN_END[]/$SOFTP_END[]) beyond the hard limit? For example, for the KR3 the A4 limit is +/-180 degrees, so why does the robot accept it if I set the limit even wider, say -185? And if I run it, is it going to crash?


    Thank you.

  • "Work space exceeded" isn't about a particular workspace -- instead, it means that the robot has been given a command to move to a location that is beyond the arm's physical reach.


    This is separate from an "axis limit" error. There are positions which the robot can reach with one particular set of axis angles, but potentially not with another.


    PTP motions are predictive for these errors, but LIN/CIRC motions are not. Basically, a LIN motion is made up of many, many, very very small PTP motions, and the "look ahead" for these errors is just the next PTP motion segment. So if you order the robot to an unworkable position using PTP, the error will be generated before the robot starts moving. Change the PTP to a LIN, and the robot will try to make the move, and keep moving until it physically hits a limit.


    Axis limit errors are complicated b/c you can "wind up" the wrist axes during LIN motions until you hit a condition where a position that is entirely reachable, cannot be reached by LIN from the robot's current position. If you encounter cases like this, it's best to use PTP motions, perhaps with E6AXIS positions rather than E6POS, to "unwind" the axes before beginning a LIN motion. This is most often achieved by using the system HOME position at the beginning and end of each program run.
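
    A minimal sketch of such an unwind move (the axis values are placeholders; pick ones that are safe in your cell):

    DECL E6AXIS Unwind
    Unwind = $AXIS_ACT ; start from the current axis angles
    Unwind.A4 = 0      ; unwind the wrist axes toward mid-range
    Unwind.A6 = 0
    PTP Unwind         ; an axis-space move, immune to LIN "wind-up" limits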


    Another technique is to use PTP motions whenever possible. This usually allows the motion planner to pick out the least difficult path between Points A and B -- however, if the wrist has been "wound up," this can result in the wrist "unwinding" unpredictably when the path planner detects that it's needed. So you want to do this well clear of everything.
    Often, how we do this is to make all the moves above the table PTP motions, and make only the "pick/drop" motions LIN. That is, generate your Pickup position from the vision, mathematically generate an "Above Pick" position from that, then make the Above->Pick->Above moves LINs, and all the others (say, HOME->Above) PTPs. This creates minimal opportunity for the LIN motions to "wind up" the wrist axes.
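
    As a sketch (assuming PickupPosition was built from the vision data as above, with a hypothetical 100mm approach height; HOME is your taught safe home position):

    DECL E6POS AbovePick
    AbovePick = PickupPosition
    AbovePick.Z = AbovePick.Z + 100 ; approach point 100mm above the pick, along base Z

    PTP HOME
    PTP AbovePick C_PTP ; free motion above the table
    LIN PickupPosition  ; straight-line descent to the pick
    ; close the gripper here
    LIN AbovePick       ; straight-line retreat
    PTP HOME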
