Movement of KUKA

  • Hi


    - Details:
    KUKA KRC2
    Webcam (run in C#)
    First of all, I have done tool calibration (XYZ 4-point) and base calibration (3-point) for my new workpiece.
    Using C#, I can send X and Y coordinates from a captured image that has the same dimensions as the robot's workpiece.
    After I finished the base calibration I got these values: X=257, Y=-221, Z=-36.
    So my question is: if I want to move the robot according to the X and Y coordinates from the picture, must I first position the robot (manually) close to the base calibration values (X=257, Y=-221, Z=-36)? Otherwise one of the 6 axes goes out of its limit. As soon as the robot's initial position is close to this position, it moves correctly according to the values the user sends from the captured image.
    I want to be able to move the robot from any starting position, regardless of whether it is close to the base values or not. How can I do that, and what is causing this problem? Looking forward to your answer.


    Thanks a lot


  • 1. What kind of motion is the robot using to follow the correction from the vision system? It makes no sense for the robot to strike an axis limit unless you pre-position it near the Base origin. One possible fix would be to have the robot program always start from a fixed E6AXIS position above the work area, with all the axes near the center of their ranges -- it's possible that, if you are using too many LIN motions, the wrist axes are "winding up" progressively over multiple cycles until your axis limits are hit.


    2. How is the vision system calibrated to the robot Base? If the two are not using the same origin and orientation (or, alternatively, you have some sort of conversion algorithm), then no Absolute coordinates will work.


    3. Are you using absolute or relative corrections? Are you correcting the Base using the vision offsets, changing points, or some other method?
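
    The fixed-start idea from point 1 can be sketched in KRL roughly like this (the position name and axis values are placeholders, not from the original post; pick angles near the middle of your robot's axis ranges):

```
; Sketch only: begin every cycle from a known axis configuration.
; A PTP to an E6AXIS target fully defines Status and Turn, so the
; wrist cannot "wind up" progressively across cycles.
DECL E6AXIS HomeCentered

HomeCentered = {A1 0, A2 -90, A3 90, A4 0, A5 0, A6 0}

PTP HomeCentered
; ...vision-guided motions follow from here...
```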

  • Thanks, skyfire, for the reply.


    Well, I used PTP motion, but I believe the problem is caused by S and T, which do not correspond to the Cartesian (X, Y, Z) coordinates sent from the image. If I could convert the Cartesian coordinates into the six axis angles, I could see the value of each angle and then figure out whether the coordinate I am sending is out of range.


    The origin of the Base is different from the vision system's, but I believe that even if they are different, the robot can still move anywhere within the limits of the work surface. Also, what do you mean by the same orientation? I measure the X and Y of the robot's work surface and then divide them by the pixel dimensions of the snapshot picture.


  • If you pre-position the robot with the axes properly "centered" in their motion ranges, you can use a LIN motion (which ignores S&T), or simply use a FRAME variable (which has only XYZABC) for the vision motion command. That should eliminate S&T as a factor.


    You've done a pixel-to-mm conversion -- that only takes care of scaling. You still have to handle coordinating the location and orientation of the two frames of reference: the robot Base and the vision system. If these do not share the same origin and orientation, you will not be able to use the vision to guide the robot unless you have some additional conversion algorithm working.


    A Cognex camera, for example, mounted overhead, would have its Z axis nearly anti-parallel to the robot's World Z axis, and the X and Y axes are likely to be skewed by at least some small amount. As such, the X value from the camera might need to be translated into a trigonometric combination of X and Y in the robot. The simplest way to handle this is usually to lay a target grid under the camera, and locate and mark three points on the grid: one located at the origin of the vision system's frame of reference, one located exactly on the X+ axis of the vision frame of reference, and one located in the X+Y+ plane. Then use a TCP-taught pointer on the robot to touch each of these three points in the Base 3-point setup procedure. This will "teach" the robot a Base frame that (if done properly) is exactly matched to the vision frame of reference. After that, if you move the robot along that Base's X or Y axis, the direction should exactly match the camera's X and Y axes. Then all you need is the pixel-to-mm scaling (which you apparently have already done), and the robot should be able to move to any point in the camera's reference frame using the coordinates from the camera.
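
    Assuming the Base has been taught on the vision grid as described, a minimal KRL sketch of using the scaled camera coordinates might look like this (the base number, scale factor, and variable names are illustrative assumptions, not from the thread):

```
; Sketch only: pixel coordinates from the camera, scaled to mm,
; used directly in the Base taught on the vision grid.
DECL FRAME TargetPos
DECL REAL MmPerPixel, PixX, PixY

MmPerPixel = 0.5            ; placeholder: measured field width in mm / image width in pixels
PixX = 320                  ; placeholder values received from the C# application
PixY = 240

BAS(#BASE, 3)               ; activate the vision-matched Base (number 3 is an example)

TargetPos = {X 0, Y 0, Z 0, A 0, B 0, C 0}
TargetPos.X = PixX * MmPerPixel
TargetPos.Y = PixY * MmPerPixel
TargetPos.Z = 5             ; placeholder approach height above the workpiece

LIN TargetPos               ; FRAME target: no S/T, so the wrist configuration cannot conflict
```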

  • Thanks again, skyfire.


    Basically, I have already touched the three points -- the origin, X, and XY -- using the Base 3-point procedure, and the robot can move to almost any point in the camera's reference frame. But the problem is that at the beginning I have to position the robot close to the values I mentioned before; otherwise the robot will not move. I would like to send the coordinates from the camera regardless of where the robot is at that time.
    Could you please give more details or an example of how to use a LIN motion or a FRAME variable?
    See the attachment for how I define the motion for the camera.

  • Honestly, first you need to understand why A2 is being driven out of range. Examine the coordinates of the CameraPosition variable, and maybe try moving to them manually.


    LIN CameraPosition C_DIS


    DECL FRAME CameraPosition = {X 0,Y 0,Z 0,A 0,B 0,C 0}
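
    Putting those two lines together, and pre-positioning with a PTP first, a minimal test routine might look like this (the routine name, axis values, and structure are only a sketch, not the poster's actual program):

```
DEF VisionMove()
   DECL FRAME CameraPosition
   ; Start from a known, centered axis configuration (placeholder angles)
   PTP {A1 0, A2 -90, A3 90, A4 0, A5 0, A6 0}
   CameraPosition = {X 0, Y 0, Z 0, A 0, B 0, C 0}
   ; ...fill CameraPosition.X / .Y from the camera data here...
   LIN CameraPosition C_DIS
END
```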
