Visual servoing with KUKA iiwa

  • Dear all,
    I am going to use a KUKA iiwa for a visual servoing scenario.
    Here is my plan:
    I have connected a camera, mounted on the flange, to a remote computer that does all the image processing. On the robot side, I am going to use SmartServo, based on the provided sample program "SmartServoSampleSimpleCartesian".
    The remote computer is going to receive information from the robot (for instance, the pose of the robot) and send appropriate commands back to the Java program over a UDP connection.


    Essentially, is that a proper plan?
    How can I establish this connection?
    Do I have to use the KLI for client-server programming?


    Thanks in advance.

  • Hello hsadeghian,


    Quote

    Here is my plan:
    I have connected a camera, mounted on the flange, to a remote computer that does all the image processing. On the robot side, I am going to use SmartServo, based on the provided sample program "SmartServoSampleSimpleCartesian".
    The remote computer is going to receive information from the robot (for instance, the pose of the robot) and send appropriate commands back to the Java program over a UDP connection.


    Essentially, is that a proper plan?


    That sounds like a proper plan.

    Quote


    How can I establish this connection?


    On the KUKA Sunrise controller, the TCP/IP and UDP ports 30000-30010 are reserved for jobs like this. That means: if you open a socket on one of those ports, you can run the communication to the external computer directly via the KLI.
    You will find many tutorials on the Internet about how to use UDP/TCP sockets with Java.
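
    As a minimal sketch of the robot-side part (plain java.net, nothing Sunrise-specific; the port choice and the message format are assumptions for illustration):

    Code
    import java.io.IOException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    // Sketch: one receive/reply cycle over UDP on a reserved Sunrise port.
    public class PoseServer {
        public static void main(String[] args) throws IOException {
            DatagramSocket socket = new DatagramSocket(30000); // reserved range: 30000-30010
            byte[] buffer = new byte[508];
            DatagramPacket request = new DatagramPacket(buffer, buffer.length);
            socket.receive(request); // blocks until the remote computer sends a datagram
            String command = new String(request.getData(), 0, request.getLength());
            System.out.println("received: " + command);
            // Reply with the current pose (placeholder payload; in a real Sunrise
            // application this would be filled from the robot's measured position).
            byte[] reply = "POSE x y z a b c".getBytes();
            socket.send(new DatagramPacket(reply, reply.length,
                    request.getAddress(), request.getPort()));
            socket.close();
        }
    }

    On the remote computer, the counterpart is an ordinary UDP client that sends its command datagrams to the robot's KLI address and waits for the pose replies.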


    DrG


    PS: Some practical hints:


      • It is essential to know exactly where the robot was located when the picture was taken.


      • Advice: sample the robot's position data shortly BEFORE exposure as a proper estimate
        (much better than using the position AFTER the image has been read into the main memory of the image-processing CPU - especially if you use webcams via USB); see the sketch after this list.


      • Don't forget: this scenario is feedback control (!) - that means, since the camera is robot-mounted, any movement of the robot has consequences for the image, and every image has consequences for the robot's movement...
        ... in fact, the control loop is very likely to become unstable - or will show limit cycles.
        In my case, this happened at the very moment my boss got his first look at the new application ;). The robot tracked the target nicely - then the target stopped, for whatever reason, and the robot moved in limit cycles with an amplitude of ~2-3 cm.
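
    A minimal sketch of that sampling order (the camera calls are hypothetical placeholders; getCurrentCartesianPosition() is the Sunrise call I would expect for the measured flange pose, so treat the exact signature as an assumption):

    Code
    // Latch the measured pose BEFORE triggering exposure, then pair it with the image.
    // "camera" and its methods are made-up placeholders; only the ordering matters.
    Frame poseAtExposure = lbr.getCurrentCartesianPosition(lbr.getFlange());
    camera.triggerExposure();           // hypothetical: start image acquisition now
    Image img = camera.fetchImage();    // readout/transfer happens AFTER the latch
    sendToRemoteComputer(img, poseAtExposure); // hypothetical: ship the pose/image pair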

  • Thank you for the informative reply.
    I will try iiwa_stack as well.


    One other problem I just noticed:
    In the SmartServoSampleSimpleCartesian example, a sinusoidal motion is set for the TCP in the Z direction, but looking at the motion of the robot you can see that it does not follow the given trajectory exactly. It is supposed to move only in the Z direction, but the robot moves imprecisely in both X and Z.


    I increased "MILLI_SLEEP_TO_EMULATE_COMPUTATIONAL_EFFORT" to give the controller time to reach the given destination, yet no change is observed. I also increased the values

    Code
    aSmartServoMotion.setJointAccelerationRel(1.0); // relative joint acceleration (1.0 = maximum)
    aSmartServoMotion.setJointVelocityRel(1.0);     // relative joint velocity (1.0 = maximum)


    but no particular improvement is seen.
    Why?
    Thanks

  • Hello Hsadeghian,


    Quote

    One other problem I just noticed:
    In the SmartServoSampleSimpleCartesian example, a sinusoidal motion is set for the TCP in the Z direction, but looking at the motion of the robot you can see that it does not follow the given trajectory exactly. It is supposed to move only in the Z direction, but the robot moves imprecisely in both X and Z.


    Well - if my memory serves me right - the example is based on the "SmartServo" motion, not on "SmartServoLIN".
    The interpolation is therefore computed joint-wise -> the motion model behaves "PTP-ish". Your reported behaviour is thus explainable and matches expectations. If you want to run straight lines, you need to switch to "SmartServoLIN", which performs a Cartesian interpolation; see the sketch below.
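
    A sketch of the switch (class and method names as in the Sunrise servoing API, but treat the exact signatures as assumptions and check them against your API version; the velocity limits are placeholders):

    Code
    // Cartesian (linear) servoing instead of joint-wise SmartServo.
    Frame startFrame = lbr.getCurrentCartesianPosition(lbr.getFlange());
    SmartServoLIN linMotion = new SmartServoLIN(startFrame);
    linMotion.setMaxTranslationVelocity(new double[] { 0.1, 0.1, 0.1 }); // m/s, assumed per-axis limits
    lbr.getFlange().moveAsync(linMotion);

    ISmartServoLINRuntime runtime = linMotion.getRuntime();
    // Inside the servo loop: command new Cartesian set points; the controller
    // interpolates linearly in Cartesian space towards each destination.
    runtime.setDestination(nextFrame); // nextFrame: the next set point from your loop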


    DrG

  • Thanks,
    I also tried the SmartServoLIN sample. It is more precise, but there is still a considerable difference between the commanded trajectory and the real one, even after increasing the acceleration and velocity limits:
    ServoMotion.setMaxTranslationAcceleration(…);
    ServoMotion.setMaxTranslationVelocity(…);


    I am wondering how the camera velocity commanded by the visual servoing algorithm can be realized. The stability of these systems is usually shown assuming perfect tracking of the camera velocities. So how is stability preserved!?

  • Hello again,

    Quote

    I am wondering how the camera velocity commanded by the visual servoing algorithm can be realized. The stability of these systems is usually shown assuming perfect tracking of the camera velocities. So how is stability preserved!?


    Well, as you already stated, right there is the incorrect assumption:
    In the feedback control loop of a "real world" visual servoing system,
    you need to consider (model) the transfer functions of ALL participants, especially that of the robot (and its interpolator). As a simple model, you could model the robot - due to its inertia - at least as a PT2 element (proportional gain with second-order delay). (Wikipedia link - sorry, it is in German, but the pictures are nice: https://de.wikipedia.org/wiki/PT2-Glied ;
    an English link: https://hackaday.io/page/4829-…on-of-a-damped-pt2-system)
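
    For reference, the standard form of such a PT2 element (gain K, damping d, time constant T) is

    G(s) = \frac{K}{T^2 s^2 + 2 d T s + 1}

    which gives oscillatory step responses for d < 1 - exactly the kind of behaviour that shows up in the loop.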


    In fact, as you design your visual servoing controller, it is more an issue of "control loop performance" than of "stability"...
    ... since the feedback gains are required to fit the true loop -> which results in either lower feedback gains (to preserve stability) or better models that allow you to raise the gains again (to improve performance)...
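
    As an illustration of that gain trade-off, here is a minimal sketch of the classical proportional visual servoing law v = -lambda * L+ * e (the pseudo-inverse of the interaction matrix L+ and the feature error e come from the image processing side; lambda is the gain you lower to stay stable on the real, delayed loop - this is the generic textbook form, not a Sunrise API):

    Code
    // Classical proportional visual servoing law: v = -lambda * Lpinv * e.
    // Lpinv is the 6 x n pseudo-inverse of the interaction matrix and e the
    // n x 1 feature error, both assumed to come from the image processing.
    static double[] cameraVelocity(double[][] Lpinv, double[] e, double lambda) {
        double[] v = new double[6]; // [vx, vy, vz, wx, wy, wz]
        for (int i = 0; i < 6; i++) {
            double sum = 0.0;
            for (int j = 0; j < e.length; j++) {
                sum += Lpinv[i][j] * e[j];
            }
            v[i] = -lambda * sum; // smaller lambda -> slower but more robust loop
        }
        return v;
    }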


    DrG


    PS: I can only repeat my practical hints from an earlier post:


    It is essential to know exactly where the robot was located when the picture was taken.
    Advice: sample the robot's position data shortly BEFORE exposure as a proper estimate
    (hint: go for the measured position).

    Don't forget: this scenario is feedback control (!) - that means, since the camera is robot-mounted, any movement of the robot has consequences for the image, and every image has consequences for the robot's movement...
    ... in fact, the control loop is very likely to become unstable - or will show limit cycles.
