Posts by DrG

    Hi Giuseppe Capellacci,

    Quote

    I am writing a TCP/IP server and a TCP/IP client using java sockets. Actually, when I use the robot as the server, I am able to establish a connection using my laptop as client. But when I try the opposite task (robot-client and laptop-server), it says "Connection timed out: connect".


    Hm... the communication should work in both directions ("symmetrically").


    I would check the network/firewall settings on the laptop, since a lot of those firewall systems distinguish between outbound and inbound traffic - aka: who is the server and who is the client.
    Therefore, when you open the socket ports in your firewall, you may have to configure "incoming" and "outgoing" rules in separate views.


    For a simple test: deactivate the laptop's firewall for the duration of your test, and rerun the client/server programs.



    A quick glance at your code: it seems OK, except for one thing:
    I personally would not close the server socket (server.close()) directly after the first connection of the client ("server.accept()"), but keep the server running in an endless loop, re-entering "accept()" as soon as the client finishes - this saves a lot of manual "restart" work.


    Therefore: in your program, the server MUST be started before the client - and restarted each time a client connects...
    ... you could reduce this pain with an accept loop, e.g. as sketched below.
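    A minimal sketch of such an accept loop in plain Java (class name and port are just examples, not taken from your code):

    Code
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class EchoServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(30000)) {
                while (true) {                               // endless loop: no manual restart needed
                    try (Socket client = server.accept()) {  // blocks until a client connects
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(client.getInputStream()));
                        String line;
                        while ((line = in.readLine()) != null) {
                            System.out.println("received: " + line);
                        }
                    }  // the client socket is closed here; the server socket stays open
                }
            }
        }
    }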


    Hope that helps,
    DrG

    Hi Beginner,

    Quote

    Seulki: but have you called a stopRecording() method at the end?


    Seulki is right...


    you need to finish/stop the recording somehow (refer e.g. to Section 15.24.4 in the manual).


    Either you call it manually ("stopRecording()"),
    or you trigger it via the actions,
    or you finish the application,
    or the tracing depth of the recorder is reached.


    See also Section 15.26.6 - example for data recording.
    There, "rec.stopRecording()" is called
    AND the application is synchronized ("rec.awaitFileEnable()").
    Note: this helps especially for troubleshooting...
    Further hint for troubleshooting: inquire the state of the recorder via the "rec.isXXX" methods - isEnabled, isRecording, isFileAvailable.
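    For illustration, a minimal troubleshooting sketch using the calls named above ("rec" is an already configured DataRecorder as in the manual's example; I quote the method names from memory, so please verify them against Sections 15.24/15.26.6):

    Code
    // force start/stop manually, bypassing the StartRecordingAction mechanism
    rec.startRecording();
    getLogger().info("enabled: " + rec.isEnabled() + ", recording: " + rec.isRecording());

    // ... run the motion to be recorded ...

    rec.stopRecording();
    // synchronize: wait until the file is written (TimeUnit from java.util.concurrent)
    rec.awaitFileAvailable(10, TimeUnit.SECONDS);
    getLogger().info("file available: " + rec.isFileAvailable());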


    DrG

    Hello g.dechambrier,


    Refer e.g. to https://community.cisco.com/t5…h-dup-results/td-p/977909 for troubleshooting.



    DrG

    Hi Beginner,
    another question:
    in StationSetup.cat - is the "Sunrise Data Recording" feature checked/selected?


    DrG

    Hi Beginner,

    Quote

    I did it and just copied the example :-) but it did not write anything out...


    That does not sound cool...
    ... hm... the example at the end of chapter 15.24 (shortly before 15.25) should work - I had a quick glance at it.


    Q: Where did you look for the data files?


    For troubleshooting, you can "manually" force the start/stop of the recording by directly calling "rec.startRecording()" / "rec.stopRecording()".
    AKA: bypass the StartRecordingAction mechanism.



    DrG

    Hello Beginner,


    please have a look into the Help of the Sunrise Workbench -> Chapter 15.24



    DrG
    PS: Sorry - no more time for details...

    Hello Vale,
    Well, 630 points per spline lead to considerable computational effort.
    Especially as you use a high Cartesian speed AND, in the same instance, reduced joint speeds...

    Quote

    Spline planetarySpline = createPlanetarySpline(startingFrame).setJointJerkRel(1).setCartVelocity(900);
    planetarySpline.setJointVelocityRel(0.70);


    Question: do you really require the 900 mm/sec?
    Background: this speed will - with a high probability - not be reached in this kind of scenario...
    ... but it leads to contradictions in the optimization problem - especially if the joint speeds are reduced as well...
    ... unfortunately, this draws computational power - which you perceive as delays.


    Advice: reduce the Cartesian speed - e.g. to 100 mm/sec (or lower) - and remove the "setJointVelocityRel()" constraint - then try again.
    That way, the computationally expensive contradictions (desired target speed vs. machine joint speed constraints) are less probable.
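    For illustration, your snippet with this advice applied (a sketch only - createPlanetarySpline and startingFrame are from your code, "robot" stands for your LBR instance):

    Code
    Spline planetarySpline = createPlanetarySpline(startingFrame)
            .setJointJerkRel(1)
            .setCartVelocity(100);  // moderate Cartesian speed, e.g. 100 mm/sec
    // no setJointVelocityRel() call - one contradicting constraint less
    // for the trajectory optimizer
    robot.move(planetarySpline);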


    If this does not help, you could try to use a set of splines with fewer segments per spline and combine/blend them via batch() or moveAsync().


    DrG

    Hello again,

    Quote

    I am wondering how the commanded camera velocity out of visual servoing algorithm can be realized. The stability of the systems are usually shown assuming perfect tracking of the camera velocities. Thus how the stability is preserved!?


    Well, as you already stated, right there is the incorrect assumption:
    in the feedback control loop of a "real world" visual servoing system,
    you need to consider (model) the transfer functions of ALL participants, especially that of the robot (and its interpolator). As a simple model, you could model the robot - due to its inertia - at least as a PT2 element (proportional gain with second-order delay). (Wikipedia link - sorry, in German, but the pictures are nice: https://de.wikipedia.org/wiki/PT2-Glied
    an English link: https://hackaday.io/page/4829-…on-of-a-damped-pt2-system)
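    For reference (standard control engineering notation, nothing robot-specific): a PT2 element has the transfer function G(s) = K / (T²·s² + 2·d·T·s + 1), with gain K, time constant T and damping d - for the robot, d and T would have to be identified, e.g. from a step response.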


    In fact: it is more an issue of "control loop performance" than of "stability", as you design your visual servoing controller...
    ... since the feedback gains are required to fit the true loop -> which results in lower feedback gains (to provide stability) or better models to increase the gains again (to improve performance)...


    DrG


    PS: I can just repeat my practical hints from an earlier post:


    It is essential to know exactly where the robot was located when the picture was taken.
    Advice: sample the robot's position data "shortly BEFORE" exposure as a proper estimate
    (hint: go for the measured position).

    Don't forget: this scenario is feedback control (!) - that means, since the camera is robot-mounted, any movement of the robot has consequences for the image - and every image has consequences for the robot's movement...
    ... in fact, the control loop is very likely to become unstable - or will show limit cycles.

    Hi Wang Zhiyang,


    Sorry - I don't know about the Reflex Hand...
    ... but I stumbled over your networking setup:


    Quote

    Now the ip address for Reflex Hand is 10.1.1.10, while the ip address for Kuka through the ROS package is set to 160.69.69.100. How to coordinate them?


    What about getting both IP addresses into the same subnet, so that you avoid fiddling around with overly large subnet masks?


    E.g. you could configure the Reflex Hand into the subnet of the KUKA robot controller, e.g. 10.1.1.10 -> 160.69.69.10.
    Or vice versa, e.g. 160.69.69.100 -> 10.1.1.100.
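    (With a common /24 mask, i.e. 255.255.255.0, 160.69.69.10 and 160.69.69.100 then lie in the same 160.69.69.0/24 subnet and can talk to each other directly - no extra routes needed.)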


    Background: in your current configuration, you would need a veeery open subnet mask (or other manual routing configurations) if both participants need to communicate with each other, which makes it very uncomfortable to use w.r.t. the Internet - or other tools... And such a subnet (or other routing techniques) will have horribly evil side effects.


    DrG

    Hello Hsadeghian,


    Quote

    one other problem I just noticed:
    in smartServoSampleSimpleCartesian example a sinusoidal motion is set for the TCP in Z direction. but looking at the motion of the robot you can find that it does not follow exactly the given trajectory. It is supposed to move in only z direction but the robot moves in x and z direction not precisely.


    Well - if my memory serves me right - the example is based on the "SmartServo", not the "SmartServoLIN" motion.
    Therefore the interpolation is computed joint-wise -> "the motion model behaves PTP-ish". Your reported behaviour is therefore explainable - and matches the expectations. If you would like to run straight lines, you need to switch to "SmartServoLIN", which performs a Cartesian interpolation.
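    A minimal sketch of the switch (class and method names quoted from the Sunrise samples from memory, "lbr" being your LBR instance - please verify against the SmartServoLIN sample in your Workbench):

    Code
    // start the LIN servo motion from the current Cartesian flange position
    AbstractFrame start = lbr.getCurrentCartesianPosition(lbr.getFlange());
    SmartServoLIN lin = new SmartServoLIN(start);
    lbr.getFlange().moveAsync(lin);
    ISmartServoLINRuntime runtime = lin.getRuntime();
    // destinations are now interpolated in Cartesian space, so a commanded
    // pure Z offset yields a straight line in Z only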


    DrG

    Hello hsadeghian,


    Quote

    Here is my plan...
    I have connected a camera -mounted on flange, to a remote computer to do all the image processing on the remote computer. on the robot computer side, I am going to use smart servo.. based on the provided sample program "SmartServoSampleSimpleCartesian".
    The remote computer is going to receive the information (for instance pose of the robot) from the robot and send proper commands back to the java program over a UDP connection.


    essentially, Is that a proper plan to do that?


    That sounds like a proper plan.

    Quote


    How can I establish this connection?


    On the KUKA Sunrise controller, the TCP/IP and UDP ports 30000-30010 are reserved for jobs like this. That means: if you open a socket on one of those ports, you can run the communication to the external computer right over the KLI.
    You will find many tutorials on "how to use UDP/TCP/IP sockets with Java" on the Internet.
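    A minimal sketch of the robot-side receiver in plain Java (the port number is just an example from the reserved range; the message format is up to you):

    Code
    // open a UDP socket on one of the reserved ports (30000-30010)
    DatagramSocket socket = new DatagramSocket(30001);
    byte[] buffer = new byte[512];
    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
    socket.receive(packet);  // blocks until the remote computer sends a datagram
    String command = new String(packet.getData(), 0, packet.getLength());
    // ... parse the command and feed it into the SmartServo runtime ...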


    DrG


    PS: Some practical hints:


      • It is essential to know exactly where the robot was located when the picture was taken.


      • Advice: sample the robot's position data "shortly BEFORE" exposure as a proper estimate
        (much better than using the position AFTER the image has been read into the main memory of the image-processing CPU - especially if you use webcams via USB)


      • Don't forget: this scenario is feedback control (!) - that means, since the camera is robot-mounted, any movement of the robot has consequences for the image - and every image has consequences for the robot's movement...
        ... in fact, the control loop is very likely to become unstable - or will show limit cycles.
        In my case, this happened right at the very instant my boss had his first look at the new application :icon_wink:. The robot tracked the target nicely - the target stopped, for whatever reason, and the robot moved in limit cycles with an amplitude of ~2-3 cm.

    Hello arthur_delafin,


    Short question: did you try to press the start button a second time?


    Notion: the first time you press the start button, the robot performs a "BCO" run - which means it moves to the position where it left the programmed trajectory.
    The second time you press the start button, it continues to travel along the programmed path.


    DrG

    Hi barely_sane,


    from a quick glance at your code,

    Code
    @Inject
    	Tool tool;
    	LBR robot;


    I would expect that the NullPointerException stems from a missing "@Inject":


    Code
    @Inject
    	Tool tool;
    @Inject  // <<<< this @Inject was missing - without it, "robot" is not initialized, which can lead to the reported NullPointerException
    	LBR robot;



    Greetings,
    DrG

    Hello g.dechambrier,



    Hm... losing the command session every 2 min -> I guess this is the source of the problem you already described in your initial post.


    Quote


    I am on a laptop Ubuntu 16.04 machine


    Standard Ubuntu - without the Linux realtime patch - is an "all purpose" system without any measures taken to provide/keep realtime guarantees.


    So - whenever any system part, like the file indexer, an IT security suite, a firewall, OR whatever other process...
    is scheduled,
    => the FRI client or its data transfer (TCP/IP stack) is NOT scheduled (aka: blocked)
    => consequence: the realtime condition is violated
    => and therefore you are losing the FRI command mode.


    Strong advice: switch to an RTOS (realtime OS) if you want to pace the system at 1-2 msec.


    Proposal:


      • You could use the Linux realtime kernel (e.g. the PREEMPT_RT patch) - which I assume should be available for Ubuntu as a package.

      • AND additionally, you configure the FRI client process to run with realtime priority,

      • AND you make sure that the TCP/IP stack is configured properly - aka the FRI port is given priority - according to the Linux/Ubuntu rules (which I don't know, but I guess the Internet does),

      • such that you obtain realtime quality.


    Otherwise, the only workaround is to reduce the pace (as you already reported) to mitigate the issue.


    DrG


    PS: You might investigate your existing Linux installation further, aka: look for any process that is scheduled every 2 min - and thereby kills the FRI client's realtime condition...
    ... but this approach is VERY cumbersome... and - since there is NO realtime guarantee on a standard Linux - the probability of failure stays extremely high
    -> especially at the very moments where you truly don't want it (e.g. important people watching, whom you ask for money to conduct further research).
    Murphy's law states that the probability of failure correlates with the importance of the people watching the experiment.
    (Personally, I would use the Linux RT kernel to avoid the risk.)

    Hello hrcTsn,

    Quote


    Does anyone have any experience on the topic, e.g. common issues with setup not related to the FRI side but the Simulink side?


    well - I've played around with Simulink and FRI...
    ... to cut a long story short: at some point, I had to stop due to lack of time...
    ... essentially: you must control the Simulink simulation runtime, else you will fail the realtime condition.
    ... OK, the initial steps are relatively simple and straightforward:


      • create an S-Function block (advice: use Matlab's legacy_code tool as much as possible).
        In fact: write three extern "C" functions like "myFRI_init", "myFRI_step", "myFRI_close", calling the FRI Client SDK accordingly.
        For the FRI Client SDK, you create/set up the application, like "SimulinkApp; SimulinkClient".
        Then you can use the Mathworks-provided legacy_code tool to generate the C functions required for the Simulink/S-function plumbing
        (and the legacy_code tool can generate the *.tlc files, which are required for the realtime execution model).

      • compile the S-Function block for Matlab using mex (-> on Windows resulting in mexw32/mexw64 binaries).
        Note: the legacy_code tool helps a lot...

      • then you can use it from Simulink - aka on pressing "Play", the FRI connection is opened.

        !!!! BUT: you are not "out of the box" in a realtime execution environment !!!


      • Due to the threading (execution) policy of Simulink, the realtime conditions for FRI are not (always) guaranteed...
        I've obtained the best results using the runtime modes "Accelerator"/"Rapid Accelerator" or "External" mode;
        all of those have their own issues.
        Otherwise, you should use the Realtime Workshop/Embedded Coder and generate the code for an external target (like "xPC Target" or similar, for a realtime-capable host meeting the FRI requirements).
        In that case, the thing should work out fine.


    CAVEATS with the Simulink/Mathworks threading model:
    modify the receive parameters of the FRI Client SDK to use the timeout feature,
    ELSE the Simulink GUI thread is blocked !!!! AND you have to kill MATLAB via the task manager OR send FRI packets to the Matlab process (there is - especially in the "Normal" mode of a Simulink simulation - NO separation of the threading contexts).

    Code
    // create the UDP connection with a blocking-receive timeout, so a
    // missing datagram cannot freeze the Simulink GUI thread
    connection = new UdpConnection(blockingReceiveTimeoutInMsec);


    Sorry, I could not help much more - or provide a working downloadable solution...
    ... don't hesitate to ask...
    DrG

    Hello g.dechambrier,


    essentially you are on the right track...
    ... but you need to keep the contract for "WRENCH" mode; in fact, you need the CartesianImpedanceControlMode to use "WRENCH" mode.


    Proposal - you could do s.th. like the following sketch:
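    (A hedged sketch only - the class and method names are quoted from the FRI samples from memory, "lbr"/"friSession" being your robot and FRI session; please verify against your FRI example sources:)

    Code
    // "WRENCH" command mode requires the robot to run in Cartesian impedance control
    CartesianImpedanceControlMode ctrlMode = new CartesianImpedanceControlMode();
    ctrlMode.parametrize(CartDOF.TRANSL).setStiffness(500.0);  // example stiffness

    FRIJointOverlay overlay = new FRIJointOverlay(friSession, ClientCommandMode.WRENCH);

    // hold the position under impedance control while FRI overlays the wrench
    lbr.move(new PositionHold(ctrlMode, -1, TimeUnit.SECONDS).addMotionOverlay(overlay));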




    DrG

    Hello g.dechambrier,


    Quote

    In my FRIClient app, I am doing some work before the app.step() function. My current theory is that I am not consistently calling app.step() at the desired rate (2 milliseconds in my case), which leads to this error (the "Possible connection problem").


    Hm... this is exactly the case.
    In fact, the FRI controller side CANNOT distinguish the reasons for exceeded computing time -> a broken realtime condition. To the FRI controller side, it always looks like a "connection problem", since the data stream does not arrive at the programmed pace (no matter whether there is an FRI client-side programming error OR there are networking issues). In both cases, the datagrams are too late.


    In consequence: you suspected correctly. If the realtime pace of the incoming FRI channel datagrams (the UDP telegrams sent from the FRI client) does not meet the realtime condition (in your case, one datagram every 2 msec),


      • then the FRI connection quality drops,

      • AS SOON AS it drops to a too bad quality,

      • the FRI overlay motion is forcibly stopped, and the FRI session's state machine is set to "MONITOR_WAIT".


    If you would like to dig a bit deeper, you may attach an FRI session listener on the Java/RoboticsAPI side, e.g. like:



    // FRISession friSession;

    friSession.addFRISessionListener(new IFRISessionListener() {

        @Override
        public void onFRISessionStateChanged(
                FRIChannelInformation friChannelInformation) {
            getLogger().info("Session state change: "
                    + friChannelInformation.getFRISessionState().toString());
        }

        @Override
        public void onFRIConnectionQualityChanged(
                FRIChannelInformation friChannelInformation) {
            getLogger().info("Quality change signalled: " + friChannelInformation.getQuality());
            getLogger().info("Jitter: " + friChannelInformation.getJitter());
            getLogger().info("Latency: " + friChannelInformation.getLatency());
        }
    });



    DrG

    Hi Tammoj,
    use the "Injection" mechanism, and you should be fine:


    In your class BackgroundTask (-> BackgroundTask.java), add the following:



    public class BackgroundTask extends RoboticsAPIBackgroundTask {

        @Inject
        private ITaskLogger _logger;

        @Override
        public void run() {
            // ... your background logic, using _logger as shown below ...
        }
    }



    If you use the Eclipse mechanisms to insert the code,
    the import statements are resolved automatically; you should then find a list of imports like the following:



    import javax.inject.Inject;
    import com.kuka.task.ITaskLogger;


    After that, you can use the logger as you are already used to from the RoboticsAPIApplication:



    _logger.info("Hello World");



    Greetings,
    DrG
