• Hi,
    I'm a student. During my internship I'm using a force/torque sensor connected via Ethernet to a KRC 4 (KSS 8.5), with RSI 4.0 exchanging data between the sensor and the robot controller. The sensor works very well, and the manufacturer provides some KRL functions for it. However, we want to add our own KRL functions.
    After searching the RSI documentation, I understand that we have to define our own RSI context; after that, all we have to do is "trigger" the RSI connection using RSI_CREATE, RSI_ON, etc. (as if RSI and the KRL code work separately).
    The problem is that among the existing objects in RSIvisual, we didn't find one that would be useful for our new functions (we need to do some matrix multiplications, and the RSIvisual math library is limited, so doing this graphically is very difficult).
    This is why we want to transfer the sensor data (the output data of the ETHERNET object) to our KRL code, do the calculation in KRL, and send the result back to the sensor. Is it possible to do that? With RSI, is there any KRL function to access the received and sent data (like the functions of EthernetKRL)?
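    In KRL itself, the matrix math is straightforward. A minimal sketch of a 3x3 matrix-vector multiply (MatVec3 and all its parameters are made-up names for illustration, not from any KUKA library; note that KRL arrays can only be passed as OUT parameters):

    DEF MatVec3(M[,]:OUT, V[]:OUT, R[]:OUT)
      ; R = M * V for a 3x3 matrix -- illustrative helper only
      DECL REAL M[,], V[], R[]
      DECL INT i, j
      FOR i = 1 TO 3
        R[i] = 0.0
        FOR j = 1 TO 3
          R[i] = R[i] + M[i, j] * V[j]
        ENDFOR
      ENDFOR
    END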
    I hope the attached image explains my problem better. Thank you in advance.

  • You can modify the ETHERNET object and add more inputs and outputs to it.

    Why don't you start with the program example from the manual? It has everything to get you going. Play with it, then incrementally add your modifications.

    Basically, create the RSI context graphically. This produces three files (I hope it is the same on KSS 8.5).
    Then you place those files on the controller in the right locations. The file name is what you use in the KRL program when calling RSI_CREATE().


  • (note: my experience with RSI is v3.x, so some of my knowledge could be out of date)

    What kind of transform do you need to do? I don't have my docs available ATM, but I'm fairly certain the RSIvisual math library includes a couple of 6DOF transform function blocks, both "general" and specialized for Tool/Base transforms.

    If you absolutely must transfer data from the RSI context into either the Level 1 (regular) or Level 0 (SPS) interpreters, I can think of two ways: the $SEN_PREA system variables, or the I/O system. There's no other way that I can recall off the top of my head to cyclically transfer data between a KRL interpreter and the RSI container(s).

    There's not much "bandwidth" between the RSI context and the KRL context -- they can both read and manipulate the I/O table, but IIRC there are no RSI objects to read $OUTs, only to set them (could be wrong). But the $SEN_PREA variables have read/write access from both contexts, unless I'm forgetting something.
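    As a rough sketch of that $SEN_PREA round trip (assuming, purely for illustration, that the RSI context writes the sensor value to $SEN_PREA[1] and reads a correction back from $SEN_PREA[2] -- the indices and the calculation are placeholders):

    DECL REAL SensorVal, Result
    LOOP
      SensorVal = $SEN_PREA[1]   ; written cyclically by the RSI container
      Result = SensorVal * 0.5   ; placeholder for the real calculation
      $SEN_PREA[2] = Result      ; read back by the RSI container
    ENDLOOP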

    Your program in the KRL interpreter would have to be able to keep up with the RSI container, which could potentially be an issue depending on your RSI cycle rate. The good news there is that a well-designed state machine in KRL should be easily able to do this -- the potential joker in the deck is the signal update timing. $OUTs are essentially interrupt-driven, but $INs are only read once every IPO cycle (12ms for the L1 and L0 interpreters, but RSI can run at higher rates in certain modes). The $SEN_PREAs... I'm not sure what controls their timing, it would be an interesting experiment. Safest bet would be to run your RSI on the same 12ms clock rate and avoid the issue entirely.

    The KRL interpreters and the RSI containers are, effectively, separate contexts -- it wouldn't be too far off to consider them separate threads on the same processor, with certain shared resources. RSI operates much "closer to the metal," as it were, tying inputs almost directly to motion without many of the abstractions that exist between KRL and the motion planner. This is what makes RSI so powerful, but also potentially hazardous -- RSI is "working without a net," essentially "below" many of the protections KRL provides.

  • Thank you SkyFire and panic mode for your quick responses.
    There is a slight difference between RSI 3 and 4: the configuration is not the same anymore, and the file extensions have changed. With RSI 4.0 we have only the .rsix context file and, of course, the .src program. But the principle is still the same.

    I already saw the Ethernet server example. In general, in the KRL code there are only the RSI functions, which supports what SkyFire said ("it wouldn't be too far off to consider them separate threads on the same processor, with certain shared resources").
    I have just tested $SEN_PREA for the ETHERNET output and it works very well. Thank you again!

    I want to implement other functions, like continuous gravity compensation for the sensor, and more. I know there is the ForceTorqueControl plug-in and the ZEROGRAVTRAFO object, but because there is an academic part to my internship, I want to write my own functions to test and learn the principles.
    My first goal is to understand how far I can go with RSI, and why not define my own control law instead of dragging and dropping RSIvisual objects whose internals I don't know (I think I'm dreaming here, because I don't think KUKA gives us access to parameters that are that "close to the metal", but that is a topic for another thread).
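    For the gravity-compensation part, the core calculation is just subtracting the tool's weight, rotated into the sensor frame, from the measured force. A minimal sketch (Fmeas, Gsens, ToolMass are illustrative names; computing Gsens from the current flange orientation is left out):

    DECL REAL Fmeas[3], Fcomp[3], Gsens[3]
    DECL REAL ToolMass
    DECL INT i
    ToolMass = 1.2   ; kg, example value
    ; Gsens[] = gravity vector (0, 0, -9.81) rotated into the sensor frame,
    ; e.g. using the inverse of the current flange orientation
    FOR i = 1 TO 3
      Fcomp[i] = Fmeas[i] - ToolMass * Gsens[i]
    ENDFOR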

    SkyFire, is there a KRL function to measure the $SEN_PREA cycle time (similar to MATLAB's tic/toc)?


  • The only way I can think of to test the refresh timing of the $SEN_PREAs would be to create an IPO_FAST RSI container and set it up to change the $SEN_PREA values at something faster than 12ms. Then create an .SRC program that loops as quickly as possible and logs the $SEN_PREA values every loop. In an uninhibited loop, KRL code executes in something under 50ns per line, IIRC, but each interpreter only executes for 2-4ms out of every 12ms (again, IIRC).

    I don't think the $TIMERs or WAITs can be reliably used in increments below 12ms, so I wouldn't rely on them for controlling program timing. Instead, create a very large STRUC array in the .DAT file, and pre-seed it with null values:

    STRUC LogArrayStruc INT TimeVal, REAL SENPREA
    DECL LogArrayStruc LogArray[1000]
    LogArray[1] = {TimeVal 0, SENPREA 0.0}
    LogArray[2] = {TimeVal 0, SENPREA 0.0}
    ; fill all 1000 entries -- this is easiest to generate using Excel

    Then, in your KRL program, once the RSI container is running:

    DECL INT Index
    $TIMER_STOP[1] = FALSE  ; make sure the timer is running
    $TIMER[1] = 0
    FOR Index = 1 TO 1000
      LogArray[Index].TimeVal = $TIMER[1]
      LogArray[Index].SENPREA = $SEN_PREA[1]
    ENDFOR

    The idea here is to examine the "beat frequency" between the KRL and the RSI. If, for example, your RSI is changing $SEN_PREA every 4ms, you should see roughly three different SENPREA values for every TimeVal value in LogArray. If the Level 1 interpreter "checks" the $SEN_PREA values only once every 12ms, then the SENPREA values will repeat along with the TimeVal values. OTOH, if the Level 1 interpreter isn't limited to a 12ms refresh of the $SEN_PREA variables, you should see something like:

    LogArray[1] = {TimeVal 24, SENPREA 123.456}
    LogArray[2] = {TimeVal 24, SENPREA 456.789}
    LogArray[3] = {TimeVal 24, SENPREA 987.543}
    LogArray[4] = {TimeVal 36, SENPREA 543.210}

    Basically, if the logged values of $SEN_PREA only change when the logged $TIMER values do, then the Level 1 interpreter can't "see" any changes in $SEN_PREA that happen faster than once every 12ms. OTOH, if the logged $SEN_PREA values change faster than the $TIMER values, that's a very strong indicator that the L1 interpreter can see $SEN_PREA update as fast as the RSI container can make the changes.
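    Once the log is filled, one way to evaluate it is to count how often SENPREA changed between consecutive entries versus how often TimeVal did. A sketch (the 0.0001 tolerance is an arbitrary choice):

    DECL INT Idx, PreaChanges, TimerChanges
    PreaChanges = 0
    TimerChanges = 0
    FOR Idx = 2 TO 1000
      IF ABS(LogArray[Idx].SENPREA - LogArray[Idx-1].SENPREA) > 0.0001 THEN
        PreaChanges = PreaChanges + 1
      ENDIF
      IF LogArray[Idx].TimeVal <> LogArray[Idx-1].TimeVal THEN
        TimerChanges = TimerChanges + 1
      ENDIF
    ENDFOR
    ; PreaChanges much larger than TimerChanges would indicate that
    ; KRL sees sub-12ms updates of $SEN_PREA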

  • I tested your idea today. I did the calculation in a subprogram, and this is the result:
    All we can guarantee here is that $SEN_PREA changes faster than 12ms (the refresh time of the timer),
    which means that we have no problem and we can keep digging.
    What is confusing is that, even with a subprogram, the FOR loop cycle time is not constant, which makes the subject more challenging.

  • Well, it's mostly historical. Robot programs have almost always been dominated by motion time, so the non-motion execution speed of program code was generally far less important than maintaining the timing relationship between the program code, motion commands, and the hardware.

    So, KRCs have been running on a 12ms internal clock cycle for at least 25 years now (they've only recently started to provide higher rates in certain circumstances). And in that 12ms "pie", every sub-task got a fixed "slice", usually 2-4ms. The L0 interpreter, the L1, the motion planner, the I/O refresh: each of them got its fixed slice of time, once every 12ms. And inside its "slice", each task was allowed to run as fast as it possibly could, except in regards to motion commands.

    Over the years, the CPUs got faster and faster, but changing the 12ms clock cycle would have been expensive, risky, and frankly there wasn't much need or demand for it (until pretty recently). So keeping the 12ms cycle, and all the "slices", in their fixed relationship was the low-risk technical decision. But with faster CPUs, the execution time per line of non-motion code kept getting smaller. So people started writing fancier, more elaborate functions into robot code, slipping them in between motions or into the SPS.

    Ironically, most of what a KRC needs to function would still fit into a 386DX-class CPU. The bulk of the extra CPU horsepower has been taken up running Windows for the user interface.

    The RSI is something else completely. It is, essentially, a tool to bypass much of the robot's higher-level path planning abstractions (and safety nets) and connect sensor inputs directly to the motion planner in near-realtime. While it usually runs on the same 12ms cycle as the classic internal clock, it's not part of it, AIUI -- it doesn't get a "slice" of the 12ms pie that the rest of the robot runs on. Instead, it runs as a parallel process, essentially asynchronous from the rest of the robot, but running on a compatible frequency.

    So, the strangeness you're seeing is mostly a result of "backwards compatibility barnacles," much like modern versions of Windows still have support for old 32bit (and even some DOS 16-bit) functions, because changing the OS would break too much mission-critical software that's still installed in production.
    That, and the fact that automation software has to be conservative about changing stuff that already works. As my team used to joke, "when some Web Weenie makes a programming bug, he crashes someone's website for an hour. When we make a programming bug, we crash $50000 worth of hardware into another $50000 worth of hardware at 50mph+!"
