Real Time external control of Kuka using RSI

  • Hi. I'm new here and perhaps this question is dealt with on other threads, but thought I'd start by asking the specific question.


    I am investigating the possibility of controlling a Kuka KR10 via the RSI interface from an external program. My program has a PLC which generates quad signals for directly controlling servo drives, as well as an RS232 option which can stream ASCII values in real time. I am considering using a serial-to-Ethernet adapter to transmit the data from my program and control the robot in Cartesian space, either in full 6 DOF or in palletising mode. I have read that external signals can be used to modify or offset the robot position (within limits such as +-5 deg), but I'm not clear if there is a way to completely control it through the full limits of its movement. In other words, move it anywhere in its available XYZ space responding to a continuous stream of data.


    Well, that's the first question. Then, if that is possible, is there any information on how to set up an Ethernet connection and how to format the data? Also, would an Ethernet device have to be configured in any special way? Are certain types of converter going to be better than others?


    I did read that in at least one mode the sender has to respond to requests from RSI in order to continue, so it would need to be two-way communication. Because I would be going through a converter, I'm not sure if I could handle that scenario. So are there other, simpler ways to control the axes that simply make the robot a slave without depending on responses?


    Finally is there any way to emulate an RSI environment without actually running a robot? I'm thinking perhaps some less destructive mode might be necessary to start with, while sorting out the basics....


    Perhaps a lot of questions, but I'd be very grateful if I can get some pointers, even just to confirm I am not barking up the wrong tree...

  • What controller and KSS version? It matters more than the robot model.


    I don't think RSI is emulatable without an actual robot, although that may have changed -- and even if it is, you'd have to have OfficeLite or an OfficePC, neither of which is cheap.


    If you have a PLC, I would strongly advise using whatever FieldBus your robot and PLC support over trying to do a serial-to-Ethernet conversion. Frankly, using FieldBus simplifies things a great deal, and removes all the TCP/UDP complexities from your setup.


    RSI can allow you to remotely control the robot all the way down to the servo level, if you want... but that means you have to create an RSI program that handles all the duties that the KRC normally would. RSI gives you much more granular control, at the cost of "working without a net" -- you become responsible for handling many of the low-level functions that the robot would normally handle for you invisibly. Like acceleration -- in a normal program, if you order the robot to accelerate at a bazillion m/s^2, it'll simply do the best it can and soldier on. With RSI, if you order the robot to accelerate too quickly, you'll generate a system fault.


    RSI also isn't safe -- in order to give you that fine-grained control, it bypasses a lot of typical safety features. Like letting the robot move at full-auto speeds even if it's in T1.

  • Thank you for the reply. I'm afraid some of it is going a little over my head.


    The intention is to use a KRC4 compact controller with most recent software.


    The controller can be fitted with several versions of fieldbus via an add on card. But ethernet is fitted as standard.


    Either way the question is what can be done with RSI if I can get communication happening?


    And do I even need RSI if the lower level stuff could do the same thing?


    Before I can even consider whether things are safe, or whether the robot will attempt to do what is commanded or go to error, I need to find out what control is possible. I would prefer not to have to move the joints directly, but to command movement in basic Cartesian space with some rotations.


    As far as accelerations etc., that doesn't seem relevant. I want to feed a stream of position values for each axis which will define the movement. They can be at whatever frequency is required for smooth movement.


    Should I be looking at the KRL manual first? Or should I look at the WorkVisual interface?

  • "KRC4 compact controller with most recent software."


    "the most recent" is meaningless. please state version number...

    everything about a robot can be dangerous but RSI kicks it up a notch, so i would suggest staying away from RSI until you know exactly what it does and how it works. RSI comes with examples and documentation explaining everything. one thing to consider is that RSI timing is very tight (critical).

    I am not saying that serial converters cannot be fast but i would have to see one guaranteeing response time.

    1) read pinned topic: READ FIRST...

    2) if you have an issue with robot, post question in the correct forum section... do NOT contact me directly

    3) read 1 and 2

  • Sorry - I didn't know the version was so critical. I believe it is KSS 8.5


    I am here asking some questions because I'm in a situation where I need to work out if what I plan is even possible, before I commit to purchasing a robot. So this is part of my quest to find out what RSI is and what it can do. I do have one manual from Kuka but have found it a bit incomplete, as it does not go into much depth explaining things. I have been told that I would need RSI because it allows access to real-time control of the robot from an external source. (They call them "sensors".) But not a lot more detail than that...


    Maybe if someone can point me to particular documents that would be most helpful... Or are the real manuals only available after you buy it?


    Thanks Gerald

  • please read pinned topic READ FIRST. it tells you some basics to know about Kuka robots, where to find documentation, what to share when asking for help etc. you do not have to buy a robot or products to get access. you can get KUKA documentation for free, just register with a real email - work/school/some institution (do not try to use webmail like yahoo, hotmail or gmail). a bunch of manuals are also shared on this forum. KUKA is very friendly when it comes to access to manuals.




    when you stated "but thought I'd start by asking the specific question." i expected an exact PLC model, exact robot controller, exact KSS version, exact size and format of data to be exchanged, response time in xx milliseconds etc. but you could not put a number on any of those. it is even unclear why use a PLC at all, especially a small one. it seems to be just a complication...


    ok so this is not a specific question but a general exploratory quest. lets put things into perspective - using numbers.


    what is real time for you or your application? for me it is anything that can be reproduced within a given time limit. and that limit depends on use.

    if taking buddy for a beer, 20min is real time.

    for PLC controls usually 10-100ms is real time.

    for non collaborative KUKA robots, 12ms is real time (one interpolation cycle)

    for collaborative KUKA robots 1ms is real time

    for catching pulse of quadrature encoder, real time is in nano seconds.


    small cheap PLCs with RS232 as the only communication option are generally based on a single chip that has to handle everything (comms, scanning logic, servicing interrupts, generating PWM etc.). this is a far cry from "real time". for example, speeds for encoder inputs or pulse train outputs used for positional control of a basic axis are far slower than what modular PLCs can do. we are talking order(s) of magnitude. a small brick PLC may be able to get up to some 100kHz per channel, dedicated modules easily go past 1MHz. same goes for analog I/O, communication options etc. baud rate matters.

    sending a message of the same length over serial is way slower than over Ethernet. so adding serial anywhere in the communication chain is a bottleneck.

    and even if we only compare the transmission time to send a message, 115.2kbps (a common upper limit for the majority of brick PLCs) is about 1000x slower than 100Mbps. that does not include placing data into a buffer, triggering the send command etc., which is something that single-chip controllers cannot delegate and therefore must stop everything else to do.
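    That 1000x figure can be sanity-checked with a quick calculation. This is just a sketch: it assumes 8N1 framing on the serial side (10 bits on the wire per byte) and ignores Ethernet framing overhead entirely.

    ```python
    # Back-of-envelope comparison of raw transmission time for one message.
    # Assumes 10 bits per byte on serial (8N1: start + 8 data + stop bits).

    def serial_time_s(n_bytes: int, baud: int = 115_200) -> float:
        """Time to clock n_bytes out of a UART at the given baud rate."""
        return n_bytes * 10 / baud

    def ethernet_time_s(n_bytes: int, bit_rate: int = 100_000_000) -> float:
        """Idealized wire time on 100 Mbit/s Ethernet (framing ignored)."""
        return n_bytes * 8 / bit_rate

    msg = 80  # e.g. one ASCII line of six axis values
    print(f"serial:   {serial_time_s(msg)*1000:.3f} ms")   # ~6.9 ms
    print(f"ethernet: {ethernet_time_s(msg)*1000:.4f} ms") # ~0.0064 ms
    print(f"ratio:    {serial_time_s(msg)/ethernet_time_s(msg):.0f}x")
    ```

    Note an 80-byte message already eats more than half of a 12 ms RSI cycle on the serial leg alone.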


    higher end PLCs have dedicated modules (with own CPU) for such functions so none of the CPUs is bogged down even when used at capacity.


    RSI can exchange and process data within 4ms (fast) or 12ms (standard). if the times are not met, fault is generated and system stops.

    if you plan on using RSI to exchange data with another system then you need to make sure that every part of the communication chain can complete its job within such a time limit - every time.


    the next thing to consider is price. RSI is well over 5K. this is literally 10x more than what brick PLCs go for. and it is a significant investment given that the application may not work due to bottlenecks such as a low end PLC and RS232.


    RSI is used when path corrections are needed for the current motion (the one being executed right now). For example while welding and following the level or curve of the joint. Using RSI this would require hundreds of updates per motion. this requires real time communication.


    For palletizing etc. this tight control is NOT required. it is enough to get one set of data (such as position or row/column/layer), and the robot can carry out several motions autonomously without "holding hands". This is something where EthernetKRL would be suitable and it would cost some 5x less than RSI. and data exchange with a PLC through an RS232 converter is perfectly fine.


    it is also not clear what it is that you envision. you mention an external axis. what is this for? is this the main reason you are thinking of using a PLC?

    or will the PLC be a more familiar platform for the end user if they need to adapt palletizing jobs (edit parameters etc.)?


  • Thank you for the questions and info. I have found that particular manual but it didn't answer my question. Or I couldn't find the information that might have answered it.


    First though, to me real time means simply the way you would normally work with the robot using the pendant. ie. be able to move it to key positions by manually jogging axes, record them in a file and plan a move between those points, then run it. Unfortunately the existing robot programming approaches are designed for factory work, whereas I am programming for film cameras mounted on rigs to create fluid repeatable movements.


    In Kuka there exists a system for making similar moves using Ready2Animate. But the move data has to be prepared in a 3D virtual environment separate from the robot, then transferred as a file for playback on the robot. Any subsequent changes have to go back to the 3D program and the process is repeated. For my purposes it needs to be more interactive - perhaps that is a better term than realtime?


    So hopefully with that information you have a better idea of the context.


    Now - the software I use has been developed over the years for controlling all kinds of rigs, including simple dollies and larger boom arm rigs. It uses what I have called a PLC - since that term seems to be more familiar to engineers when they ask me - though it is in fact a CNC controller made by Dynomotion in the US, based on an FPGA chip. I have simply adapted it to interface between my software and the required hardware. When running moves it takes care of real-time playback of the moves, which have been designed in the software using flexible curve shaping functions to join the key frames. To control the motors it generates step/direction signals or quad pulses. It also has a hardware serial port which can send whatever I program. I read that one of the Kuka sensor modes was 83.3 Hz (12 ms), which it is easily capable of doing. The serial stream is very stable, though I guess converting it to Ethernet might be another matter...


    As far as what the data actually is. I am imagining sending individual lines which continuously update the position for each axis.


    For example a line might look like this if there were 6 axes.


    28.972000 57.944000 86.916000 115.888000 144.860000 173.832000


    This line can be repeated at a constant rate. Additional time information and formatting can be included... I expect some filtering might be required at the robot end so things run smoothly... unless it has other options for interpolating on the run? It is acceptable for there to be a slight delay in response so long as it is consistent... How long a delay? Well, imagine you are jogging: you want it to respond fairly tightly... and stop promptly when you release a button.
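    A minimal sketch of a sender for such a stream, assuming the whitespace-separated six-value format shown above and the 12 ms cycle mentioned earlier in the thread. The period, the 6-decimal precision, and the callback names are my assumptions for illustration, not anything KUKA requires.

    ```python
    import time

    # Hypothetical sketch: emit one ASCII line of six axis values at a
    # fixed period, matching the sample line given in the post.

    PERIOD_S = 0.012  # one RSI interpolation cycle at the "standard" rate

    def format_axes(axes) -> str:
        """Render six axis positions as one whitespace-separated line."""
        return " ".join(f"{a:.6f}" for a in axes)

    def stream(get_axes, send, cycles: int) -> None:
        """Call get_axes() every PERIOD_S and push each line to send()."""
        next_t = time.monotonic()
        for _ in range(cycles):
            send(format_axes(get_axes()) + "\n")
            next_t += PERIOD_S
            time.sleep(max(0.0, next_t - time.monotonic()))

    # Reproduce the sample line from the post:
    line = format_axes([28.972, 57.944, 86.916, 115.888, 144.860, 173.832])
    print(line)  # 28.972000 57.944000 86.916000 115.888000 144.860000 173.832000
    ```

    The `send` callback would be wherever the bytes go: a serial port, a UDP socket, or the serial-to-Ethernet converter under discussion.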

    Originally I had the idea to just find a robot, throw away the control unit, and install generic servo-drives so I could run the robot motors directly. Though I would also need to do the kinematics solving, and might miss out on some safety features... Hence this current idea.


    As further background, I have a client who has already successfully interfaced my system with a Staubli 130 robot. I wasn't privy to the exact details - but basically he and another engineer used the quad outputs connected directly to a digital interface board in the robot controller. It was opportune for them that a quad input board was a standard option. These inputs are patched in the controller to directly move the XYZ Cartesian position of the robot in real time. Normally, I understand, the inputs were provided as a way to facilitate manual control of individual axes using optical encoders. I would also have gone that way but I wanted to use an Agilus, which is a much more portable robot than the Staubli.


    Hence my enquiry. Is it theoretically possible? Is RSI the best way to attempt it?

  • I believe Bot and Dolly use something more like Ready2Animate. ie. Running prepared canned moves. Maybe they have evolved but I'm not sure. The MRMC Bolt is certainly a realtime interface but with a Staubli. They sell it as a package, but no way I can afford it... :frowning_face:


    So is there a chance of taking this further here? Some of the things you said made it all sound a bit scary. After looking in the RSI manual I have more questions than answers. I'm still not convinced by the documentation that it is possible. I only read some references to "modifying" a move within limited ranges. So some expansion on your reason for saying it is possible would be reassuring...


    The RSI manual also talks about using WorkVisual which looks reasonably user friendly but I have found no manual for that. And yes I have asked my local Kuka office but they have not come good as yet... though they have been good enough to invite me to plug in to their demo robot and test things. But realistically I can't do that until I am more informed. If they gave me one to take home and play with over time it might be possible - but not going to happen..


    So just hoping, without doing a complete PhD on this, I can get some more direct pointers.. :smiling_face: Thanks


  • all user documentation is free and accessible through Xpert. RSI is a tool to develop solutions. it is not a ready solution on its own. one needs advanced software development skills to use it. but does it really need to be real time? my understanding is that movie scenes require a lot of preparation and shots are rather short. so why bother with demanding time-critical data transfer if a scene is 15 seconds? why not transfer the entire computed data set or program and then run it...


  • So... RSI is extremely powerful, but also extremely dangerous. It basically gives you direct control of the arm's motion, "below" many of the robot's normal "safety nets" that normally, invisibly, take care of bad commands (like telling the robot to move at some impossible speed or acceleration). RSI also operates below several of the human safety features -- for example, the T1 protection against moving faster than 250mm/s is bypassed.


    RSI also puts a higher burden on whoever is programming the RSI application -- every function that the robot is no longer handling for you, you must now handle yourself. It's possible, I've done it, but it's not easy, and it's not for anyone who doesn't really know what they're doing.


    So, the key questions here:

    1. Do you need RSI? If you want something that gives joystick-like control of the robot's motion to an external system, then the answer is probably yes.

    2. Can you use RSI? This isn't rhetorical -- if you need to use this robot near people, the answer is an emphatic NO. Not unless you go with a Collaborative robot like the iiWA, in which case you won't be using RSI anyway, but the iiWA's own equivalent. If you're using a full-size robot, safety is NO JOKE, and RSI is a really easy way to kill someone, even if you know what you're doing.

    3. Can you use RSI? This is different from #2. RSI can, in effect, turn the robot into a "puppet" of an external system -- there used to be a MATLAB add-on that, with RSI, could "slave" the robot to a stick-figure robot model running in MATLAB. But it takes some serious programming chops on both ends -- your program is now responsible for many of the low-level control elements (acceleration, decel, path smoothing, boundary checking, PID loops) that the robot normally handles "invisibly". If your "normal" robot is a car with an automatic transmission, RSI is like going to a stick shift... with six clutch pedals, six sticks (with 100 gears on each stick), and six gas & brake pedals. You would have to build an RSI application in the robot, and your remote application in the PLC, and have them interact.

    RSI is also deterministic realtime -- you can't just issue a command "go here", you have to issue a relative motion command every 12ms. To make an axis move at, say, 10deg/sec, you have to issue a motion command of 0.12deg every 12ms. But if you simply go from 0 to 0.12, you'll probably "stall" the robot as if you'd hard-clutched it -- you also have to handle the accel, decel, and probably get into some PID loop tuning. For each axis. And that's just the beginning.
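    The arithmetic in that last paragraph can be sketched as follows. The ramp rate here is an arbitrary illustrative number, not a KUKA limit, and a real RSI application would need far more than a linear ramp.

    ```python
    # Turning a target axis speed into per-cycle relative corrections,
    # with a simple linear accel ramp so the first cycle doesn't jump
    # straight to the full 0.12 deg step. All numbers are illustrative.

    IPO_S = 0.012          # 12 ms interpolation cycle
    TARGET_DEG_S = 10.0    # desired axis speed
    ACCEL_DEG_S2 = 50.0    # assumed ramp rate, NOT a KUKA limit

    def per_cycle_steps(n_cycles: int):
        """Relative deg-per-cycle commands ramping up to the target speed."""
        steps, v = [], 0.0
        for _ in range(n_cycles):
            v = min(TARGET_DEG_S, v + ACCEL_DEG_S2 * IPO_S)  # ramp the speed
            steps.append(v * IPO_S)                          # distance this cycle
        return steps

    steps = per_cycle_steps(20)
    print(round(steps[0], 4))   # 0.0072 -- first cycle, still ramping
    print(round(steps[-1], 4))  # 0.12   -- cruising: 10 deg/s over 12 ms
    ```

    The external side would emit one such step per 12 ms cycle, per axis, which is exactly the "six clutch pedals" burden described above.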

  • Wow, that sounds hot.


    As for a solution, I think OpenShowVar would be the easiest, workable and cheap. If you write movement commands into a buffer on the robot side to enable motion planning, the motion will be smooth.

    Working an industrial robot in automatic mode near people is an entirely different can of worms.

  • Hi. There are a few things to respond to here. I'll start with PanicMode.


    Your main question : "Movie scenes require a lot of preparation and shots are rather short. so why bother with demanding time critical data transfer if scene is 15 seconds? why not transfer entire computer data set or program and then run it..."


    I mentioned earlier that this is the way Ready2Animate works in collaboration with the Mimic plugin for Maya. So there is an existing pathway for that approach. But to create the shots in Maya you need an accurate model of your set to plan the move. Once the shot data is transferred to the robot, if you want to modify it, that has to be done back in Maya. There is no interactive control between Maya and the robot. The kinds of scenes that I generally work on have to be made in situ on the actual location. The director wants to see the shot live in camera, and modify the camera position interactively, often with an actor also present. The move is created in this live environment, tested and edited as required until it is ready to shoot.


    So I need to be able to jog live axes and record the positions on a timeline, then edit the curve shapes and timings between the key positions. I then need to be able to run it at varying relative speeds, as well as go to specific positions along the path of the move to set other parameters such as lens focus and lighting. Now - perhaps there are other ways of doing this, other than using a direct slave mode for the robot - such as just sending jog commands and reading back position data to create the path, then uploading the path data for the robot to run? And perhaps this can be done in KRL without even needing RSI? Though most of what I have seen in KRL is about point-to-point linear and arc moves, which by themselves can't describe exactly what I would need. They seem more akin to CNC g-code than something more fluid as required for cinema camera moves...


    However I don't have enough of an overview of possibilities at this stage, and hoped someone more experienced and wiser could comment initially.


    As an example I have looked at an RSI manual 4.1 for KSS8.6. But I'm not able to tell from it how I would command incremental unlimited movement.


    Among the RSI objects it lists as available, I only found "RSI objects for motion correction", with entries such as PosCorr and AxisCorr, which both have limitations. In one of the examples it includes a line "LIN_REL {Y 100", yet LIN_REL is not defined anywhere else in the manual. I have a feeling there is more than what is described in this manual...
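    For context on what PosCorr/AxisCorr actually consume: as I read the RSI_Ethernet example in the manual, the robot sends a `<Rob>` telegram every interpolation cycle, and the external system must answer with a `<Sen>` telegram that echoes the `<IPOC>` counter and carries the correction values. A rough sketch of one such exchange follows; the field names and the "ImFree" sensor type are taken from that example and may differ per configuration.

    ```python
    import xml.etree.ElementTree as ET

    # Minimal sketch of one RSI Ethernet exchange, modeled on the
    # RSI_Ethernet example ("ImFree"). The external side must reply to
    # every <Rob> telegram with a <Sen> echoing the <IPOC> counter.

    def build_reply(rob_xml: str, dx: float = 0.0, dy: float = 0.0) -> str:
        """Answer one <Rob> telegram with a <Sen> carrying a Cartesian offset."""
        ipoc = ET.fromstring(rob_xml).findtext("IPOC")  # must be echoed back
        sen = ET.Element("Sen", Type="ImFree")
        ET.SubElement(sen, "RKorr", X=f"{dx:.4f}", Y=f"{dy:.4f}",
                      Z="0.0000", A="0.0000", B="0.0000", C="0.0000")
        ET.SubElement(sen, "IPOC").text = ipoc
        return ET.tostring(sen, encoding="unicode")

    # A robot telegram, abbreviated to the fields used here:
    rob = '<Rob Type="KUKA"><RIst X="0.0" Y="0.0" Z="0.0"/><IPOC>123456</IPOC></Rob>'
    print(build_reply(rob, dx=0.5))
    ```

    Missing the reply deadline (4 or 12 ms depending on mode, per the earlier posts) is what generates the fault and stops the system.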


  • HI Mentat


    Openshowvar looks interesting. Do you know much about it? It seems a bit light on documentation. From what I can tell it is basically a remote console into Kuka? Not sure if I can use it as it stands...

  • Skyefire. I included Jogging in my earlier description - so yes that is a priority.


    My program already has facility for reporting and warning if position limits, speeds or acceleration exceed certain values. Though they could still be ignored I suppose - you can't prevent someone from attempting everything... and there is human error.


    "Can you use RSI? This isn't rhetorical -- if you need to use this robot near people, the answer is an emphatic NO."


    Interesting. But there is already a precedent, much bigger robots than an Agilus are already being used in close vicinity of people. I understand in Germany there are strict rules about filming, but the rest of the world seems to rely on reasonable care and caution, with no specific rules. Certainly a cage is not practical on film sets... I would have thought RSI or not - you could easily hurt someone with a robot...


    "RSI is also deterministic realtime -- you can't just issue a command "go here", you have to issue a relative motion command every 12ms. To make an axis move at, say, 10deg/sec, you have to issue a motion command of 0.12deg every 12ms. But if you simply go from 0 to 0.12, you'll probably "stall" the robot as if you'd hard-clutched it -- you also have to handle the accel, decel, and probably get into some PID loop tuning. For each axis. And that's just the beginning."


    Ok - so this is getting closer to a description of how it might actually work... So I can't tell it to go to a specific position but must use only relative position commands? So if I calculate and issue a stream of such commands, I expect there may be cumulative errors over time? So I would need to interrogate and restore the position periodically?


    As far as stalling the robot - my program always plans trajectories using ramps at the start and completion of any movement with both accel and jerk control. When starting a move, it will generate a sequence of slightly increasing incremental positions. Is it not possible to filter this in the robot if it is a little harsh? Or are you saying I'd still have to accompany each increment with acceleration information?


    Finally - if I do issue a move that exceeds the physical position limits or is wildly errant in speed or acceleration, will the robot always attempt it anyway, and if it can't, will it eventually shut down or stop? Are there no brakes at all when in this mode? Or would I need to program something in RSI to handle these possibilities?


    PS - I like the idea of the iiWA which is similar to Motoman and Universal Robots, but the iiWA is too small ... :frowning_face:


    Thanks G


  • Hello, if you try OpenShowVar or KUKAVarProxy.exe you can start right away and all the safety features of the robot are still there... With KUKAVarProxy.exe I managed to control the robot with a Logitech joystick, record the robot moves on the fly, replay them, even with my voice... So if you really don't need the 12ms realtime control of the robot, I would try KUKAVarProxy, and it's free... The next thing I will try is using OpenCV and a camera to control the robot with movements of my hand...


    Here is a video of the robot response over Ethernet and controlling the KUKA with voice...



    [Embedded video: youtu.be]

  • Thanks Danny. That's very interesting. Also kind of fun - but I can imagine it going awry in the wrong circumstances! :smiling_face:


    Can you explain a little how you work with OpenShowvar? When I run it, all I see is a terminal which looks like I can type commands and see responses. Oh and a 3d view of robot in a cage...


    Are you running a separate application which sends and receives commands via this terminal? What language is used? Is it basically a window into KRL?


    Can you point me to any documentation or help? Thanks


  • Ok - so this is getting closer to a description of how it might actually work... So I can't tell it to go to a specific position but must use only relative position commands? So if I calculate and issue a stream of such commands, I expect there may be cumulative errors over time? So I would need to interrogate and restore the position periodically?

    RSI can provide realtime feedback of the robot's actual position, so yes, the usual practice is:

    1. Subtract the actual from the target

    2. Generate an error from the difference between the two

    3. Determine the direction of correction for each axis from the error

    4. Set the speed of the correction by determining how much of the correction to apply over the next IPO cycle (typically 12ms)

    5. Feed the correction through a PID algorithm based on the error

    6. Send the output of the PID to the motion control object


    Again, RSI is a toolbox, so there's more than one way to skin this cat. I've generally done all of the above inside RSI, because I'm generally tying raw sensor feedback directly into the robot. But other people, with less skill in KUKAs but more in high-end maths and computer-side programming languages, lean towards minimizing the RSI program and doing most of the heavy lifting in an external program they're more comfortable with.
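    A toy version of steps 1-6 above, reduced to a clamped proportional term per axis. The gain, the clamp, and the function names are placeholder values of mine; a real RSI application would tune these and typically add integral/derivative terms and acceleration handling.

    ```python
    # One interpolation cycle of the correction scheme sketched above:
    # error per axis -> direction + magnitude -> clamped correction value.

    IPO_S = 0.012       # 12 ms interpolation cycle
    KP = 0.1            # proportional gain (assumed)
    MAX_CORR = 0.05     # max correction per cycle, deg (assumed safety clamp)

    def cycle_correction(target, actual):
        """Steps 1-6 for one IPO cycle, per axis, P-term only."""
        out = []
        for t, a in zip(target, actual):
            err = t - a                                  # steps 1-2: error
            corr = KP * err                              # steps 3-5: sign + speed
            corr = max(-MAX_CORR, min(MAX_CORR, corr))   # clamp the per-cycle step
            out.append(corr)                             # step 6: to motion object
        return out

    print(cycle_correction([10.0] * 6, [9.9] * 6))
    ```

    The clamp is the important part: it bounds how far the robot can be pushed in any single cycle, regardless of how wrong the error or gain is.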

    Interesting. But there is already a precedent, much bigger robots than an Agilus are already being used in close vicinity of people. I understand in Germany there are strict rules about filming, but the rest of the world seems to rely on reasonable care and caution, with no specific rules. Certainly a cage is not practical on film sets... I would have thought RSI or not - you could easily hurt someone with a robot...

    Anyone operating a non-Collaborative robot near people, without safety interlocks or a deadman switch on every person within reach, is operating in violation of RIA safety standards. Which will become a factor if/when someone gets hurt, during the ensuing investigation and/or lawsuit.

    RSI just makes it easier. For example, normally, a KUKAbot in T1 mode cannot move faster than 250mm/s, and requires continuous holding of one or more deadman switches to keep the motors energized. But running RSI in T1 mode removes that speed limit (and T2, AUT, EXT modes never had that limit). And even with the deadman switch, a robot moving at high speed takes time to stop -- first there's the reaction time of whoever is holding the deadman, then the time for the motors to be deenergized and the brakes to close, then the time for the brakes to drag the drive shafts to a halt. The stopping distance of a robot moving at 250mm/s is measured in mm. The stopping distance for the same robot moving at 1-2 m/s can be a meter or more.


    I personally came close to losing my teeth once because I had the robot in T1, running an RSI program where I had accidentally screwed up a correction gain. I was standing close to the robot because I had to observe its behavior. It moved so fast I never managed to drop the deadman until it was all over. The only thing that saved me was that I had an attack of paranoia and wrote my program to limit the robot to no more than a couple degrees of RSI correction on each axis. If I hadn't, and if I had been between the robot and something solid, I could have easily been crushed.


    This can be mitigated by using safety-rated means of limiting the robot's volume of motion, keeping the robot in T1, and keeping anyone (except those holding a deadman) out of that motion volume at all times. That can technically satisfy the RIA regs, but requires rigid adherence to procedures. And it still only takes one moment of inattention to get someone hurt, even without RSI. Adding RSI sharply increases the risk, and reduces the safety margin.

  • On the robot side I'm using KUKAVarProxy.exe, which is a server program that can access robot global variables... Usually you declare the variables in config.dat... Then on my PC is a client program in .NET which communicates with KUKAVarProxy.exe, and on the robot there is a KRL program which can also record robot positions over time (or to a file when I say "TouchUp")... From the PC I just change a position variable in the KRL program... .NET handles the position data from the joystick, or my voice commands, or manual commands using the mouse, and sends the position data to a variable in KRL via KUKAVarProxy.exe... KUKAVarProxy is just a TCP/IP bridge so one can access the robot's variables... Instead of a laptop I could also use an Arduino or Raspberry Pi, for example... The protocol is just basic TCP/IP and, as you can see, it is quite responsive, especially if you take into account that the PC also has to deal with voice recognition... So if you can deal with 100ms lag for control and do not need 12ms control, you can do interesting things...
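    For reference, the request framing used by the open-source py_openshowvar client (the usual PC-side counterpart to KUKAVarProxy, which listens on TCP port 7000) looks roughly like this. This is my reading of that client's code, not official KUKA documentation, so treat the layout as an assumption to verify.

    ```python
    import struct

    # Hedged sketch of KUKAVarProxy request framing as implemented in the
    # py_openshowvar client. All integers are big-endian. Layout (assumed):
    #   [msg id: 2][body length: 2][mode: 1 (0=read, 1=write)]
    #   [name length: 2][name] and, for writes, [value length: 2][value]

    def build_read(msg_id: int, var: str) -> bytes:
        """Frame a 'read global variable' request."""
        body = struct.pack("!BH", 0, len(var)) + var.encode("ascii")
        return struct.pack("!HH", msg_id, len(body)) + body

    def build_write(msg_id: int, var: str, value: str) -> bytes:
        """Frame a 'write global variable' request."""
        body = (struct.pack("!BH", 1, len(var)) + var.encode("ascii")
                + struct.pack("!H", len(value)) + value.encode("ascii"))
        return struct.pack("!HH", msg_id, len(body)) + body

    # e.g. poll the current position, then set a user-declared variable:
    print(build_read(1, "$POS_ACT").hex())
    print(build_write(2, "MYPOS", "{X 100.0, Y 0.0}").hex())
    ```

    The variable written to (here the hypothetical `MYPOS`) would be one declared in config.dat and polled by the KRL program, as Danny describes.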
