Real Time external control of Kuka using RSI
geraldft - May 8, 2020 at 7:57 AM - Thread is Unresolved
-
Using KUKAVarProxy as in Danny's example would certainly be safer than RSI, since it doesn't bypass the robot's internal safety features the way RSI does. If nothing else, it would be a much safer way to learn the idiosyncrasies of the robot's motion before diving off the deep end into RSI.
Using a proxy and talking straight to KRL would save a lot of money too.... But the question will be whether I can achieve fine control over running moves. Using the default KRL point-to-point techniques won't cut it. I still need to plot the path through the points graphically in my program and run that exactly on the Kuka. Maybe that can be done by transferring a detailed move list to memory in KRL? (Like Ready2Animate)
But then it will mean I am writing a whole new application - or module in addition to my existing one...
There is still some way to go on this I feel....
I have since had more of a chat with the local Kuka expert and he doesn't seem especially concerned about the safety aspect of using RSI for my project (any more than safety is always a concern with robots...). I'm just having to decide now if I really need it....
-
Like I said, in my example I can easily record the robot motions (I can adjust the timing of the recording, for example store robot positions every 100ms or every 5s...), then I can replay them easily from that file. It is just a text file...
So if your program can produce a path as points in a file, you can read them in KRL and replay them... Or if you program a buffer, you could feed them on the fly, without the file...
But it is for you to decide if you really need that 12ms real-time control, if you later just want the robot to replay some motions from the PC, if I understand correctly...
If you can deal with a bit of lag in control, and you have time to test it, you have the option to do that for free...
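Something like this, as a minimal sketch of the record-to-file idea (assuming the py_openshowvar client for KUKAVarProxy; the import name and API are as I recall them, so verify against that repo, and the robot address is hypothetical). $AXIS_ACT is the KRL system variable holding the current axis angles:

```python
# Sketch: poll axis positions via KUKAVarProxy and log them to a text file.
# Assumes py_openshowvar (https://github.com/linuxsand/py_openshowvar) and
# KUKAVarProxy listening on the robot at port 7000.
import time
from py_openshowvar import openshowvar

ROBOT_IP = "192.168.1.10"   # hypothetical robot address
PERIOD_S = 0.1              # 100 ms between samples; use 5.0 for every 5 s

client = openshowvar(ROBOT_IP, 7000)
if not client.can_connect:
    raise SystemExit("cannot reach KUKAVarProxy")

with open("recording.txt", "w") as log:
    while True:
        # $AXIS_ACT holds the current axis angles as a KRL E6AXIS struct
        pos = client.read("$AXIS_ACT", debug=False)
        log.write("%.3f %s\n" % (time.time(), pos.decode()))
        time.sleep(PERIOD_S)
```

Replaying is then the reverse: a KRL program (or a buffer fed over the same connection) steps through the file's points.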
-
Hi Danny.
When you say "record" I guess you mean to plot a sequence of points, then make a file which you copy to KRL make a move passing through those points? In addition you can vary the time period between each point? I assume you can also adjust dwell at each point so the move either continues through the point or pauses?
The kind of moves I need to create effectively have a lot more points. They can be much more free-form. So even if I start the programing process by recording 1/2 a dozen major points, the curves between those points will be edited so the nature or "character" of the intervening movement is very specific. For example it might start moving slowly away from the point and speed up as it moves closer to the next point. It might also pass through any point at a greater or lesser speed. This process can take a bit of back and forth, looking at the move and making frequent adjustments to fine tune it.
The main thing is that the move should pass smoothly through all of these points with good repeatability, following the original curve. Below is an example... running over a 4 second period. Note I am only showing one axis. The black dashed line is velocity, acceleration can also be viewed but is turned off for clarity...
In my application, in order to run the move, a list is created which describes the move at 20 millisec intervals. (50hz) I currently use a separate control unit (like a PLC) into which I can load this file list and run the move. The controller generates step and direction or quadrature signals to control the motors via drives.
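As a rough illustration of that list-building step (a sketch, not my actual application; the keyframe values here are made up), resampling an edited curve onto a fixed 50 Hz grid could look like this:

```python
# Sketch: turn edited keyframes (time in s, one axis position in deg) into a
# fixed-interval move list, one row per 20 ms (50 Hz). A cubic spline makes
# the motion pass smoothly through every key point.
import numpy as np
from scipy.interpolate import CubicSpline

keys_t = np.array([0.0, 1.2, 2.5, 4.0])      # hypothetical key times
keys_p = np.array([0.0, 35.0, 20.0, 90.0])   # hypothetical positions

curve = CubicSpline(keys_t, keys_p)
grid = np.arange(0.0, keys_t[-1] + 1e-9, 0.020)   # 20 ms intervals

with open("movelist.txt", "w") as f:
    for t, p, v in zip(grid, curve(grid), curve(grid, 1)):
        f.write(f"{t:.3f} {p:.4f} {v:.4f}\n")      # time, position, velocity
```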
Using the application with the Kuka, the controller now seems like it could be bypassed, since my application could simply talk directly to the Kuka via Ethernet and the terminal program?
So do you think a move like this could be accurately run in KRL using something like your approach? I imagine that, similar to you (minus the voice commands), I would jog the axes to each of the intended main key points, copy them into my remote application and then edit a curve between those points, which is then loaded into KRL and run in real time. If the Kuka requires 83.333 Hz of data (one point every 12 ms) then that can be accommodated... Move lengths are most often under a minute, but can occasionally run as long as 10 minutes.
In this scenario, I expect there will be a slight lag when starting the move. But this would only be an issue if other motions separate from the robot had to be run in sync (such as lens focus). But if I can access a GPO signal from the Kuka then that should enable syncing accurately...
-
That looks to me exactly like the ready2animate Maya plugin. In your case, how do you make the changes to the movement? Is it by hand, with the actual robot recording the points?
-
Hi dexterv. As far as I understand, ready2animate does not allow interacting directly with the robot to create moves. The moves are created entirely in the 3D Maya environment using a virtual rig model, then exported as a file to run on the robot using ready2animate.
If someone knows otherwise then please correct me...
-
Seems like your motions could benefit from splines, as they are interpolated through the points and you can set velocities and accelerations for different parts of the path. You can also add triggers if you need to sync with, or mark, some specific points on it.
I heard KSS 8.5 has Directory Loader functionality built in, without needing the module itself (just hearsay), so dynamically created splines might be an option. If not, writing some boilerplate splines and ending them with approximate positioning would still be very close to ideal.
-
Yes, it is true that there is no direct connection between the two as far as I know, but how would you create the moves in the first place? You would point the real robot at a scene, and then what happens? Do you need to move it by hand/remote in order to get the right camera angles and record the positions? How would your hardware do it?
If it is for the hand guiding then have you looked at this?
If you are looking to get a few positions for the robot and then play with timing and interpolation, you can do that easily in Maya and transfer it to the robot. No need for anything else; I guess I am missing something from your current setup that you want to keep.
-
The only downside to ready2_pilot is that it does not have an enabling switch, so one must still hold the smartPAD in the other hand. And if you have that, there is a space mouse there too.
-
The only downside to ready2_pilot is that it does not have an enabling switch, so one must still hold the smartPAD in the other hand. And if you have that, there is a space mouse there too.
I did not know that...
-
Just like a magic trick, drawing your attention to one hand while the other is busy with something else. I bet the same video looks a bit different now....
-
Dexterv. That's what I thought - Maya can't get camera positions straight from the robot. It has no live link to Kuka.
In Maya you use a virtual camera to frame virtual objects, which may or may not precisely match the real-world objects on your set - even assuming you have previously gone to the trouble of scanning and measuring the set. And if you are filming people, who are most likely not bolted to the floor, then that is a whole other variable...
Maya does have its place as a very powerful application for pre-visualising and presenting planned moves to clients prior to going on set. But as an on-set tool it is overly complicated and lacks interactivity... In any case my application can already import files from Maya, so if I can solve this, I would also have the option of using Maya, without needing ready2animate...
So here's my guide or specification in more detail...
For real-life programming, my application needs to be able to jog the robot directly, or at minimum, if jogging with the smartPAD, it should be able to read the current axis positions directly from the Kuka system. Whether to use hand guiding, simple jog buttons or a joystick is not important, so long as the camera can be moved around manually to find the desired frame... It should also be possible to direct the robot to move to any position in the move being worked on, including anywhere on the curve between the main key points.
After the initial recording of the points, a quick interpolation is made between them, and that move is run. Typically a lot of editing then starts. Timing between the points will be adjusted, new points might be added and curves are adjusted. It is often necessary during this process to move the camera to a specific time position in the move, then adjust the frame a little and re-record that point. There is a lot of back and forth. Finally, when the move is accepted, there needs to be a focus run. The camera is moved to positions at intervals during the shot and a separate axis, controlled by the external software, is edited to adjust the lens focus during the shot. Finally this lens axis is run in sync with the robot. During the run it must hit all its marks, hence it needs to be synced tightly with the robot run, at least from the start point. An extra robot axis might be possible to use for this, but generally lens motors are very small, and I understand even the smallest Kuka accessory motor is too big and could easily damage a lens...
A further complication is that it is often required to run the move at different relative speeds - for instance at the camera's normal rate of 25 fps and also at high speed to achieve slow motion in the subject. The expectation is that both moves will actually match when overlaid with each other... Normally that scaling is carried out in my software, so it should not be an issue depending on how tightly the robot motion conforms to the data...
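To sketch that speed-scaling idea (hypothetical numbers again, not my actual software): running the move at a different relative speed is just sampling the same keyframe curve on a stretched or compressed time grid, so both runs trace identical positions.

```python
# Sketch: replay the same spatial curve at a scaled speed by remapping time;
# the positions are identical, only the timing changes.
import numpy as np
from scipy.interpolate import CubicSpline

keys_t = np.array([0.0, 1.2, 2.5, 4.0])      # hypothetical key times (s)
keys_p = np.array([0.0, 35.0, 20.0, 90.0])   # hypothetical axis positions

curve = CubicSpline(keys_t, keys_p)

def movelist(speed_scale, dt=0.020):
    """Sample at dt intervals; speed_scale 2.0 means twice as fast."""
    duration = keys_t[-1] / speed_scale
    t_out = np.arange(0.0, duration + 1e-9, dt)
    return t_out, curve(t_out * speed_scale)  # same path, remapped time

t_norm, p_norm = movelist(1.0)   # normal 25 fps run
t_fast, p_fast = movelist(4.0)   # 4x faster run for slow-motion capture
```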
As it stands I am still considering RSI vs KUKAVarProxy.
-
Just like a magic trick, drawing your attention to one hand while the other is busy with something else. I bet the same video looks a bit different now....
Indeed, this is exactly what happened. I was left wondering how I missed that.
-
BTW - DannyDJ, if you are there - could we start a separate discussion about how to work with OpenShowVar and KUKAVarProxy? Or are you willing to have a private conversation about it? I looked on GitHub but there is no commentary. How did you find your way into using it? Thanks, Gerald
-
- Best Answer
Hi Danny.
When you say "record" I guess you mean to plot a sequence of points, then make a file which you copy to KRL make a move passing through those points? In addition you can vary the time period between each point? I assume you can also adjust dwell at each point so the move either continues through the point or pauses?
The kind of moves I need to create effectively have a lot more points. They can be much more free-form. So even if I start the programing process by recording 1/2 a dozen major points, the curves between those points will be edited so the nature or "character" of the intervening movement is very specific. For example it might start moving slowly away from the point and speed up as it moves closer to the next point. It might also pass through any point at a greater or lesser speed. This process can take a bit of back and forth, looking at the move and making frequent adjustments to fine tune it.
The main thing is that the move should pass smoothly through all of these points with good repeatability, following the original curve. Below is an example... running over a 4 second period. Note I am only showing one axis. The black dashed line is velocity, acceleration can also be viewed but is turned off for clarity...
In my application, in order to run the move, a list is created which describes the move at 20 millisec intervals. (50hz) I currently use a separate control unit (like a PLC) into which I can load this file list and run the move. The controller generates step and direction or quadrature signals to control the motors via drives.
Using the application with Kuka the controller now seems like it could be bypassed, since it could simply talk directly to Kuka via ethernet and the terminal program?
So do you think a move like this could be accurately run in KRL using something like your approach? I imagine that similar to you, (minus the voice commands I would jog axes to each of the intended main key points, copy them in my remote application and then edit a curve between those points, which is then loaded into KRL and run in real time. If the Kuka requires 83.333 hz of data then that can be accommodated... Movelengths are most often under a minute, but can occasionally run as long as 10 minutes.
In this scenario, I expect there will be a slight lag when starting the move. But this is only be an issue if other motions separate from the robot had to be run in sync. (Such as lens focus) But if I can access a GPO signal from Kuka then that should enable syncing accurately...
Hi... If I understood correctly, on the scene you need to move the robot to specific points and angles, you need to capture those points, and later you alter the path between them on the PC; then you need to replay these altered paths...
I don't see any problem with that...
I just wonder how fast these robot motions are if you need to replay them at 20 ms...
Here are some good examples in the links (in this one you have research done on RSI and KUKAVarProxy and an example of how to use it for robot control):
https://www.google.com/url?sa=t&sourc…_LKfGLBiJrx9VVX
Here is the latest version...
https://github.com/OpenKuka/KKukavarProxy
Here is an example with RSI
https://hackaday.io/project/164468…ontrol-with-ros
Here you have a PDF with detailed information, again with an example.
-
Thank you Danny
The speeds of the motions can vary between very slow and subtle and extremely fast. The reason for the update rate is to be able to describe the motion in sufficient detail. Also, the move is related to recorded camera frames, so each position at 40 msec is one camera frame at the standard 25 frames/sec rate. I normally output at 20 msec, or half-frames, for more precision.
I have seen the Norwegian article, though I wasn't sure if it was up to date. I have some hard reading to do...
Do you use the Python or Java version of OpenShowVar?
-
Hello, no problem, these are just links freely accessible on the web; some of them helped me... Basically you just need to know the TCP/IP protocol; you can do that in Python, C++, C#, Java, whichever you prefer... You just need to send and receive data in the right packets...
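For anyone following along, here is roughly what those packets look like when reading a variable, based on my reading of the KUKAVarProxy sources (treat the framing as an assumption and verify it against the repository): every integer field is big-endian and strings are length-prefixed.

```python
# Sketch: read a KRL variable over the KUKAVarProxy TCP protocol (port 7000).
# Request framing (per my reading of the sources; verify before relying on it):
#   [2B msg id][2B payload len][1B mode: 0=read, 1=write][2B name len][name]
# The response body carries the mode byte, a length-prefixed value, and some
# trailing status bytes, which this simplified parser ignores.
import socket
import struct

def kvp_read(sock, varname, msg_id=1):
    name = varname.encode("ascii")
    payload = struct.pack(">BH", 0, len(name)) + name       # mode 0 = read
    sock.sendall(struct.pack(">HH", msg_id, len(payload)) + payload)

    rid, rlen = struct.unpack(">HH", sock.recv(4))          # assumes full recv
    body = sock.recv(rlen)
    mode, vlen = struct.unpack(">BH", body[:3])
    return body[3:3 + vlen].decode("ascii")

sock = socket.create_connection(("192.168.1.10", 7000))     # hypothetical IP
print(kvp_read(sock, "$OV_PRO"))                            # program override
```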
-
Hey, I'm doing all this now in TouchDesigner.
I tried OpenShowVar a couple of years ago and it was way too slow.
Now I am using RSI and passing the values back to TouchDesigner in real time (<12 ms).
Because TouchDesigner is a real-time renderer, I can make changes to my path dynamically and send it over. I have it controlled with a PlayStation controller, and the angles pass back into the system to visualize its motion.
We built a small Python app that runs in the background and responds to the messages from the KRC4.
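For context on what such an app has to do: in RSI the KRC4 sends an XML telegram over UDP every interpolation cycle and expects an answer, echoing its IPOC timestamp, before the cycle deadline. A minimal sketch of that responder pattern follows; the tag names (RKorr, the "ImFree" sender type) and the port depend on the RSI Ethernet configuration, so treat them as assumptions.

```python
# Sketch: minimal RSI-style UDP responder. The robot sends an XML telegram
# each cycle; we must echo its <IPOC> timestamp back with our corrections
# before the deadline, or the robot faults. Tag names must match your
# RSI Ethernet configuration XML.
import socket
import xml.etree.ElementTree as ET

REPLY_FMT = ('<Sen Type="ImFree">'
             '<RKorr X="{x:.4f}" Y="0.0" Z="0.0" A="0.0" B="0.0" C="0.0"/>'
             '<IPOC>{ipoc}</IPOC></Sen>')

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 49152))            # hypothetical port from the config

while True:
    data, addr = sock.recvfrom(2048)
    root = ET.fromstring(data)
    ipoc = root.find("IPOC").text        # cycle timestamp, must be echoed
    x_corr = 0.01                        # next small Cartesian correction (mm)
    sock.sendto(REPLY_FMT.format(x=x_corr, ipoc=ipoc).encode(), addr)
```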
Pretty easy actually to control the axes.
What everyone is saying is true though: it's a lot more dangerous, but it's way more usable this way.
I built some safeguards into the system: it limits the outgoing acceleration of all moves, and it shuts off if the outgoing or incoming coordinates are outside of a boundary.
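Something in the spirit of those safeguards, as a sketch with made-up limits: clamp the per-cycle change so velocity and acceleration stay bounded, and refuse to send anything outside a working envelope.

```python
# Sketch: software safeguards on outgoing targets (made-up numbers).
# Clamps per-cycle velocity/acceleration and aborts outside the envelope.
DT = 0.012                 # RSI cycle time in seconds
V_MAX = 250.0              # mm/s, hypothetical limit
A_MAX = 500.0              # mm/s^2, hypothetical limit
BOUNDS = (-800.0, 800.0)   # mm, hypothetical one-axis working envelope

class AxisGuard:
    def __init__(self, start):
        self.prev_pos = start
        self.prev_vel = 0.0

    def filter(self, target):
        if not BOUNDS[0] <= target <= BOUNDS[1]:
            raise RuntimeError("target outside boundary - shutting off")
        vel = (target - self.prev_pos) / DT
        acc = (vel - self.prev_vel) / DT
        if abs(acc) > A_MAX:                     # clamp acceleration
            vel = self.prev_vel + A_MAX * DT * (1 if acc > 0 else -1)
        vel = max(-V_MAX, min(V_MAX, vel))       # clamp velocity
        safe = self.prev_pos + vel * DT
        self.prev_pos, self.prev_vel = safe, vel
        return safe
```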
The hard part that you are not even thinking about is the kinematics. I had to rebuild a kinematic solver to know where the angles were supposed to be for each Cartesian position I was trying to reach. That is the real power of the Maya app: it has options for different kinematic solutions and ways of changing axis positions easily.
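To give a flavor of what a solver has to do, here is a deliberately tiny sketch: a two-joint planar arm, nothing like a full six-axis KUKA solution, which also has to choose among multiple valid configurations (the link lengths are made up).

```python
# Sketch: inverse kinematics for a 2-link planar arm, the toy version of the
# problem a 6-axis solver handles: given a Cartesian target, find the joint
# angles that reach it.
import math

L1, L2 = 0.40, 0.30   # hypothetical link lengths in meters

def ik_2link(x, y, elbow_up=True):
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(c2) * (1 if elbow_up else -1)   # pick elbow configuration
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                       L1 + L2 * math.cos(q2))
    return math.degrees(q1), math.degrees(q2)

print(ik_2link(0.5, 0.2))  # joint angles for one reachable target
```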
-
Finally made this work. After months of finding ways and means around the Kuka, I built an interface from my motion control program to run the robot using RSI. Data is being transferred via a humble Arduino-to-Ethernet converter. The Arduino acts as a UDP server and feeds position updates from the motion control interface. The data source is quadrature, which is normally used to directly control servo motors, but is now converted to the position data required by the robot. The program is running a loop between two key positions. This is a very simple example of a camera move. Moves can run over any time period and have dozens of key positions. Playback speed can be changed on demand.
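For anyone curious about that quadrature-to-position conversion, here is the usual decode logic as a Python sketch (on the Arduino it would be interrupt-driven C, but the state table is the same): each valid A/B phase transition adds or subtracts one encoder count.

```python
# Sketch: quadrature (A/B) decoding into a position counter, the kind of
# conversion done before sending position updates over UDP. The table maps
# (previous AB state, new AB state) -> count delta.
QUAD_DELTA = {
    (0, 1): +1, (1, 3): +1, (3, 2): +1, (2, 0): +1,   # forward sequence
    (1, 0): -1, (3, 1): -1, (2, 3): -1, (0, 2): -1,   # reverse sequence
}

class QuadratureCounter:
    def __init__(self):
        self.state = 0      # 2-bit AB state: (A << 1) | B
        self.count = 0      # accumulated position in encoder counts

    def update(self, a, b):
        new = (a << 1) | b
        self.count += QUAD_DELTA.get((self.state, new), 0)  # 0 = no change
        self.state = new
        return self.count
```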
The main advantage of this system is flexibility, and that everything can be done in the moment with the robot. Axes can be jogged easily to find key positions. Once a move is set, the camera can be moved along the path at variable speed, as well as at specific relative rates, even frame by frame for timelapse or stop-motion photography...
The robot is operating in Cartesian space using absolute control at a 12 msec rate. I will try 4 msec later, once I sort out a few more details. But even at 12 msec it seems to work quite well.
Safety-wise it seems well behaved so far. If I accidentally ask it to jump to some crazy position it will fault. If it finds a physical axis limit it will stop and fault.
More work to do but quite happy to have gotten this far...
-
Hey, I'm doing all this now in TouchDesigner.
I tried OpenShowVar a couple of years ago and it was way too slow.
Now I am using RSI and passing the values back to TouchDesigner in real time (<12 ms).
Because TouchDesigner is a real-time renderer, I can make changes to my path dynamically and send it over. I have it controlled with a PlayStation controller, and the angles pass back into the system to visualize its motion.
We built a small Python app that runs in the background and responds to the messages from the KRC4.
Pretty easy actually to control the axes.
What everyone is saying is true though: it's a lot more dangerous, but it's way more usable this way.
I built some safeguards into the system: it limits the outgoing acceleration of all moves, and it shuts off if the outgoing or incoming coordinates are outside of a boundary.
The hard part that you are not even thinking about is the kinematics. I had to rebuild a kinematic solver to know where the angles were supposed to be for each Cartesian position I was trying to reach. That is the real power of the Maya app: it has options for different kinematic solutions and ways of changing axis positions easily.
Harvey, this makes more sense to me now than it did way back then... I especially appreciate "Pretty easy actually to control the axes" - but only in retrospect. It's actually very difficult to implement, with many roadblocks on the way - it just seems simple once it's all working!
BTW - I'm not sure why you felt the need to use IK to calculate angles in your system? If you want to know the actual axis angles you can get them from the robot on the run. Or did you need to know them in advance to test for limits?
In my system I can set position limits as well as warnings for acceleration and velocity. However, because I am controlling Cartesian axes, that does not take into account what each individual axis is contributing... So maybe that's a reason for some IK?
In my system I can also use a gamepad for moving axes, as well as handwheels like on a mill - for more precise control.
-