KRC I/O methods
-
SkyeFire -
December 19, 2014 at 6:31 PM -
Thread is marked as Resolved.
-
-
Quote: I would try auto
-
Yeah, it should be auto according to KUKA support. I had this problem a year ago and they told me to set this to auto, and it has been working since that change.
Nice info thanks.
Does anyone know how to reset manually without rebooting the entire PC?
-
Go to: Display > Input/Outputs > I/O Drivers and then press the reset button. I think you need to be logged in at Expert level or higher as well.
-
Nice, this sounds interesting. I am frequently having to restart it through the menu, but you are saying that if we set this to "auto" it may restart itself? I have definitely seen errors where it showed the 16-frame limit was exceeded. I actually have not been able to figure out why that is happening, but it's a different issue.
This of course isn't the same as setting up HotConnect groups, which hopefully KUKA will add in the future. For now we are running our bus through TwinCAT, and just relaying commands through the EtherCAT bridge. It works... as long as the PLC is running...
-
If you set it to auto it will reset itself; it works at least on the robot I am working on at the moment.
-
Great overview of options.
The DeviceNet option seems to be most appropriate for our situation: we want to send many points (>100,000) reliably from a PC to our KRC1 KS 4.17.
Can such a connection be set up similarly to an RS232 connection, using the CHANNEL command with CREAD/CWRITE? -
DeviceNet and RS232 are completely different things. CWRITE/CREAD have nothing to do with DeviceNet.
DeviceNet has to be mapped to the KRC's internal I/O table. The DeviceNet driver updates the variable states between the I/O table and the DeviceNet bus every 12ms (DeviceNet bus propagation delay is a separate issue). It is inherently parallel, rather than serial.
To send a number of values across DeviceNet, you would need to allocate a number of bytes for each value, and handshake bits to ensure the cyclic transfer across those bytes.
For example, assume you are sending 6 Real/floating point values for each point, and want to use 32-bit resolution. That would require 6*32=192 bits for transferring a single point, plus a few handshake bits -- quite easily within the robot's capabilities.
Your robot program would wait for a handshake "data ready" bit from the source, read the 192 bits in parallel into 6 Integer variables, then send back a "data received" bit. Then it would wait for the "data ready" bit to go false, reset the "data received" bit, and go back to waiting for the "data ready" bit again. Along the way, the 6 Integer variables would need to be parsed into 6 Real values.
Given the 12ms refresh rate and the need to handshake each transaction, you're probably looking at ~24ms per transfer this way. So 100 points would likely take ~2.4sec. This could be sped up by allocating more DeviceNet bytes and sending multiple points with every transaction. This works perfectly well, but requires additional programming.
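As a rough illustration of the byte budget (this is Python on the PC side, not KRL, and the little-endian float layout is my assumption, not KUKA's mapping), packing one point into the 192-bit payload looks like this:

```python
import struct

# Hypothetical sketch: six 32-bit IEEE-754 floats packed into 192 bits,
# the way one point might be laid out across the DeviceNet output bytes.
# The "<6f" little-endian layout is an assumption for illustration only.
point = (100.0, 250.5, 300.0, 0.0, 90.0, 0.0)  # e.g. X, Y, Z, A, B, C

payload = struct.pack("<6f", *point)   # 24 bytes = 6 * 32 = 192 bits
assert len(payload) * 8 == 192

# The receiving side reinterprets the raw bytes as six floats again,
# mirroring the Integer-to-Real parsing step described above.
received = struct.unpack("<6f", payload)
assert received == point
```

The "data ready" / "data received" handshake bits would sit alongside these bytes in the I/O image; they are not part of the payload itself.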
-
Thanks! Good to hear; that sounds indeed more stable than RS232.
So I need to dive into the mappings and DeviceNet communication to get it working (I need some time for that). The idea is to store the communicated values in a 2D array acting as a circular read/write buffer for coordinates. Are you aware of any applications using something similar?
-
Depends on how exactly you're defining "circular buffer," but yes, things like this have been done. Given how old your robot is, you may run into some array size limits that will require careful management. And the processor speeds are low enough that performing major operations on the array (say, rotating or bubble-sorting the contents) could have significant time costs, if you're attempting any sort of realtime operation.
-
I meant a circular buffer as defined at https://en.wikipedia.org/wiki/Circular_buffer. Concerning time costs: if any array processing is needed, it will be done beforehand on the PC sending the data. I operate a KRC1 from 2003 and hope not to hit any array limit by doing so.
With your comment about DeviceNet being "inherently parallel" do you mean that a single DeviceNet message will be processed in one cycle in the KRC and thus updates the bits all at once?
-
All right, so not a FIFO buffer or some other type that would require constantly sorting and shifting the data -- that could be costly in machine cycles, which was what I was concerned about.
DeviceNet doesn't do "messages" (well, not at the application level). On the KRC, it is generally used as "polled" I/O -- every device on the bus is assigned a fixed number of Input and Output bytes, and (from the application's view) all these bytes update simultaneously every 12ms.
(there are actually some complexities here that probably won't be relevant, so I'll avoid going down that rabbit hole right now)
Now, there are some potential issues to look out for: devices with very large byte counts, on slow busses, or devices with a low refresh rate (certain slow-conversion analog-to-digital converters, for example) may end up breaking the 12ms rule. That is, the robot will check the bus driver every 12ms, but if the remote device(s) only update every 37ms, you could see some lag in the data. In the worst case, a device assigned a large number of bytes might take more than one 12ms cycle to transfer all of its bytes, so the data could be temporarily inconsistent at the moment the robot refreshes the I/O table. Protecting against this kind of contingency is why most applications rely on explicit handshaking bits to manage block-data transfers over these busses. You are unlikely to encounter this kind of problem, but it's good to be aware of it so you know what to watch for. -
I meant raw DeviceNet messages, so I think we agree here.
Quote: All right, so not a FIFO buffer or some other type...
The goal is to use the circular buffer as a FIFO buffer, so I am a bit confused by this remark. Can you clarify what you mean?
Thanks for the comment about the refresh intervals. I will check it with the DeviceNet slave hard/software which is on its way right now.
Concerning the implementation of the buffer in KRL, I am puzzled how to handle the "parallel processing capabilities" of the KRC1: basically I want something like this:
Code
...
while Unfinished:
  ...
  Get point n from Buffer
  Move to point n
  Load new point in Buffer from DeviceNet Slave ; (should work parallel with previous line if $ADVANCE > 1)
  n = n + 1
  ...
This will work fine as long as the duration of the moves is sufficiently long, but what if the moves take only a few milliseconds? Will the program wait for the "Load new..." line to finish? Or are these handled by the controller as different "threads"?
-
A typical FIFO buffer would have to be shifted every time a new element was added or removed. So a ten-deep FIFO would take 11 read/write cycles every time you did anything with it. You appear to be achieving a FIFO-type functionality without that burden by using a circular buffer and merely indexing the read/write indices around the ring.
Mostly a difference in semantics, really.
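For what it's worth, the index-wrapping idea can be sketched like this (plain Python just to show the arithmetic; a KRL version would use a 2D REAL array and INT indices, and this class is my own illustration, not from any KUKA library):

```python
# Hypothetical sketch of the ring buffer discussed above: a fixed-size
# array plus read/write indices that wrap around, giving FIFO behaviour
# without shifting every element on each insert or removal.
class RingBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.head = 0    # index of the next slot to read
        self.tail = 0    # index of the next slot to write
        self.count = 0   # number of elements currently stored

    def put(self, point):
        if self.count == self.size:
            raise OverflowError("buffer full")
        self.buf[self.tail] = point
        self.tail = (self.tail + 1) % self.size  # wrap around the ring
        self.count += 1

    def get(self):
        if self.count == 0:
            raise IndexError("buffer empty")
        point = self.buf[self.head]
        self.head = (self.head + 1) % self.size  # wrap around the ring
        self.count -= 1
        return point
```

Each put or get touches exactly one slot and two indices, regardless of how deep the buffer is, which is the cost advantage over a shift-based FIFO.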
If you have a programmed robot motion that only takes a few milliseconds, that's a badly programmed motion. You end up getting lots of tiny stop&go motions, or lots of speed variance, because the points are too close together for the path planning to work optimally. It's a bit like the CAM Smoothing option in Fusion 360: https://www.youtube.com/watch?v=5PKR5ansIPo
Except in this case, you have to take care of handling the minimal spacing between points yourself. Robots are less amenable to "lots of tightly-spaced points" than CNC machines -- they're much more optimized towards a smaller number of points, spaced well apart.
With DeviceNet polled I/O, the values are "static" until a refresh happens. That is, if you read the inputs 5 times in between refreshes, you'll simply see the same values 5 times. You can actually verify this yourself by setting up a simple .DAT file array and running a loop to record the input values to it.
If you feed the robot a sine wave on that 32-bit input, the values in the LogArray, plotted graphically over time, will show flat plateaus where your read loop is running faster than the I/O refresh. Comparing the sine wave the robot records against the reference sine wave you know you're sending will tell you a lot about your total system I/O refresh, between the robot, the DeviceNet driver, bus propagation, and the input device update rate.
Parallel-processing motion can be tricky. To avoid any stop&go, your buffer has to stay ahead of the Advance pointer. So when using $ADVANCE=1, if the robot is moving from P1 to P2, you already have to have valid data for P2 and P3, and you have to have valid data for P4 before the robot reaches the mid-point of the approximation path through P1.
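The plateau effect is easy to simulate off-robot. This is a toy model in Python, not KRL; the 1ms read loop, 100ms sine period, and exact timings are arbitrary assumptions chosen only to make the effect visible:

```python
import math

# Toy simulation of the "plateau" effect: the read loop samples every
# 1 ms, but the polled I/O value only refreshes every 12 ms, so each
# latched value is recorded 12 times in a row before it changes.
REFRESH_MS = 12     # assumed driver refresh interval
LOOP_MS = 1         # assumed read-loop period
DURATION_MS = 120

log = []
for t in range(0, DURATION_MS, LOOP_MS):
    # the value the driver last latched; it only changes every 12 ms
    latched_t = (t // REFRESH_MS) * REFRESH_MS
    log.append(math.sin(2 * math.pi * latched_t / 100.0))

# Count adjacent samples that repeat -- these are the flat plateaus
# you would see when plotting the LogArray over time.
plateaus = sum(1 for i in range(1, len(log)) if log[i] == log[i - 1])
```

Plotting `log` against a true 1ms-sampled sine makes the staircase obvious, which is the same comparison suggested above for estimating the total system I/O refresh.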
-
Thanks for the suggestion on how to find the refresh rate. As soon as my DeviceNet card arrives I will test it and will let you know. Thanks again for the great help!
-
I hoped to get back with a nice algorithm question, but the configuration of the DeviceNet card through devnet.ini causes the DNDRV to stop. The details:
DEVNET.INI
Code
[krc]
debug=1 ;was 0
baudrate=500
logfile=1 ;new

[1] ;existing IO
macid=5

[2] ;new DeviceNet node
macid=10
Reset and reconfigure I/O does not pick up the new settings reliably, so after restart the message "IO configuration error: DNDRV" appears. The contents of IOSYS.INI do not seem to be relevant to this error (I tried many); the error appears only if the DEVNET.INI scanlist is expanded with node [2], and disappears if it is removed. This all happens on a KRC1. What am I overlooking?
-
The most obvious answer is that whatever device you have on MAC ID 10 has either its MAC ID or its baud rate set incorrectly. Assuming you're not physically unplugging Node 10 when you change DEVNET.INI, hardware issues like cable termination are less likely. The first thing I would try is using the Telnet diagnostics to scan the DN bus and see what devices report back. If Node 10 is properly connected and set to the correct baud rate, it should respond to a dnWho and report back its MAC ID and polled I/O byte counts.
-
I did not connect the new device (to prevent wiring/baud rate issues) because the DeviceNet master should be able to handle the situation where one of its slave devices (the new device in this case) is offline. Or is that not correct? In other words: should all devices in the scan list be online at DNDRV startup?
I found a DeviceNet document for the KRC2; are you aware of a similar document for the KRC1? I could only find dnShow and dnWho as Telnet commands for the KRC1.
-
The master does handle nodes that are in the scan list but not available -- it reports an error and stops scanning.
If you want it to run, make sure all nodes are present and configured correctly. -
The connection is now working; sending test packets from the PC to the robot results in red I/O bullets in the I/O monitor. I will dive into preparing the data on the PC side to research the max refresh rate.
-