Creating and evaluating a robot computing network

  • Hi everyone,

    I wanted to ask if any of you knows of a similar project (I couldn't find any, which seems odd), or can evaluate the feasibility of the following idea. I have formulated it very generally so that no perspective is lost:

    In the context of introducing IoT technologies, companies often run into limited computing resources that are no longer sufficient for the intended applications. So I asked myself how existing robots and machines could form an edge computing network, in order to avoid expensive new hardware purchases. A brownfield approach might thus be feasible without much effort (which may sound too good to be true).

    For this idea I have two questions that came up during my research, and I would be interested in your opinion:

    1. How do you estimate the technical feasibility of joining existing robots (a new KUKA or similar) into a network? This would require not only a solution for orchestration, but also meeting safety-critical requirements. For example, no computation for another application should ever cause a malfunction because, in the end, not enough computing resources were available. How do you see these safety hurdles?
    2. Cost-benefit ratio: In my opinion there is hardly any computing power in old robots, and the result would not justify the effort. With a modern system (e.g. a KUKA KR C4) or a robot with a companion computer, however, I do see some potential, especially while the system is at a standstill. As a newbie in this area, I can't judge to what extent such a network would be suitable for real-time production optimization or similar. That is why I am very interested in a first estimate from you.

    Best regards 🙂


  • ...are you trying to create a Beowulf cluster using spare clock cycles in robot controllers? Why?


    "Trying" would be saying too much; we are investigating the feasibility. But a Beowulf cluster comes very close! In fact, we did a survey, and one main reason for not introducing I4.0-related technologies is limited resources (financial and technical).

    If one could sidestep that by using the existing hardware, we could try to solve this problem.

    So we have two main points to focus on in the beginning: 1. technical feasibility and 2. resulting computing power.

    Regarding the spare clock cycles, it's hard to say. If no action is being performed, the robot can act as a pure edge computing device. While it is working, one could utilize the free computing power (to a limited extent, so that 100% task fulfillment and safety are always guaranteed).

  • That's... okaaaaay. While a lot of robots may have spare clock cycles available, that doesn't mean they're accessible.

    Getting at any of those clock cycles is going to be very brand-specific -- proprietary OSs and languages. Not to mention almost fully customized on a case-by-case basis, since even in a single facility the amount of free computing resources can easily vary wildly, from robot to robot and from moment to moment.

    Then there's connectivity. Most industrial robots don't have "network" access in the way we're accustomed to almost everything having these days. They're built for connecting to hard-realtime, zero-jitter industrial busses, and while some of those can pass through a normal ethernet network, tapping that traffic with a normal computer requires special and expensive drivers and/or hardware. Not to mention, it makes your IIoT infrastructure part of the critical path of your automation -- losing that network connection could kill your production cycle if you're not careful.

    And adding "normal-ish" network data transfer to industrial robots requires, again, brand-specific and version-specific option packages for each robot. Not to mention writing communications code in each robot's brand-specific language.

    And that's just the tip of the iceberg. Remember "hard realtime, zero-jitter"? There was a time that a major manufacturer decided that, since EIP and ProfiNet used normal Ethernet cables, they could just wire everything in the entire plant onto one physical network -- the automation realtime comms, email, YouTube, web, the works. It broke everything due to network congestion.

  • Thanks for that answer! That really helps a lot.

    I expected there would be limitations, especially in access and connectivity. I'm also completely with you regarding the heterogeneous manufacturer landscape. However, it is not uncommon for a company to use a homogeneous landscape.

    Assuming we consider only relatively new robots of one brand, one could use that case to demonstrate the performance and connectivity. Taking the concrete example of KUKA robots: if there are 20 robots in a row, do you see the possibility of implementing such a project there? Or would the overhead and the orchestration effort be too great even in this case?

    The same question applies to accessibility.

    Of course, I have also found examples with >1000 Raspberry Pis that were still worse overall than a single Intel Core i9-9900K. That already limits the potential, but a certain performance range should be possible for smaller applications.

    The manufacturer example is nice but a bit demotivating. The robots should definitely not be used 100% for other applications right away, but rather to compute tasks that are not directly linked to the line, for planning purposes etc.


    So, drilling down to KUKAs specifically: you have two options. Run an application in Windows in parallel with the robot's core processes, or create a background task in the robot itself, in the robot's own language.

    All robots use scheduled multitasking of some type. In KUKAs, the realtime OS (VxWorks) runs in parallel with Windows (which only runs the user interface), but VxWorks has full priority over system resources. Inside VxWorks, the KUKA System Software (KSS) runs multiple tasks on a fixed schedule. Every task (motion update, I/O update, background task, etc.) gets a fixed portion of the overall "IPO cycle".

    The IPO cycle has been 12ms for about 30 years. In that time, processor clocks have gotten faster, reducing the per-line execution time of any non-motion code. It used to be that a long background task would only run some percentage of its total loop during each "slice" of the IPO "pie," and would be paused between slices (resuming at each new slice). These days, non-motion code runs very quickly. But if your task takes longer than the slice, it will take multiple IPO cycles to complete. Also, even if the code runs in 10 nanoseconds, you can only get an I/O refresh once every 12ms. And handling asynchronous TCP/IP or UDP from a background task is entirely possible, but can be a little tricky. You'll want to program in state machines, with no wait states.
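    The "state machine, no wait states" pattern described above can be sketched in Python (KRL syntax differs; the protocol, state names, and class here are illustrative, not a KUKA API). The point is that each call does a bounded amount of work and returns immediately, so it fits inside one IPO slice:

```python
# Sketch of a non-blocking receive state machine for a cyclic
# background task. step() is called once per cycle and never waits;
# the 2-byte length-prefixed protocol is a made-up example.

IDLE, WAIT_HEADER, WAIT_BODY, DONE = range(4)

class RecvStateMachine:
    def __init__(self):
        self.state = IDLE
        self.buffer = b""
        self.length = 0
        self.message = None

    def feed(self, data: bytes):
        """Bytes that arrived since the last cycle
        (e.g. drained from a non-blocking socket)."""
        self.buffer += data

    def step(self):
        """Called once per IPO cycle; does bounded work, never blocks."""
        if self.state == IDLE and self.buffer:
            self.state = WAIT_HEADER
        if self.state == WAIT_HEADER and len(self.buffer) >= 2:
            self.length = int.from_bytes(self.buffer[:2], "big")
            self.buffer = self.buffer[2:]
            self.state = WAIT_BODY
        if self.state == WAIT_BODY and len(self.buffer) >= self.length:
            self.message = self.buffer[:self.length]
            self.buffer = self.buffer[self.length:]
            self.state = DONE
```

    A message that arrives split across several cycles is simply assembled over several step() calls, with no cycle ever stalling.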

    The robot programming languages also generally lack a lot of tools that high-level languages all have -- for example, KRL has no POW function and no binary shift operators. You have to write your own loops to handle that.
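    For illustration, here is the kind of loop you end up writing, shown in Python for readability (the equivalent KRL would be a FOR loop over an INT):

```python
def int_pow(base: int, exp: int) -> int:
    # Repeated multiplication, standing in for the missing POW.
    result = 1
    for _ in range(exp):
        result *= base
    return result

def shift_left(value: int, n: int) -> int:
    # Binary left shift via doubling, standing in for the
    # missing shift operator.
    for _ in range(n):
        value *= 2
    return value
```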

    Adding a Windows task has other issues. If it ever conflicts with VxWorks over resources, you'll probably crash the robot; at best, you'll get lag. So any Windows-side app would have to be small, self-contained, not require the latest Windows patches or updates, and not use direct hardware resources like IRQs and DMA.

    Then there's the root issue of any parallel-processing operation: dividing the task into bite-size chunks, then re-assembling the results. That, you would probably have to handle entirely outside the robots.
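    The split/gather step described above can be sketched like this (a minimal scatter-gather outline; the "workers" and the sum-of-squares job are placeholders for real robot controllers and a real workload):

```python
def split_into_chunks(data, n_workers):
    """Divide a job into roughly equal chunks, one per idle controller."""
    size = max(1, -(-len(data) // n_workers))  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def gather(partial_results):
    """Re-assemble partial results; a plain sum stands in for
    whatever reduction the real application needs."""
    return sum(partial_results)

# Example: a sum of squares scattered over 4 hypothetical controllers.
chunks = split_into_chunks(list(range(100)), 4)
partials = [sum(x * x for x in c) for c in chunks]
total = gather(partials)
```

    This coordinator logic would live on an external machine; the robots would only see their own chunk.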

  • Hi Skyfire and HawkME,

    thanks to both of you for your answers, and sorry for my late reply. We have done some interviews now and got more or less the same feedback regarding different OSs, accessibility, spare cycles, safety, and so on.

    We also found that Siemens recommends a maximum task runtime of 70% of the set cycle time. However, you can set the cycle time individually, with the restriction that the maximum task time during operation does not exceed the time set. Here we could try to analyse whether a smaller cycle time and faster I/O updates make the whole operation more attractive.
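    The budget implied by that 70% guideline is easy to work out; here is a small sketch (the function name and the 12ms/5ms example values are illustrative, not from any Siemens datasheet):

```python
def usable_budget_ms(cycle_ms, max_load=0.70, base_task_ms=0.0):
    """Compute time left per PLC cycle under the quoted guideline of
    at most 70% task runtime. base_task_ms is the time the normal
    control task already needs each cycle."""
    return cycle_ms * max_load - base_task_ms

# E.g. a 12 ms cycle whose regular control work takes 5 ms leaves
# 12 * 0.7 - 5 = 3.4 ms per cycle for extra computation.
```

    Shrinking the cycle time raises the I/O refresh rate but also shrinks this per-cycle budget, so the trade-off would have to be measured on the real line.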

    However, it would be great to know the potential of the whole idea. That's why we started thinking about an ideal use case: a hypothetical homogeneous and relatively new manufacturing line with no accessibility or connectivity issues. We also found out in interviews that total runtime during normal operating hours is sometimes only 50%. That means that in the other 50% of the time, other calculations would not affect the safety of line operation.

    That's why I looked for any clustering of robots into one network for such a case, but as expected, I found nothing. So my question is whether you know of any projects that may have looked into this? In such a case, e.g. a Siemens CPU 1518-4 PN/DP, which is normally used for high-performance calculations, would be free. Do you see any potential there (in such a best case of state-of-the-art HW), or do the points above still predominate?

    Best regards and still a happy new year!
