What makes good robot/cobot safety guidelines?

  • Hi everyone, I'm a social scientist working with the University of Technology Sydney (UTS) on a project that aims to understand how to design good safety guidelines for working with cobots. We'd love your input in this research.


    I understand this is a broad topic, and that "cobot" is a broad term, but we are interested in a broad range of circumstances. This particular part of the study is focused on safety guidelines in the workplace, and the features that make guidelines good (eg effective, easy to follow, appropriate). This is not about replicating or replacing existing safety standards, but about how workplaces can encourage safe practices and reduce risk.



    My questions for you all are:


    1. Have you ever seen good safety guidelines for working with cobots and/or robots in the workplace? If so, please describe them, especially why and what you like about them. If possible, please share a photo, screenshot or file of them. (If you don’t have an example of a good cobot/robot guideline, please describe another technical guideline that you like).


    2. If you were designing ideal safety guidelines for working with cobots in your workplace, what would they be like? (eg the format, placement, type of information etc.)


    3. Have you ever been in a situation where there were no safety guidelines available, but they could have been useful? Please describe what the guidelines ideally would have looked like, communicated, addressed etc., and why you think they could have been useful.


    Please let us know a bit about your experience working with robots/cobots in your responses. Any feedback would be much appreciated!


    Thank you.

  • So... this is for operators, not integrators, then? People with no technical skills, just hired off the street and put to work with the Cobot?


    Hm... interesting. Obviously, there's a lot of process-dependent details. And a lot will depend on how well (or not) the original integrator set it up. There's also the fact that some Cobots can actually switch in and out of "collaborative" mode. A properly-configured system should ensure that the robot cannot operate in non-collab mode (ie, "harmful") whenever a human is within reach, but not every system is properly configured. Still, that issue seems to lie outside the scope of what you're looking for.
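    A minimal sketch of that interlock logic (the mode names and the human-detection flag are my own assumptions, purely illustrative; a real system enforces this in rated safety hardware, not in application code):

```python
# Hypothetical mode-selection rule: never grant full-speed (non-collaborative)
# operation while a human is inside the reach envelope.
COLLAB = "collaborative"          # speed/force-limited mode
FULL_SPEED = "non_collaborative"  # "harmful" mode in the post's terms

def select_mode(requested, human_within_reach):
    """Force collaborative mode whenever a human is within reach."""
    if human_within_reach:
        return COLLAB
    return requested

print(select_mode(FULL_SPEED, human_within_reach=True))   # collaborative
print(select_mode(FULL_SPEED, human_within_reach=False))  # non_collaborative
```

    The point of the sketch is the asymmetry: the human-presence check always wins over whatever mode was requested.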


    The first rule, I think, is to keep in mind that you (the operator) are the smart one in this relationship. The robot doesn't want to hurt you, but it's stupid, and mostly deaf, dumb, and blind. A Cobot should detect when it's bumped into you and stop before it applies harmful force, but you should never take that for granted.


    Don't put any part of yourself between the robot and a solid object if you can avoid it.


    Just because the robot tries to be gentle, doesn't mean it always succeeds. No matter how gentle it tries to be, your eyes are still very vulnerable. And if the robot is carrying a sharp/pointy tool, many parts of your squishy self are vulnerable. The robot's grippers might still be able to apply a dangerous force if you get your fingers in the way. A robot carrying a spray nozzle could still spray you with something toxic if a valve malfunctions.


    Management has incentives to increase productivity and reduce cost. This may result in the robot being configured... less than fully safe? Again, you are the smart one, and you have to look out for your own safety. Any robot, even a Cobot, needs to be treated with the kind of respect given a loaded firearm[1]. It may be very safe 99.9999% of the time, but that's no excuse to get careless, b/c if you get careless at the same moment that 0.0001% chance comes up... think in terms of seatbelts, and airbags, and insurance. No matter how safe the Cobot is, good work habits will minimize your exposure to the one-in-a-million chance of something going wrong.


    Be aware. If the robot or application looks unsafe, or if you see it start to behave oddly, take precautions. Say something. Someone who thinks they know what they're doing may have modified something, made the robot faster or stronger, and made it less safe. You are the one exposed, so you have to look out for yourself. If you've worked with this Cobot for a while, you might be the first, or only, person who notices that something is "off" about it.


    It's not your friend, or your enemy, any more than gravity is when a piano falls on you from 4 stories up. It's a tool (not a toy), like a chainsaw. Powerful, useful, but dangerous if you get careless.


    Moving away from "operator" instructions, to management instructions:

    The biggest mistake I've seen is people buying a Cobot and thinking it's completely safe because it's a Cobot. NO. A Cobot carrying a sharp tool, or a welding torch, is not safe.


    If you're buying your first robot, get professional help. There is an entire industry of experts who understand the relevant safety specs and can handle them for you. That said, there are integrators who will cut corners, so get educated. Read the summaries of the relevant safety specs (RIA, your local legal requirements, etc). Look at your application, ask yourself (and your integrator) "what if?", and look at how you are protected against that "what if?". Never assume that something is too blindingly stupid for a tired, distracted operator to do -- at 8hrs/day, 5days/wk, 50wks/year, a moment of really egregious carelessness is almost statistically inevitable.
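    To put a number on "statistically inevitable": with made-up (assumed) figures for how often an operator interacts with the cell and how rarely a serious lapse occurs, the odds over a work year add up fast:

```python
# Illustrative only: the per-cycle lapse probability and cycle rate are
# assumed numbers, not measured ones.
p_lapse = 1e-6          # assumed one-in-a-million chance of a serious lapse per cycle
cycles_per_hour = 60    # assumed one operator interaction per minute
hours = 8 * 5 * 50      # 8 hrs/day, 5 days/wk, 50 wks/yr (from the post)

cycles = cycles_per_hour * hours               # 120,000 cycles per year
p_at_least_one = 1 - (1 - p_lapse) ** cycles   # chance of at least one lapse
print(f"{p_at_least_one:.1%}")                 # roughly an 11% chance per year
```

    Even a one-in-a-million event, repeated 120,000 times a year, becomes a double-digit annual probability, which is the whole argument for defense in depth.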


    Establish safety standards for the equipment you buy, and for your in-house maintenance personnel. I've nearly been killed more than once by a system that had been built safe originally, but had been modified "temporarily" by the plant to get around a problem and keep production going... and that "temporary" change was left in place and forgotten for years. You might as well be juggling chainsaws. If you don't have the in-house expertise, there are contractors who supply that expertise, and/or training in same.


    [1] Rule 1 of firearms safety: it's always loaded, even when you know it's not. Because one day, the thing you "know" will turn out to be wrong....

  • Thank you SkyeFire, that's very helpful!


    In addition to the specific guidelines you've provided above, I was wondering whether you've ever come across good examples of how organisations or individuals have communicated these guidelines? We're also trying to understand how guidelines can be communicated/reinforced effectively.


    For example, is it mainly training, or are there other formats or methods that have been effective at reinforcing or communicating guidelines? (The warning sticker for airbags in cars that has some visuals to show you which direction to put a child/baby seat would be an example)...

  • Well, it's generally about writing standards, and enforcing them. The Big Three automotive companies are pretty good about this, to the point you can go to pretty much any Ford (for example) plant in North America and know what the placarding means.


    On the shop floor, clear placarding is important. Consistent placarding is also important -- if the warning icon for a pinch point is different on every piece of equipment, you've created a potential for hazardous confusion.


    Color coding is also used. Most places tend to paint any "moving parts" orange -- not the robot itself, but gripper fingers, or moving pins/clamps on station tooling. Stationary elements (tables, support columns, etc) would have a different color. Going outside robots, building columns where emergency equipment (AED, fire extinguishers, phones) is located are usually painted stand-out colors, with placards.


    On a work cell fence, there should be placards that state where different hazards (electrical, pneumatic, gravity, stored energy) are, where the lock-out points for them are, etc. There's an entire sub-science in balancing detail against clutter on these placards. There's usually a "big picture" placard for everyone, then at/near individual hazards, more detailed placards.


    For any integrators you (as a company using their services) work with, you should have a clear, complete safety specification. The RIA standards are a good place to start, but every locality has its own OSHA equivalent, so you need to add those in. RIA standards are also not prescriptive -- they're more like a series of tests to apply than a paint-by-numbers checklist. Implementation of the RIA standards varies between applications and companies. There's a calculus of hazard probability, severity, and ease of avoidance that needs to be carried out, and every facility handles that calculus a bit differently.


    So, as an integrator, I'm familiar with the RIA standards (in general), but every customer has their own "spin" on implementation, and needs to document that for me to apply when I'm working on their machinery. Major customers will often write their own "simplified" equivalent of the applicable safety standards, with lots of "padding" to ensure that that simplified standard will work across all their applications. For example, instead of doing a complex simulation of every robot move vs the fenceline, the customer might simply have a rule that the safety system won't allow the robot to come closer than X mm to the fence under any circumstances, where X has enough "padding" to cover all the robot models, loads, and speeds. More advanced customers might add more nuance.
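    A toy sketch of that padded-standoff rule (the standoff value, fence geometry, and function names here are all hypothetical; a real implementation lives in the certified safety controller, not in a script like this):

```python
import math

# Assumed padded limit; a real value comes from the risk assessment and must
# cover all robot models, loads, and speeds in the facility.
FENCE_STANDOFF_MM = 500.0

def point_segment_distance(p, a, b):
    """Shortest distance from point p to segment a-b (2-tuples, in mm)."""
    px, py = p; ax, ay = a; bx, by = b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def tcp_allowed(tcp, fence_segments, standoff=FENCE_STANDOFF_MM):
    """True if the tool center point keeps the padded standoff from every fence segment."""
    return all(point_segment_distance(tcp, a, b) >= standoff
               for a, b in fence_segments)

fence = [((0, 0), (3000, 0))]            # one fence wall along the x-axis
print(tcp_allowed((1500, 1200), fence))  # True: 1200 mm clearance
print(tcp_allowed((1500, 300), fence))   # False: inside the 500 mm standoff
```

    The design choice is exactly the one described above: a single conservative distance check replaces a complex per-move simulation, trading some usable workspace for a rule simple enough to verify at a glance.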


    Some customers will just photocopy the RIA, OSHA, UL, etc standards, throw that at the integrator, and say "make it work like this." Those customers generally aren't good to work for, and tend to have a poor understanding of the standards.


    There are entire companies that specialize in safety, and have certifications in doing safety audits and signing off on them. Often I see them hired in as a neutral 3rd party to do a final sign-off on a new or modified machine. I imagine some of those companies also offer services to help write end-user standard documents.

  • Thanks for that!


    Would it be fair to say then, as a generalisation, that it works best when organisations (big or small) simplify standards and contextualise them, whether they do that in-house or hire 3rd party companies to do it for them?


    In terms of enforcing them, how do people ensure these guidelines are followed? And what happens when they're not?

    Well, ensuring the guidelines are followed really depends on enforcement mechanisms. I knew one engineer who would buy any system from any vendor, no matter how lousy it was, if they took him out to fancy restaurants with lots of liquor every night during the buyoff phase. There needs to be a process, and it needs to be followed, and there needs to be some kind of "adversarial" check against rubber stamping.


    For plant-floor operators, you can't watch them all the time. So training is vital.


    In both cases, making the safety as simple and "check-listy" as possible helps. Safety procedures that are too complex push people to ignore them, or even bypass them.


    Likewise, corporate culture plays into this as well. 80-90% of the accidents I've seen involved a system that was designed to be (reasonably) safe, but ended up field-modified b/c in the plant everyone was driven to "don't do it Right, do it Right Now," and "temporary" bypasses of some safety elements became permanent. It's not enough to just set rules; management also has to not push personnel in ways that incentivize them to break those rules.


    On the flip side, it's easy to make safety too onerous. And I've noticed a trend to push personnel "too stupid to operate anything, but can't be fired for some reason" into the Safety Department, where they can "do the least damage."

    One plant I recall vividly had a Safety Manager who required that, even after shutting down the power to a robot and padlocking the disconnect, maintenance personnel were to chain the robot arm to the floor before doing anything else inside the safety perimeter. He had even had eyebolts installed in the concrete in front of all 400+ robots(!) for this purpose. His justification was "what if a bolt of lightning hits the building, arcs across the locked-out disconnect, and somehow makes the robot jerk violently when someone is standing right next to it?"


    There's a term, "Hollywood threat", I've heard used to describe this. Usually in the context of security, and usually alongside "security theater."

    This kind of thing is corrosive to good safety. It encourages people to take any/all safety regulations coming from the same source less seriously.


    Consequences? Well, it depends. Too many safety violations never get noticed, b/c of either luck, or b/c of other safety layers catching what falls through. If you drive without a seatbelt, your airbag might still save you, after all. Until the day the airbag fails.

    The most recent lethal robot accident I'm aware of (~ten years ago, in Germany, IIRC) involved a tech who, for unknown reasons, bypassed the safety gate, and walked into a cell in the middle of a welding line that was nearly 100m long. Then someone else, at the other end of the line, hit the Line Start button. The tech was crushed against a support beam by the robot, and no one had any idea until someone came down the line to see why one robot was reporting a torque fault to the central control station.

    That accident required the tech to be bypassing that safety, and standing where the robot would strike him, at the same time someone else hit the start button. How many people get away with doing what that tech did, because they get lucky and don't hit the full combo?


    Consequences for a vendor that fails to meet safety standards are generally financial. If they're not caught, then the consequences may be someone being harmed or killed, followed by government safety department fines and lawsuits.


    Consequences for a plant-level operator that doesn't follow safety rules generally start with suspension-without-pay, and escalate on a per-incident basis. However, as noted previously, it's too common for management to set safety rules with one hand, then drive people to ignore them in the name of productivity with the other. That problem needs corporate culture values, government regulation, inspections, and clear standards to alleviate.


    Consequences for maintenance personnel who bypass safety devices probably need to be more severe, but maintenance faces those same management pressures. That is a customer-side problem, as it's not possible for a vendor to create a system that's safe from the people who maintain it. At some level, any safety device can be defeated by someone with the know-how, a basic toolbox, the time, and the motivation.

  • This particular part of the study is focused on safety guidelines in the workplace, and the features that make guidelines good (eg effective, easy to follow, appropriate). This is not about replicating or replacing existing safety standards, but about how workplaces can encourage safe practices and reduce risk.

    Robotics safety standards exist for a reason, and they are mandated. They are harmonized, so most countries have the same or a very similar standard, since they are all built around ISO-10218. In Canada it is CSA-Z434, and the latest revision is little more than a cover page slapped on ISO-10218. And it covers/applies not just to conventional industrial robots but also to collaborative robots.

    Btw, "collaborative robot" is just a name; they can be dangerous too. This means that implementation is very important to meet the required standards and ensure safety.



    Those would be the requirements, or the bottom line.


    You can exceed it as much as you like -- the sky is the limit.


    In terms of enforcing them, how do people ensure these guidelines are followed? And what happens when they're not?


    One could have everyone move around the plant in a mobility scooter while wearing a hazmat suit and a helmet with a flashing beacon, and have every inch of the walls covered with safety labels, motivational posters, etc.


    But the best safety policy, in my opinion, is to not hire idiots. I know it is harsh, but that is the truth. Diversity hires, added to the workforce only to meet some quota even if they have no training, skill, or experience in the required field, are a problem.


    If one hires idiots, no amount of safety measures, policies, safety committee meetings, safety labels, guards, or fences stands a chance. It is only a matter of time before someone does something really stupid, and then that act of a single idiot will necessitate a whole bunch of reviews, training, new procedures, etc.


    And all of that only frustrates the normal people who make this world function -- those who are actually the responsible, attentive, and productive types.

