Posts by ColoradoTaco

    Okay, I have a ton of experience doing RAPID and configuration stuff in RobotStudio, but the modeling and simulation side is still really new to me.


    I am trying to model up a tool for a new project. I have a pneumatic gripper and a pneumatic rotary actuator. I need to be able to index the rotary and have the tool/TCP follow, but I can't seem to get it right. Should all the components be set up under a single tool? Or should I create a mechanism for the actuator and a tool for the gripper, then attach them in my station?


    I am able to get the physical model of the gripper to move with the actuator, but when I try to get the TCP to move as well, I'm just not getting it.

    Enio, do you have a bullseye you can run it through? I typically see that as standard practice after a torch or cup change. If you don't have a bullseye, I would definitely look into it!


    SkyeFire & Skooter are right on with their comments, as well.

    As in the title, looking for CAD model of a CRX-5ia. STEP file would be preferable, but I'll take what I can get. Anyone have one they're willing to share, or know where I can download one for free? My mediocre Google skills have not turned up anything (although I did find a 10iAL on GrabCAD).

    There's a tool called edge locator that might help. You can measure your parts with it.

    We tried using edge locator, but didn't really find any benefit for us. We are currently using a couple of stacked blob tools, plus some logic in the TP program to filter out parts that are stacked too close to each other.


    We've also set up a comparator sub-routine that pulls data from the first part found, and compares every subsequent part to those values. We check blob area, perimeter, and semi-major axis. If any of those fall outside of a 10% tolerance, we don't pick the part. It's not elegant, but so far it has been very effective and reliable for a huge range of screw styles and lengths.
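
    For anyone curious, the comparison logic is roughly what's sketched below. The real thing lives in the FANUC TP program, so this is just an illustration written in RAPID-style code (my home turf), with made-up names and a made-up data layout:

    MODULE BlobFilterSketch
        ! Reference values captured from the first part found this cycle
        VAR num nRefArea;
        VAR num nRefPerim;
        VAR num nRefAxis;
        CONST num nTol := 0.10;    ! +/-10% band around the reference

        ! TRUE if a value is within the tolerance band of its reference
        FUNC bool InBand(num nValue, num nRef)
            RETURN Abs(nValue - nRef) <= nTol * nRef;
        ENDFUNC

        ! TRUE if the blob should be picked. nIndex is the blob's position
        ! in the found-parts list; blob 1 becomes the reference.
        FUNC bool BlobOk(num nIndex, num nArea, num nPerim, num nAxis)
            IF nIndex = 1 THEN
                nRefArea := nArea;
                nRefPerim := nPerim;
                nRefAxis := nAxis;
                RETURN TRUE;
            ENDIF
            RETURN InBand(nArea, nRefArea) AND InBand(nPerim, nRefPerim) AND InBand(nAxis, nRefAxis);
        ENDFUNC
    ENDMODULE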

    There is a teach tool where you can upload a number of "good" images and the software will create an aggregate ideal part. But I haven't really tested the limits of this. For the application I'm working on, I would have to manually load thousands of images.

    TRANSLATION VIA GOOGLE


    Plates 153 and 165 are attached to the robot.


    The robot and the welding power supply can be controlled, but the parameters entered into the robot are not required to power it with electricity.


    After striking the arc, the wire is not fed.

    jaiiyer I pulled one of our SICK floor scanner manuals, and there is just a single mention of air quality.


    "Keep the area to be monitored free of smoke, fog, vapor and other air impurities. No condensation must be allowed to form at the light emission window. The function of the device may otherwise be impaired, which can lead to unintended shutdowns."

    The other issue is dust accumulating on the glass of the scanner. That you would need to address with some dust visors and regular PM.

    Did a service call once at a very nasty galvanized welding facility. Dirty, dirty, nasty process. They had SICK scanners on every cell, with custom air knife curtains around every one of them. Kept the lenses clean, but damn was it noisy!

    Do you know the size of parts it is supposed to be picking at a given point in time?

    Currently, no.


    We are trying to avoid any operator input like barcode scanning or selecting a SKU from a list. The hope is that they can just dump a batch of parts in the feeder and hit Start.

    I've used the histogram tool to filter out touching parts. Basically, require a pixel value around the part to ensure they're not touching.

    We did find that to be useful as well. But since we have to be so forgiving on the dimensions, we end up seeing some adjacent parts make it through as single parts (see the images above), so we need another way to discriminate them. It can't be a fixed length check, due to the variety of parts. If we can compare individual values to the average value of all found parts, then we're onto something!

    Working on error-proofing our vision job, and have been trying to experiment with the Statistics Calculation Tool. Unfortunately, it doesn't seem to work the way we were expecting, and we need some experienced input.


    - We have a VERY forgiving and generic vision job, to accommodate a wide variety of parts without changing recipes (not my idea, this is how the project was requested)

    - No issues with picking different sizes and lengths, as we are able to identify blobs of roughly the right shape, and get coordinates just fine.

    - The issue is the rare occasion that parts stack up side by side, or end to end, resulting in a single blob whose vision offset ends up picking TWO parts or ZERO parts with our vacuum pen.


    We were hoping to look at the MEAN value of all major axis results, or the MEAN of all AREA values, but have not been able to figure out a way to do that. The calculation tools seem to only look at a single result?
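
    To make it concrete, the logic we're after is something like the sketch below. This isn't anything the Statistics Calculation Tool does today as far as I can tell, and the names and data layout are invented; it's written in RAPID-style code just because that's what I write most, even though the actual job is iRVision + TP:

    MODULE MeanFilterSketch
        CONST num nTol := 0.10;    ! allow +/-10% around the mean

        ! Average of the first nCount values in an array
        FUNC num Mean(num nValues{*}, num nCount)
            VAR num nSum := 0;
            FOR i FROM 1 TO nCount DO
                nSum := nSum + nValues{i};
            ENDFOR
            RETURN nSum / nCount;
        ENDFUNC

        ! Compare every found blob's area to the mean of all found blobs.
        ! Anything far above the mean is probably two stacked parts; anything
        ! far below is probably a partial blob. Either way, don't pick it.
        PROC FlagOutliers(num nArea{*}, num nFound, INOUT bool bPick{*})
            VAR num nMeanArea;
            nMeanArea := Mean(nArea, nFound);
            FOR i FROM 1 TO nFound DO
                bPick{i} := Abs(nArea{i} - nMeanArea) <= nTol * nMeanArea;
            ENDFOR
        ENDPROC
    ENDMODULE

    The same pattern would apply to the major axis values.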

    If I understand correctly, your "Move Home" routine can be called at any time from the HMI, and will crash the tool into your fixture if called at the wrong time?


    Homing routines are a very big topic. Many ways to accomplish this, but the right answer will depend on your cell design and how difficult it is to move the robot to a safe position.


    Your home routine SHOULD

    - look at the current robot position to help choose a safe path

    --- Typically save this position, then back out using Offset or RelTool if you have a clear path (see the sketch at the end of this post)

    - be tested from all expected positions during normal operation

    --- backwards handlers can get the robot out of tight spaces where tool/fixture might interfere

    - wait for operator input if robot is outside of programmed ability to get home safely


    Your home routine SHOULD NOT

    - move directly home from anywhere

    - move at high speed


    If you are having trouble with the robot moving home from an HMI input, then hold (latch) that input and only act on it once the robot reaches a safe location, rather than immediately stopping and trying to get to HOME.
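
    For the "clear path" case, the save-and-retract pattern looks something like this in RAPID. The tool, home target, speeds, and distances here are placeholders, not from any real cell:

    MODULE SafeHomeSketch
        ! Placeholder tool and home position - replace with the real ones
        PERS tooldata tGripper := [TRUE, [[0, 0, 120], [1, 0, 0, 0]], [2, [0, 0, 60], [1, 0, 0, 0], 0, 0, 0]];
        CONST robtarget pHome := [[800, 0, 900], [0, 0, 1, 0], [0, 0, 0, 0], [9E9, 9E9, 9E9, 9E9, 9E9, 9E9]];

        PROC SafeHome()
            VAR robtarget pCurrent;
            ! Capture where the robot is right now
            pCurrent := CRobT(\Tool:=tGripper \WObj:=wobj0);
            ! Back straight out along the tool Z axis before any big motion
            MoveL RelTool(pCurrent, 0, 0, -100), v100, fine, tGripper \WObj:=wobj0;
            ! Only then take the larger joint move to the home position
            MoveJ pHome, v300, fine, tGripper \WObj:=wobj0;
        ENDPROC
    ENDMODULE

    The real version would branch on where pCurrent actually is (a TEST on zones, or a chain of IFs on the saved position) before choosing the retract direction, and would stop and ask for operator help if the robot isn't in any known zone.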

    Trying to figure out a clean/simple way to create a user-input-driven menu to update key positions in my program. This is a silly way to update positions, I know. It's more of an exercise in creating menus.


    Currently I have a series of UIListView commands that set values in an array, then a TEST / CASE on the array value to determine which robtarget or jointtarget to modify (rough sketch at the bottom of this post).

    Once the desired position is selected:

    1. Set the selected value to a local variable

    2. Move to current value of desired position

    3. Prompt user to jog

    4. Update to new position? Yes/No

    5. Return to top menu


    I'll replace the first TEST with something a little cleaner. Probably set a STRING based on the first list input, then run that string through a single UIListView to get my second array value.


    I've never made an in-depth menu like this. Is there a better way, or am I headed down a reasonable path?
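
    For reference, the flow described above would look something like the sketch below. None of it is tested, the target/tool names are placeholders, and it assumes the FlexPendant Interface option for UIListView:

    MODULE MenuSketch
        ! Placeholder targets and tool - not from the real program
        PERS tooldata tGripper := [TRUE, [[0, 0, 120], [1, 0, 0, 0]], [2, [0, 0, 60], [1, 0, 0, 0], 0, 0, 0]];
        PERS robtarget pPick := [[600, 0, 400], [0, 0, 1, 0], [0, 0, 0, 0], [9E9, 9E9, 9E9, 9E9, 9E9, 9E9]];
        PERS robtarget pPlace := [[600, 300, 400], [0, 0, 1, 0], [0, 0, 0, 0], [9E9, 9E9, 9E9, 9E9, 9E9, 9E9]];
        VAR listitem menuItems{2} := [["", "Pick position"], ["", "Place position"]];

        PROC PositionMenu()
            VAR num nChoice;
            VAR btnres answer;
            nChoice := UIListView(
                \Result:=answer,
                \Header:="Update which position?",
                menuItems
                \Buttons:=btnOKCancel
                \Icon:=iconInfo);
            TEST nChoice
            CASE 1:
                UpdateTarget pPick;
            CASE 2:
                UpdateTarget pPlace;
            DEFAULT:
                ! Cancelled or no selection - back to the top menu
                TPWrite "No change made";
            ENDTEST
        ENDPROC

        PROC UpdateTarget(INOUT robtarget pTarget)
            VAR num nKey;
            ! Move to the current value so the operator can see where it is
            MoveJ pTarget, v200, fine, tGripper;
            ! Let the operator jog, then confirm before overwriting anything
            TPReadFK nKey, "Jog to the new position, then choose", stEmpty, stEmpty, stEmpty, "Update", "Skip";
            IF nKey = 4 THEN
                pTarget := CRobT(\Tool:=tGripper \WObj:=wobj0);
            ENDIF
        ENDPROC
    ENDMODULE

    The second menu (driven by a string from the first list) would follow the same UIListView pattern, just with menuItems built from that string.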

