Posts by gpunkt

    From the RoboGuide HELP-section:

    Frequently Asked Questions

    Loaded CAD models are slow for graphic drawing and collision checking. How can I improve performance?

    Please try enabling [Optimize Imported CAD for performance] in the General tab of the Options window.

    Setting the Refresh Rate

    The screen refresh rate is the number of screen updates the 3D CHUIWorld performs per second. This setting affects performance: as the number is increased, robot motion appears smoother, but the simulation runs slower because the screen is forced to update more often. Changing this setting takes effect immediately.

    Improving ROBOGUIDE Execution performance

    Click [Tools] -> [Diagnostics] to open the ROBOGUIDE Diagnostics window. Run the simulation with the Diagnostics window open; it shows hints for improving ROBOGUIDE performance.

    • Real time for simulation: the elapsed real time for the simulation.

    • Graphic Update: the elapsed real time spent on graphic updates, and its share of the total.

    • Collision Detection: the elapsed real time spent on collision detection, and its share of the total.

    • Cable: the elapsed real time spent on cable calculation, and its share of the total.

    • Other: the elapsed real time spent on everything else, and its share of the total.

    • Refresh Rate: the number of screen refreshes per real second.

    Suggestions for improving the speed of ROBOGUIDE simulation software when running TPPs:

    1. Please try enabling [Optimize Imported CAD for performance] in the General tab of the Options window. Loaded CAD models will then be optimized.

    2. Please try enabling [Synchronize Time] in the Run Panel. It can improve simulation performance.

    3. Try not to enable edge detection when loading parts into your workcell unless it is actually needed. Each edge-detection-enabled part is very CPU intensive to draw and to collision check.

      (*) To improve simulation performance, you can use edge-detection-enabled parts for teaching only. Set the edge-detection-enabled part so that "Visible at Teach Time" is enabled and "Visible at Run Time" is disabled. Then load the same part again with edge detection disabled, and set that part so that "Visible at Teach Time" is disabled and "Visible at Run Time" is enabled. The edge-detection-enabled part is then used at teach time, and the edge-detection-disabled part is used at run time.

    4. Turning off collision detection on the Run Panel will save CPU while running your TPPs.

    5. Try not to use Spheres in your workcell. Each one is very CPU intensive to draw and to collision check.

    6. Try not to use large IGES files. The larger the file, the more data has to be drawn and checked for collisions. If a large IGES file overlaps the robot's motion area, collision detection may take a long time. In that case, splitting the large IGES file into several smaller files to reduce the overlapping area can greatly reduce collision detection time.

    7. Minimize the number of objects (Fixtures, Parts and Obstacles) in your workcell.

    8. Move the "Object Quality (chordal deviation tolerance)" slider in the Options -> General menu further to the right. This will make objects less "smooth", but will speed things up.

    9. Disabling "Collect TCP Trace" can save up to 15% of the CPU while running your TPPs.

    10. Decreasing the "Refresh Rate (updates/sec)" on the Run Panel will make the screen update less often. This will make the robot motion less smooth, but will free up CPU, allowing the virtual robot to run your TPP much faster.

    11. Close all windows that you do not require. Closing the Program Teach and Profiler windows will help performance slightly.

    12. When recording an AVI, decreasing the AVI size (in the Graphics tab on the Options page) and disabling "Refresh Display" (in Run Panel) will make the recording take much less real time.

    13. If the Diagnostics window indicates that graphic updates consume much time, consider updating your graphics driver. ROBOGUIDE can be very slow with an outdated graphics driver. Upgrading the graphics card may also improve performance.

    14. If the Diagnostics window indicates that cable calculation consumes much time, consider making some cables invisible.

    15. Please check memory consumption in the Windows Task Manager. Running low on memory can reduce performance dramatically.

    16. An SSD is recommended.

    17. If your PC has multiple graphics chips, make ROBOGUIDE use the higher-performance one.

    None of these changes will affect the theoretical time the TPP takes to run, as shown in the Profiler. They simply allow the virtual robot to simulate the run more rapidly.

    I would say that, whatever it's called, a calibration is needed to translate what the camera sees (be it fixed-mounted or robot-mounted) into positional data that the robot can interpret.

    For a fixed camera, the calibration will accomplish this translation:

    Camera pixel position (Vertical, Horizontal) ---> X- and Y-coordinate in the robot's coordinate system

    For a robot mounted cam, the camera's current position needs to be taken into account:

    Camera pixel position (V, H) + Camera position ---> X- and Y-coordinate in the robot's coordinate system
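    To make the two translations above concrete, here is a minimal Python sketch. It assumes a simple scale + rotation + offset calibration model with hypothetical names (`mm_per_px`, `theta`, etc.); iRVision's actual calibration (grid calibration, lens distortion, etc.) is more elaborate, so treat this only as an illustration of the idea.

```python
import math

def pixel_to_robot(v, h, cal):
    """Fixed camera: map a pixel position (V, H) to robot X/Y using a
    scale + rotation + offset model (hypothetical, for illustration)."""
    x = cal["x0"] + cal["mm_per_px"] * (h * math.cos(cal["theta"]) - v * math.sin(cal["theta"]))
    y = cal["y0"] + cal["mm_per_px"] * (h * math.sin(cal["theta"]) + v * math.cos(cal["theta"]))
    return x, y

def pixel_to_robot_mounted(v, h, cal, cam_x, cam_y):
    """Robot-mounted camera: the camera's current position must be
    added on top of the fixed-camera translation."""
    x, y = pixel_to_robot(v, h, cal)
    return x + cam_x, y + cam_y

# Example: camera axes aligned with robot axes (theta = 0),
# 0.5 mm per pixel, image origin at robot (100, 200).
cal = {"x0": 100.0, "y0": 200.0, "mm_per_px": 0.5, "theta": 0.0}
print(pixel_to_robot(40, 80, cal))  # -> (140.0, 220.0)
```

    The point of the sketch is simply that the fixed-camera case is one constant transformation, while the mounted-camera case needs the camera's current position as an extra input.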

    The term Fault is a little misleading.

    This is because the signal is also present when everything is OK but the "final" reset signal is not yet present or has not been applied.

    For example, when switching from T1 to automatic or when the dead man switch in T1 is released.

    To be fair, the fact that the dead man switch has been released (while in T1/T2) does indeed trigger an alarm. Whether an alarm seems mundane or trivial to you does not change its meaning...


    Here's an example:

    Let's just for simplicity assume that all cells are equally spaced in X (Row) and Y (Column) direction, and let's say that this spacing is 100mm.

    Start by teaching the reference position in the first cell (Row 1, Col 1).
    Now, let one GI represent the Row number, and the other GI represent the Col number.

    Try to set it up similar to my example to avoid having to work with negative numbers.

    If either GI is "1", then no offset should be added in that direction, which is why you subtract 1 from its value before multiplying by the cell spacing.
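    The arithmetic above can be sketched as follows (plain Python, with the 100 mm spacing from the example; in the robot this would be done with register math and a position offset):

```python
CELL_SPACING_MM = 100.0  # assumed equal spacing in X (Row) and Y (Col)

def cell_offset(gi_row, gi_col, spacing=CELL_SPACING_MM):
    """Offset from the taught reference position (Row 1, Col 1).
    Subtracting 1 first means Row 1 / Col 1 yields a zero offset,
    so no negative numbers are ever needed."""
    x_offset = (gi_row - 1) * spacing
    y_offset = (gi_col - 1) * spacing
    return x_offset, y_offset

print(cell_offset(1, 1))  # -> (0.0, 0.0): the reference cell itself
print(cell_offset(3, 2))  # -> (200.0, 100.0): two rows over, one column over
```

    The offset pair would then go into a Position Register used as the offset in the motion instruction.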

    Assuming you're using a GPM Locator, perhaps lower the tolerance for rotation to +/- 90 degrees?
    It seems as if the vision process identified the shape, but interpreted the orientation of it to be 180 degrees compared to the first picture.

    Not sure that FANUC's iRVision-suite offers OCR (Optical Character Recognition).

    Maybe if you train each of the characters that you need to be able to read, assign them to different model-IDs, and then sort the results of the vision process not by score but by position (from left to right). Then you can read the results and, depending on which model-ID is detected in which order, piece together your own string?

    Not sure if TPP can handle string logic though..
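    The idea can be sketched in Python (the model-ID-to-character mapping and the result tuples are hypothetical; in practice the results would come from the vision register data):

```python
# Hypothetical mapping: one trained model per character.
MODEL_ID_TO_CHAR = {1: "A", 2: "B", 3: "7"}

def results_to_string(results):
    """Each result is (model_id, x_position). Sort detections left to
    right (by X, not by score) and concatenate the characters assigned
    to their model IDs."""
    ordered = sorted(results, key=lambda r: r[1])
    return "".join(MODEL_ID_TO_CHAR[model_id] for model_id, _ in ordered)

# Detections come back in score order; position order recovers the text.
print(results_to_string([(3, 25.0), (1, 5.0), (2, 15.0)]))  # -> "AB7"
```

    Whether this is practical depends on how reliably each character trains as a separate GPM model.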

    Well, the disabling input works just like that, it disables the (in this case) Cartesian Speed Check (CSC).

    The input selected was set to a Joint Position Check-status (JPC[1]) which wasn't even set up, hence, the status of the JPC[1] would never become 1 or TRUE and thus your CSC was never disabled.

    It's always a case of failsafe when it comes to DCS (and machine safety): for something to be considered safe, it must be active, just like an E-stop circuit. There needs to be an active signal telling the system that that particular equipment or function is indeed working well. If there is no signal, either someone pushed the E-stop button or the cable broke. In either case, the system should revert to a failsafe state, which usually means stopping.

    For the CSC there are "add-on" options to also activate a speed control which forces the robot to move slower, either by forcing the override to a set percentage value or by limiting speeds to a set mm/s value (kind of like how in T1 no motion will be faster than 250 mm/s regardless of the programmed speed/override).

    Good thing you found it though.

    There is probably some logic already made for setting the Safe Internal Relay (SIR) that was supposed to act as the disabling input for your CSC.

    Some robots are always referred to as the "Arc-version" of that arm in the FANUC systems, such as AM100iC/iD or AM120iC/iD (the current M10- and M20-series), whereas other models are referred to as the generic (Material Handling)-version (even though they can be ordered as an Arc-version, such as the LR-Mate 200-series which is called ArcMate 50 in Arc-version).

    You need to provide some input to this function so that it knows exactly where to place the mirror shifted positions.
    Pick one of your existing positions ("P1"), then specify where this position will be in the mirrored program ("Q1"), either by providing a Position Register or by jogging to the position and recording it:


    You should have two channels for the EAS inputs (two channels meaning two individual signals from the sensor). Each channel consists of a +24 VDC and a 0 V line.

    The safety part of the robot controller will check that the individual signals from the sensor switch at "roughly" the same time (this interval is called the "discrepancy time"). If this condition is not met, the system will treat the state of the sensor as a non-safe one.

    This is part of the whole failsafe philosophy, where you design the safety "better safe than sorry" so that it always reverts to a state which is not harmful.
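    The discrepancy check can be sketched in a few lines of Python, in the same failsafe spirit: both channels must switch within the discrepancy time, and anything ambiguous is treated as non-safe. (The 500 ms default and the function name are arbitrary examples for illustration, not FANUC values.)

```python
def channels_consistent(t_ch1, t_ch2, discrepancy_time_ms=500.0):
    """Dual-channel plausibility check: t_ch1/t_ch2 are the times (ms)
    at which each channel switched, or None if a channel never switched.
    Returns True only when both channels agree within the window."""
    if t_ch1 is None or t_ch2 is None:
        return False  # a channel never switched: broken wire or stuck contact, fail safe
    return abs(t_ch1 - t_ch2) <= discrepancy_time_ms

print(channels_consistent(1000.0, 1120.0))  # -> True  (within 500 ms)
print(channels_consistent(1000.0, 1800.0))  # -> False (discrepancy too large)
print(channels_consistent(1000.0, None))    # -> False (one channel missing)
```

    Note that the check can only fail toward "non-safe"; there is no input combination that is treated as safe by default.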

    Ah! So, IF Pass THEN Pickxx, ELSE PKCSACKQUE(Remove)?

    Now, where to do it? It has to be after the part has been "seen" by iRVision and the pick sequence has been started. Looks like I could do it in PK_CV_PICK11 -- there's an empty section with a "check reject" label between PKCSCALWPOS and PKCSCHKPOS:

    You should already have "CALL PKCSACKQUE(.....,Success)" for when you make a "successful" pick; it removes the part from the queue.

    You will just have to introduce a different branch where you remove the part from the queue without picking it.

    Looks like you have a good spot in your program to implement it.

    Assuming that each of the current robots is using a User Frame (other than 0 = world frame), just make sure that the teaching points for the UFs used are well defined and properly marked.

    If not, create a new UF on a well defined physical object.

    Then use the built-in function for translating a program from one UF to another (Utilities -> Frame Offset).

    Not sure if this is the correct way, but alter your vision process to accept all sizes, then do the discrimination in the robot depending on the measurement, and if it is an NOK part, try:


    (But without the picking-sequence)

    This will remove the current part from the queue.

    Look up DCS tool change (in the DCS-manual).
    That way you can specify to use [whatever tool is currently active]'s User Model in your CPCs.

    You do need to use safe signals to switch between the various user models/DCS Tool Frames. Also, this links the DCS Tool Frames to your TCP/Tool Frames, so if you change the values of the TCP, you will need to apply the changes in DCS as well.
