Robotforum - Support and discussion community for industrial robots and cobots

Posts by Mister_Robot

  • Default screen at start-up - HMI

    • Mister_Robot
    • July 10, 2024 at 12:06 PM

    Hello,

    Is it possible to launch an HMI page by default after the robot start-up? I would like the production page to be accessed automatically when the controller is started. Do you know if it is possible to set a default page?
    I don't want operators to have to dig through the BROWSER menu to reach this page.

    I don't know if there is a system variable that allows this, or whether $TX_SCREEN can be used to define the default screen.

    The aim is to land on a page that gives access to different menus depending on the user's status (operator or technician); in other words, a default page that directs each user to the appropriate menus.

    I know you can call the DSP_WEBP() macro program from the Menu Utility, but I don't have the Fanuc R577 option. Is there another way to do this?

    I also thought about a HOT START auto-exec program, but I don't know which variable to set for my web page ($TX_SCREEN?).

    Thank you for your help!

    MR.

  • Vision process without calibration

    • Mister_Robot
    • June 12, 2024 at 9:19 AM

    Hi guys !

    I'm currently working on a Pick&Place application for labels on electronic boards. My aim is to calibrate the point at which the robot picks up the label using vision. Unfortunately, the label dispensing system I'm using doesn't allow the installation of a calibration grid, and the inclination of the label following its pre-peel can vary.

    To overcome these constraints, I opted for a vision process without calibration. However, I encountered a problem with the iRVision FANUC Offset Data Calculation Tool. The following error message appears: "CVIS-700 Circular target mark was not found."

    In iRVision > Offset Data Calculation Tool I get "Status: In Process" (see second screenshot).

    I've tried several combinations: readjusting the working distance of the camera, changing the colour of the table to get a better background contrast with the label, but I'm still stuck.

    I don't understand why this tool is looking for targets to calibrate when the process is supposed to be carried out autonomously, without calibration. Could you help me solve this problem or give me some idea of how to get round it?

    Thank you in advance for your assistance :winking_face:



  • Enhancing Electronic Card Labeling Precision Using iRVision Fanuc

    • Mister_Robot
    • May 31, 2024 at 2:10 PM

    Hello !

    Update :

    - For my tests I used the multi-view vision process of iRVision Fanuc. This method let me automate the calibration with 3 different camera views (3 VISION RUN_FIND calls).

    - Then I used VISION GET_OFFSET to update my VOFFSET condition on my different via points.

    - These methods work, and this calibration offers more flexibility.

    Note: if, like me, you build an application with loops and offsets on points, pay attention to the order of the motion terminations (FINE/CNT):

    For example >

    AND DON'T FORGET CALIBRATION IS KEY ! :upside_down_face:
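
    For reference, a simplified TP sketch of the RUN_FIND / GET_OFFSET / VOFFSET flow described above (the vision process name 'VP_LABEL', the point numbers and the label numbers are placeholders, not my actual program):

    ```
      UFRAME_NUM=1 ;
      UTOOL_NUM=1 ;
      VISION RUN_FIND 'VP_LABEL' ;
      VISION GET_OFFSET 'VP_LABEL' VR[1] JMP LBL[99] ;
    L P[1] 500mm/sec CNT50 VOFFSET,VR[1] ;
    L P[2] 250mm/sec FINE VOFFSET,VR[1] ;
      JMP LBL[100] ;
    LBL[99] ;
      !Vision failed: handle the error here ;
    LBL[100] ;
    ```

    Note the termination order: CNT on the approach point, FINE on the final placement point.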

    Thanks for your quick replies!

    MR.

  • Enhancing Electronic Card Labeling Precision Using iRVision Fanuc

    • Mister_Robot
    • April 4, 2024 at 8:41 AM
    Quote from HawkME

    Is the camera mounted to the robot or fixed?

    Are the markers you are taking a picture of on a working surface or held by the robot when you take the picture?

    The answers to those questions determine if you are offsetting the frame or tool.

    A user frame is part of any vision process to determine the vision calibration grid location.

    The camera is mounted on the robot.

    The markers that are photographed are located on each electronic card blank, in the two opposite corners of the card (2 markers per blank).

    These markers are not fixed and must be photographed again each time a new PCB arrives via the unstacker system.

    -> Process: The unstacker supplies and presents the PCB to be labelled > the robot takes a photo of the 2 markers on this PCB > and adapts its PR offset according to the situation.

    I've included a schematic diagram to make it easier to understand

    Furthermore, I don't know how accurate the label application will be (I know I can't tell without testing, of course). I've seen several vision methods in iRVision > GPM Locator Tool.

    Given the wide range of blank sizes to be processed, I won't be able to create a reference UFRAME per blank based on the markers. So I'm looking for a method that lets me use the marker positions retrieved by the camera to adapt my first label application position (which directly influences my subsequent PR offsets).

    My idea was :

    I create a UFRAME on the unstacker system that holds the pcb blanks.

    Then, starting from this UFRAME taught with the 4-point method, I would shift the original frame according to the marker positions returned by the camera, giving a corrected UFRAME on the unstacker system that receives the PCB blanks.
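
    For what it's worth, the shift I describe is just the standard two-point rigid transform (the notation here is mine, not iRVision's): if the two markers are nominally at p1, p2 and are measured at q1, q2 in the unstacker UFRAME, the blank's in-plane rotation and translation are

    ```
    \theta = \operatorname{atan2}(q_{2,y}-q_{1,y},\; q_{2,x}-q_{1,x})
           - \operatorname{atan2}(p_{2,y}-p_{1,y},\; p_{2,x}-p_{1,x}),
    \qquad
    R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
    \qquad
    t = q_1 - R(\theta)\,p_1
    ```

    Applying R(θ) and t to the taught frame would give the corrected UFRAME for that blank.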

    Images

    • Schematic diagram.JPG (1,243 × 696)
  • Enhancing Electronic Card Labeling Precision Using iRVision Fanuc

    • Mister_Robot
    • April 3, 2024 at 12:14 PM

    Hello,

    I am currently working on developing an application for labeling electronic cards using a suction cup gripper mounted on my robot equipped with an iRVision 2D camera. There are always two markers present on the electronic card blanks, each marker being a 3 mm × 3 mm square. My aim is to utilize these markers to precisely adjust the positioning of labels on the electronic card blanks. I deposit the labels using the suction cup mounted on the gripper.

    As a card blank can contain multiple cards, I employ an offset matrix to determine the necessary shift for each label. The electronic card blanks are fed through a supply system, and the referencing may sometimes shift due to variations in blank sizes. Hence, I am exploring the most suitable vision method for this process. In summary, I aim to correct the referencing offsets by capturing images of the markers using the iRVision camera. I have noted various methods such as Tool Offset, Fixed Frame Offset, etc.

    Currently, no testing has commenced, but I seek clarification on these points before initiating experiments. Additionally, I wonder if it is necessary to create a UFRAME from the supply system for this process.

    Thank you in advance for your assistance!

  • Good use of offsets on FANUC robots

    • Mister_Robot
    • April 12, 2023 at 3:16 PM
    Quote from HawkME

    The accuracy is determined by how well the frame is taught and how well the robot is mastered.

    Neither is inherently better than the other, but they do serve a different purpose. In this situation since they are perpendicular they will do the same thing.

    Ok great!

    Thank you :winking_face:

    MR

  • Good use of offsets on FANUC robots

    • Mister_Robot
    • April 12, 2023 at 1:09 PM
    Quote from HawkME

    If your tool is perpendicular to the user frame then they will both work the same way.

    Yes they are,

    Is one method more accurate and repeatable than the other on offsets or is there no difference?

    Thank you.

    MR

  • Good use of offsets on FANUC robots

    • Mister_Robot
    • April 12, 2023 at 11:23 AM

    Hi !

    I currently have an application that applies glue to a part; the part sits on a holder. There are 6 parts per holder, side by side, evenly spaced.

    The holder containing the 6 cards is gripped by the robot's gripper.

    The robot makes the movements with the holder under the glue nozzle (this nozzle is fixed).

    I programmed the glue points for one part and repeated the coordinates for the other 5 positions with a PR offset (OFFSET PR in Y).
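
    As a sketch of what I mean in TP (the register numbers and the 50 mm Y pitch are example values, not my real program):

    ```
      !Zero the offset register ;
      PR[10]=PR[10]-PR[10] ;
      FOR R[1]=0 TO 5 ;
      !Shift one pitch in Y per part ;
      PR[10,2]=R[1]*50 ;
    L P[1] 100mm/sec FINE Offset,PR[10] ;
    L P[2] 100mm/sec FINE Offset,PR[10] ;
      ENDFOR ;
    ```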

    I have tested the two methods I had in mind: OFFSET PR and TOOL OFFSET PR.

    I am now wondering which one is more suitable for shifting glue points across the different parts.

    Since I can't teach the UFRAME any other way, I taught it using the fixed dispenser and the holder held by the robot's gripper. I don't know if this is the right method or whether it will cause problems later.

    How would you go about it? Is there a difference in using these methods for this type of application?

    Thank you, I look forward to your reply.

    MR.

  • Fanuc CRX Force Face Match

    • Mister_Robot
    • September 21, 2022 at 4:49 PM

    Hi,

    I haven't used this feature yet, so I can't say, but I'm curious how it works.

    I don't know if this will help you but the last 2 videos that have been added to the Fanuc CRX E-Learning website are quite fun and easy to learn:

    Training Account
    crx.fanucamerica.com

    Let me know if you find an answer to your problem.

    MR.

  • Can I switch between HTML pages with a button on the HMI?

    • Mister_Robot
    • April 12, 2022 at 9:11 AM
    Quote from scotty

    I used:

    Code
    <object classid="clsid:7106067C-0E45-11D3-81B6-0000E206D650" id="FRIPButtonChange1" style="width: 100px; height: 40px">
        <param name="Caption" value="PROGRAM" />
        <param name="FontSize" value="12" />
        <param name="width" value="100" />
        <param name="height" value="40" />
        <param name="TrueFont" value="-1" />
        <param name="FastLoad" value="-1" />
        <param name="PageName" value="/fr/program.stm" />
        <param name="BackColor" value="12632256" />
    </object>

    Thank you very much for your help it worked!

    For anyone with the same problem: don't forget to replace the program name here -> "/fr/name_of_your_program.stm" so the button points at the right HTML page.

    MR

  • Can I switch between HTML pages with a button on the HMI?

    • Mister_Robot
    • April 11, 2022 at 3:58 PM

    Hi everyone,

    I would like to know if it is possible to create a toggle button to change the HMI page (for example: Page 1 = Production and Page 2 = Maintenance).

    Ideally with a single button press, like on PLC HMIs where you can switch from one page to another.

    I made these HMI pages in HTML with Microsoft Web 4.

    Should I play with the variable $TX_SCREEN or $TXSCREEN[...] or am I in the wrong place?

    Thanks for your help, and sorry for my English,

    MR.

  • How to configure and launch program with PNS ?

    • Mister_Robot
    • March 31, 2022 at 2:48 PM

    Hello,

    Here is my problem:

    I'm currently doing the PNS configuration to call programs in Roboguide.

    I managed to establish an exchange between UI -> DI -> FLAG via the following configuration:

    UI[1-8]: Rack: 34 / Slot: 1 / Start: 1

    UI[9-16]: Rack: 34 / Slot: 1 / Start: 9

    UI[17-18]: Rack: 34 / Slot: 1 / Start: 17

    (And so on for the other DO/DI/UO configurations).

    -> I manage to visualize the state changes (ON/OFF) of the different I/O via the FLAGs.

    I have set "Enable UI signals" to TRUE.

    In Menu -> Setup -> Prog Select: program select mode is set to "PNS" and production start method is set to "UOP".

    $SHELL_CFG.$JOB_BASE = 0

    My program is named PNS0001

    But for the PNS configuration I can't put the robot in remote mode "REMOTE".

    Menu -> System -> Config -> 42 : Remote/Local Setup -> REMOTE

    When I set this parameter 42 to "REMOTE" and I perform a "Cold Start" of the robot the configuration is not taken into account.

    So parameter 42 remains on its default configuration: "OP panel key".

    Furthermore, how do I start the "PNS0001" program?

    The basic configuration is as follows:

    UI: UI[1] IMSTP: ON / UI[2] HOLD: ON / UI[3] SFSPD: ON / UI[8] ENABLE: ON

    UO: UO[1] CMDENABLED: ON / UO[2] SYSTEM READY: ON / UO[8] TP ENABLED: ON / UO[11] ACK SNO1: ON

    When I run the sequence PNSTROBE -> PROD_START, nothing happens.

    Also with UI[6] : START.
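
    For reference, the handshake sequence I am trying to follow (as I understand it from the manual) is:

    ```
    1. Set the program number on PNS1-PNS8 (UI[9-16]), e.g. 00000001 for PNS0001
    2. Pulse PNSTROBE (UI[17]): the controller should echo SNO1-SNO8 on UO[11-18] and pulse SNACK (UO[19])
    3. Check the echoed SNO, then pulse PROD_START (UI[18]); the program starts on its falling edge
    ```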

    Is this the right way to set up PNS?

    Do you recommend another method of program selection?


    Thank you very much.

  • Can we work without UFRAME if we only work in space (world) ?

    • Mister_Robot
    • March 31, 2022 at 1:54 PM
    Quote from gcarrier

    Did you setup a UTOOL on your robot?

    Yes, I did that!

  • Can we work without UFRAME if we only work in space (world) ?

    • Mister_Robot
    • January 31, 2022 at 11:54 AM

    Thank you all for your different views and help; I will try this!

    See you soon!

  • Can we work without UFRAME if we only work in space (world) ?

    • Mister_Robot
    • January 28, 2022 at 3:49 PM
    Quote from Nation

    Is this gel dispenser stationary? Pictures or a drawing of your application would be helpful.

    If the dispenser is stationary, this is a textbook application of remote TCP.

    Thank you for your answer,

    Yes, the dispenser is fixed on its support.

    What do you mean by a remote TCP application?

    Is there any need to teach a UFRAME?

  • Can we work without UFRAME if we only work in space (world) ?

    • Mister_Robot
    • January 28, 2022 at 8:43 AM

    First of all I want to specify that I work with a CRX Fanuc cobot.

    I have to apply gel to a part: the part is in the robot's hand, it passes under a gel dispenser mounted on a support, and the robot runs the trajectories through the via points while activating an I/O to apply the gel.

    I would like to know if it is mandatory or advisable to create a UFRAME for this position (given that there is no flat plane on which to teach this UFRAME).

    Otherwise, how can I teach the UFRAME?

    (The gel bottle is suspended and pointing downwards, it is placed on an L-shaped stand allowing it to be installed high up).

    I know that a UFRAME lets the robot know and work in a chosen environment.

    I also know that it is enough to re-teach its 3 or 4 reference points when the work plane is moved.

    Thank you for your help.

    MR.

  • How to set the UFRAME on CRX-10iA by a picture ?

    • Mister_Robot
    • January 11, 2022 at 8:57 AM
    Quote from Nation

    That is typically a light press operation. What are your tolerances?

    Tolerances for parts/bearings are +/- 0.02

  • How to set the UFRAME on CRX-10iA by a picture ?

    • Mister_Robot
    • January 10, 2022 at 4:15 PM
    Quote from pdl

    What is the task? How important is repeatability? How important is accuracy?

    Hi,

    For example it would be to take a part to insert a bearing.

    Repeatability matters because if the robot does not locate the part accurately enough, the application would only work partially, or the cobot would fail very often.

    Accuracy is more a question of lighting: imagine the cobot faces a shadow on the part, which would shift the detected zone in vision and therefore the pick position. What type of lighting would you recommend to remedy this?

    Lighting from the sides or from above, given that the camera only works in 2D?


    Thank you for your help

  • How to set the UFRAME on CRX-10iA by a picture ?

    • Mister_Robot
    • January 3, 2022 at 8:41 AM

    Hi !

    The question I ask myself is the following:

    How can I change the UFRAMEs of the CRX-10iA cobot using an image taken by it?

    To summarize: my cobot is installed on a mobile chassis, and the goal is to move it wherever we want so that it can serve several applications.

    I think that it would be necessary to use macros to change its UFRAMEs and that each macro would correspond to an application.

    These macros would be linked to photos for example a QR code corresponding to each application.

    Let's imagine the cobot is at the zero position on all its joints (J1, J2, J3, J4, J5, J6) before each arrival at a station (bearing in mind that it is on a mobile chassis); we would run a photo-recognition program, and based on the result the program would choose which macro to call.

    Thus its trajectories will adapt to the new UFRAME.
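
    A TP sketch of the dispatch I have in mind (the program names, the register numbers, and the idea that the vision program leaves the recognised code in R[10] are all assumptions):

    ```
      !QR_FIND stores the recognised code in R[10] ;
      CALL QR_FIND ;
      SELECT R[10]=1,CALL APP1_FRAME ;
             =2,CALL APP2_FRAME ;
             ELSE,JMP LBL[99] ;
      !Each APPx_FRAME would set its frame, e.g.: ;
      !UFRAME[1]=PR[21] ; UFRAME_NUM=1 ;
    LBL[99] ;
    ```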

    Do you know the limitations of this CRX?

    Would there be any accuracy problems on the pick and place?

    I don't know if I'm using the right method, maybe you know a better method.

    Thank you for your information.


    MR.

    :smiling_face_with_sunglasses:
