Posts by mikeandersontx

    If you want to swap the order of removal of data, set up another pointer (I used R[319] for the first one). You can write with one and read with the other (R[318] for instance).
    Increment push and pop counters independently and you can retrieve data FIFO or FILO. If you need more help, let me know.
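    As a rough sketch of the two-pointer idea (the register numbers and comments below are just placeholders, not from my actual program): one index tracks where the next push lands and the other tracks where the next pop comes from, both using indirect register addressing. Incrementing both gives you FIFO; popping from the push index instead gives you FILO.

    1: !Push: store newest value ;
    2: R[320:PushAddr]=321+R[319:PushIdx] ;
    3: R[R[320]]=R[283:NewValue] ;
    4: R[319:PushIdx]=R[319:PushIdx]+1 ;
    5: ;
    6: !Pop (FIFO): read oldest value ;
    7: R[317:PopAddr]=321+R[318:PopIdx] ;
    8: R[284:PoppedValue]=R[R[317]] ;
    9: R[318:PopIdx]=R[318:PopIdx]+1 ;

    You would also want to wrap or reset both indexes when they reach the end of the register block so the pointers don't run past it.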

    Can I get the background task to push data onto the stack? Or any information on how to do it? Thanks


    This is actually started with a RUN instruction, not run as BGLogic. R[321]-R[325] are the height values I was reading in from the sensor (analog light curtain).


    1: R[320:HeightReg]=0 ;
    2: R[321:CurHeight0]=R[281:SensorMin] ;
    3: R[321:CurHeight0]=R[281:SensorMin] ;
    4: R[322:CurHeight1]=0 ;
    5: R[323:CurHeight2]=0 ;
    6: R[324:CurHeight3]=0 ;
    7: R[325:CurHeight4]=0 ;
    8: R[319:HeightIndex]=0 ;
    9: ;
    10: LBL[1] ;
    11: IF R[299:STOP_TASKS]=1,JMP LBL[999] ;
    12: IF (F[18:CamSnapped]=ON AND F[15:VisPop]=OFF),JMP LBL[200] ;
    13: IF (F[7:LC Trigger]=ON),JMP LBL[100] ;
    14: ;
    15: JMP LBL[1] ;
    16: ;
    17: LBL[100:init read] ;
    18: R[283:LargestReading]=R[281:SensorMin] ;
    19: LBL[110:read height] ;
    20: IF (F[7:LC Trigger]=OFF),JMP LBL[190] ;
    21: IF (R[282:SensorReading]>R[283:LargestReading]),R[283:LargestReading]=(R[282:SensorReading]) ;
    22: JMP LBL[110] ;
    23: ;
    24: LBL[190:end read] ;
    25: R[320:HeightReg]=321+R[319:HeightIndex] ;
    26: R[R[320]]=R[283:LargestReading] ;
    27: IF (R[319:HeightIndex]<4),R[319:HeightIndex]=(R[319:HeightIndex]+1) ;
    28: JMP LBL[1] ;
    29: ;
    30: LBL[200:Pop Off Que] ;
    31: R[321:CurHeight0]=R[322:CurHeight1] ;
    32: R[322:CurHeight1]=R[323:CurHeight2] ;
    33: R[323:CurHeight2]=R[324:CurHeight3] ;
    34: R[324:CurHeight3]=R[325:CurHeight4] ;
    35: R[325:CurHeight4]=0 ;
    36: ;
    37: R[319:HeightIndex]=R[319:HeightIndex]-1 ;
    38: IF (R[319:HeightIndex]<0),R[319:HeightIndex]=(0) ;
    39: WAIT R[287] ;
    40: F[15:VisPop]=(ON) ;
    41: JMP LBL[1] ;
    42: LBL[999] ;


    Hopefully this will help you a bit. May not be the best way, but it was a quick way to make it work.


    Where would you apply the offset found by the third-party camera (a user frame, an offset pick position, etc.)? And if there are multiple parts between the camera and the robot, how would you know which part in the conveyor tracking queue goes with which offset by the time the part reaches the robot?
    I hope that makes sense.


    When I did this using a Cognex camera, I created a FIFO stack on the robot. I had a background task that would push data onto the stack and move the push pointer as each new vision snap occurred. I also had a pop pointer that would allow me to always pop the proper position data. This was called when I needed new vision data. It was a simple 10-position register setup for each of the X, Y, Z (from an external sensor) and R values. It may have been cleaner to use PRs, but I was in a hurry and didn't know better, so I used individual registers for the data.
    I applied the X and Y offset data to the user frame, and the Rz data to the R component of the tool frame.
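    As a rough sketch of what those frame writes can look like (the PR and register numbers below are only placeholders, not my actual ones, and I start from a stored nominal frame so the offsets don't accumulate):

    1: !Add camera X/Y offset to the pick user frame ;
    2: PR[10:FrameBuf]=PR[12:NomFrame] ;
    3: PR[10,1]=PR[10,1]+R[330:CamOfsX] ;
    4: PR[10,2]=PR[10,2]+R[331:CamOfsY] ;
    5: UFRAME[1]=PR[10:FrameBuf] ;
    6: ;
    7: !Add camera Rz to the tool frame (element 6 is R) ;
    8: PR[11:ToolBuf]=PR[13:NomTool] ;
    9: PR[11,6]=PR[11,6]+R[332:CamOfsRz] ;
    10: UTOOL[1]=PR[11:ToolBuf] ;

    The pick moves then just reference that user frame and tool frame as usual.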

    Can you elaborate a little on the details of how you do this? I just watched one of Fanuc's Friday Webinars on their CRC site about using vision in Roboguide, and they were only using previously logged images, with no live feed and no way shown to load in newer images even if you can't use a live feed. How are you connected, and how do you access your images within RG? I want to create an image and be able to analyze it, and then, if I want to, reposition the camera/lighting and re-analyze the new image. I am a vision rookie, so maybe I'm making this harder than it needs to be.


    Also, I looked at Basler's site for their USB cameras, but I just looked through the paperwork that came with the Sony camera from Fanuc, and it shows how to connect it to a PC, so I will probably try that first. What kind of Ethernet cam are you using, and what was the cost?


    Lastly, if you don't mind, what are you guys doing with all those M1s?


    I only have a minute to respond, so I will give you what I can from memory:
    I have an ethernet camera attached to a PC. The PC is running RG. You can configure the camera in the iRVision setup the same way you would with a real robot. You can still do image snaps with RG. You can then set up your vision tools and change the variables in your lighting/setup and re-run your vision tools.


    I use Basler acA1300-30gm and acA640-120gc. I pay approximately $200 per camera.


    We use these for high-speed pick and place with GPM tools to verify that only the correct parts are being picked. There is no room for error in these systems, and there is a possibility that the wrong parts could be presented to the system through human error. Unfortunately, I can't give you much more information about the system, but you will probably see it in the news in the near future. The site that I am currently installing has well over 200 FANUC robots; 160 of them are M-1iA robots.

    You can use a live camera feed with Roboguide?


    Yes. I am currently running 100+ M-1iA robots that run off a single teach station. The teach station is an ethernet camera run into a PC that runs Roboguide. I use a little custom code to copy the vision process framework into the new vision file. I add the current image and any other options, then I export the new vision file.

    I do exactly that to create remote teach stations on my systems. You can use a Basler USB or ethernet camera, with the same resolution as the robot camera, that will plug into a PC. You then go through and set up the camera the same way you set up the real robot camera, although this is done through Roboguide.

    I am doing something very similar on M-1iA/0.5s robots with R-30iB Open Air Mate controllers (and some R-30iA OA Mate controllers as well). I have a remote teach station that is running Roboguide, with a USB camera set up on that PC. There is code that allows snapping images of objects and inserting them into a vision process. The new vision process is created (based on the framework of the generic vision process) with the new image inserted, and is then loaded onto the other robots. The robots are running a depalletize process so that I can easily make modifications for part Z height.
    New parts are now taught with the click of a mouse and the entry of the Z height.
