Just a quick question. We have an R-30iB that lost its image but has all of its TP files saved to a USB drive. Is it possible, using Windows, to copy the TP files into the image from another robot, remove that robot's unused TP files, and then load the modified image into the controller of the robot that lost its image? Basically, I'm trying to move the image from one robot to another, but delete its TP files and insert the original ones. Will the controller accept the image, or will it fault?
Posts by white_raven
-
I'm sure you've already tried this, but run another job generated by the software (ideally one you know works), and then run a job programmed directly on the robot. That should tell you where the problem lies.
I would totally have tried that if it were possible. However, every program we run is generated by the software, and they were all off. On the bright side, I found and resolved the issue: after the home position was retaught following a motor change, the software was not properly reconfiguring to the new tool frame position. I added a 10 mm offset to the tool frame and, voila, issue resolved!
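For anyone who lands here with the same problem, a rough TP sketch of applying that kind of offset (the frame and register numbers are placeholders, and which element you shift, and in which direction, depends on your setup):

   !Copy the active tool frame to a PR ;
   PR[10]=UTOOL[1] ;
   !Shift the Z element (element 3) by 10mm ;
   PR[10,3]=PR[10,3]+10 ;
   !Write the shifted frame back ;
   UTOOL[1]=PR[10] ;

You can of course make the same adjustment directly from the SETUP Frames screen on the pendant instead.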
-
Ok, so we have forging dies that are repaired by a weld robot when they are damaged. Basically, a person gouges out the bad spots in the die, then places the die in a fixture, and a scanning head scans it. The software then compares the scan to the CAD model of the die and somehow establishes a weld path and coordinates. It is second-hand software that someone jailbroke and modified, then sold to our company. I am the robotics engineering tech, and I am trying to explain that the robot is going exactly to the coordinates it is being told to go to, and that the problem isn't with the robot itself, but they just aren't listening to me. I've concluded that the issue is either in their software or in their die setup procedure.
-
Ok, so we have an MA2010 with a DX200 controller used for a welding application. Our job coordinates are set by RoboMove and Rhino software. We set the table or fixture coordinates correctly, as well as the TCP. Our issue is that when welding, the path drifts further and further away from where the scan is telling it to weld. This robot has been doing the same jobs for the past six years with no issues. Nothing has changed, yet the robot's position still drifts. When I drive the robot to the zero position, all witness marks line up perfectly, so we are not losing anything on our axes. Does anyone know what could cause this issue? I'm at a loss.
-
As for the 3DL system: since I already have a 2D camera, is it possible to add a simple laser to my robot and pair the camera and laser to use the 3DL program?
You would need to get the 3DL head and the cables to run to the controller. These are specific to the iRVision application, so they would need to be the correct components; otherwise they will not communicate with the controller. That's really about it. Since you are already using the 2D system, you shouldn't need any extra software.
-
Thanks for all your answers, I will keep looking. Maybe a method using different 2D photos from different angles could allow us to get the W and P.
I don't think this will be possible either, as the software most likely doesn't include the computational algorithm to perform that calculation. The best option is to get the correct equipment for what you are trying to accomplish. If you already have the 2D system installed, it shouldn't be hard to install and set up the 3DL system, and it is going to be more effective, more accurate, and less troublesome down the road, since it is already a FANUC system, as opposed to trying to create a separate process to accomplish the same goal. Also, you will not see a cycle time advantage from the multi-photo approach over the 3DL system: take into account the extra time to move to and snap multiple pictures, as well as the processing time to perform the calculations needed to establish W and P. This could result in a noticeable cycle time increase, which in most industries is unacceptable.
-
I have never seen a way to determine W & P from a single 2D image.
There isn't a way to do it; that's why they have the 3DL vision system. I think they were trying to use the aspect function to calculate the angle of the part in order to set W and P, which is not possible. The aspect function is only used to pass/fail objects that are skewed: if you want a part that is skewed within +/- a certain percentage to pass, you set your min/max to the amount that is acceptable for your application (for example, a minimum aspect of 95% would fail a circle squashed into an ellipse that measures 90%). I left my 2D and 3DL books from those courses at home, otherwise I'd just quote FANUC directly.
-
I am fairly sure of this. Aspect is used to pass/fail the vision process based on the aspect of the found part: raising the tolerance allows a more skewed part to still pass the vision process, but I don't believe the feature can be used to calculate the angle of the object. If it could, there would be no need for the 3DL system at all, and the laser would be included only for redundancy. The only way to get accurate 3D results is to use the 3DL process. Also, if you widen the tolerance on an object's allowable angle, it will increase the scan time of the process.
-
The Z position acquired with the 2D camera is also good. The only problem is the orientation in W and P, which does not correspond to reality.
For detection of the Z position, I used the GPM locator to teach the robot the size of a given feature at a given height, and then taught a second reference position with the part closer to the camera. The robot is thus able to build a scale and calculate the height of my part from the apparent size of the pattern I taught it.
This method works well for estimating height in Z; however, I don't understand how the measurement in W and P is done. Logically, if the circle appears oval to the camera, it should be able to estimate its orientation with more or less precision, but unfortunately this is not the case.
The camera is going to look for the trained image. If the circle appears as an oval, your vision score drops because the found image varies from the original feature. The process cannot determine how much of an angle the part has, because it is not calculating a 3-dimensional object; it is calculating the orientation of a 2D image with respect to the trained image. Therefore, even if it sees an oval instead of a circle, it will only reduce the score of the vision process and will not establish values in a third dimension. Since you are using a 2D process, you will only get 2D results. I rarely use the 2D process on its own, since the 3DL system is more accurate, so I had never thought of using two reference points to establish Z.
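To make the geometry behind that two-reference trick explicit, here is a minimal sketch, assuming a simple pinhole model with the part face roughly parallel to the image plane:

\[ s = \frac{fS}{Z} \quad\Rightarrow\quad sZ = s_1 Z_1 = s_2 Z_2 \quad\Rightarrow\quad Z = \frac{s_1 Z_1}{s} \]

where \(s\) is the scale the locator reports for the found pattern, \(S\) the true feature size, \(f\) the focal length, and \(Z\) the camera-to-part distance. The two taught references pin down the constant \(sZ\), so a new measurement of \(s\) yields \(Z\). Note that W and P never enter this relation, which is exactly why the method can estimate Z but says nothing about orientation.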
-
Also, it is important to remember that this is all dependent on lighting, and values can and will change with variations in the supplied light. So even if your image appears the same from one piece to the next, minute variations in light will result in different calculations, even in Z, W, and P, which are not established through the 2D process. I was wrong earlier when I stated that W, P, and R are not established with a 2D process: it can do X, Y, and R, whereas the 3DL process establishes Z, W, and P. My apologies, I have not used the 2D system independently in quite a while.
-
I think you're right; it's strange that FANUC doesn't give more details about the calculations made by the algorithm. And why display the values of W and P if they are wrong?
It does this because it still needs to use those values to calculate a proper offset or physical position, so it assumes the part is flat and pulls data from those other positions to find the actual location of the part.
-
Thank you for your answer, but I do not understand why the Gaze Line process, which is a 2D vision process, returns six degrees of freedom.
As we can see, it is able to return a position with six degrees of freedom. However, W and P are wrong, and I cannot figure out where this result comes from.
I believe those values are relative to your camera position, origin position, and the offset frame you are using. They are not set by the 2D vision process because, as I said before, a 2D vision process cannot find the third dimension needed to calculate the angle at which the part is lying; that requires the 3DL vision process. Think about the terms: 2D and 3DL (the L stands for laser), so basically 2D and 3D. 2D is a flat image, like the original Mario on Nintendo, so you can only locate a flat image, whereas 3D gives depth, like the new Mario, allowing you to calculate the angle of the workpiece relative to the camera and origin position.
-
I don't think you can get W, P, and R from a 2D vision process. For that you would need the 3DL vision process, which is essentially a 2D camera with a laser mounted next to it. The camera and laser are calibrated together, and when the vision process runs, the camera takes two extra images that include a laser cross-section to establish W, P, and R. I went through the 3DL class at FANUC in Rochester Hills, Michigan, and there is no way for the GPM to find a 3-dimensional angle using a 2-dimensional system. You would need the 3DL system for this purpose.
-
It's working fine with no PLC -- just the robot as the EIP Scanner and the Balluff as an Adapter.
But all of Balluff's documents are about connecting the module to an AB PLC. I did get a tech note about using RSNetWorx to set up a Balluff device with the R-30, but after going through it, what I saw leads me to think that RSNetWorx may not actually be necessary. That's the question I'm looking to explore.
I am pretty sure you would still have to use RSNetWorx, since Balluff is supported by Allen-Bradley, and FANUC controllers are made to be compatible with Allen-Bradley equipment. In order to communicate with the Balluff device, the system most likely has to be running the Rockwell software; without it, you most likely won't see the modules.
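That said, if it does turn out to work with the robot as the scanner, as described in the quote above, the pendant-side connection entry would look something like the sketch below. Every value here is a placeholder; the real assembly instances and I/O sizes would have to come from Balluff's documentation for the specific module, not from anything FANUC supplies:

   Description:   BALLUFF_BLOCK    (any name)
   IP Address:    192.168.1.50     (whatever the module is set to)
   Input size:    per the Balluff datasheet
   Output size:   per the Balluff datasheet
   RPI:           per the application

I haven't set one up without the Rockwell software myself, so take that with a grain of salt.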
-
I doubt that it is dependent on technical matters. More likely, it has to do with the automotive industry not wanting it, so FANUC lacks the financial incentive to implement the technology.
But I agree, those cables have a tendency to get all tangled up, although I know they sell cable reels to fit the controller for easier handling of the cables... I work in the automotive industry, and our plant management has asked me about this before because of all the downtime we experience due to cables being cut and tangled together. We already have cable reels installed on all of our controllers, but that doesn't really solve the problem. The operators and process techs still leave the pendants strung out to be cut and damaged, and we have no way to track who is leaving them in that condition in order to discipline them. I explained all the reasons why manufacturers don't make wireless pendants for industrial applications, but they didn't seem satisfied.
-
As mortoch stated, I would still try loading a backup into the controller if you haven't already, and I would not load just the TP files; I would restore a complete controller image backup to ensure the software is correct. Aside from that, you could check the fuses on the E-Stop board or replace it entirely; I have had several go bad on me, which will hold out servo power. Are there any faults on the pendant when it powers up, and could you attach a picture of the alarm history?
-
I might not be understanding you correctly, but I would set each pushbutton you are using on the HMI as a DI from the PLC. I would then write a new program and use IF/THEN statements to call whichever program you want for that DI. Say, for example, you want the operator to be able to start a program from the HMI: set that pushbutton as DI[?], then use the statement IF DI[?]=ON, CALL ****, with whatever program you want it to call. This gives the operator the ability to start/stop/jump to whatever you want the program to do, as long as you have it running inside a decision loop and the subroutines are set up to do what you want.
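As a rough TP sketch of that decision loop (the DI numbers and program names below are made up; map them to however your PLC and HMI are actually wired):

   LBL[1] ;
   !Poll the HMI pushbuttons ;
   IF DI[101]=ON,CALL PROG_A ;
   IF DI[102]=ON,CALL PROG_B ;
   !Exit the loop on a stop request ;
   IF DI[103]=ON,JMP LBL[2] ;
   WAIT .10(sec) ;
   JMP LBL[1] ;
   LBL[2] ;

The short WAIT keeps the loop from spinning flat out while it polls the inputs, and each CALLed subroutine returns to the loop when it finishes.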
-
Develop a wireless pendant with a docking station, so that when the pendant is docked it serves the purpose of the cable, but when undocked it communicates with the controller via Bluetooth or a wireless network. Also, the ability to control multiple robots from a single pendant by selecting the robot you want to control would be great. I know there are safety devices that must be hard-wired, but other robot manufacturers have developed wireless pendants, so it shouldn't be impossible for FANUC to figure it out.
-
I might be mistaken, but I believe Balluff is solely supported by Allen-Bradley, so in order for it to communicate with the R-30iB, it would have to go through Rockwell software. All of our Balluff-connected devices are controlled by an external PLC using inputs and outputs from the robot. Is this possible in your application?
-
Would that be able to restore power to the axis control board?
If the backup batteries were weak, you might have lost or corrupted some of the system software files, which would prevent power from being supplied to the servo amp or the axis control board. I would start by replacing the batteries with power on, then load an image backup into the controller. Hopefully you already have a backup; otherwise you might have to get one from a similar robot with a similar job function to load in its place.