Posts by HawkME

    You could teach like normal (x, y origin), but your frame will be rotated 90 degrees. You could either live with that or do the math to correct it.

    Why do you need to do it that way?

    Like I said before, I think you will be hard pressed to be accurate in world frame, UF[0].

    You need to teach a good user frame with a good pointer. Then use that user frame for your Offset frame, and finally update your Z height to match the part surface within that user frame.

    Unless the robot is mounted to a perfectly flat surface and the part is lying on that same surface, world frame will cause issues if used for your offset frame. And no matter how perfect you think it is, I'm betting it is not good enough for 2D vision.

    A robot-mounted camera absolutely can find parts relative to a user frame. It must be set up for a fixed frame offset, then a reference must be set.

    Can you show a screenshot of your vision process setup?

    Reducing that distance could help, but you then would also need to adjust the payload settings.

    The payload estimation program doesn't work if the load is small or close to the center. I prefer to calculate it with CAD, which should give you a more accurate result if done correctly.
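    The CAD approach boils down to combining each component's mass and center of gravity into one payload entry. A minimal sketch of that calculation (the component masses and CG offsets below are made-up illustrative numbers, not from any real end-of-arm tooling):

    ```python
    # Hypothetical tooling components: mass in kg, CG offset (x, y, z) in mm
    # from the robot faceplate. Replace with values measured from your CAD model.
    components = [
        {"mass": 2.0, "cg": (0.0, 0.0, 50.0)},   # e.g. gripper body
        {"mass": 0.5, "cg": (30.0, 0.0, 90.0)},  # e.g. sensor bracket
    ]

    # Total payload mass is just the sum of the parts.
    total_mass = sum(c["mass"] for c in components)

    # Combined CG is the mass-weighted average of the component CGs.
    cg = tuple(
        sum(c["mass"] * c["cg"][i] for c in components) / total_mass
        for i in range(3)
    )

    print(total_mass, cg)  # these are the numbers you would enter in the payload screen
    ```

    CAD packages will usually report this directly for an assembly, but doing the weighted average by hand is a good sanity check before entering the values on the pendant.
    
    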

    Acceleration will affect it more than speed. You could do the following:

    J P[2] 100% FINE ACC75

    Default ACC is 100, and you can adjust it down to 50.

    You need to put Fabian's logic in a BG Logic routine. Do not use the statements that have parentheses!

    The logic in your picture doesn't make any sense; just delete it. I know for a fact that Fabian's logic works correctly in BG Logic, so focus on that. If it's not working, then you are doing it wrong.

    Post a picture.

    If you don't have an accurate Z value set in the vision process then X and Y will never be accurate. The Z height setting is critical to a 2D vision application.
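    The reason Z matters so much can be seen with a simplified pinhole-camera sketch (the numbers below are illustrative, not how any particular vision package works internally): the camera only measures a pixel offset, and the vision process scales that back to millimeters using the Z height you entered, so any Z error becomes a proportional X/Y scale error.

    ```python
    # Simplified pinhole model: pixel offset p = f * X / Z.
    f = 8.0          # stand-in for focal length / pixel scale (arbitrary units)
    true_Z = 500.0   # actual distance from camera to the part surface, mm
    true_X = 100.0   # actual part offset from the optical axis, mm

    pixel = f * true_X / true_Z  # what the camera actually measures

    # The vision process converts pixels back to mm using the *assumed* Z height.
    for assumed_Z in (500.0, 450.0):  # correct setting vs. a 50 mm error
        reported_X = pixel * assumed_Z / f
        print(assumed_Z, reported_X)
    # A 10% error in Z produces a 10% error in the reported X/Y offset,
    # and the absolute error grows the further the part is from the camera axis.
    ```

    This is why a part found dead-center can look fine while parts at the edge of the field of view come back with offsets that are consistently short or long.
    
    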

    You don't need a larger grid; just teach a standard 3-point user frame using a correctly taught pointer. Then, while in that user frame and tool frame, touch the surface of the part with your pointer to find the Z height. Finally, go back into your vision process and set the Z height.

    How did you determine the Z height? How big is your work area? If it is much larger than the grid, then your Z height will not be consistent the further away you get. In that case, manually teach a UF that matches the size of your work area.

    The part needs to be on a surface that you taught an accurate user frame for, and you need to set the Z height in your vision process. Then your offset frame will use that user frame to output X, Y, and R.

    You really don't want to use world frame. It would require the robot to be perfectly level to the part.

    Yes, it is a SINT, but it is in the middle of a larger structure. So what I would do is create a SINT tag and move the actual integer value you want into it. Then just XIC each bit of the SINT to turn on the corresponding PNS input.

    Then it does the bit conversion for you, and you never have to think about it again.
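    The SINT trick is just treating the program number as eight individual bits. Purely as an illustration (the tag names and PNS bit order here are assumptions for the sketch, not real PLC tags), the mapping the XIC instructions perform looks like this:

    ```python
    # MOV the program number you want into a SINT tag, then XIC each bit of
    # that SINT to drive one PNS input (assuming PNS1 = bit 0, PNS2 = bit 1, ...).
    job_number = 5  # hypothetical program selection

    # Extract the 8 bits of the SINT, least significant bit first; each entry
    # corresponds to one XIC rung driving one PNS input.
    pns_inputs = [(job_number >> bit) & 1 for bit in range(8)]

    print(pns_inputs)  # PNS1 and PNS3 are on for job 5
    ```

    The point is that you never build the bit pattern by hand: write the integer once, and the per-bit XIC instructions keep the PNS inputs correct for any job number.
    
    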

    EE is wired to the Robot I/O.

    AS2 is not wired to anything but the AS1 port on the base of the robot.