Posts by .d7

    Maybe not a true solution, but I resolved the error by creating a new 2D camera and 2D vision process. I still had the discrepancy between the automatically calculated camera distance and my measured standoff distance (1119 mm vs. 416 mm). Orthogonal calibration gave similar results to my 416 mm standoff, so I decided to go with Perspective with the standoff distance overridden to 416 mm.


    This did give me a mean error of around 1 pixel and a max of 3 pixels, but since I am using a 2.3 MP camera over a roughly 100 mm square FOV, the resulting worst-case error of about 0.15 mm is acceptable for my application.
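    For anyone checking my numbers, here is the back-of-the-envelope conversion from pixel error to position error. The 1920-pixel horizontal resolution comes from the camera listed below, and the 100 mm FOV is only approximate:

        # Rough pixel-to-mm error estimate (assumes ~100 mm FOV across 1920 px)
        fov_mm = 100.0       # approximate field-of-view width
        width_px = 1920      # horizontal resolution of a 2.3 MP (1920x1200) sensor
        mm_per_px = fov_mm / width_px      # ~0.052 mm per pixel
        max_err_px = 3
        print(f"max error ~= {max_err_px * mm_per_px:.3f} mm")  # ~0.156 mm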


    I also found that changing the center origin point in the vision process removed the slight offset I was seeing on pickup.

    Hello All,


    I am trying to set up a simple 2D vision process with a fixed camera. I've trained the TCP and User Frame correctly: the TCP is just a pointer with a Z offset, and the User Frame has been calibrated and recalibrated using both the 3-point and 4-point methods, which give the same values. Question: does it matter what frame you are in while training the User Frame?


    My issue is with setting up the camera. I use a fixed (bolted in place) calibration grid, the same one used to teach the User Frame. At the calibration step, if I use Perspective with automatic camera distance calculation, I get a value of around 1 m, while my measured height is 416 mm. This method gives good mean and max errors: 0.25 and 0.693 pixels respectively. When I use Perspective and override the standoff distance with my measured value, my mean and max errors are around 1 and 3 pixels respectively. Orthogonal calibration gives mean and max errors equal to those I get with the standoff distance overridden.


    My camera is a Basler acA1920-48gm with a Fujinon 1:1.6/35mm lens.
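    As a sanity check on the two standoff values, a simple pinhole estimate from the lens and sensor data suggests the measured distance is the plausible one. This assumes the acA1920-48gm's 5.86 µm pixels (Sony IMX174, so roughly an 11.25 mm wide active area) and my approximate 100 mm FOV; treat it as a rough optics check, not FANUC's calibration math:

        # Pinhole/thin-lens estimate of working distance from focal length and FOV
        # (assumes 5.86 um pixels on the acA1920-48gm; FOV is approximate)
        focal_mm = 35.0
        sensor_w_mm = 1920 * 0.00586       # ~11.25 mm active sensor width
        fov_mm = 100.0
        magnification = sensor_w_mm / fov_mm
        # thin-lens object distance, ignoring lens thickness and mounting offsets:
        wd_mm = focal_mm * (1.0 / magnification + 1.0)
        print(f"estimated standoff ~= {wd_mm:.0f} mm")  # ~346 mm

    That lands in the neighborhood of my measured 416 mm (the gap could be where the distance is referenced on the camera body), and nowhere near the automatic 1119 mm.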


    All three calibration methods (perspective auto, perspective override, and orthogonal), when used in the same vision program, give about a 20 mm offset from the part.


    I had trouble uploading pictures, but they can be found at these links:

    20230608-121254 (ibb.co)
    20230608-121313 (ibb.co)
    fanuc-error2 (ibb.co)
    Fanuc-error1 (ibb.co)

    Yes, I was under the impression that safety signals cannot be sent over Ethernet, but I also want to be able to monitor all systems remotely. So my thought was that I could use the Banner SC10-2ROE for the necessary safety inputs (E-stop, doors, etc.) and then send relevant data to a Siemens PLC. The Siemens PLC could then pass data to/from other Banner PLCs and be connected to a Node-RED service for purely monitoring purposes.


    The goal, I guess, would be to pass coordinates from vision to the robot and to signal conveyor movement based on vision as well. Generally, the system's function is a pick-and-place routine with a couple of extra steps.
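    For the monitoring side, something along these lines is what I have in mind. This is only a sketch using the python-snap7 library; the IP address, rack/slot, DB number, and byte offsets are placeholders for illustration, not a tested configuration:

        # Minimal sketch: poll a Siemens PLC data block and print values for monitoring.
        # Uses python-snap7; the IP, rack/slot, and DB layout below are hypothetical.
        import time

        import snap7
        from snap7.util import get_bool, get_real

        client = snap7.client.Client()
        client.connect("192.168.0.10", 0, 1)    # placeholder IP, rack 0, slot 1

        while True:
            data = client.db_read(1, 0, 6)      # read 6 bytes from DB1, offset 0
            conveyor_speed = get_real(data, 0)  # REAL at byte 0 (hypothetical layout)
            estop_ok = get_bool(data, 4, 0)     # BOOL at byte 4, bit 0 (hypothetical)
            print(f"speed={conveyor_speed:.1f} estop_ok={estop_ok}")
            time.sleep(1.0)

    In the real setup, Node-RED (or a script like this feeding it) would only read from the PLC; anything safety-related stays on the SC10-2ROE's hardwired outputs.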

    I have been looking at the best options for programming a robotic automation system, and it seems that PLCs dominate the industry. Most resources explaining why come from websites that sell PLC controllers, so I feel they may be biased. To my knowledge, a microcontroller gives much more flexibility and functionality than a PLC, and for applications that require fewer than 30 I/Os it seems like the better choice, assuming programming the controller is not an issue. My only reasoning for why PLCs still dominate the market is that they offer a tried-and-tested solution with good dependability. Is there something I am missing? Has anyone had experience using a microcontroller in an industrial automation setting?