# iRVision - Calculation Tools

• We're working on error-proofing our vision job and have been experimenting with the Statistics Calculation Tool. Unfortunately it doesn't seem to work the way we were expecting, and we need some experienced input.

- We have a VERY forgiving and generic vision job, to accommodate a wide variety of parts without changing recipes (not my idea, this is how the project was requested)

- No issues with picking different sizes and lengths, as we are able to identify blobs of roughly the right shape, and get coordinates just fine.

- Issue is on the rare occasion that parts stack up side by side, or end to end, resulting in a single blob with a vision offset that will result in picking up TWO parts or ZERO parts with our vacuum pen.

We were hoping to look at the MEAN of all major-axis values, or the MEAN of all AREA values, but we haven't been able to figure out a way to do that. The calculation tools seem to only look at a single result?
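The idea above can be sketched outside of iRVision. This is a minimal Python sketch, assuming blob results are available as dictionaries with `area` and `major_axis` fields (assumed names, not iRVision outputs): compute the batch mean, then reject any blob too far from it.

```python
# Hypothetical sketch: flag blobs whose measurements deviate from the
# batch mean. Field names and the tolerance are assumptions, not the
# iRVision Statistics Calculation Tool.
from statistics import mean

def filter_outliers(blobs, tolerance=0.25):
    """Keep blobs whose area and major axis are both within `tolerance`
    (as a fraction of the mean) of the mean across all found blobs."""
    if len(blobs) < 2:
        return list(blobs)
    mean_area = mean(b["area"] for b in blobs)
    mean_axis = mean(b["major_axis"] for b in blobs)
    keep = []
    for b in blobs:
        if (abs(b["area"] - mean_area) <= tolerance * mean_area
                and abs(b["major_axis"] - mean_axis) <= tolerance * mean_axis):
            keep.append(b)
    return keep
```

One caveat with a mean-based test: a doubled blob drags the mean upward, so with many stacked pairs a median would be a more robust reference than the mean.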

• I've used the histogram tool to filter out parts touching. Basically, require a background pixel value around the part to ensure they're not touching.
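For illustration, here is a pure-Python sketch of that "clear border" idea on a binary image: require every pixel in a ring around the blob's bounding box to be background, so a blob touching its neighbor fails the check. This is only an illustration of the concept, not the iRVision histogram tool.

```python
# Illustrative sketch of the clear-border check: reject a blob unless the
# ring of pixels around its bounding box is entirely background.

def border_is_clear(image, bbox, margin=1):
    """image: 2-D list of 0 (background) / 1 (part) pixels.
    bbox: (row_min, row_max, col_min, col_max), inclusive blob bounds.
    Returns True if every pixel in the surrounding ring is background."""
    r0, r1, c0, c1 = bbox
    rows, cols = len(image), len(image[0])
    for r in range(max(r0 - margin, 0), min(r1 + margin, rows - 1) + 1):
        for c in range(max(c0 - margin, 0), min(c1 + margin, cols - 1) + 1):
            inside = r0 <= r <= r1 and c0 <= c <= c1
            if not inside and image[r][c] != 0:
                return False  # something touches or overlaps the ring
    return True
```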


We did find that useful as well. But since we have to be so forgiving on the dimensions, some adjacent parts still make it through as single parts (see the images above), so we need another way to discriminate them. It can't be a fixed length due to the variety of parts. If we can compare individual values to the average of all found parts, then we're onto something!

• Do you know the size of parts it is supposed to be picking at a given point in time?

Currently, no.

We are trying to avoid any operator input like barcode scanning or selecting a SKU from a list. The hope is that they can just dump a batch of parts in the feeder and hit Start.

• I think this is a good job for AI, but you will need the operator input until the AI learns and becomes autonomous.


Cool, is there a Fanuc AI vision option? I don't think I can just hook this thing up to a ChatGPT subscription and let it resolve itself, haha!

• There is a teach tool where you can upload a number of "good" images and the software will create an aggregate ideal part. But I haven't really tested the limits of this. For the application I'm working on, I would have to manually load thousands of images.

• There's a tool called edge locator that might help. You can measure your parts with it.

We tried using edge locator, but didn't really find any benefit for us. We are currently using a couple of stacked blob tools, then using some logic in the TP program to filter out parts that are stacked too close to each other.

We've also set up a comparator sub-routine that pulls data from the first part found, and compares every subsequent part to those values. We check blob area, perimeter, and semi-major axis. If any of those fall outside of a 10% tolerance we don't pick the part. It's not elegant, but so far has been very effective and reliable for a huge range of screw styles and lengths.
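The comparator logic described above can be sketched as follows. This is a minimal Python sketch of the same idea (not TP code, and the field names are assumed): take the first found part as the reference and reject any part whose blob area, perimeter, or semi-major axis falls outside a 10% tolerance of it.

```python
# Sketch of the comparator subroutine: the first part found is the
# reference; every subsequent part must match it on all three
# measurements to within `tol` (10% by default) to be picked.

def pickable(parts, tol=0.10, keys=("area", "perimeter", "semi_major")):
    """Return the parts within `tol` of the first part on every key."""
    if not parts:
        return []
    ref = parts[0]
    picks = [ref]
    for p in parts[1:]:
        if all(abs(p[k] - ref[k]) <= tol * ref[k] for k in keys):
            picks.append(p)
    return picks
```

Note the trade-off this inherits from the TP version: if the first blob found happens to be a stacked pair, every single part is then rejected against it, so the first result effectively sets the recipe for the batch.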

• FYI, the AI option does error proofing. It looks like it does a comparison of models. I didn't read into it much.


Looks like it's also a paid option, which I don't have. (-_-)