The vision process I created uses a reference position that was set after a snap-and-find. That position is accurate relative to the user frame I used when creating the vision process. But when I run the vision process, the found X and Y values it reports are way off. If I take the found position and subtract the reference position by hand, I get the offset that should go into the vision register (VR). However, the value that actually ends up in the VR is different, which causes mislocations when the VR offset is applied. If the subtracted value were actually placed in the VR, life would be good. Also, my found object always scores in the 90 percent range. Any ideas?
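For reference, this is the hand check I'm doing: the expected VR offset should just be the component-wise difference between the found position and the reference position, both expressed in the same user frame. A minimal sketch in plain Python (the numbers below are illustrative placeholders, not my actual data, and the function name is mine, not iRVision's):

```python
def expected_offset(found, ref):
    """Component-wise difference (x, y, r) between found and reference pose.

    Both poses must be expressed in the same user frame, otherwise the
    subtraction is meaningless -- which is one thing worth double-checking.
    """
    return tuple(f - r for f, r in zip(found, ref))

reference = (100.0, 250.0, 0.0)   # x, y, r captured at snap-and-find time
found = (103.5, 247.2, 1.5)       # x, y, r reported on a later run

print([round(v, 3) for v in expected_offset(found, reference)])
# -> [3.5, -2.8, 1.5]
```

If the numbers written to the VR don't match this simple subtraction, the mismatch is presumably happening inside the vision process configuration (e.g. frame selection or offset mode), not in the math itself.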