Posts by desertgiant

    I have not done exactly what you need, but something similar.


    The robot writes the position for the gripper to the $OUT signals.


    These were defined as variables in the SimPro interface, and my Python script read the variables and set the required position of the gripper. The reverse was done to read the status of the gripper from the robot.
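

    On the robot side this can be done with a grouped output. A minimal KRL sketch, assuming the position value fits into outputs 10 to 17 and the matching inputs carry the feedback (the signal numbers are placeholders, not the ones from the actual project):


    ; Declarations, e.g. in $config.dat (output/input ranges are assumptions)
    SIGNAL GripperPosOut $OUT[10] TO $OUT[17]  ; robot -> simulation: commanded position
    SIGNAL GripperPosIn  $IN[10]  TO $IN[17]   ; simulation -> robot: reported position

    DEF SetGripper(TargetPos:IN)
      INT TargetPos
      GripperPosOut = TargetPos             ; write the commanded position to the grouped outputs
      WAIT FOR (GripperPosIn == TargetPos)  ; wait until the simulation reports the same position
    END


    The Python script on the SimPro side then only mirrors these signal values between the robot interface and the gripper in the simulation, and back.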


    I do not have a SimPro license, so I cannot explain exactly what was done and where.


    When I was doing it, I referred to the training material from Visual Components.


    Maybe check their site and see if it helps.

    Updating this thread as I landed here after facing a similar issue.


    One possible solution is to clean up the main project folder on the controller, leave the robot with no program selected, and then connect via WorkVisual.


    Sometimes the unused files in the main folder contain variables linked to a technology package, which prevents WorkVisual from loading the files.


    After cleaning up the main project folder, I was able to connect via the Diagnosis option of WoV.

    Changing the programmed velocity is possible before the start of a LIN movement.


    To do this, the velocity values in the respective LDAT parameters must be written beforehand through a separate function, and then the LIN/SLIN movement is called.


    This requires advanced KRL programming skills, and setting wrong values can make the robot perform unpredictable movements.


    An alternative is to play with the program override value ($OV_PRO).


    Setting a reduced override on a specific condition reduces the velocity of the movement.
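

    As a rough sketch of the override approach in KRL (the input number, the 30 % value and the point name are assumptions for illustration; the point is assumed to be taught and declared in the corresponding .dat file):


    DEF MoveWithReducedSpeed()
      INT SavedOverride
      SavedOverride = $OV_PRO    ; remember the current program override (0..100 %)
      IF $IN[25] THEN            ; hypothetical condition, e.g. "slow zone active"
        $OV_PRO = 30             ; reduce the override; the programmed LIN velocity stays untouched
      ENDIF
      LIN P_Target               ; P_Target assumed to be taught/declared in the .dat file
      $OV_PRO = SavedOverride    ; restore the original override afterwards
    END


    A sketch of the LDAT route is not shown here because it depends on the names of the motion data sets generated for the specific inline forms.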

    After some hassle, the digital twin of the real robot with vision was established using the KUKA Office PC.


    The simulated version of the camera does not support calibration as done through the KUKA VisionTech HMI, hence the real calibration file from the KRC controller must be used.


    When the image written to the Office PC is similar in dimensions and pixels to the actual picture from the camera, the digital version can imitate the real robot movement.


    The accuracy and repeatability of the robot in the digital world depend on how similar the image in the digital world is to the real image captured by the camera.

    I am currently in the design phase of commissioning a KUKA robot for a USA-based client.


    This is a Safe Robot.


    Apart from the standard EU norms and regulations, are there any special guidelines or commissioning/software norms to be followed for the USA?


    Thanks in Advance

    Thanks for the reply.


    That is one possible solution, and the best one. The simulation software can communicate the actual position of the part to the robot; in simulation mode the robot bypasses the vision calls and uses the frame to calculate its trajectory internally.


    However, the customer is pressing to use the same KUKA software and wants the same behaviour in simulation as in the real world.


    The simulation software can stream the images to a specific port.


    The KUKA Office PC should capture this stream, make the KUKA KRL library think the stream is from a real camera, and process it further.


    I am interested to know how such a solution can be realized with KUKA and whether additional interfaces like the Y200 will be needed.

    Do I need any additional interface like the Y200?


    I have to somehow tell the KUKA Office PC that what it gets on the KONI port is not from a camera (Basler or Baumer) but from a simulated port.


    Also, when I look at the drop-down box in the tools configuration page under VisionTech -> Task Configuration, it always looks for the missing camera.

    Hello Everyone


    Has anyone successfully simulated a vision application and the VisionTech software inside a simulation environment?


    This means no real camera will be used; instead, a vision sensor from the simulation environment captures the images, and these images are further processed by the KUKA VisionTech algorithm in real time on the Office PC, which brings the digital twin close to reality.


    KUKA KRC5 controller.


    Thanks in Advance.

    The interface from KUKA to VisionPro is limited, i.e. it can call only 5 attributes per task.


    I currently have close to 12 attributes to be extracted.


    Hence, I need to define a unique key between the tasks, and the rest needs to be read from the tasks.

    Currently, I establish an EKI connection for every task and run the process.


    The syntax looks like this:


    ;Task 1:

    Interrupt Read task Result

    Open EKI

    Trigger Task 1

    Extract Params

    Close EKI


    ;Task 2:

    Interrupt Read task Result

    Open EKI

    Trigger Task 2

    Extract Params

    Close EKI


    .

    .

    .

    .


    Such a process works, but in the future it could cost cycle time.
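

    For reference, the repeated block can at least be collected into one parameterized routine so that only the task number changes. The subprogram names below are only placeholders for the actual VisionTech/EKI calls used above, not the library's real function names:


    DEF RunVisionTask(TaskNo:IN)
      INT TaskNo
      OpenEkiConnection()     ; placeholder: open the EKI channel
      TriggerTask(TaskNo)     ; placeholder: trigger image processing task TaskNo
      ExtractParams(TaskNo)   ; placeholder: read the result attributes
      CloseEkiConnection()    ; placeholder: close the channel again
    END

    ; the main program then calls RunVisionTask(1), RunVisionTask(2), ...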



    Any suggestions or alternatives are appreciated

    KUKA KRC5

    VisionTech 4.3



    Hi All


    I am currently trying to call more than one image processing task from the VisionTech process.


    The current flow is as below:


    - Interrupt On Read Task Result

    - Open EKI Connection

    - Trigger Task 1

    - Trigger Task 2

    .

    .

    .

    - Trigger Task n


    - Close Connection


    Task 1 executes successfully, but the rest fail.


    Is it necessary to close and reopen the connection for each defined task?


    Thanks for the support

    Yes, I agree. Modifying the C# is the easiest option.


    But these parameters are loaded via the attributes, which can then be read from the KUKA script. The attributes are limited to 5 in number, and each attribute can hold a string value of up to 100 characters.


    Modifying the result in C# and KRL would require modifying the entire EKI programs, which are built for the standard option.


    Currently, I am looking into building a string split function in KUKA, which would allow some parameters to be stitched together on the vision side and then split again in the KUKA program.
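

    A minimal sketch of such a split in KRL, assuming the vision side joins the values with a ';' delimiter and the pieces fit into the declared buffers (the routine and its names are my own idea, not part of the VisionTech package):


    DEF ExtractField(Source[]:OUT, FieldNo:IN, Result[]:OUT)
      ; copies the FieldNo-th ';'-separated field of Source into Result
      ; Source is passed :OUT because complete arrays are passed by reference; it is only read here
      CHAR Source[]
      INT FieldNo
      CHAR Result[]
      INT i, j, Field, Len
      BOOL Dummy
      Dummy = StrClear(Result[])
      Field = 1
      j = 0
      Len = StrLen(Source[])
      FOR i = 1 TO Len
        IF Source[i] == ";" THEN
          Field = Field + 1         ; delimiter found, switch to the next field
        ELSE
          IF Field == FieldNo THEN
            j = j + 1
            Result[j] = Source[i]   ; copy the character into the result buffer
          ENDIF
        ENDIF
      ENDFOR
    END


    Calling ExtractField(Attribute1[], 2, PartId[]) would then place the second stitched value in PartId[].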


    Thanks for the info.

    I have a VisionTech project where more than 5 attributes (a combination of string and integer data) need to be communicated.


    I read the QR code and have an inbuilt string parser that breaks up and separates the data. This data needs to be communicated to the robot.


    As per the VisionTech documentation, a maximum of 5 attributes can be communicated to the robot from the VisionTech library with every set of result parameters.


    Do these attributes also support communicating arrays of integers/reals?


    If that doesn't work, one way I see is defining two Cognex tools and calling them simultaneously, but this could result in a loss of data uniqueness, as there is no unique number to relate the data.


    Has someone implemented communicating more than 5 attributes without losing data uniqueness? This means an additional param defining the uniqueness of the data is necessary.


    Is there a better way to handle this situation?

    This was tested on a TwinCAT 3 machine and is not recommended for KUKA robots, as I personally have not tested it there. For KUKA robots, I would recommend the standard KUKA gray USB stick.

    Hi All


    Thanks for the inputs.


    Finally, after searching and testing, I found two tools that closely match the needs:


    Rescuezilla


    Redo Rescue


    Both are Linux based but have a very good GUI.


    Rescuezilla is a GUI version of Clonezilla and can also restore Clonezilla backups. Individual partitions can be backed up, but cannot be restored from the same disk: the backup must be moved to an external USB drive and then written back from this external USB drive to the disk. A compression option is available.


    Redo Rescue can save and restore backups from a different partition on the same disk drive. Cloud backup and restore are also possible.


    I tested both over the weekend and they work fine.


    However, if anyone has used them before and faced issues with these tools, let me know so that I am aware of the issues.


    Once again, Thank you all.

    This is for machines with TwinCAT 3 software and other company-specific tools.


    Software developed and tested on one machine can then be written in series to the machines in production.


    Also, if there is an issue at the customer's site, the service technician can create backups of the individual disks, which can then be examined and the issue recreated in the back office.

    Hello All


    I am looking for a standard backup tool for Windows-based machines.


    So far I have used Clonezilla, the KUKA USB stick (only for KUKA robots), and an internally developed Linux & ntfsclone based tool.


    Of all the tools used, I find the KUKA backup USB stick to be the simplest and most user friendly.


    I want to check if such a tool or something better is available.


    The goal is that this tool should be usable by everyone from commissioning technicians to developers and managers if required.


    Simple GUI with easy options.


    If anyone has come across such a tool or is currently using one, please provide details.


    Thank you
