Version Controlling KUKA Project code in SVN/Git

  • Hello Everyone


    I'm trying to understand the best practices for versioning KUKA software, so I can use them in my projects.


    Currently SVN/Git are used for version control. With TwinCAT software, version control is integrated into Visual Studio, where developers can save the TwinCAT project and TwinCAT PLC files and efficiently version them.


    However, such possibilities are not available with KUKA, and we are a long way from them.


    Hence I would like to know how one versions KUKA software.


    Do you unpack the entire archive zip into a trunk folder and version that, or unpack just the R1 & STEU folders and version them?


    When unpacking and zipping archives, I see that the entire folder can be zipped together and saved to a USB stick, or placed on a shared drive, to write back to the robot.
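    To illustrate the "unpack into a working copy, commit only R1 & STEU" approach, here is a minimal sketch. It assumes the archive zip contains `R1/` and `STEU/` at its top level, which real KUKA archives may not; adjust the prefixes to your actual archive layout.

```python
import zipfile
from pathlib import Path

def extract_krc(archive_path, dest_dir, keep=("R1/", "STEU/")):
    """Extract only the R1 and STEU trees from a KRC archive zip into a
    version-control working copy. Returns the list of extracted entries."""
    extracted = []
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            if name.startswith(keep):
                zf.extract(name, dest_dir)
                extracted.append(name)
    return extracted

# Afterwards the working copy is committed as usual, e.g.:
#   git add R1 STEU && git commit -m "import archive from robot"
```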


    I'd like to know whether someone has implemented a more efficient way.


    Thanks

  • Just my 2 cents.


    You can use WorkVisual to get more direct access to the files. With the 'Programming and Diagnoses' workspace you can create a connection to your KRC.


    In your Documents/WorkVisual folder you will find a subdirectory 'Repositories'. In there, there is a directory for each KRC that contains the R1 and STEU folders (and everything below them).

  • Well, the last time I had a large multi-person KRL project, I made a rule set:

    1. the central "ready to deploy" code base was kept in a Git repo

    2. Version numbers were kept in a standardized-format comment at the top of the .SRC or .SPS file (or .DAT file for modules that were .DAT-only)

    3. Any temporary or "testing" changes were done locally on a robot, and a letter was added to the Version number. The letter was removed when the change was accepted or reverted. If the "testing" change was advanced, 'a' would become 'b' then 'c' and so on. Fortunately, we never got past 'f' or 'g', so we never had to deal with the "what comes after Z?" issue.

    4. Version numbers were X.Y.Z, where X was the Major Release version, Y was medium-level changes (potentially breaking), and Z was mostly cosmetic or minor non-breaking changes (obviously sometimes things got done under Z that should have been Y, and the X/Y divide was a judgement call).
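    A version comment like that can be checked mechanically. Here is a minimal sketch; the `;VERSION 2.3.1b` comment format is a hypothetical one for illustration, not the standardized format the author actually used.

```python
import re

# Hypothetical header format: the first lines of a .SRC/.SPS/.DAT file carry
# a comment such as ";VERSION 2.3.1" or ";VERSION 2.3.1b" (a trailing letter
# marks an untested local change, per the scheme described above).
VERSION_RE = re.compile(r";\s*VERSION\s+(\d+)\.(\d+)\.(\d+)([a-z]?)", re.IGNORECASE)

def read_version(lines):
    """Return (major, minor, patch, test_letter) from the first matching
    comment line, or None if no version comment is found."""
    for line in lines[:10]:          # only scan the file header
        m = VERSION_RE.search(line)
        if m:
            major, minor, patch = (int(g) for g in m.groups()[:3])
            return (major, minor, patch, m.group(4))
    return None

print(read_version([";VERSION 1.4.2b", "DEF main()"]))  # (1, 4, 2, 'b')
```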


    We didn't do forks or branches in Git, but if we'd had the time to train everyone in Git, we might have done so.


    As we had all the robots networked, I created a script that would scan all the Master files in the Git repo, and the matching files on every robot, and generate a report of which robots were Synced, Ahead (testing), or Behind the Master. This populated a color-coded Excel sheet that gave a quick view of which robots needed attention. All the robots had full Read access over the network.
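    The core comparison of such a report might look like this. This is a sketch only: it assumes the hypothetical (major, minor, patch, test-letter) version tuples described above, and the master/robot dictionaries are hard-coded stand-ins for scanning the Git repo and each robot's shared folders.

```python
def classify(master_ver, robot_ver):
    """Compare (major, minor, patch, letter) version tuples.
    A trailing letter marks a local 'testing' change, so any letter on the
    robot side means Ahead regardless of the numeric part."""
    if robot_ver[3]:                       # testing letter present
        return "Ahead (testing)"
    if robot_ver[:3] == master_ver[:3]:
        return "Synced"
    if robot_ver[:3] < master_ver[:3]:
        return "Behind"
    return "Ahead (testing)"

# Illustrative data; a real scan would read the version comments from the
# repo and from every robot's networked folders.
master = {"cell_io.src": (2, 1, 0, "")}
robot7 = {"cell_io.src": (2, 0, 3, "")}
report = {name: classify(mver, robot7.get(name, (0, 0, 0, "")))
          for name, mver in master.items()}
print(report)  # {'cell_io.src': 'Behind'}
```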


    We had a Transfer directory on the D drive of each robot where programmers could push updated Master modules from the Git repo, then load them using the KUKA HMI. This was the only directory on the robot with network Write access.


    Then, when we released the system to pre-production, one idiot at the customer insisted that every version number be rolled back to 1.0.0, breaking all my historical records. If we'd been more sophisticated Git users, I could probably have handled it with a branch, but since we weren't....

  • We had a Transfer directory on the D drive of each robot where programmers could push updated Master modules from the Git repo


    Am I correct that all coding/programming was done offline?

    The original codebase for Git only required one pull from the KRC to initialize it.

    Sounds like an effective setup.


    How large was the team if I may ask?

  • The bulk of the coding was done offline, but a non-trivial amount was done on the pendant. Mostly minor bug fixes during testing, or small improvements that were faster to do on the pendant than running back to a computer. The entire deployable package (which hit over a thousand pages of KRL, IIRC) was being developed and debugged on the fly.


    It got a bit complicated, but we had "tiers" of programs:

    1. huge Offline Programs that were generated by Simulation postprocessors, with a mix of "Process Points" that could not legally be touched by anyone on the pendant (and were written in such a way as to disallow ILF edits) and "Via Points" that had to be pendant-editable, b/c the Sim never got all the minor obstructions 100% correct, or the massive dress pack would behave in ways the Sim could not model
      1. The OLPs were "hot swapped" using DirLoader, b/c they were so big they broke the RAM limit of the KRC.
    2. The deployable package that all the robots shared. This was the "infrastructure" code that ran all the processes when the OLP moved the robot to a Process Point, all the error handling, user interface, communication, tooling control, etc etc etc.

    I spent most of my time doing offline KRL coding and tracking/wrangling changes, but also did my share of debugging and support. I had a mix of junior coders and pendant-jockey debuggers who were effectively below me, and who passed me either bug reports or proposed fixes/improvements. About 6 "robot programmers" of various stripes, plus another 4 or so Sim team members who were writing the postprocessor in conjunction with our software development, and 3-4 PLC programmers handling the controls side. Everyone had access to the Git repo, but usually only the robot programmers were directly dealing with it.

  • I should also mention that I made extensive use of file-comparison tools to catch situations where the KRL had changed but the version number had been overlooked. .DAT files are a particular issue here, since they are actually altered when static variable values are changed. So we used certain tricks, like breaking the .DAT files up into "Static" and "Constant" sections. That way, if you saw a file difference in the Constant section, that was an immediate red flag, whereas most differences in the Static section could be ignored on a first-pass look, if the differences were limited to the right side of the "=" sign. Obviously, added/deleted lines would be an immediate red flag.
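    A first-pass filter along those lines can be sketched as follows. It assumes the .DAT file is split by marker comments such as `;--- CONSTANT ---` (the actual section markers used are not given), and ignores a Static-section change only when the left side of the `=` is untouched.

```python
def flag_differences(old_lines, new_lines):
    """First-pass diff filter for a .DAT file split into Static/Constant
    sections: value-only changes in the Static section are ignored;
    Constant-section changes and added/removed lines are red flags."""
    if len(old_lines) != len(new_lines):
        return ["line count changed: red flag"]
    flags, section = [], "STATIC"
    for old, new in zip(old_lines, new_lines):
        if new.strip().startswith(";") and "CONSTANT" in new.upper():
            section = "CONSTANT"
        if old == new:
            continue
        lhs_old, _, _ = old.partition("=")
        lhs_new, _, _ = new.partition("=")
        if section == "STATIC" and "=" in old and lhs_old == lhs_new:
            continue                      # right-of-'=' change only: ignore
        flags.append(f"{section}: {new.strip()!r}")
    return flags
```

    Run over a before/after pair, a changed static value produces no flag, while the same kind of change in the Constant section is reported immediately.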


    We also had network-based automatic backups every 24hrs, and a tool to make a backup from the pendant with the date&time as part of the backup name, so we could almost always have a "trail of breadcrumbs". Every programmer and tester/debugger was supposed to trigger a backup with this tool immediately before making a local change. So it was (usually) possible to quickly compare the robot Before Change and After Change to find exactly what had changed, because with people working long hours for months on end, stuff happens.
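    The naming half of such a backup tool is simple to sketch. The name format and folder layout below are assumptions for illustration, not the author's actual tool.

```python
import zipfile
from datetime import datetime
from pathlib import Path

def backup_name(robot_name, when=None):
    """Build a backup archive name carrying the date & time, so backups
    sort chronologically and form a 'trail of breadcrumbs'."""
    when = when or datetime.now()
    return f"{robot_name}_{when:%Y-%m-%d_%H-%M-%S}.zip"

def make_backup(robot_name, src_dirs, dest_dir="."):
    """Zip the given folders (e.g. R1 and STEU) into a timestamped archive.
    Paths here are illustrative, not the author's actual layout."""
    archive = Path(dest_dir) / backup_name(robot_name)
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for d in src_dirs:
            for f in Path(d).rglob("*"):
                if f.is_file():
                    zf.write(f, f.relative_to(Path(d).parent))
    return archive
```

    Comparing the newest pre-change archive against the current state then narrows down exactly what a local edit touched.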
