Join DMC at NI Connect 2025

Join DMC at NI Connect 2025, April 28-30 at the Fort Worth Convention Center in Fort Worth, Texas.

NI Connect is a technical conference hosted by NI. The annual event features technical sessions, keynotes, networking opportunities, and tech demos featuring the latest in NI and test technology.

DMC has worked closely with NI for more than 25 years. As an NI Platinum System Integrator and one of 12 teams recognized as a Certified Center of Excellence, DMC looks forward to this yearly opportunity to connect with our partners on the NI team and stay up to date on the latest developments in NI hardware and software.

In addition to attending technical sessions and keynotes, DMC will participate in the Leadership and Partner Forums.

We also look forward to showcasing a demo in the ADG section of the Expo Floor. Visit us in Hall C at booth number 2. DMC's demo features a compact and configurable HIL Automated Test System that can be programmatically adjusted to match individual testing needs.

Registration

Will you be attending NI Connect 2025? Reach out and let us know! Learn more about DMC's NI Partnership.

Becca Stussman | Mon, 21 Apr 2025
Getting Started with TwinSAFE: Safety Programming Basics

In the automation world, machine and process safety are of the utmost importance. When a risk assessment requires it, specialized safety hardware and software may be necessary to minimize the risk to people and equipment. Unfortunately, understanding safety hardware and safety programming can be challenging without proper training and experience.

The goal of this blog series is to provide insight into the TwinCAT development process so that when you build your TwinSAFE program, you are properly informed about the program you are developing.

Machine safety is a serious matter. When you are architecting your safety program, please make sure you are qualified to verify the safety of the system or are consulting a certified safety expert who can work with you to make sure your system is safe. If you are not properly equipped to develop a system on your own or would like assistance from our certified safety experts, please feel free to contact us for help developing your safety solution.

Introduction

This article assumes that you have already created your TwinCAT project and configured the basics, such as Alias Devices and Group Ports. If you have not, please refer to Creating and Configuring a Project from Scratch before continuing.

When developing a safety program in TwinCAT, it’s important to understand the terminology and structure. TwinSAFE programs are split into a series of TwinSAFE Groups. TwinSAFE Groups serve a purpose similar to a task in a standard project. Each TwinSAFE Group has its own operational status and logic; TwinSAFE Groups can operate independently of one another or in tandem:

Each TwinSAFE Group contains a “.sal” file that holds the safety logic. This file consists of a series of networks that serve as a visualization of the function blocks and variable connections. It is from this file that you will develop your safety program:

The remainder of this post will focus on the development of the logic within these “.sal” safety logic files. Discussion topics will include the following:

  • Adding and Configuring Function Blocks
  • Function Block Execution
  • Using the Variable Mapping
  • Networks
  • Using the Estop Block
  • Passing Signals between Standard and Safety

A Note About TE9000 Versions

Beckhoff actively updates and releases new builds of its software offerings, and TwinSAFE is no exception. It is highly recommended to use the latest build of the TwinCAT 3 Safety Editor, also known as TE9000. The latest build can be downloaded from the Beckhoff website.

As of this writing, the most recent build is version 1.4.8. If you are using an outdated version of TE9000, it is likely that you will be missing the ability to integrate the latest safety hardware and firmware revisions.

For example, it is necessary to use TE9000 1.4.8 or newer to use an EL1918 card as both a safety logic processor and an IO card. TE9000 1.4.8 or newer is also required to use ELM 72xx servo drives in a safety project.

Adding and Configuring Function Blocks

Function blocks can be added from the Toolbox window. Blocks can be dragged from the toolbox into the safety logic networks:

The name of a function block can be edited by clicking on it. You can do the same with the names of networks.

Editing inputs and outputs can be done by directly typing the name of a variable into the text entry field adjacent to the port icon (this is hidden but will appear if you click next to the port icon), or you can double click on the port button to create a connection to the input/output of another function block:

Changing Function Block Input Behavior

Some ports have configurable behavior, depending on the type of function block. You can locate the port configuration by right-clicking on the port icon and selecting “Change InPort Settings”. In the Estop example, each pair of inputs can be configured to operate as a dual channel, individual channel, or disabled port. By default, all the ports on this block are disabled and need to be configured:

Many function blocks have configurable InPort settings. Some common options are dual-channel monitoring, timeout settings, and input negation (NO vs. NC evaluation). These options are not clearly advertised on the function block in the editor, so make sure to check the documentation for each function block to see what options exist. The documentation for each function block is available on the Beckhoff Information System website.

See the section Using the Estop Block for examples of configured InPorts. 

Function Block Execution

In structured text, instructions execute from top to bottom, each line occurring before the subsequent one. In ladder logic, rungs also execute in order from top to bottom. However, unlike structured text or ladder, the order of execution of function blocks in a TwinSAFE program is not determined by their positioning in the program:

Each TwinSAFE Group has an ordered list called the function block execution order. Function blocks execute according to their position in this list, independent of their placement within the program. A block’s position in the list is indicated by the number in the upper right-hand corner of the safety function block.

To view and edit the list, you can right-click within the program and select the option “Change Execution Order of FBs”. This will reveal a window that allows you to reorder the operation of blocks within the program:

Ignorance of the function block execution order can lead to unintentional bugs and race conditions within the program. Additionally, programmers may make use of this feature without clearly documenting its use, making the program’s behavior difficult to interpret.

The example below shows a User FB that utilizes the execution order in a “clever” way that makes the program logic unintuitive and prone to errors with future edits:

This User FB monitors a signal (SignalMonitor) for a rising edge. At first glance, it seems that the logic should not work: the safeAnd is monitoring SignalMonitor AND NOT SignalMonitor, meaning the output (RisingEdgeDetected) should never be true. The trick here is that the author of the program has ordered the FBs such that the evaluation of the safeDecouple occurs after the evaluation of the safeAnd.

During execution, the input AndIn2 of the safeAnd is updated at the end of the cycle, meaning that its decoupled value of SignalMonitor is the one from the previous scan. The safeAnd block is really evaluating SignalMonitor (this scan) AND NOT SignalMonitor (last scan), giving us the rising edge detection.
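
For comparison, here is a rough Structured Text analogue (not safety code; the variable names are invented for illustration) of what the execution-order trick accomplishes:

 
// Non-safety Structured Text sketch of the same rising edge detection.
// bSignalLastScan plays the role of the decoupled value that is only
// refreshed after the AND has already been evaluated.
bRisingEdgeDetected := bSignal AND NOT bSignalLastScan;
bSignalLastScan     := bSignal; // updated last, like the safeDecouple evaluated after the safeAnd

Written out this way, the dependence on evaluation order is explicit; in the safety editor it is hidden in the execution order list.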

This usage of the function block execution order is not intuitive and makes programs less maintainable and more prone to bugs. Do not be like this programmer. Use the function block execution order appropriately.

Using the Variable Mapping

Variable Mapping is a tool that you will use to define and manipulate the assignment and usage of variables. Variables can be created within the TwinSAFE Group by either typing them directly into a function block input or output in the safety logic (XAE shell will automatically add the variable) or by adding a variable within the Variable Mapping window.

Variables can be mapped to function block inputs/outputs or to Alias Devices. This is the main pathway by which you will connect the inputs and outputs of the safety logic to the physical I/O points. While you can connect function blocks to one another directly, variables can also be used for the same purpose, allowing you to connect ports across networks as well.

In the following example, a variable is defined in the program by typing it directly into the function block port. Once it has been typed, it will appear within the variable mapping with a listed usage on the corresponding function block:

We can also work the other way around. By clicking on the green plus icon, we can add a new variable to the mapping. After the variable has been named, we can specify a usage of the function block input we want. Once “OK” is selected, the variable will appear at the port in the logic:

From the Variable Mapping, we can also tie variables to inputs or outputs from the alias devices with the same technique. Simply open the Assignment or Usage mapping and select the input from the Alias Device:

This example now has the first two inputs from the local safety controller tied to the two inputs on the safeAnd block.

Multiple Selection for Usages

While each variable may only have a single assignment, it can have multiple usages. To select multiple usages from the window, hold “ctrl” while you click:

You may find this useful if you need to use the same status as a reference for a number of function blocks. For example, if you have a localized safety circuit that is responsible for de-energizing a series of devices, the aggregate status of the safety circuit may be used by several function blocks that control the outputs for each of those dependent devices.

Networks

Just like with ladder logic, the safety logic can be organized into networks. To add a network, right-click within an existing one and select “Add After>Network” or “Add Before>Network”:

To rename a network, click on the network name and edit the text. Names of networks should be compliant with C variable naming rules (no spaces or special characters):

Using the Estop Block

We will discuss the Estop block, its function, and its configurable settings. When the Estop function block determines that an Estop has been pressed, it can change the operational state of the TwinSAFE Group as a whole.

If an Estop output has been latched, it will require a rising and falling edge signal to its “Restart” input. This behavior acts as a safety acknowledgement and prevents automatic restart of the program after the estop input signal has been reset. This input can be linked to either a safety or standard variable, allowing the safety reset to come from the standard program.

The Estop block has 8 inputs that can be configured and combined into dual-channel pairs: EStopIn1 / EStopIn2, EStopIn3 / EStopIn4, EStopIn5 / EStopIn6, and EStopIn7 / EStopIn8. The values of the two inputs in a pair can only deviate from one another for a configurable time called the delay time. If the delay time is exceeded for any input pair, the FB, and by extension the entire TwinSAFE Group, will enter the error state. This can only be cleared via the “Err Ack” signal in the TwinSAFE Group Port:


The Estop block has three outputs: Error, EStopOut, and EStopDelOut. If Error is true, an input pair has exceeded the delay time or there is an error with one of the EDM inputs. For outputs that need to be set immediately, use EStopOut. EStopDelOut considers the delay time before activating.

The EDM ports (External Device Monitoring) can be used to implement a feedback loop for the outputs of the Estop block. EDM 1 is the feedback loop for EStopOut, and EDM 2 is the feedback loop for EStopDelOut.

The EDM ports expect feedback on the safety circuit response. These inputs should see feedback from contactors indicating that the response to the safety signals has been properly initiated. If the Estop function block does not see the proper feedback on its EDM channels, it will force the TwinSAFE Group into an ERROR state, requiring the group to receive the “Err Ack” signal on the group port before returning to standard operation.

To disable an EDM port, click on EDM1/EDM2 in the FB and set “Reset Time” to 0 ms in the properties tab:

Passing Signals between Standard and Safety

Using Alias Devices, variables can be passed between the TwinSAFE project and the standard project. To begin, create a Standard Alias Device in your safety project by right-clicking the “Alias Devices” folder in the safety project tree, clicking “Add New Item”, and selecting the desired variable datatype from the “Standard” tab. Note that we are using the “Standard” tab as opposed to the “Safety” tab since we are going to be communicating with the standard project:

The Alias Device will serve as an intermediary between a variable in the standard project and a variable in the safety project. You will have to link the Alias Device to both an I/O variable from the standard project and a variable from the safety project. First, select the desired Alias Device from the “Standard Signals” folder in the solution explorer. Then, there will be an option to link the Alias Device to an I/O variable from the standard project. Note that you must select an input variable from the standard project if your Alias Device is an output variable and vice versa.

To link the Alias Device to the safety variable, go to the “Variable Mapping” tab in the safety project and click the ellipsis in either the “Assignment” or “Usages” column for the safety variable you want to link to your Alias Device. Input Alias Devices should use the “Assignment” column and output Alias Devices the “Usages” column.  Now, your Alias Device is connected to both a standard variable and a safety variable. This provides a useful way of sending information between the safety and standard projects.
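
On the standard-project side, the variables you link to a Standard Alias Device are ordinary PLC variables exposed to the I/O system. A minimal sketch of such declarations is shown below (the names and comments are illustrative assumptions, not taken from this post):

 
// Hypothetical GVL in the standard PLC project. The AT %Q* / %I* placeholders
// allow the variables to be linked to the Standard Alias Devices in the safety project.
VAR_GLOBAL
    bSafetyRestartRequest AT %Q* : BOOL; // standard output -> input Alias Device in the safety project
    bEStopCircuitOk       AT %I* : BOOL; // standard input <- output Alias Device in the safety project
END_VAR

This matches the note above: an input Alias Device in the safety project links to an output variable in the standard project, and vice versa.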

Conclusion

With this information under your belt, you now possess the knowledge to develop your safety solution.

Safety is no joke. When you are architecting your safety program, please make sure you are qualified to verify the safety level of the system or are consulting a certified safety expert who can work with you to make sure your system is safe.

If you aren’t properly equipped to develop your own safety solution or want to take your Automation project to the next level,  contact us for help developing your safety solution.

Special thanks to Dominic Del’Olio for his contributions to this blog’s content.

Ryan Druffel | Fri, 18 Apr 2025
Getting Started with TwinSAFE: Creating and Configuring a Project

Beckhoff's TwinSAFE platform offers powerful tools for building safety-critical logic. However, starting a new TwinSAFE project from scratch can feel overwhelming, especially without prior exposure to the safety editor, group configuration, or alias device linking.

This blog walks through the complete process of creating and configuring a TwinSAFE project from the ground up. Whether you're new to safety development or just new to TwinSAFE, this guide will help you navigate the key steps—from setting up your project and selecting the target hardware to linking devices and configuring group ports. As always, if you're unsure about verifying the safety of your system, be sure to work with a certified safety professional or reach out to our team for support. 

This process will be broken down into the following steps:

  1. Initial Project Creation 
  2. Creating TwinSAFE Groups 
  3. Selecting Project Target 
  4. Linking Safety Hardware 
  5. Configuring Group Ports 

The screenshots below are from TwinCAT build 4024.50 and TwinCAT Safety Editor 1.4.8:

1. Initial Project Creation

Start by right-clicking on the “SAFETY” section in your project and selecting “Add New Item…”. From the resulting wizard, I will select “TwinCAT Empty Safety Project” and choose an appropriate name for the project. 


TwinCAT safety project wizard

You should now have an empty safety project like so:

2. Creating TwinSAFE Groups

Background

A TwinSAFE project is composed of a set of TwinSAFE Groups that each operate independently of one another. Each TwinSAFE Group will have an operational status that determines how the group’s outputs are set.

It is important to understand the fail-safe behavior of individual TwinSAFE Groups. When a TwinSAFE Group is not in the “Run” operational state, the group’s outputs will all default to their fail-safe values. Since each group has its own operational state, one group could be in the “Run” state and setting outputs according to its internal logic, while the other group could be in the “Error” state and setting its outputs according to the fail-safe behavior.

The operational state of a TwinSAFE Group can be observed live after the project has been downloaded to a target by monitoring the online data or by mapping the state information to the I/O tree via the group’s properties.

Building Groups

Let's create a TwinSAFE group. Right-click on the safety project and select “Add>New Item…”. Select “TwinSafeGroup Preconfigured Inputs” and give the group an appropriate name. You may notice that this option indicates that it will include preconfigured ErrorAcknowledgement and Run/Stop Alias Devices. These are two inputs into the TwinSAFE Group that will need to be configured later for the group to run once downloaded to the target. See the later section “5. Configuring Group Ports” for more information on how these are used.

You should now see the TwinSAFE Group appear in your project:

3. Selecting Project Target

The project target is the safety controller to which you want to download the project. Some systems may include numerous safety targets. Some drives contain internal safety cards that projects can be downloaded to. Sometimes a safety project will be too complex for a single controller to handle, and other times it simply makes more sense to distribute the safety logic across multiple controllers.

To select the target system for a safety project, double-click on the safety project. From the resulting window, you will need to configure a couple of settings. First, select the type of target system. For this example, I will choose the EL1918 since I have one configured in my IO tree.

Next, click the target select button next to the listed “Physical Device”. A window will pop up that allows you to select a device matching the type you specified in the previous step.

Once the physical device has been selected, confirm that the Safe Address matches that of the physical device. Finally, I suggest that you select the options detailed in the image below. By selecting these options, the project will automatically update the device in the IO tree to match whatever Alias Devices you have configured for communication:

4. Linking Safety Hardware

Next, we will link some hardware into the project. When linking hardware signals, we can choose to manually create our “Alias Devices” or import configuration from the IO tree. I have set up some sample hardware in my project to use as an example:

The easiest way to link this hardware is to right-click on the “Alias Devices” in your TwinSAFE Group and select “Import Alias-Devices(s) from I/O-configuration”. The resulting wizard will allow you to select which devices to import:

After the import, your Alias Devices should include the devices you selected:

If you are performing this import offline, you may have received a notification that the addresses could not be found. Regardless, you should double-click on each of the new connections and verify that the FSoE Address matches the configuration of the dip switches on the hardware:

Configuring a Locally Aliased Device

Certain target systems, such as the EL1918 card, have onboard IO that will need to be linked. These devices may not appear in the “Import Alias-Devices(s) from I/O-configuration” wizard. You will have to configure this device manually. Follow the instructions in the next section to add the Alias Device of the correct type, then change the “Linking Mode” to “Local” instead of “Automatic”:

Configuring Alias Devices Manually

The alternative to using the “Import Alias-Devices(s) from I/O-configuration” option is to add each alias manually. Right-click on “Alias Devices” and select “Add>New Item…”. From the resulting wizard, select the type of device you would like to link. To find the right safety device, you may need to navigate to a specific location in the left navigation pane. Most Beckhoff safety devices will be located under “Safety>EtherCAT>Beckhoff Automation GmbH & Co”.

If you do not see the device you are looking for in the wizard, you may need to delete the Alias Device Templates folder (C:\ProgramData\TwinCAT Safety\AliasDeviceTemplates) and install the most recent version of TE9000. Deleting this folder will clear some cached templates and allow the installer to generate the most up-to-date versions:

After adding the Alias Device, the physical device needs to be mapped. Double-click on the Alias Device and navigate to the “Linking” tab. From here, set the FSoE Address to match the dip switch configuration on the physical device, then select the target for the physical device using the window that appears after pressing the “Select Target” button. After you save the changes, you will see the input and output linkings update automatically:

A Note About Process Images

In the Alias Device, you may notice the “Process Image” tab. The process image describes the layout of data that is being passed to/from the physical device. For the TwinSAFE Group to run, the Process Image of the Alias Device needs to match the configuration on the physical device: 

From this view, the name of each of the inputs can be configured to make the TwinSAFE Group logic more readable.

5. Configuring Group Ports

Each TwinSAFE group has its own set of Group Ports. Group Ports provide status and control of the TwinSAFE Group. While most of these Group Ports are optional, every TwinSAFE Group will need to have both the “Run/Stop” and the “Err Ack” ports configured.

When the safety controller is first powered on, no TwinSAFE Group will be running. All TwinSAFE Groups will boot into a safe state, outputting all fail-safe values. In order to trigger the safety logic and write outputs accordingly, the TwinSAFE Group requires a high signal (1) to be written to the “Run/Stop” Group Port. As long as this is held high and there are no errors within the system, then the TwinSAFE Group will run according to its logic.

If the TwinSAFE Group were to error out, then it would enter an ERROR state and not return to its normal behavior until the “Err Ack” Group Port sees a rising edge. After this has been signaled (and the “Run/Stop” Group Port is still high), the TwinSAFE Group will restart and enter RUN Mode.

For this example, I have a simple PLC project with logic to write to these Group Ports. From the Solution Explorer, link the outputs from the PLC project instance to the corresponding standard outputs on the safety controller that is the target for the safety project:
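
The PLC logic itself can be very simple. The sketch below is one hypothetical way to drive the two mandatory Group Ports; the variable names and the acknowledge source are assumptions for illustration:

 
// Hypothetical standard-PLC program driving the Run/Stop and Err Ack Group Ports.
PROGRAM MAIN
VAR
    bRunStop        AT %Q* : BOOL; // linked to the Run/Stop standard signal on the safety controller
    bErrAck         AT %Q* : BOOL; // linked to the Err Ack standard signal on the safety controller
    bHmiAcknowledge : BOOL;        // e.g., an operator acknowledge button
END_VAR

bRunStop := TRUE;            // hold high so the TwinSAFE Group runs whenever the PLC is running
bErrAck  := bHmiAcknowledge; // a rising edge here acknowledges a group error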

Configuring the Group Ports Manually

If you are working with an existing project and do not have the autoconfigured group ports, you will need to add standard Alias Devices to the TwinSAFE Group, connect them as we did above, and then connect the Alias Devices’ data to the Group Ports from the Variable Mapping window. The Variable Mapping window is covered in more detail in a later post in this series; for now, note that it can be accessed by opening the safety logic for the TwinSAFE Group (the .sal file), at which point it will appear in the IDE.

To add the Alias Devices, consult the section “Configuring Alias Devices Manually” and add two “1 Digital Input (Standard)” devices, one for the “Run/Stop” and one for the “Err Ack”. Once they are added, link them as described above.

Next, open the safety logic for the TwinSAFE Group by double-clicking on the “.sal” file. Once it is open, look for the Variable Mapping window. The Variable Mapping window may appear elsewhere in the IDE and may be an unfocused tab. Mine appears in the lower center section in the same panel as the Error List and Output windows:

We are going to add two variables to the variable mapping. These variables will take the standard input as their assignment and the Group Port as their usage. This configuration will pass the value of the standard input into the group port.

To add the first variable, select the green plus button in the top left. Give the variable a name along the lines of “GroupPort_RunCmd”. Select the ellipsis button in the Assignment cell and select the proper Alias Device input for the run command. Then select the ellipsis button in the Usages cell and select the “RunStop” Group Port input:

Repeat these steps for the “Err Ack” Group Port:

Conclusion

At this point, you should have a Safety Project with the following:

  • Safety Project
    • Target System Configured
  • TwinSAFE Group
  • Standard Alias Devices Configured to connect to Group Ports
  • Safety Alias Devices
    • Local Device Alias
    • Remote Devices Aliases

Now that the safety project is set up, you are ready to get programming.

Want help getting your safety system up and running? Ready to take your Automation project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals.

Ryan Druffel | Fri, 18 Apr 2025
ADG Mission-Critical Applications: It All Starts with Documentation

In Aerospace, Defense, and Government (ADG) projects, reliability isn't just a goal—it's a requirement. These high-stakes missions leave no room for error. As the old adage goes, “A failure to plan is a plan to fail,” and at DMC, we believe that plan starts with documentation.

From the earliest phases of a project, clear and persistent documentation lays the groundwork for alignment, accountability, and compliance. That's why one of the first software engineering requirements in NASA's NPR 7150.2, SWE-013, emphasizes documentation. It ensures that critical decisions and plans are recorded in a referenceable, auditable format—independent of human memory and resilient across the full lifecycle of a project.

Documentation is the backbone of DMC's A.R.T.ful Engineering philosophy, which focuses on Accountability, Reliability, and Traceability. It helps teams coordinate effectively, supports rigorous traceability, and creates a lasting record that enables better decisions at every stage.

The Right Tools for the Job

Good documentation isn't just about writing things down—it's about using the right tools to manage evolving information. Documentation tools must be thoughtfully selected and integrated to meet the unique demands of ADG applications. The best tools combine a wide range of capabilities, including:

  • Easy Development
  • Version Comparison
  • History Management
  • Policy Enforcement
  • Collaboration
  • Structural Evolution
  • Ownership & Accountability
  • Workflow Management
  • Media Integration
  • File Sharing
  • Traceability & Auditing
  • Security & Access Control
  • Automated Summaries
  • Distribution & Editing Control
  • Event Notifications
  • Reuse & Automation
  • Interoperability
  • Search & Linkage
  • Single Sourcing

However, functionality is only part of the equation—tools must also be user-friendly for authors, reviewers, and administrators.

Managing Diverse Information Types

ADG projects rely on a broad spectrum of documentation. Examples include:

  • Requirements Traceability Matrices
  • Risk Assessments & Mitigation Plans
  • System & Software Designs
  • Configuration Items & Change Requests
  • Test Plans & Results
  • User Manuals & SOPs
  • Budgets, Staffing Plans & Gantt Charts
  • Internal and External Communications
  • Meeting Notes & Developer Tasks

Each type of information has a unique structure, workflow, and lifecycle. This variety often leads organizations to use multiple tools—each one suited to a specific task. While specialization is helpful, a proliferation of tools can silo data, hinder collaboration, and obscure the bigger picture.

DMC's Approach: Integration and Insight

At DMC, we specialize in breaking down these silos. We help our clients:

  • Evaluate the best tools for their documentation needs
  • Integrate multiple platforms into a unified, seamless ecosystem
  • Build custom solutions that align with their business requirements

Internally, DMC uses a blend of industry-standard COTS (Commercial Off-the-Shelf) tools and custom-built integrations to manage all aspects of our projects. Our toolset includes:

  • Atlassian Confluence – For collaborative, structured documentation, and planning
  • Monday.com – For workflow management, communication, and traceability
  • GitLab – For source code control, code reviews, change tracking, and continuous integration / continuous delivery (CI/CD)
  • Microsoft SharePoint – For enterprise file storage and access management

What makes our ecosystem powerful is the integration between these tools. For example, a requirement originating from a SharePoint file is translated to Monday.com and then directly linked to a Confluence page describing the design that satisfies the requirement. Source code implementing the design and managed in GitLab is then linked to the Monday.com requirement. Finally, test plans and results stored in Monday.com are linked to the requirements. This creates a living, traceable thread across the entire project, from origin to artifact.

Transforming Documentation into a Strategic Asset

With the right tools and processes, documentation becomes more than just a compliance task—it becomes a strategic advantage:

  • Enabling reliable, traceable decision-making
  • Keeping internal and external stakeholders aligned
  • Supporting compliance with standards like NASA 7150.2
  • Facilitating faster onboarding, reviews, and audits
  • Evolving with the project to remain current and actionable

DMC's expertise lies in creating documentation environments that do all this and more. Whether it's helping you select a platform, integrating your existing tools, or developing a custom solution from scratch, we ensure your documentation works as hard as your team does.

In mission-critical environments, success starts with the plan, and the plan begins with documentation.

Learn more about DMC's Test & Measurement Automation Solutions and contact us for your next project.

________________________

Explore our other key topics in NPR 7150.2 compliance through the remainder of our ADG Mission-Critical Applications series:

  • Building the Right Way with ARTful Design – Understand NASA NPR 7150.2 standards and their use for mission-critical ADG systems. We demonstrate how using A.R.T.ful design, COTS tools, and robust documentation strategies helps to meet and exceed those requirements.
  • Entrusting Your Mission's Objectives to a Partner – Communication is the first step in traceability. Learn how DMC captures client intent from day one, linking early conversations and contract documents into Requirement Traceability Matrices using Monday.com. This aligns directly with SWE-050 and SWE-052.
  • Seeing Your Vision Take Shape Through Designs – Your vision must translate into tangible designs. Learn about how DMC maps requirements to designs using Confluence and custom integrations, generating a live Requirements Coverage Table and validating coverage bidirectionally per 7150.2.
  • The Plan Is the Plan Until the Plan Changes – Change happens. Learn how DMC's configuration and change management systems allow us to adapt while preserving traceability and alignment with SWE-079.
  • Turning Your Vision into Reality – Design becomes code. Learn how DMC's traceability extends into source code across multiple platforms (NI LabVIEW, Python, NI VeriStand), and how we maintain this alignment without overburdening developers.
  • Quality Matters – Close the loop with testing and assurance. Learn how DMC utilizes technology such as GitLab, automated testing frameworks, and custom tooling to perform code reviews, trace test results to requirements, and verify quality compliance with NASA-STD-8739.8.
Chris Cilino | Wed, 16 Apr 2025
Illusions and Laughs: A Welcome Party at Chicago Magic Lounge

One of the best parts of starting a new role at DMC is getting the chance to celebrate with your new coworkers. Welcome parties are a long-standing tradition that lets each group of new hires plan a fun event to meet and connect with the rest of the team outside the office in a social setting.

For our welcome party, fellow new hire Loren and I wanted to do something a little out of the ordinary—and when they suggested the Chicago Magic Lounge, I couldn’t help but be on board!

Behind the Hidden Door 

The event kicked off with a little mystery. Upon arrival, our group was led into a room resembling a laundromat with washing machines and industrial dryers. After a few curious looks and some poking around, we found the secret door that led us into the actual venue: a cozy, speakeasy-style theater with table seating and an intimate stage setup.

Before the main show began, magicians made their rounds performing close-up tricks right at our tables. For about an hour, we were pulled into an interactive experience full of card tricks and quite a few “how did they do that?” moments. 

  

Magic Moments from the Lounge 

The venue was one of the most unique places we’ve visited, with serious hidden gem energy. The magic tricks were so impressive that they felt like real-life glitches, leaving us wide-eyed and entertained from start to finish.

Trent James, the headliner, blended comedy and illusions into a high-energy performance that had the whole room laughing. And just when we thought we’d seen it all, the appearance of a ventriloquist dummy (aka William, his “lifelong friend”) completely caught us off guard—in the best way. It quickly became one of the most talked-about moments of the night.

  

A Welcome Party Worth Remembering

The night perfectly exemplified DMC’s “have fun” core value in action. We weren’t just watching a show—we were laughing together, trying to guess the magician’s next move, and bonding as a new group. It was the type of evening that makes you feel welcomed and excited about what’s ahead.

The fact that DMC hosts a welcome party for everyone who joins the company is one of those standout perks that makes an impression. Getting to plan a fun and unique event helped us feel like part of the team from day one—and the Chicago Magic Lounge was the perfect choice. From hidden entrances to jaw-dropping illusions, the night was full of surprises. It was a fantastic way to kick off our time at DMC and a great reminder that shared laughter and a little bit of magic go a long way in building connections. 

Learn more about DMC’s company culture and check out our open positions!   

Kandra Salazar | Mon, 14 Apr 2025
Hitting the Slopes from Coast to Coast

Every winter, DMC teammates love to hit the slopes together across the country. While the Denver team hosts a weekend of fun at a Yearly Office Event (YOE), other teams planned ski trips to the East Coast and Midwest using our monthly Activity Fund budget. All of these events are partially company-sponsored, and they encourage DMC colleagues to have fun and get to know each other better while doing something they enjoy.

DMSki in Colorado

Once a year, the Denver office is visited by the DMSki fairy, who brings fluffy powder for skiing, plenty of friends from other offices, and lots of fun. This year, the fairy specified an iconic destination: Steamboat Springs, Colorado.

Pre-DMSki: Copper & Winter Park

The most adventurous DMSkiers came to visit Denver a full week in advance to take advantage of our legendary “Pre DMSki” ski trip. They got the full experience of ski commuting in Denver: I-70 traffic, parking lot cookouts, and bluebird days spent shredding the gnar.

This year, the Denver office hosted our visiting friends at Winter Park and Copper Mountain. We were able to use our Activity Fund from February to fund a parking lot cookout, so everyone was well-fueled for a day full of skiing.


Find us our Michelin Star (we even had a handwarmer!)


The Winter Park crew

An Evening at Lucky Strike

The Denver office and all our visiting friends spent the eve of DMSki at Lucky Strike in downtown Denver. We used our March activity fund to provide food, drinks, and fun for the DMCers in attendance.

We enjoyed graham cracker porters while we battled to kill each other in the Killer Queen arcade game, but the greatest victory of the night came when we pooled all our resources (tickets) to purchase a VERY IMPORTANT sign for the Denver office.

We salute Finn and Natalie for their ticket-gathering service at Goatz and Ropes.

  

Locked in

Shredding at Steamboat

On Wednesday, DMCers loaded up their cars and made the three-hour drive from the Denver office to Steamboat Springs, CO. In Steamboat, we marveled at the immaculate terrain for skiing, as well as at the MASSIVE hot tub at the house.

DMSki was packed with unforgettable moments, starting with Maya leading an impromptu yoga session at the Strawberry Hot Springs. Over at the park, Kohmei and Tim J had perfectly synchronized epic wipeouts on jump #3. Tim J redeemed himself by landing a clean 360 alongside Josh W, both sticking the trick like pros. The crew also took a thrilling ride down the bobsled track. Ramone kept the momentum going by sending it on the blues.

With an honorable mention of seeing 40+ DMSkiers take over the mountain in their incredibly sick hockey sweaters. 

  

But first, let me take a selfie

  

WARNING

Taking the Fun Back East: DMSKeast Hits Okemo

While the Denver event crew made memories out west, our East Coast teammates weren't about to miss out on the snowy fun. This year’s DMSKeast trip brought together around 20 teammates for a weekend full of connection, celebration, and a bit of adventure in Okemo. 

  

Life is better on the slopes

Out of office. Skiing with the crew

The event stood out for many reasons—most memorably, we celebrated the birthdays of Emily Blackman and Kohmei Kadoya! My dog, Daisy, also made the trip, adding her charm to the weekend and quickly becoming a favorite among the group and our new mascot. One of the more unexpected highlights was a group of us trying cross-country skiing for the first time. It was challenging and humbling but a great reminder of the fun that comes with learning something new alongside friends. The weekend wrapped up with tired legs, full hearts, and a stronger sense of community.

  

Celebrating with cake and board games!

The final ride

Midwest Joins the Mountain Fun: Da Wiskinson CheeseSkinson

Not to be outdone by the East or West, the Chicago event crew brought their own flavor to DMC’s ski season. 

Keep calm and ski on 

This winter, the Chicago team decided to join in on the DMC ski tradition with our own spin: “Da Wiskinson CheeseSkinson”. About 12 of us made the trip to Alpine Valley Resort in Wisconsin for a full day of skiing, snowboarding, and team bonding.
 
 We met up early in the morning and were quick to hit the slopes. Some of us were seasoned winter sports legends, and others were completely new to the sports. A few tried snowboarding for the first time, and after a couple of early falls and some quick tips, they were crushing it by lunchtime. We took a break at midday to enjoy lunch and a few Spotted Cows, Wisconsin’s iconic beer, while swapping stories and warming up. To wrap up the day, we all headed down the mountain together for a final group run – an awesome way to close out a trip that lived up to its name.

Winter adventures with my snowmies

From Colorado to Vermont to Wisconsin, DMCers made the most of Winter with unforgettable trips, new experiences, and plenty of laughs along the way. Whether it was tackling fresh powder, testing out cross-country skis, or simply sharing stories over lunch, these events captured the spirit of what makes DMC culture special: great people, shared adventures, different backgrounds, and memories we’ll be talking about for a long time. We’re already counting down to next year’s ski season. 

Learn more about DMC’s company culture and check out our open positions

DMC | Mon, 14 Apr 2025
A Magical Evening at the Museum of Illusions in Seattle

The DMC Seattle team had an evening that you’d have to see to believe at the Museum of Illusions. After seeing some intriguing photos of the museum’s quirky exhibits and optical illusions, we decided to organize a team outing using the monthly Activity Fund budget for company-sponsored social outings. Little did we know the magic that was in store for us.

A Mind-Bending Experience 

The museum is located a short walk away from DMC’s Seattle office, so we planned to visit it on a weeknight after work. We kicked off the evening by ordering dinner at the office to fuel up before our adventure. Then, a group of eight of us, including some of our guests, set out to explore the museum. 

Our eyes couldn’t believe what we saw after arriving at the Museum of Illusions. There was a variety of optical illusions to interact with and pose for fun photos.  Some rooms made you feel like you were shrinking or growing while others played tricks on your eyes. Our team had a blast together trying to figure out how the illusions worked and snapping silly photos of each other. 


One of the highlights of the outing was a tunnel at the end of the museum. It had a spinning circular wall that created a dizzying effect, making it tough to maintain your balance. People struggled to walk through it, with some of us nearly stumbling over as we tried to keep our footing through our laughter. 


This Magic Moment 

After regaining our balance, having our minds blown, and taking way too many pictures, we all made our way home. Our magical evening at the Museum of Illusions was the perfect way to embody DMC’s core value of “Have Fun.” It was great to enjoy the museum’s whimsy as a team and literally see things from a different perspective. 


Learn more about DMC’s company culture and check out our open positions.

Steven Fuchs | Wed, 09 Apr 2025
Cloud Deployments in Minutes with Serverless Framework and AWS Lambda

Let's say that you have a RESTful API deployed to some reserved compute resources in AWS. The API's surface is well-designed and is perfectly capable of meeting all of the needs of any clients that consume it. But then, you're told that there is some small internal feature or action that must be implemented in your cloud infrastructure.

For example, say you need to set up a service that, when triggered by an HTTP request, sends an email containing data from the same data sources accessed by your API. The API's codebase could very easily be extended to provide this functionality, but doing so can lead to several types of tech debt.

  • Design - The API surface should abstract the complexities of data retrieval/syncing, so adding endpoints to manage this process goes against the design pattern.
  • Security - If this is an external-facing API, you might run into serious security issues with allowing processes that consume your API to trigger such an action!
  • Cost optimization - This is probably not a concern if your code just needs to send an email, but it becomes one if you need to perform a more compute- or memory-intensive action, like a sync between two data sources. If this data sync is a very bursty workload, then running it on the same resources provisioned for nominal API usage may force you to over-provision those resources.

So, it looks like you need a process within your cloud infrastructure that runs this operation separately from your API. This now seems like much more of an undertaking. The source code for this tiny task should be very quick to implement, but the overhead of provisioning, testing, deploying, and maintaining new resources dedicated to that task seems too high.

Serverless Functions to the Rescue!

Serverless functions like AWS Lambda (which is what we'll focus on) are perfect for these kinds of actions. They're quick and easy to develop and can cost almost nothing compared to reserved compute like EC2 for small tasks like sending an email or two.

The code below is an example of an entire Lambda function implementation, minus any package dependencies. See how quick these can be to develop?

 
var AWS = require('aws-sdk');
AWS.config.update({region: 'us-west-2'});
var ses = new AWS.SES({apiVersion: '2009-12-01'});

exports.handler = async (event) => {
    var params = {}; // fill in your email params (Destination, Message, Source, etc.)
    try {
        var data = await ses.sendEmail(params).promise();
        return {
            statusCode: 200,
            body: JSON.stringify("Email sent! Message ID: " + data.MessageId),
        };
    } catch (error) {
        console.error(error);
        return {
            statusCode: 500,
            body: JSON.stringify("Error sending email: " + error.message),
        };
    }
};

But wait! What if a second data sync action needs to be added? A third? Serverless functions are great for implementing one-off features like this in a vacuum, but managing a whole flock of serverless functions with their own source code, their own VPC configurations, their own deployment processes, etc., can become an unmanageable mess very quickly.

Serverless Framework

The Serverless Framework package provides a suite of Infrastructure-as-Code capabilities built to allow you to deploy and configure one or more serverless functions defined within a single directory. This means that you can easily keep your functions' implementations and configurations all tracked within one repository!

Let's walk through the setup of a project using Serverless Framework to show how simple it is.

Create a Node Project, and Install the Serverless Framework Package

In your project directory, with your unix shell of choice, run the following.

 
npm init
npm install serverless

Use the Serverless Framework CLI to Pick the Project Template

The CLI will provide a list of templates to choose from. For this post, we're going to create a Lambda application targeting Node.js with an Amazon API Gateway integration for triggering our functions.

 
? What do you want to make?
  AWS - Node.js - Starter
> AWS - Node.js - HTTP API
  AWS - Node.js - Scheduled Task
  AWS - Node.js - SQS Worker
  AWS - Node.js - Express API
  AWS - Node.js - Express API with DynamoDB
  AWS - Python - Starter
  AWS - Python - HTTP API
  AWS - Python - Scheduled Task
  AWS - Python - SQS Worker
  AWS - Python - Flask API
  AWS - Python - Flask API with DynamoDB
  Other

Name the project once you've selected the template. 

 
? What do you want to make? AWS - Node.js - HTTP API
? What do you want to call this project? example-project

This should generate the following files in your Node project.

The configuration of each Lambda function and this gateway are all specified in the generated serverless.yaml file.

 
service: example-project
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs18.x
functions:
  api:
    handler: index.handler
    events:
      - httpApi:
          path: /
          method: get

This config tells the API gateway to invoke our Lambda (referenced here as the auto-generated function name of index.handler, which matches the function code's file name and export field) in a Node 18 runtime, whenever a GET request is sent to the API gateway. The auto-generated function (in ./index.js) looks like this.

 
module.exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: "Go Serverless v3.0! Your function executed successfully!",
        input: event,
      },
      null,
      2
    ),
  };
};

Creating the Functions

Let's update index.js to include both of our emailing functions.

 
var AWS = require('aws-sdk');
AWS.config.update({region: 'us-west-2'});
var ses = new AWS.SES({apiVersion: '2009-12-01'});
const sendEmailAndReturnResponse = async (params) => {
  try {
      var data = await ses.sendEmail(params).promise();
      return {
          statusCode: 200,
          body:
    JSON.stringify("Email sent! Message ID: " + data.MessageId),
      };
  } catch (error) {
      console.error(error);
      return {
          statusCode: 500,
          body: JSON.stringify("Error sending email: " + error.message),
      };
  }
};
           
exports.mail1 = async (event) => {
    var params = getEmailParams1(event); // getEmailParams1 is a function that you'll define for your own system
    return sendEmailAndReturnResponse(params);
};
exports.mail2 = async (event) => {
    var params = getEmailParams2(event); // getEmailParams2 is a function that you'll define for your own system
    return sendEmailAndReturnResponse(params);
};

Configuring the Functions

Update the serverless.yaml file to map each function to POST requests sent to its own endpoint, and to route the API gateway endpoints to each function correctly.

 
service: example-project
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs18.x
functions:
  # each function needs its own key and its own route
  mail1:
    handler: index.mail1
    events:
      - httpApi:
          path: /mail1
          method: post
  mail2:
    handler: index.mail2
    events:
      - httpApi:
          path: /mail2
          method: post
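
Deploying the Functions

With the functions and configuration in place, deploy them with the Serverless Framework CLI from the project directory (exact prompts vary by framework version):

 
serverless deploy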

Enter your AWS account credentials, and both of your functions will be deployed to a new Lambda application for you!

Update Your Functions

Updates to the serverless.yaml or the function source code can be easily applied to the deployments by simply re-running the deploy command above. Serverless Framework tracks the configuration of your functions with AWS CloudFormation, so it knows exactly what to update when changes have been pushed.

Learn more about DMC's application development expertise and contact us for your next project. 

Sam Wallace | Tue, 08 Apr 2025
Team Building Through Lego Building

This winter, we brought the popular Lego building challenge back to the DMC Chicago office as an Activity Fund event. The Activity Fund is a budget for monthly, company-sponsored social activities in each DMC office. The Lego event was such a hit last time that I knew we had to do it again, and I decided that it was the perfect indoor activity to enjoy together during the colder weather.


Beating the Cold

I didn’t anticipate how cold it would be on the day of the event. I considered rescheduling when the temperature outside was a brisk -7°C! However, everyone was committed to building together in the office despite the cold weather.


A Full House

We hosted the event in the DMC Chicago office’s largest space, and the turnout was overwhelming—more than 80 people showed up! With such a large crowd, we were at capacity, and some teams had to spread out to other rooms to create a little more space. Attendees got cozy, huddling around whatever counter space they could find. Despite the crowded atmosphere, the room was full of excitement as people dove into the activity. 


Blockbuster Event

The event kicked off with a group picture in front of our massive pile of colorful Lego boxes. Afterward, everyone rummaged through the pile of boxes to find their kit. It was fun to see so many people building together and to watch the Lego kits come together in real time. The space and plant-themed Lego sets were popular, but I chose a car from the Fast and the Furious.  


Cross-Office Connections

Another highlight of the event was spending time with colleagues from other offices who were visiting Chicago for the Electrical Engineering Team Summit. It was fun to get to know them better by building Legos together and for them to be welcomed by the Chicago team. Looking around at the packed room, it was clear that this event had once again brought our team together. 

chicago lego build

chicago lego build

Learn more about DMC’s company culture and check out our open positions.

]]>
Aleks Konstantinovic Mon, 07 Apr 2025 13:00:00 GMT f1397696-738c-4295-afcd-943feb885714:13717
//ultraskinx1.com/latest-thinking/blog/id/13716/2025-denver-labview-user-group-meeting-at-dmc#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=13716 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=13716&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/13716/2025-denver-labview-user-group-meeting-at-dmc DMC is excited to host this quarter's ALARM LabVIEW User Group meeting on Thursday, April 17, 2025! The ALARM (Advanced LabVIEW Architects of the Rocky Mountains) group brings together enthusiasts to discuss programming techniques, design strategies, updates and share experiences.

This event offers a fantastic opportunity to learn, receive updates on NI and NI Software, network with professionals in the Denver area, and have fun! Food and drinks will be provided, and after the presentations, you'll have a chance to relax on DMC's rooftop deck or play a game of pool with the team! If you're interested in attending, please register .​

Agenda:

  1. Meet and Greet (DMC Food & Beverages Provided)
  2. Presentation 1 (Welcome and DMC Overview, Time Permitting: EtherCAT for NI)
  3. Presentation 2 (Pickering PXI Demo with Dan R.)
  4. Presentation 3 (Time Synchronization with Josh R.)
  5. Presentation 4 (Community Sourced)
  6. Closeout and socialization time.

Event Logistics:

  • Location: DMC Denver, 2601 Blake St., Suite 301, Denver, CO 80205​
  • Date: Thursday, April 17, 2025​
  • Time: 6:30 PM - 8:30 PM
  • Parking: The entrance to DMC’s parking lot is located on Blake St. Please refer to the image below.
    • The main entrance of the office building is outlined in blue.
    • Guest parking is in the red area.
    • If those are taken, it is fine to park in the outlined green area in the image below.
    • Street parking near the office is also readily available and free!

Image: Map of guest parking at DMC Denver.

We're looking forward to a great evening with the Denver LabVIEW community. This event is a chance to connect, share ideas, and learn more about how others are using NI tools in real-world applications. Whether you're presenting or just joining the conversation, we're glad to have you be a part of it. 

Ready to join us? . 

]]>
Casey Langenbahn Fri, 04 Apr 2025 13:15:00 GMT f1397696-738c-4295-afcd-943feb885714:13716
//ultraskinx1.com/latest-thinking/blog/id/13715/dmc-quote-board--april-2025#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=13715 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=13715&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/13715/dmc-quote-board--april-2025 Visitors to DMC may notice our ever-changing "Quote Board," documenting the best engineering jokes and team one-liners of the moment. 

DMC quotes

Learn more about DMC's company culture and check out our open positions

]]>
DMC Wed, 02 Apr 2025 17:07:00 GMT f1397696-738c-4295-afcd-943feb885714:13715
//ultraskinx1.com/latest-thinking/blog/id/10268/configuring-plcsim-advanced-for-modbus-testing#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10268 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10268&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/10268/configuring-plcsim-advanced-for-modbus-testing After writing about the benefits of hardware simulation, and PLCSIM Advanced in particular, I spent some time thinking of other great ways I've used this tool to improve the development process. Another area that PLCSIM Advanced really shines is testing Modbus communications.

Perhaps you're in the development phase and want to simulate, or maybe you're commissioning and want to confirm everything will go as planned before downloading to critical systems. In both cases, PLCSIM Advanced is the tool you'll want to know how to use. This blog piggybacks heavily on Nikhil Holay's blog 5 Tips For Getting Started In PLCSIM Advanced, but you might also check out my previous blog Configuring PLCSIM Advanced for PLC and Ignition Development over OPC UA if you've still got questions.

What About Modbus?

In addition to PLCSIM Advanced, you’ll want to grab another tool, Modbus Poll, and install it on the same VM (or host machine) as PLCSIM Advanced. For this example, I’ll simply read the first 10 holding registers I configured on my PLC.

First, I created the MB_SERVER object in my project. A TCON_IP_v4 object is required to configure the Modbus server, and you’ll also want to include a non-optimized DB for your holding registers. For a great in-depth discussion of what’s going on with this block, check out Jason Mayes’s blog Using an S7-1200 PLC as a Modbus TCP Slave.

Screenshot: MB_SERVER object in Siemens TIA Portal.

In this example, we’ll be communicating over TCP/IP, so set the ConnectionType to 11 and LocalPort to 502 (although these should be set by default).

Screenshot: TCON_IP_v4 connection parameters object in Siemens TIA Portal.

When you first open a connection in Modbus Poll, you’ll be prompted to configure the connection settings. Just enter the IP address of your PLCSIM Advanced controller and the same port you configured in Portal, and verify you’re using Modbus TCP/IP.

Screenshot: Connection Setup dialog in Modbus Poll.

After you press OK, it will return to the main screen where you’ll see a table with 10 entries corresponding to the first 10 holding registers on your Modbus server. If all of your communication settings are correct, the data in Modbus Poll should match what’s online with your simulated PLC.

Screenshot left: Modbus holding registers in Modbus Poll. Screenshot right: Modbus holding registers in Siemens TIA Portal.

If you’d like to extend the number of registers or test something other than Read Holding Registers, navigate to Setup -> Read/Write Definition screen to modify these settings.

Screenshot: Read/Write Definitions setup menu in Modbus Poll.

Conclusion

PLC emulation is a great development tool, and PLCSIM Advanced makes it easy to test Modbus communications without physical hardware and without interruptions of service to critical systems.

You can learn more about our Siemens expertise, or contact DMC to discuss your next project.

]]>
Keith Krotzer Wed, 02 Apr 2025 13:06:00 GMT f1397696-738c-4295-afcd-943feb885714:10268
//ultraskinx1.com/latest-thinking/blog/id/13714/adg-mission-critical-applications-building-the-right-way-with-artful-design#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=13714 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=13714&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/13714/adg-mission-critical-applications-building-the-right-way-with-artful-design Aerospace, Defense, and Government (ADG) projects demand unwavering reliability, as failure is not an option. From early-stage research to deployment and beyond, rigorous testing ensures mission-critical assets perform flawlessly throughout their lifecycle. In this blog series, we will explore the key concepts of successfully creating mission-critical applications in the context of complying with NASA NPR 7150.2, beginning by discussing the COTS solutions (Commercial Off-the-Shelf) coupled with custom extensions that enable documentation, a foundational element in efficiently achieving compliance.

NASA's Procedural Requirement (NPR) 7150.2

Developing test systems for large, complex, multi-use scenarios involves integrating performant, flexible hardware with extensible, user-centric software. Given the complexity and scale of the projects, timelines and budgets must be carefully considered and monitored. Many aerospace companies and US government agencies have codified their lessons learned during test system development in various standards, such as NASA Procedural Requirement (NPR) 7150.2, which is aimed at standardizing the software engineering process. The 7150.2 requirement addresses how to ensure the software's robustness and the process by which the software is created, with its foundational belief being that the software engineering process is as vital as writing the code itself. Without the proper process, it's not possible to create the right solution, deliver on time, or deliver on budget.

This makes documentation critical to creating plans and maintaining alignment during all phases of project execution, across all members of your team and partners. Choosing the right documentation tools and creating custom integrations can greatly reduce the hurdles to achieving 7150.2 compliance.

Building Through A.R.T.ful Design

At DMC, we employ an A.R.T.ful engineering philosophy that focuses on Accountability, Reliability, and Traceability in all that we do. An example of this is our collaboration with Northrop Grumman on the NASA Space Launch System (SLS) Booster Obsolescence Life Extension (BOLE) project, where DMC met and exceeded industry standards, processing approximately 100 documents (each with one or more versions) flowed down from Northrop Grumman that define not only the end article's requirements but also the process by which the end article must be constructed (7150.2 being one of those documents). The documents were synthesized into our databases, and each requirement was catalogued.

DMC created numerous plans and designs in response to these requirements while involving Northrop Grumman in the requirement refinement process. Our team of expert developers created both hardware and software solutions, clearly indicating in code where the solutions met designs. Before, during, and after development, our team followed quality assurance processes so that the system would be sufficiently tested, keeping a traceable record of results back to requirements.

Putting Principles into Practice

Successfully executing mission-critical ADG applications requires more than just technical skill—it takes a disciplined, process-driven approach centered on Accountability, Reliability, and Traceability. At DMC, we've proven through countless successful projects that our engineering philosophy enables us to exceed stringent compliance standards while remaining flexible and collaborative throughout the project lifecycle. In short, as A.R.T.ful engineers, "We say what we do, we do what we say, and we prove that we did it."

Learn more about DMC's Test & Measurement Automation Solutions and contact us for your next project.

________________________

Explore our other key topics in NPR 7150.2 compliance through the remainder of our ADG Mission-Critical Applications series:

  • It All Starts with Documentation – Learn about the tools and digital ecosystems that form the baseline for persistent, searchable, and traceable project knowledge and planning. We’ll show how our use of platforms like Confluence, Monday.com, and SharePoint lays the groundwork for compliance with SWE-013 and other documentation-related requirements.
  • Entrusting Your Mission's Objectives to a Partner – Communication is the first step in traceability. Learn how DMC captures client intent from day one, linking early conversations and contract documents into Requirement Traceability Matrices using Monday.com. This aligns directly with SWE-050 and SWE-052.
  • Seeing Your Vision Take Shape Through Designs – Your vision must translate into tangible designs. Learn about how DMC maps requirements to designs using Confluence and custom integrations, generating a live Requirements Coverage Table and validating coverage bidirectionally per 7150.2.
  • The Plan Is the Plan Until the Plan Changes – Change happens. Learn how DMC's configuration and change management systems allow us to adapt while preserving traceability and alignment with SWE-079.
  • Turning Your Vision into Reality – Design becomes code. Learn how DMC's traceability extends into source code across multiple platforms (NI LabVIEW, Python, NI VeriStand), and how we maintain this alignment without overburdening developers.
  • Quality Matters – Close the loop with testing and assurance. Learn how DMC utilizes technology such as GitLab, automated testing frameworks, and custom tooling to perform code reviews, trace test results to requirements, and verify quality compliance with NASA-STD-8739.8.
]]>
Chris Cilino Tue, 01 Apr 2025 17:30:00 GMT f1397696-738c-4295-afcd-943feb885714:13714
//ultraskinx1.com/latest-thinking/blog/id/10263/configuring-plcsim-advanced-for-plc-and-ignition-development-over-opc-ua#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10263 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10263&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/10263/configuring-plcsim-advanced-for-plc-and-ignition-development-over-opc-ua The ability to simulate hardware is an invaluable tool to keep in your development kit. It allows you to develop remotely, test changes without interrupting critical systems, and more. This blog serves as an addendum to Nikhil Holay’s informative blog 5 Tips For Getting Started In PLCSIM Advanced.

What if PLCSIM Advanced Doesn’t Work Out of the Box?

Generally, configuration of PLCSIM is straightforward. However, you may run into a few hiccups along the way. In both of the following cases, PLCSIM provides helpful advice to a smooth resolution. For example, you may attempt to configure your adapter and receive the following notification.

Screenshot: Siemens PLCSIM Virtual Ethernet Adapter was not found.

If that’s the case, reinstalling the software is a quick way to get on with your project. You may also come across the following.

Screenshot: Siemens PLCSIM Virtual Ethernet Adapter is disabled.

This is even easier to handle. Browse to your Network Connections and simply enable the correct virtual adapter.

Screenshot: Enable the Siemens PLCSIM Virtual Ethernet Adapter in Network Connections.

Configuring an S7-1500 Processor

With those details out of the way, you should be ready to go! But where exactly are you headed? Are you looking to develop code for a Siemens PLC that supports OPC UA? Are you building an Ignition HMI project? Do you want to build each of these out on the same machine?

If you answered an emphatic "Yes!" to the above, PLCSIM Advanced has you covered. Just make sure to select a processor in TIA Portal that has an integrated OPC UA server. For an S7-1500, you’ll need V2.0 or above. The same is true for an ET 200, with the caveat that the ET 200S series controllers don’t support OPC UA. PLCSIM Advanced does not support S7-1200 controllers at the time of publishing.

If you're adapting an existing solution to fit your development needs, fear not. Changing the version of your virtual controller is even easier than swapping real hardware. In the Devices & Network view, right click on the PLC and select “Change Device.” On the left, Portal will give you a description of the current device, and on the right you can select a new device and its version from the dropdown menu.

Screenshot: Changing S7-1500 PLC device version in Siemens TIA Portal.

How Do I Configure the OPC UA Server Settings?

After creating your device, you’ll want to configure the OPC UA server options, again in TIA Portal. Head to the device Properties and click on OPC UA. You’ll want to Activate the server and proceed, leaving the remaining settings at their defaults.

Screenshot: Activating OPC UA server in Siemens TIA Portal.

You’ll also need to select the appropriate license under the Runtime licenses menu.

Screenshot: Activating OPC UA license in Siemens TIA Portal.

After everything is configured in your project, you’re good to go! Download your project to the PLCSIM Advanced controller you already created as if it were real hardware–see Tip One, Step 3 in Nikhil’s blog.

Configuring PLCSIM Advanced for Development Across Machines

If you’re planning to do development in TIA Portal and Ignition on the same local machine (or VM), use the option for TCP/IP communication with the <Local> network adapter selected. If your Ignition server resides on a different machine or VM, you’ll want to select the network adapter of the machine running PLCSIM.

How does all of this work on the Ignition side? Thankfully, it’s as straightforward as working with real hardware. On your Ignition server, add a new OPC UA Connection, point the Endpoint URL to your virtual PLC, and proceed as usual.

Screenshot: Configuring a new OPC Connection in Inductive Automation Ignition.

If the two pieces of software are on the same machine or VM, you should be ready to go. If your Ignition server and PLCSIM are on separate VMs and Ignition indicates there is an issue, you’ll want to make sure the two are both set to share the host’s network (via NAT or another method). Check your VM’s Network Adapter settings to be sure.

Conclusion

PLC emulation is a great development tool, and PLCSIM Advanced makes it easy to develop your Siemens PLC and Ignition projects concurrently.

You can learn more about our Siemens and Ignition expertise, or contact DMC to discuss your next project.

]]>
Keith Krotzer Tue, 01 Apr 2025 13:40:00 GMT f1397696-738c-4295-afcd-943feb885714:10263
//ultraskinx1.com/latest-thinking/blog/id/13713/avalonia-ui-noteworthy-differences-from-wpf#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=13713 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=13713&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/13713/avalonia-ui-noteworthy-differences-from-wpf

Overview

Avalonia UI is a cross-platform UI framework that is considered a “spiritual successor” to WPF. If you are brand new to Avalonia UI, you should check out this blog, Avalonia UI: Introduction and Initial Impression, to learn the basics of what Avalonia UI is. This blog builds on that foundation and will help you to better understand the noteworthy differences between developing with Avalonia and WPF.

Styling

In Avalonia, a Style is more similar to a CSS style than a WPF style. The Avalonia equivalent of a WPF Style is a Control Theme.

A Style should be used to style a control based on its content or purpose within the application, whereas a Control Theme should be used for shared theming between all controls of that type. For example, a TextBlock might have a Control Theme to set a shared font type and font color, but a TextBlock Style would alter the font weight and font size. In markup, both are built from Setter elements: a Control Theme targets every TextBlock through its TargetType, while a Style is applied selectively through a selector (for example, TextBlock.header for TextBlocks with the header class).
Having a more layered styling approach in Avalonia is beneficial since it allows you to use Styles to substitute a control’s property values without needing to override the entire theme. Conversely, in WPF, you can get stuck needing to override an entire theme if there is a theme applied to a control without an x:Key defined. If there is an x:Key defined in WPF, you can take advantage of the BasedOn property to build upon a pre-defined theme.

Avalonia Styles are placed in the Styles collection of a control and Control Themes are placed in the Resources collection of a control. Comparatively, in WPF, the Styles are all placed in the Resources collection.

Styles: Conditional Classes

A feature that stood out to me significantly is conditional classes for Avalonia Styles. They allow you to alter the Style of a control based on a bound condition. In WPF, doing something similar is overly verbose and complicated, requiring the use of a DataTrigger; in Avalonia, far less markup code is needed.

The examples below demonstrate conditionally changing the TextBlock foreground based on a bound property.

In Avalonia, a conditional class (for example, an error class bound to a property on the view model) applies the Error Style only while the bound property is true. Since you can conditionally pass in the property, both the DeviceState text and the SystemState text can share the Style with very little code.
In WPF, you must rely on a DataTrigger inside each Style to change the Foreground value. Since the SystemState text and the DeviceState text rely on different bound properties as their condition, they cannot share the Style, which leads to less code reuse.
Controls

Controls in Avalonia are very similar to WPF, but there are a few tweaks that make the framework quicker to work with but potentially less feature-rich.

Visualization and Animations

Avalonia does not support the VisualStateManager, and it instead relies on styles and pseudoclasses such as :hover, :focus, and :checked. Additionally, Avalonia does not use Storyboards, but rather it has simpler animations that use Transitions and Animation.

Grid Row and Column Definitions

A low-hanging fruit that Avalonia improved is how rows and columns are defined for the Grid control. In Avalonia, the layout can be declared in a single line by setting RowDefinitions and ColumnDefinitions as attributes directly on the Grid element (for example, RowDefinitions="Auto,*,Auto"). In WPF, the same layout requires separate Grid.RowDefinitions and Grid.ColumnDefinitions blocks with one RowDefinition or ColumnDefinition element per row or column.
Compiled Bindings

Another nice feature in Avalonia is the option to use compiled bindings. This can be very helpful since it allows the developer to catch binding errors faster: they are caught at compile time instead of runtime.

There are some limitations with compiled bindings though. For instance, they require a static DataContext and a defined data context type using x:DataType. However, for most use cases, they will be helpful in debugging and development!

Takeaways

When comparing Avalonia and WPF, the running theme is that Avalonia prioritizes flexibility to support multiple platforms and succinct code. This makes Avalonia lighter weight and a great option for cross-platform development. For complex, feature-rich Windows development, WPF has the edge over Avalonia.

At DMC, we are always looking towards the future and learning new technologies to better support the wide variety of needs our customers have. We are excited to continue exploring Avalonia UI to provide expert solutions for cross-platform, desktop development.

Ready to take your Application Development project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals.

]]>
Hannah Laverty Mon, 31 Mar 2025 18:31:00 GMT f1397696-738c-4295-afcd-943feb885714:13713
//ultraskinx1.com/latest-thinking/blog/id/10389/migrating-tag-history-in-ignitions-tag-historian#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10389 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10389&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/10389/migrating-tag-history-in-ignitions-tag-historian The Ignition Tag Historian module is a powerful tool that efficiently stores tag history in designated SQL tables. This data can be accessed through scripting, reporting, and historical bindings and has built-in features such as automatic purging and partitioning to avoid overloading the database. The module also allows for easy viewing of tag trends by allowing users to drag and drop tags on Vision's Easy Chart and Perspective's Power Chart components.

Despite the great capabilities of the Tag Historian, the format in which data is stored can complicate the process of directly querying these tables. As a result, migrating tag history from one tag to another after, say, renaming or moving a tag may prove challenging. These types of migrations may need to be performed after reorganizing the tag structure of a project or replacing a sensor while maintaining continuous tag history data.

The Example

For simplicity's sake, we will go over an example in which a new tag is added to a project under these conditions:

  1. The old and new tags are standalone tags not found in any UDTs.
  2. The tag group of the tags did not change.
  3. Neither the gateway nor the gateway name changed.
  4. Only one gateway is used.

It's worth noting that tag history can still be migrated when some or none of the above conditions apply! There are several nuances each of the conditions requires that one must consider when performing migration. Knowledge of all Ignition built-in historian tables is key to ensuring a successful migration for more complex conditions. Due to the simplicity of the example used, we will only focus on two tables.

Let's start off with a single tag called "TagX". We want to transfer the history of this tag to a new tag, "TagY", that we'll introduce later. According to this Power Chart image, TagX has been active since at least 12:40PM and has been holding at a steady value of 2.

Power Chart displaying history for TagX

Tag Historian Tables

We can find this tag in one of Ignition's historian tables, sqlth_te. This contains all of the non-data information about a tag. The information in this table is a great starting point to understanding the other tables created by the historian. The most important columns are:

  • id: A unique identifier for a tag
  • tagpath: The tag path (without the provider), all lowercase
  • scid: id of the tag group that tag is in
  • created: An epoch timestamp of when the tag began storing tag history
  • retired: An epoch timestamp of when the tag stopped storing tag history or was deleted

Database query browser displaying sqlth_te table

We also have the tables in which the history is actually stored. If monthly partitioning is set up, the name of the table will look something like sqlt_data_A_B_C where A is the driver id or gateway identifier, B is the year, and C is the month. In this example, our data table is sqlt_data_1_2022_11, so we know that this data was taken in November of 2022. The columns here log the actual values of the tags storing history. The columns in this table are:

  • tagid: The id from the sqlth_te table
  • ___value: The value recorded for the tag
  • dataintegrity: The quality code of the value recorded
  • t_stamp: The epoch timestamp at which the value was recorded

We can also see TagX, which has a tagid of 5, started at a value of 0, and then transitioned to a value of 2 at 12:40PM.

Database query browser displaying data table

Adding TagY

Next, we introduce a new tag called "TagY", assigned a tagid of 6 by Ignition, that begins collecting information at 12:52.

Database query browser displaying the sqlth_te table with TagX and TagY

From 12:52 to about 12:53, both tags are reading a value of 2. The sensor for TagX goes offline just before 12:53, leaving only TagY online. At about 12:58, TagY then drops down to 1.

Power Chart displaying history for TagX and TagY

Database query browser displaying the data table for both TagX and TagY

Performing the History Migration

To transfer the history of TagX to TagY, all we have to do is run a simple update query that replaces the tagpath of TagX with the tagpath of TagY (something like UPDATE sqlth_te SET tagpath = 'tagy' WHERE tagpath = 'tagx'). The following script examples use Ignition's script console to query the database. Because these scripts are only querying the database, one may use whatever query tool they prefer.

Script console running a query

The sqlth_te table now looks like this.

Database query browser displaying the sqlth_te table for TagY

While sqlt_data_1_2022_11 will still look the same, the tag history of TagX has transferred over to TagY. The history for TagY now extends back beyond 12:52PM, its original start time, and TagX is no longer in the tag browser for the Power Chart component. We can also see that Ignition continues to collect and store history for TagY when the value rises to 3 at 1:11PM.

Power Chart displaying that TagX's history has been transferred over to TagY

Learn more about DMC's Ignition expertise, and contact us to get started on your next HMI, SCADA, or MES project!

]]>
Isabelle Chan Fri, 28 Mar 2025 13:00:00 GMT f1397696-738c-4295-afcd-943feb885714:10389
//ultraskinx1.com/latest-thinking/blog/id/12654/exploring-thingsboard-an-iot-platform-for-your-next-project#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=12654 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=12654&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/12654/exploring-thingsboard-an-iot-platform-for-your-next-project The Internet of Things (IoT) is transforming industries by enabling smarter, data-driven decisions. To fully harness the power of IoT, you need the right platform—and that’s where ThingsBoard comes in. As a comprehensive IoT platform, ThingsBoard excels in device management, data visualization, and more, making it an ideal choice for various software development projects.

At DMC, we offer a wide range of software consulting services, and our experience with ThingsBoard is just one part of our extensive toolkit. Whether you need help with IoT solutions, custom software development, or anything in between, we’ve got you covered. In this post, we’ll explore the capabilities of ThingsBoard and how you can leverage it to build a professional IoT solution.

What is ThingsBoard?

ThingsBoard is an IoT platform that stands out for its versatility in device management, data visualization, and customer management. These features are all available out of the box and can be set up with minimal overhead. It supports multiple device protocols, including MQTT, CoAP, HTTP, and others, making it compatible with a wide array of hardware and software systems.
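
As a small illustration of how lightweight device integration can be, here is a sketch that posts a telemetry reading over HTTP using ThingsBoard's device telemetry endpoint; the host, access token, and telemetry keys below are placeholders rather than values from any real project:

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class ThingsBoardTelemetrySketch
{
    public static async Task SendReadingAsync()
    {
        const string host = "https://thingsboard.example.com";   // placeholder host
        const string accessToken = "YOUR_DEVICE_ACCESS_TOKEN";    // placeholder device token

        using var client = new HttpClient();

        // ThingsBoard's device HTTP API accepts telemetry as a JSON body
        // posted to /api/v1/<device access token>/telemetry.
        var payload = new StringContent(
            "{\"temperature\": 21.5, \"pumpRunning\": true}",
            Encoding.UTF8,
            "application/json");

        var response = await client.PostAsync(
            $"{host}/api/v1/{accessToken}/telemetry", payload);

        response.EnsureSuccessStatusCode();
    }
}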

The platform offers flexible hosting options—whether you prefer cloud-based solutions on Azure or AWS, or a self-hosted environment. This flexibility allows you to scale your IoT projects as needed, making ThingsBoard a strong choice for businesses of all sizes.

How DMC Leverages ThingsBoard for IoT Success

At DMC, our software consulting services encompass a wide range of technologies, and ThingsBoard is one of the many tools we use to deliver top-notch IoT solutions.

AirVac Vacuum Sewer Systems

For AirVac, a subsidiary of Aqseptance Group, we utilized ThingsBoard to develop a custom IoT dashboard for managing vacuum sewer systems. The project involved real-time telemetry data and a detailed vacuum station dashboard, demonstrating ThingsBoard’s robust visualization capabilities.

The Device Map Dashboard allows an operator to easily view all their deployed devices and any active alarms or relevant telemetry on them.

Live telemetry plots with adjustable time ranges are incredibly simple to implement in ThingsBoard, as shown below.

More traditional SCADA-style dashboards can be implemented as well, as shown below.

Why Choose ThingsBoard for Your IoT and Software Development Needs

Data Visualization

One of the key strengths of ThingsBoard is its powerful data visualization tools. The platform’s built-in widgets and customizable dashboards allow for real-time monitoring and analysis, making it easier to gain actionable insights from your IoT data. You can see ThingsBoard in action for yourself (view it in Dark Mode by clicking the icon in the top right).

Asset and Device Management

ThingsBoard makes asset management straightforward with its built-in user access controls. Whether managing a handful of devices or thousands, the platform’s scalable architecture ensures that your IoT solution grows alongside your business. 

Alarm and Notification Handling

ThingsBoard’s alarm handling features are another highlight. With trigger-based alarms linked to device telemetry, you can integrate notifications via services like Twilio to stay informed about critical events in real-time.

Partner with DMC for Your Next Software Project

Whether you’re exploring the potential of IoT with ThingsBoard or need a partner for a complex software development project, DMC is here to help. Our team of experts is ready to guide you through every step of your project, ensuring that you achieve your goals efficiently and effectively.

If you’re searching for a reliable partner to implement an IoT solution or any other software project, look no further. Let’s collaborate to bring your vision to life with a platform and a partner that understands the full spectrum of software development needs.

Learn more about DMC's IoT expertise and contact us for your next project. 

]]>
Brayton Larson Thu, 27 Mar 2025 13:12:00 GMT f1397696-738c-4295-afcd-943feb885714:12654
//ultraskinx1.com/latest-thinking/blog/id/10354/nunit-testing-and-using-moq-in-c#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10354 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10354&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/10354/nunit-testing-and-using-moq-in-c *To the tune of Willy Wonka singing*
 Come with me, and you’ll be, in a world of unit testing informationnnnn.

Unit testing! Unit testing is a great way to ensure that any updates or new functionality added to your code runs smoothly. With well-written tests, you can catch anything that may have been broken by changed methods. 

Getting started can be a little tricky as there are some caveats and neat tricks that are hard to identify at first. In this blog you’ll learn how to get started with NUnit unit testing in C#, use Moq to help enhance these tests, and get testing like a pro. 

Getting Started

To begin, open your project in Visual Studio Enterprise. If the project is opened in a Community or other edition of Visual Studio, you will not be able to view specific breakdowns of the code coverage by section. If you are not concerned with looking at code coverage, this shouldn’t be an issue.

Once the project is opened, select “Test” in the top menu and navigate down to “Test Explorer” to view a layout of all tests.

 Navigating to the Test explorer in Visual Studio Enterprise

This should open the Test Explorer for you.

The Test Explorer

The Test Explorer is where you can view all the tests built in your project. To run all the tests at once, select the multi-layered play button in the top left corner. To run individual groups of tests, you can open nested tests and run either groups or individual tests by using the right-hand play button shown in the snapshot below.

Once you run all the tests using the play button in the top left corner, you will be able to view the passing or failing status of the test grouping based on the icon next to the grouping. 

In this case, all our tests have passed and have a green check mark next to them. If a test inside the grouping fails, the grouping will be marked by a red X. 

You can also use the highlighted icons to filter by passing tests, failing tests, or tests that are not run.

Where Tests Are Located

To locate tests, drill down into the Test Explorer and double click a test. This will take you to its location in your project.

Viewing Code Coverage

To view the code coverage, once you have successfully opened the Test Explorer and run all your tests, return to the “Test” menu and select “Analyze Code Coverage for All Tests.”
 
 This will bring up the “Code Coverage Results” window, which you can drill down into to view coverage by sections of the project.

Installing Necessary NuGet Packages

In order to run the tests, you will need to have a few NuGet packages installed in each section of the project with tests present. In this case, tests are present in both Mars.NUnit and in NUnit under Reports, so we will want our packages installed in both sections.

To do this, right click on the portion of the project you would like to install the packages and select “Manage NuGet Packages."

From here, the packages you will need to install are:

  • Microsoft.CodeCoverage
  • Microsoft.NET.Test.Sdk
  • MSTest.TestAdapter
  • NUnit
  • NUnit3TestAdapter

These packages and their respective versions are also listed in the screenshot below.

Writing your Tests and Using Moq

Writing unit tests is straightforward; the process can be as simple or as complex as you would like. For example, I created a method called AddTwo, which does exactly as the name implies: adds two to my input. 

Writing tests using Moq

I also wrote a test with three test cases. This checks that when I add two my answer is what I expected.
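
As a rough sketch (the class and test names here are assumed from the description, not taken from the original screenshots), the method and its test might look something like this:

using NUnit.Framework;

public class AddingFunctionsClass
{
    // Adds two to the input, exactly as the name implies.
    public int AddTwo(int number) => number + 2;
}

[TestFixture]
public class AddingFunctionsTests
{
    // Three test cases checking that adding two gives the expected answer.
    [TestCase(0, 2)]
    [TestCase(5, 7)]
    [TestCase(-2, 0)]
    public void TestAddTwoMethod(int input, int expected)
    {
        var functions = new AddingFunctionsClass();
        Assert.That(functions.AddTwo(input), Is.EqualTo(expected));
    }
}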

Writing tests using Moq
As you can see, this is straightforward, and my tests are all passing. Let’s say, however, I had a method called ‘AddThree,’ which depended on a function ‘AddOne’ to produce the desired result, and ‘AddOne’ had either not been completed or was still in development. 

This situation is a great example of where we can use Moq to make our lives easier. As shown below, I’ve created my function AddThree():

Writing tests using Moq
I’ve also defined AddOne as a virtual method so that we can use Moq to mock it. If your method is private (and can’t be scoped to) or is not virtual, you won’t be able to use Moq to get around this issue. I’ve now written a new test called TestAddThreeMethod and used Moq to mock the call to our AddOne method.

In this case, I’ve updated my AddOne function to erroneously add 5 to its input, which would throw off our expected addition of just one.

Writing tests using Moq
Using Moq, we can get around the AddOne method that our AddThree method uses and isolate AddThree, only testing AddThree's functionality.

Writing tests using Moq
What I’ve done in the above image is set up a Mock class of my AddingFunctionsClass. In the line below that, I’ve specified that whenever AddOne gets called on my mockFunctionClass, it will instead use what I have entered in my Returns(), which is my number + 1, the correct output of AddOne.
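
A minimal sketch of that setup, with the class repeated so it stands on its own (the body of AddThree is assumed; the virtual AddOne and the Moq calls mirror the description above):

using Moq;
using NUnit.Framework;

public class AddingFunctionsClass
{
    // Still in development: erroneously adds 5 instead of 1.
    public virtual int AddOne(int number) => number + 5;

    // Assumed implementation: AddThree depends on AddOne for part of its result.
    public int AddThree(int number) => AddOne(number) + 2;
}

[TestFixture]
public class AddThreeTests
{
    [Test]
    public void TestAddThreeMethod()
    {
        var mockFunctionClass = new Mock<AddingFunctionsClass>();

        // Whenever AddOne is called on the mock, return number + 1
        // (the correct output) instead of running the real method.
        mockFunctionClass
            .Setup(m => m.AddOne(It.IsAny<int>()))
            .Returns((int number) => number + 1);

        // AddThree itself is not mocked, so the real method runs and its
        // internal call to AddOne is intercepted by the mock.
        Assert.That(mockFunctionClass.Object.AddThree(4), Is.EqualTo(7));
    }
}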

You could also hardcode AddOne to return a single value each time; however, our tests would then no longer all pass.

Using SetupSequence

SetupSequence is another powerful tool to use in Moq. Let’s say we have a method that gets called multiple times in a function, but we want it to return variable elements for each time it is called. This can be accomplished by using the SetupSequence. For example, we could set our AddOne method to return different results for each time it was called, as shown below.

Using SetupSequence in Moq

If we had called AddOne three times in this scenario, the first time it would return our input plus one, then 3, then 5. 
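
Reusing mockFunctionClass from the sketch above, that sequence could be set up like this (SetupSequence returns fixed values per call, so the first value simply stands in for "input plus one"):

// Each successive call to AddOne returns the next value in the sequence.
mockFunctionClass
    .SetupSequence(m => m.AddOne(It.IsAny<int>()))
    .Returns(5)   // first call: stands in for "input + 1" (e.g., an input of 4)
    .Returns(3)   // second call
    .Returns(5);  // third call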

Another handy feature of Moq is being able to skip over methods altogether. Let’s say you have a method that doesn’t return a result but does some initialization work. You could follow this same process and simply not set a return for the method. Moq will then skip over the method any time it is called. 

 Using SetupSequence in Moq

This statement says any time we call AddOne, don’t return anything, and don’t run AddOne either.
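
In code, that can be as simple as a Setup call with no Returns, again using mockFunctionClass from the sketches above:

// AddOne is set up with no Returns: the real implementation never runs,
// and the mock simply hands back the default value for the return type.
mockFunctionClass.Setup(m => m.AddOne(It.IsAny<int>()));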

These are some basics and helpful tricks with Moq to get you going. There are many more options that can make Moq a powerful tool. With these tools, you’ll be able to write effective and useful tests quickly and make sure that your code runs smoothly and as expected!

NuGet NUnit window

Troubleshooting

It is possible that when unzipping the project and after building the project, the tests refuse to run. Here are some common tricks that we used to ensure the tests were properly compiling:

  • Attempt a Rebuild of the whole project and then try running all tests again.
  • Attempt doing a Clean of the whole project and then building. 
  • Try uninstalling the NUnit and NUnit3TestAdapter NuGet packages and cleaning the project. Reinstall the packages, rebuild the full project, and then try running all tests again.
  • Some tests may not run after getting errors about other sections in the Report folder. Try building these sections in the Reports folder individually, doing a Build all, then running the tests.

Learn more about DMC's C# programming services and contact us today for your next project!

]]>
Kevin Service Mon, 24 Mar 2025 14:31:00 GMT f1397696-738c-4295-afcd-943feb885714:10354
//ultraskinx1.com/latest-thinking/blog/id/10628/simulating-siemens-wincc-unified-hmi-with-real-plc-data#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10628 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10628&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/10628/simulating-siemens-wincc-unified-hmi-with-real-plc-data In this blog we will explore simulating a Siemens WinCC Unified Version 19 HMI program using data from a connected physical PLC. In TIA version 19, the Siemens Communication Settings tool or Set PG/PC Interface is the easiest way to configure the online access path for the PC simulation to communicate with your hardware.

The Simatic Step 7 manager uses the S7ONLINE access point for communication on the PG/PC interface. By default, the access path on S7ONLINE is set to none, since a simulated WinCC HMI program will run disconnected from the PLC program through Simatic Manager. The HMI in this state will not display tags read from the PLC but will use HMI local tags and should be fully navigable.

Connection Parameters

Siemens Communication Settings Windows Control Panel

The easiest tool to use for establishing a simulator connection is the Siemens Communication Settings tool located in the control panel. Opening the tool will provide you with all possible connection modules on the computer as well as their respective parameters and diagnostics tools. From the address menu you can configure your IP address in the network connections menu of your PC.

To establish simulator-hardware connection, open the Access points menu. Select S7ONLINE and open the dropdown arrow. From here you can select the interface parameter which should match the same pathway that the PC uses to communicate with the hardware. The module property should auto populate and match the network connection name you configured.

Siemens Communication Settings Tool

In older versions of TIA Portal there is an interface configuration tool found in the Windows control panel named Set PG/PC Interface. From here, you can set the same pathway that will enable communication between the simulator and PLC.

After opening the tool, navigate to the Access path tab and select S7ONLINE as the access point. In the box below you will see all available interfaces for the path to take. Select the module in the list that matches the communication method you are using to talk to your hardware.

Set PG/PC Interface tool

After selecting the correct module, you can close the tool and launch your WinCC simulation from Simatic Manager or TIA Portal. Provided that you are in contact, the S7ONLINE point should update your simulation with real data coming from the PLC.

Troubleshooting

If the Set PG/PC Interface tool opens but does not display the S7ONLINE access point, the issue may be resolved by downloading the Siemens PC_Identifier hotfix support package linked below.

Troubleshooting the PLC connection can follow the same path as the typical process to go online with hardware. Ensure that you have a stable Ethernet connection to the PLC you are attempting to communicate with. On the S7ONLINE path that you configured, note the module name that auto-populated when the interface parameter was assigned. In your Windows taskbar, search "view network connections." Then, in this menu, locate the Ethernet adapter that has the same module name as on the S7ONLINE path you are trying to use. In the popup window, go to Properties, then IPv4, and assign your own IP address on the same subnet and mask as the desired PLC. Once complete, you should be able to ping the IP of your PLC from the command prompt.

If the simulated instance of your HMI still does not communicate with the PLC, stop the simulation and verify that the PC/PG interface selected when trying to go online with the PLC is the same as configured in either of the tools. Redownloading the program and verifying run mode is active may also assist with troubleshooting measures.

Learn more about DMC's PLC and HMI expertise and contact us for your next project.

]]>
Evan Ripperger Fri, 21 Mar 2025 13:00:00 GMT f1397696-738c-4295-afcd-943feb885714:10628
//ultraskinx1.com/latest-thinking/blog/id/13711/using-a-switch-matrix-for-automated-testing#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=13711 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=13711&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/13711/using-a-switch-matrix-for-automated-testing Using a Switch Matrix for Automated Testing 

In the world of Test & Measurement Automation, a turnkey test system may have tens, hundreds, or even thousands of individual signals or signal pairs that need to be verified during a test. Switching and multiplexing are common ways to change a system's hardware configuration by routing signals between the device under test (DUT) and a measurement device, power source, or other signal types to perform automated tests without having to manually disconnect or reconnect signals. 

What is a Switch Matrix? 

A switch matrix is a controllable hardware module that is comprised of switches organized into rows and columns that form a node, or cross-point, where each x-line (column) and y-line (row) meet. A sketch of a 10x6 switch matrix is shown below: 

10x6 switch matrix

This 10x6 matrix gives a total of 60 “cross-points.” Each cross-point is an intersection between an x-line and a y-line. These cross-points can be energized to connect an x-line to a y-line or x-lines to other x-lines. This example shows a 10x6 matrix; however, switch matrices come in many different varieties and sizes.

For example, PXI(e), LXI, and PCI matrices are available that vary by electrical ratings (current and voltage) and number of cross-points. To get a sense of the scale, a single high-density matrix can have more than 1,000 x-lines with over 4,000 cross-points! When specifying a switch matrix for your application, it is important to have a clear understanding of the maximum number of signals the system may need along with the system's electrical characteristics and requirements. There are even specialized switch matrices for high-current, high-voltage, and radio frequency (RF) signal types. 

Below we will explore the fundamentals of a switch matrix and how they can be used in an example automated test system:

Connecting X-lines to Y-lines 

For example, energizing nodes (9,2) and (10,1) will create a connection between X9 and Y2, and another connection between X10 and Y1. 

connecting x-lines to y-lines

Connecting X-lines to X-lines 

Similarly, energizing nodes (1,1), (2,2), (9,2), and (10,1) will create a connection between X1 and X10, and another connection between X2 and X9. 

connecting x-lines to x-lines in switch matrix

Example Test System Hardware

Consider a DUT that has four data signal outputs and one power input. The test sequence required for this system is as follows: 

  • Step 1: Connect DUT to power source. 
  • Step 2a: Measure DUT Signal 1 with Measurement Device A. 
  • Step 2b: Measure DUT Signal 2 with Measurement Device B. 
  • Step 3a: Measure DUT Signal 3 with Measurement Device A. 
  • Step 3b: Measure DUT Signal 4 with Measurement Device B. 
  • Step 4: Disconnect DUT from power source. 

A test system that is capable of performing these steps may look something like this: 

example test system hardware

Where each of the required hardware connection steps are shown below: 
 

Step 1 – Connect Device Under Test (DUT) to a Power Source 

  • Connect the DUT to the power source by energizing relays (9,2) and (10,1). 
     

Device under test

Step 2 – Measure DUT Signal 1 and Signal 2 

  • Connect Measurement Device A to Signal 1 by energizing relays (1,6) and (2,5). 
  • Connect Measurement Device B to Signal 2 by energizing relays (3,4) and (4,3).  

measure DUT

Step 3 - Measure DUT Signal 3 and Signal 4 

  • Disconnect Measurement Device A from Signal 1 by deenergizing relays (1,6) and (2,5). 
  • Disconnect Measurement Device B from Signal 2 by deenergizing relays (3,4) and (4,3). 
  • Connect Measurement Device A to Signal 3 by energizing relays (5,6) and (6,5). 
  • Connect Measurement Device B to Signal 4 by energizing relays (7,4) and (8,3).  

measure DUT signals

Step 4 - Disconnect DUT from Power Source 

  • Disconnect the DUT from the power source by deenergizing relays (9,2) and (10,1). 

The example above illustrates how a switch matrix can be used to measure multiple signals with a single measurement device. This example uses Measurement Device A to measure DUT Signal 1 and Signal 3; Measurement Device B measures DUT Signal 2 and Signal 4. This example could be modified in several ways to configure the test system for different requirements.

For many applications, all measurement devices and DUT signals are connected to x-lines to reduce the number of y-lines which reduces the number of cross-points and overall hardware complexity. It is always important to read the switch matrix’s user manual to understand any restrictions the specific matrix may have. For example, many switch matrices do not support connecting multiple y-lines to a single x-line. 
 

disconnected DUT
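
To make the sequence above concrete, here is a purely illustrative sketch that models the matrix as a set of energized cross-points and walks through Steps 1 through 4; it is not tied to any particular switch hardware, driver, or vendor API:

using System;
using System.Collections.Generic;

// Purely illustrative model of a switch matrix.
public class SwitchMatrixModel
{
    private readonly HashSet<(int X, int Y)> _energized = new HashSet<(int X, int Y)>();

    public void Energize(int x, int y) => _energized.Add((x, y));
    public void Deenergize(int x, int y) => _energized.Remove((x, y));

    public override string ToString() => string.Join(", ", _energized);
}

public static class ExampleTestSequence
{
    public static void Main()
    {
        var matrix = new SwitchMatrixModel();

        // Step 1: connect the DUT to the power source.
        matrix.Energize(9, 2);
        matrix.Energize(10, 1);

        // Step 2: Measurement Device A -> Signal 1, Measurement Device B -> Signal 2.
        matrix.Energize(1, 6); matrix.Energize(2, 5);
        matrix.Energize(3, 4); matrix.Energize(4, 3);

        // Step 3: release Signals 1 and 2, then Device A -> Signal 3, Device B -> Signal 4.
        matrix.Deenergize(1, 6); matrix.Deenergize(2, 5);
        matrix.Deenergize(3, 4); matrix.Deenergize(4, 3);
        matrix.Energize(5, 6); matrix.Energize(6, 5);
        matrix.Energize(7, 4); matrix.Energize(8, 3);

        // Step 4: disconnect the DUT from the power source.
        matrix.Deenergize(9, 2);
        matrix.Deenergize(10, 1);

        Console.WriteLine(matrix);   // prints any cross-points still energized (none here)
    }
}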

Why use a Switch Matrix? 

A switch matrix is used in a test system to connect and disconnect multiple devices to and from each other automatically, often during a test sequence. For complex automated test equipment (ATE) racks that utilize multiple test devices and high signal count, a switch matrix can help minimize cycle time and eliminate manual setup that would otherwise require an operator. In cases with extremely high signal counts, a switch matrix transforms the impossible into the possible! 

Conclusion 

A switch matrix is a versatile piece of hardware that can help maximize an automated test equipment's (ATE) efficiency by changing the hardware configuration during an automated test. If a test workflow has a high signal count and requires manually connecting and disconnecting signals as part of the test, consider employing a switch matrix as part of your test system. 

For examples of case studies that use a switch matrix, check out the links below: 

Ready to take your Test & Measurement Automation project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals. 

]]>
Derek Tulla Thu, 20 Mar 2025 21:27:00 GMT f1397696-738c-4295-afcd-943feb885714:13711
//ultraskinx1.com/latest-thinking/blog/id/13712/beckhoff-xts-part-3--simulation-and-plc-logic#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=13712 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=13712&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/13712/beckhoff-xts-part-3--simulation-and-plc-logic This is Part 3 of Getting Started with XTS. The first two posts in the series introduce Beckhoff XTS, walk through the installation process, set up a base project, and configure an XTS system. Today we will continue by setting up an XTS simulation and begin adding logic to our application.

Beckhoff XTS Series

Part 1 - Downloads and Starter Project
Part 2 - Setting Up a Physical XTS System

Setting Up an XTS Simulation 

Start the XTS Simulator 

XTS has a built-in simulator that you can start through the XTS simulation builder. This is the gear icon with a blue arrow. 

beckhoff XTS simulator

Start the XTS Simulation Builder

xts simulation builder

This starts the XTS simulation builder. You can select from a few template projects, but modifying the base project will give you the most control over the XTS system being set up. To do this, click the pencil next to modify base project. 

modify xts projects

Configure XTS Modules

configure xts modules

The first tab of the simulation builder is to configure the modules. Here you can add XTS track modules to build your track. In the top bar, you can select from the catalog of XTS modules and hovering over each module will give you the part number. To add a module, either select the module and click the add module plus button or double click on the module. 

configure xts modules

The last four digits of each module’s part number represent its length in millimeters. There are some modules that look the same but have different lengths, so be sure to select the correct length of each part for your application. Below is a simulation track for the XTS larger starter kit. It uses two AT2002-0250, 2 AT2050-0250, and 10 AT2000-0250 modules to form a 4m loop. 

configure xts modules

Configure XTS Movers 

Next, navigate to the Movers tab. 

configure xts movers

Here you can add movers to the XTS track. Similar to the modules, at the top you can select which mover from the Beckhoff XTS catalog you’d like to add to the track. To add a mover, select the type and then click the add mover plus button. Below are the simulation movers for the XTS large starter kit. It uses 10 AT9014-0070 movers. 

configure xts movers

Configure Parts, Tracks, and Stations 

For this example, we’re not going to be using any parts, tracks, or stations. Here’s a quick rundown of what each of them is, with links to the Beckhoff documentation for more detail. 

  • Parts – Groupings of modules. Since our large starter kit is a loop, we only have the one part which was added by default. 
  • Tracks – A route that can be used by movers that is made up of one or multiple parts. Since our large starter kit is a loop with only one part, we also only use the single default track. 
  • Stations – A station in this configurator is still in the beta phase and is currently purely cosmetic. Stations will display in the XTS tool window, but the movers cannot use these stations in their logic. We will set up stations in the PLC logic later, so we don’t have to set any up here. 

Configure XTS Real-Time Settings 

Lastly, we need to set the real-time settings for the XTS system. This is functionally the same as going to SYSTEM > Real-Time, but it’s conveniently here in the configurator as well. First, navigate to the System tab and click the download button to sync the number of cores on your target to the configurator. 

xts real-time settings

Say "yes" to the popup that asks if you want to overwrite the current CPU config. 

overwrite CPU config

The XTS Task 1 has to be configured onto an isolated CPU by itself with a 250μs base time and 1 cycle tick. Isolated cores are represented by an orange rectangle on the right side of the core. You can select which cores are being used with the checkbox on the left side and move tasks around using the up and down arrows on each task. The remaining tasks on the system can be set up to fit your system's needs, but the XTS Task 1 should be configured like the image below. 

xts real-time

If you’re simulating on your engineering computer, keep in mind that all tasks must be on isolated cores, not just the XTS Task 1. The image above has all tasks on isolated cores for this reason. 

Finish the XTS Simulation Builder 

Click the next button a couple times and the XTS simulation builder should process the information to add the XTS system to your solution. This may take a few minutes, but afterwards the simulation builder should close. To start sending movers around the track, go to step 5. 

Adding Logic to the XTS System 

This section assumes that you’re using the XTS starter project from the Beckhoff GitHub (see Part 1). It’s highly recommended to use this since it provides well-tested code for commonly used XTS tools like movers, stations, zones, position triggers, etc. 

Check IdDetectionMode 

Under SYSTEM > TcCOM Object > XTS ProcessingUnit 1, go to the Parameter (Init) tab. Under the section for Mover ID Detection, we will want to set IdDetectionMode to the correct detection mode for our system. 

Check IdDetectionMode

If you’re simulating an XTS system, the IdDetectionMode should be set to Standard. If you’re working with a physical XTS system, then it depends on your movers. Some movers are identified with mover 1 magnet plates. If you have no mover 1 magnet plates, set IdDetectionMode to Standard. With one mover 1 magnet plate, set the IdDetectionMode to Mover1. With multiple mover 1 magnet plates, set the IdDetectionMode to MultipleMover1. 

Setting the GVL Parameters

Under the Main Project, open GVLs > GVL. In this global variable list, you’ll need to set the following variables: 

  • NUM_MOVERS – This should be the number of movers in your XTS system. In our case, 10. 
  • NUM_STATIONS – This should be set to a number greater than or equal to the number of stations you’ll add to your system. In our case, we will keep it at 10. 
  • TRACK_LENGTH – This should be the length of the track in your XTS system in millimeters. In our case, 4000mm. If you don’t know the track length, you can find it under SYSTEM > TcCOM Object > XTS ProcessingUnit 1 > Track 1 in the Parameter (Online) tab. The Length parameter should show an online value which should be entered for TRACK_LENGTH. 
  • NUM_TRACKS – This should be set to the number of tracks in your system. In our case, 1. 

set GVL parameters

Update the Track OTCID in MAIN 

In the MAIN program inside the POUs folder, on line 138 there is an assignment to Track[1].OTCID. This will likely need to be changed. 

update track OTCID

You can find your track’s OTCID under SYSTEM > TcCOM Objects > XTS ProcessingUnit 1 > Track 1 in the Object tab. 

update track OTCID

You can then copy this value and assign it to Track[1].OTCID in MAIN. Make sure to change the “0x” prefix to a “16#” to match structured text formatting for hexadecimal numbers. 

Link the Mover Axes 

Rebuild the solution first by going to the top toolbar and selecting Build > Rebuild Solution. This makes sure that our change to NUM_MOVERS regenerated the correct number of AXIS_REFs to link to. After rebuilding, go to MOTION > NC SAF > Axes and you’ll see a list of axes that are in our solution. They should already be linked to their I/O components, which should be Mover Controllers. The next step is to link them to their PLC AXIS_REFs. Select all the axes using Shift + Click, then right click to select Change Axis PLC Links. 

link mover axes

You should see the mover axis references show up. Click each reference to link all the movers to their NC axes. The linking should look like the image below afterwards. 

link mover axes

Run the Program 

We can now activate our configuration. Do this by clicking the icon shown below. 

run beckhoff xts

Click OK in the next popup. 

xts configuration

If you haven’t already purchased licenses from Beckhoff, you’ll likely get a popup asking if you want to activate trial licenses. Click yes and follow the steps to get a free 7-day trial license. Once you deploy the code onto a production machine, make sure to get a license from your local Beckhoff sales representative. 

beckhoff license

After activating configuration, the target device should switch into run mode, which is indicated by the green gear icon next to the activate configuration button being highlighted instead of the blue one. Once in run mode, we can enable and start the movers. 

Start the Movers 

The solution we pulled from the Beckhoff GitHub already has a simple HMI to get the movers started. Let’s log in by clicking the green door with the arrow pointing towards it. 

start the movers in xts

After logging in, open the visualization under Main Project > VISUs > Controls.

xts visualization

This provides four buttons for interacting with the XTS system. 

  • Enable – Will run the enable sequence for the movers, which initializes their order and assigns movers to tracks. 
  • Start – Once the movers are enabled, this will start running the station logic for the movers. 
  • Stop – This will stop the movers but will keep them enabled. 
  • Disable – This will remove the movers from their tracks, which allows for errors to be reset. If your track is mounted vertically, it’s worth noting that this will depower the movers and they will succumb to gravity. 

Click the enable button. Then click the start button. If you have a physical XTS system, you should see the movers start to move. Both physical and simulation systems can take advantage of the XTS tool window to see a live view of the track. 

Live Monitor the XTS Track 

Go back to the XTS tool window. You should be able to click the live view button to see the current state of the movers on the XTS track. 

xts live monitor

At this point, you should see the movers going around the track according to the station logic that was already in the code we downloaded from Beckhoff’s GitHub. 

 

xts live monitor

This should help get your physical or simulated XTS system moving. However, all XTS systems are different and this default station logic likely isn’t going to work for your application. Next time, we will explore how to set up stations, position triggers, zones, and other things that can help customize the XTS system further. 

Beckhoff XTS Series

Part 1 - Downloads and Starter Project
Part 2 - Setting Up a Physical XTS System

If you’d like help with the next steps for your XTS system, DMC is proud to be a Beckhoff System Integrator and has worked on multiple XTS projects and applications. Learn more about our Beckhoff partnership and contact us for your next project. 

]]>
Carter Silvey Thu, 20 Mar 2025 17:00:00 GMT f1397696-738c-4295-afcd-943feb885714:13712
//ultraskinx1.com/latest-thinking/blog/id/10631/getting-started-with-the-paintable-canvas-in-ignition-vision#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10631 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10631&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/10631/getting-started-with-the-paintable-canvas-in-ignition-vision Inductive Automation’s Ignition is a reliable, proven HMI/SCADA platform with powerful scripting capabilities. Ignition’s most well-known Vision visualization module has been used in plant floor HMIs and desktop screens for over 10 years. One of the most customizable components in Vision’s palette is the Paintable Canvas.

While the Paintable Canvas’s heavy reliance on scripting might be intimidating for a new Ignition developer, this short tutorial will put you on track to create complex dynamic displays in no time!

Paintable Canvas Overview

The Paintable Canvas makes use of the Java2D graphics library, which allows you to create complex shapes, load and edit images, add text, and more.

To add a paintable canvas to your Vision project, simply drag the Paintable Canvas component anywhere on your window. You’ll notice that this component comes with an example script that renders a simple pump graphic. Turn on preview mode to see your paintable canvas area filled with the pump image below. Notice that you can resize the canvas window to any shape or size and the pump will automatically stretch to fill the new area. This is because Java2D is a vector drawing library, allowing components to scale with ease.

 Graphical user interface, text, application, emailDescription automatically generated

A screenshot of a computerDescription automatically generated with low confidence

Now, let's take a look into how this graphic is being created. Open up the script editor by right-clicking on your paintable canvas component in the project browser.

Graphical user interface, applicationDescription automatically generated

Here you will see that the repaint script is prepopulated with the script that creates our static pump icon. You’ll notice that this script is split up into two sections. The first part defines the Java2D shapes that make up our pump, based on a 100x100 pixel area. Then, the second half scales the shapes to the size of the canvas and renders them using the Graphics2D object supplied by the repaint event.

Notice that the order in which the objects are painted is significant. I recommend you take some time to play around with this script, editing some values to see how the pump graphic changes. See if you can change the shape of the pump from a circle to a square or rotate it upside down.
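If you would like a starting point for experimenting, a stripped-down Jython sketch of that same two-part structure is shown below. The shape coordinates and colors here are illustrative only and are not the exact built-in pump script.

from java.awt import Color, BasicStroke
from java.awt.geom import Ellipse2D, Rectangle2D

g = event.graphics

# Part 1: define shapes in a nominal 100x100 drawing space
body = Ellipse2D.Float(10, 10, 80, 80)      # pump body
outlet = Rectangle2D.Float(85, 40, 15, 20)  # discharge stub

# Part 2: scale the 100x100 space to the actual canvas size, then paint
g.scale(event.width / 100.0, event.height / 100.0)

g.setColor(Color.GRAY)
g.fill(outlet)            # painted first, so the body is drawn on top of it
g.setColor(Color.BLUE)
g.fill(body)
g.setColor(Color.BLACK)
g.setStroke(BasicStroke(2.0))
g.draw(body)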

                               

Updating the Canvas from a Dynamic Property

Now, let's see how we can dynamically update the pump at runtime. First, let's add a custom boolean property (we'll call it bAuto) to our Paintable Canvas by right-clicking on the Paintable Canvas component in the Project Browser and selecting “Customizers > Custom Properties."

Then, we will add a toggle button to our window and bind its value to this property. Now we can test pressing our toggle button and verify that our new paintable canvas property changes.

Now it’s time to bring this property into our paintable canvas by editing the “repaint” script below, adding a simple if statement to change the color and text of the status icon based on our new dynamic property.
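A minimal sketch of that edit is shown below. It assumes the custom property is the boolean bAuto described above and that the status indicator is defined in the same 100x100 coordinate space as the pump; the shapes, colors, and text are placeholders rather than the exact example script.

from java.awt import Color
from java.awt.geom import Ellipse2D

g = event.graphics
g.scale(event.width / 100.0, event.height / 100.0)

# Status indicator circle, defined in the same 100x100 space as the pump shapes
statusEllipse = Ellipse2D.Float(35, 35, 30, 30)

# Pick the indicator color and text from the custom bAuto property
if event.source.bAuto:
    g.setColor(Color.GREEN)
    statusText = "AUTO"
else:
    g.setColor(Color.ORANGE)
    statusText = "MAN"

g.fill(statusEllipse)
g.setColor(Color.BLACK)
g.drawString(statusText, 42, 52)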

Now you should be able to change your pump mode at runtime by pressing the toggle button. This works because the repaint script for a Paintable Canvas runs any time a property of the canvas changes, including our custom bAuto property.

Updating Dynamic Properties from the Canvas

Now that we know how to update graphics in the paintable canvas from dynamic properties, let's take a look at writing to properties from within the paintable canvas tool. To do this, we will open back up the component scripting tool and navigate to the mouseClicked event. Let's say we want to change our mode only when the mode indicator circle is clicked. We can do this very easily by using the x and y properties of our mouseClicked event object.

Here, we create a Point2D object with the coordinates of our mouseClicked point (event.x and event.y) and scale it to match the 0-100 scaling of our initial ellipse definition. We can then use the ‘contains’ method of Ellipse2D to determine whether our point lies within the Ellipse. In this case, if the user clicks within the ellipse, we update our mode property.
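Put together, the mouseClicked event script might look roughly like the sketch below, with the property name (bAuto) and ellipse coordinates assumed to match the repaint sketch above.

from java.awt.geom import Point2D, Ellipse2D

canvas = event.source

# Scale the click coordinates back into the 0-100 space used by the shape definitions
clickPoint = Point2D.Float(event.x * 100.0 / canvas.width,
                           event.y * 100.0 / canvas.height)

# Same definition as the status indicator in the repaint script
statusEllipse = Ellipse2D.Float(35, 35, 30, 30)

# Toggle the mode only when the click lands inside the indicator
if statusEllipse.contains(clickPoint):
    canvas.bAuto = not canvas.bAuto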

Now you can click the status ellipse on the pump icon to update the mode, rather than needing a separate toggle button.

Dynamically Resize and Reorient Objects

Now, let's return to our original pump display and try updating our canvas to display a dynamic number of pumps. We will again start by adding a custom property to our Paintable Canvas and connecting it to a numeric text field on our main window. This will be our user input for the number of pumps. Then we will make a few small edits to our repaint script, described below.

Define a new variable numPumps and link it to your new custom property.

Scale your pump graphic in the X-direction so that each pump takes up 1/numPumps of the screen. The vector-oriented nature of Java2D makes it so easy to resize an entire object all at once, rather than adding a scale factor to each shape within the pump.

Next, add a for loop around the ‘Paint Shapes’ portion of the script to paint each pump one at a time. At the end of the for loop, we will need to ensure any relevant properties (in my case the font size) are reset to the values they had at the start of the loop, and then translate our origin to the right by one pump width, as sketched below. 
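Those three edits combined might look roughly like the following sketch, where numPumps is the assumed name of the new custom property and a single ellipse stands in for the full ‘Paint Shapes’ section.

from java.awt import Color
from java.awt.geom import Ellipse2D

g = event.graphics
numPumps = max(1, event.source.numPumps)  # custom property from the numeric text field (guard against 0)

# Scale so each pump's 100x100 drawing space fills 1/numPumps of the canvas width
g.scale(event.width / (100.0 * numPumps), event.height / 100.0)

baseFont = g.getFont()
for i in range(numPumps):
    # --- the original 'Paint Shapes' section goes here ---
    g.setColor(Color.BLUE)
    g.fill(Ellipse2D.Float(10, 10, 80, 80))
    # Reset anything the loop body changed, then shift the origin right by one pump width
    g.setFont(baseFont)
    g.translate(100, 0)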

Now try changing your new user input at runtime and see your Paintable Canvas update!

 

Now you have the tools you need to create simple dynamic objects using the Paintable Canvas tool. With the powerful Java graphics library at your disposal, your creativity is the only limit on what objects you can create!

Learn more about DMC's Ignition programming expertise and contact us today.

]]>
Natalie Pippolo Wed, 19 Mar 2025 13:58:00 GMT f1397696-738c-4295-afcd-943feb885714:10631
//ultraskinx1.com/latest-thinking/blog/id/13707/creating-a-custom-bootloader-for-a-cortex-m-microcontroller#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=13707 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=13707&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/13707/creating-a-custom-bootloader-for-a-cortex-m-microcontroller ARM Cortex-M processors are popular choices for embedded applications, and the ability to easily update the code running in these devices without specialized equipment can be an invaluable asset. To achieve this, it’s common practice to use a bootloader, a specialized program which typically runs before the main application and has the capability to overwrite it. If a device becomes corrupted or new functionality is needed, it can be updated in the field, and if the update is interrupted or firmware which doesn’t work properly is loaded, the bootloader can be used to update the device again and restore functionality.  

This blog will cover the basic components of bootloader implementation—memory layout, in-app programming, and launching the app from the bootloader—using the STM32F779, a Cortex-M7 part, as an example.  

Controlling Memory Layout 

bootloader

The first challenge to solve is placing both the bootloader and the main program within the flash memory of the device. Microcontrollers typically contain a primary (built-in) bootloader that will always execute first and then jump to a certain location in flash memory. As such, for the custom bootloader to run before the main app, it should be placed in the first location where the chip looks, and the main application should be placed at a higher address. 

This doesn’t require any extra effort when setting up the bootloader but will require some changes when building the main application. Once a program has been converted to binary form, it typically needs to be located at a specific address to function correctly. This location information is baked into the binary during the linking step, so to change this, you’ll need to modify the linker script. The simplest approach is building the bootloader and main application completely separately so each has its own linker script. In this case, only small tweaks from the IDE/manufacturer provided script will be needed. 

For the STM32F779AI, this can be done by changing the base address and length of the ‘FLASH’ section for each program so the linker won’t place anything in the region reserved for the other program.

custom bootloader

Main application linker file:

MEMORY
{
  RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 512K
  FLASH (rx) : ORIGIN = 0x08020000, LENGTH = 1920K   /* starts after the 128K reserved for the bootloader */
}

custom bootloader

Bootloader linker file:

MEMORY
{
  /* RAM region unchanged from the manufacturer-provided script */
  FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 128K
}

Adding Support for Firmware Updates 

Now that the bootloader and main application are in place, the next step is giving the bootloader the ability to update the main application. There are any number of ways to supply new firmware to replace the main application, so we’ll move past this and assume that the bootloader already has access to a new binary. 

Regardless of whether it’s stored on a connected USB flash drive, being sent byte by byte over UART, or retrieved from any other source, once the device has the firmware file, the next step is to write it over the main program area. Most embedded devices have support for some form of IAP (In Application Programming) functionality which allows a device to write to its own flash memory while running, though it will be named and implemented differently from chip to chip.

On the STM32F779, for example, IAP is accomplished through a flash controller peripheral, but the chip’s HAL (hardware abstraction layer) provides functions to manage this peripheral, so a developer can just call “HAL_FLASHEx_Erase” and “HAL_FLASH_Program” (and some supporting functions) with the data to write and the addresses to erase and write over. 

Launching the Main Application 

Once the new program has been saved, all that’s left is to run it. There are a few steps to doing this safely, and I’ll cover them in execution order. First, interrupts need to be disabled. The exact implementation will vary between chips, but most will have a simple way to do this. On the STM32F779AI, the CMSIS headers bundled with ST’s HAL provide the following intrinsic for this purpose: 

 __disable_irq();  

Disabling interrupts is necessary because the next step is to swap to the new vector table. The vector table is a chunk of memory which contains critical information for the program to execute, mostly pointers to interrupt handlers. There are different ways to do this depending on architecture; the STM32F779AI’s Cortex-M7 core is built on the Armv7-M architecture, which includes a vector table offset register (VTOR) for this purpose, and the CMSIS headers expose it as part of the SCB structure so it can be written as follows: 

 SCB->VTOR = APP_ADDRESS; //chip will now look at main application’s vector table when handling interrupts  

Finally, jumping execution to a different program requires accessing the core’s registers to change the program counter, a value which indicates the location of the next instruction to run, and the stack pointer, which indicates where temporary information is stored.

For Cortex-M chips like this one, the first word of the binary is the initial location of the stack, and the second word is the address of the reset handler. Loading these into the stack pointer and program counter will cause execution to continue from the new program as if it had initially booted there. This can be done directly with inline assembly instructions, demonstrated below. 

void boot_jump (uint32_t address)
{
  //set the stack pointer to the initial stack location, stored in the first word at 'address'
  asm volatile ("LDR SP, [%0]" :: "r" (address));
  //set the program counter to the reset handler address stored at 'address' + 4
  asm volatile ("LDR PC, [%0]" :: "r" (address + 4));
} 

From here, the main application should run normally. Be sure to turn interrupts back on, since they wouldn’t normally be disabled on startup. 
 
With that, the basic functionality of a bootloader is complete, and additional features like encryption or a user interface can be added to suit a variety of needs. 

Included below is a code summary of the elements discussed, and how they’re implemented on the STM32F779AI.

Custom Bootloader Code Snippet

Ready to take your embedded project to the next level? Contact us today to learn more about our solutions and how we can help you achieve your goals. 

]]>
Ben Dyer Tue, 18 Mar 2025 15:48:00 GMT f1397696-738c-4295-afcd-943feb885714:13707
//ultraskinx1.com/latest-thinking/blog/id/10595/road-warrior-a-long-term-follow-up-review#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=10595 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=10595&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/10595/road-warrior-a-long-term-follow-up-review DMC's Road Warrior role allows controls engineers to travel to client sites and work directly with their automation hardware and software. My blog, Life on the Road: Who are DMC's Road Warriors and What Do We Do?, explains what DMC's Road Warrior role is and details my experience during my first 6-months working as a traveling controls engineer. I now have almost two years of experience in the role, and I wanted to follow up with a “long-term product review.”  

In addition to sharing some of the incredible opportunities and experiences I’ve had on the road, I want to provide a candid look at some of the tougher times and share how future Road Warriors can learn from my past endeavors. 

The Review

I have broken my review into three main categories:

  • Project Work 
  • Benefits and Perks 
  • Lifestyle 

I will give each category a rating out of five stars and then provide my overall verdict. 

Project Work ★★★★★

While onsite, Road Warriors generally do one of a few things: fix existing automation, add onto existing code, test automation, or write code for new automation. In all scenarios, I find that being near hardware makes controls engineering more fulfilling and interesting. Something I enjoy about developing onsite is that code can be tested on equipment shortly after downloading. This short feedback loop helps you write effective code quickly with assurance that it functions as specified. 

As an example, I recently commissioned and wrote the code for a batch making system where heating and temperature control were critical. I rapidly iterated and optimized the heating control logic because I could run mock-heating trials on the tank hardware as often as I wanted. I have found that in the highly automated facilities I have worked in, there is no shortage of controls issues and improvements to be made. 

As an onsite controls engineer, you are challenged each day to learn quickly and apply skills to resolve pressing issues. Often, control issues may be holding up production or resulting in scrapped product, and you must rely on your problem-solving skillset to deliver. I appreciate this challenge. 

Another reason I love onsite controls engineering is that you get to be right in the action. You can see systems operate, look into and measure equipment in panels, and speak with those who operate, manage, and maintain equipment. Such immersion in automation allows the onsite engineer to really get a sense of what automation is all about and develop in the right ways to become a better controls engineer. The day-to-day as an onsite controls engineer is so much fun, and there is never a lack of exciting problems to be solved. 

While onsite, I have had the opportunity to collaborate and network with some extremely intelligent engineers and software developers, both within and external to DMC. Through these connections, I have sharpened my skillset as an automation engineer and grown my appreciation for the industry. 

Road Warrior projects are generally larger scale and longer-term than the average project. Such projects allow an engineer to deep dive into the systems and integration at hand. A thorough understanding of automation and a rewarding problem-solving experience is the result. 

Benefits and Perks ★★★★★

Not only is onsite project work rewarding and exciting, but the perks and benefits are too. I touched on some of these in my first blog, but I want to emphasize additional income, time off, and travel rewards available to Road Warriors. 

Engineers signed up as Road Warriors get a monthly bonus. On top of that, you get per diem and a site bonus while onsite, which amounts to thousands of dollars of additional income each month. Further, DMC’s Road Warriors are also eligible for Extra Effort Bonuses, which are evaluated monthly and are intended to cover situations where engineers are working longer weeks or shifted schedules due to customer needs. 

I have been thankful for the time off that I have accumulated as a Road Warrior. Travelers at DMC accrue vacation 50% faster than non-travelers. Along with the increased accrual rate, Road Warriors are eligible for an exemption to the vacation rollover limit, so if you get staffed on a demanding onsite project, you can save unused time off for the next year. 

Road Warriors also maintain eligibility for DMC’s project recuperation policy. DMC allows you to take a full day off without using PTO, no questions asked, if you are onsite for long enough or are subject to challenging schedules. I have used recoup days in the past to build long weekends into my life, allowing me to return home, enjoy the city I am traveling to, or go see friends and family. 

One perk available to all DMCers is that you get to book your own travel. On a surface level, this may sound like a lot of work, but let me assure you that it is highly beneficial for a few reasons. 

First, you get to pay with your own credit card and leverage credit card perks. Traveling is expensive, so you will be spending a lot of money and then getting refunded by DMC at the end of each month. Although other DMCers are more strategic with their credit card rewards strategy, I like the simplicity of a 2% cashback credit card. I end up banking good money from my credit card due to the high volume of travel expenses. 

Booking your own travel also allows you to build rewards with different companies, which makes booking future vacations more cost effective. I choose to stay at Marriott hotels, and I have earned status, so when I check into hotels I often get free snacks or points. I’m a big snacker, so naturally I go for snacks almost always. 

A bag of food and a can of sauceDescription automatically generated

Welcome gifts from check in at a Marriott Hotel.

Also, on a recent Marriott stay that I booked with points on a vacation in Hawaii, I got a free bottle of champagne and local Hawaiian chocolates because of my status. Not bad!  

Lifestyle ★★★☆☆

The Road Warrior lifestyle is like black coffee—it is an exciting alternative lifestyle with improved benefits and hands on project work, but it comes with certain challenges that may not be palatable for everyone. 

One aspect of the Road Warrior lifestyle that is undeniably delightful is trying different restaurants for free. When onsite, DMCers get per diem (in accordance with GSA rates), which is enough to cover excellent meals. I usually eat out for most meals other than breakfast, and this has allowed me to experience some incredible dishes from restaurants around the country at no cost to me. I've shared a few highlights below:

Wonderful meals while on the road that we paid for with per diem

When onsite with other DMCers, team dinners are a great way to get to know colleagues better and build camaraderie. A highlight of my last project was linking up with other onsite DMCers at a local brewery for taco Tuesday every week. 

Business travel is another aspect of the role I enjoy. DMC uses the IRS standard rate for reimbursing mileage, which is lucrative for long drives to site, especially if your car is fuel efficient. I live in Carmel, IN, and I was recently staffed on a long onsite project in Iowa City, IA, a 5.5-hour drive. I listened to a ton of audiobooks and podcasts during this drive, which was great. 

Road Warriors generally have a say on where they lodge. I have worked with some Road Warriors who enjoy cooking and preferred to stay in Airbnbs, but I generally tend towards hotels to rack up points and have free breakfast and housekeeping. I do, however, have a love-hate relationship with long-term hotels. It is so wonderful coming home from a long day onsite and having your entire room spotless and your bed neatly made with fresh sheets by housekeeping. Also, many of the breakfasts that hotels offer are wonderful, and it is great to wake up and not have to worry about preparing your own food.

There are aspects of hotels that I like less. If you stay in enough hotel rooms, you may have a run-in with bed bugs like I did. It's not the most fun situation (see the Tips for the Road section below). Constantly checking into and out of hotels can also be a bit of a burden. I feel like I am constantly packing and repacking my belongings and shuffling my things around. I do concede that I could simplify my life by not lugging a bike around with me.

Another challenge of living a nomadic lifestyle is developing a consistent friend group. I try to bring my bike with me when I travel so I can join a cycling group and meet people. Being a Road Warrior has given me the opportunity to develop connections with other incredible engineers and different types of people across the country. I will cherish these connections, and I will already know people if I return to those cities.

road warriors around a fire
Stopping for a refreshment and some fireside chat after a group ride.

However, just as I feel like I have gotten to know folks, I have to shoot off to another project which is always bittersweet. I have the opportunity to experience new people and places, but I have to say goodbye to others. I also do my best to stay in contact with friends near my home base in Indianapolis.

Work-life balance, although tricky to maintain while traveling, can be achieved. It is important to stay dialed in while onsite, meet customer needs, and stay safe. Often, customers will have demanding shift requirements (including night shift), so it is important to find balance and avoid burnout. Recoup days are helpful for this.

On many projects, if the cost to travel to a nearby area is less than that to return home, the customer may cover the costs for you to visit that area. Not only does this option save the customer money, but it reduces travel-related fatigue. I am an avid fly fisherman, and it was much cheaper and easier for me to head up to the Driftless area in Wisconsin while working in Iowa than bill time and mileage back to Indianapolis, so I did this trip several times.

Highlights from a weekend fly fishing in the Driftless area

As a Road Warrior, you are able to take vacation as long as you communicate it to your managers and customer well in advance. As with any job, it is important to unwind and take time off. This past year, I took advantage of three DMC YOEs (company-sponsored weekend events), and I often was able to expense flights to these destinations since travel to them was less expensive than returning home.

Colchuck Lake hike
Colchuck Lake hike at the SeaYOEttle

This past year, my brother was getting married in Montana. I traveled to the wedding, and I was able to take the week before the wedding off to explore Montana. It was an incredible experience and a great way to unwind from some tough onsite work. Accumulated miles and rental car points made this vacation very cost effective.

montana hike
Mountain goats seen during a vacation taken to Montana during a long onsite

The time in between projects can be very relaxing, allowing you to reset and get settled in your life. When I am offsite, I often ramp up for my next project, learn new skills, work on internal initiatives, and write blogs (both technical and non-technical).

I would also like to touch briefly on dating and relationships as a Road Warrior. I have a long-term girlfriend in Indianapolis who I am thankful to have supporting me along my travels. I like to think of dating while traveling as “long-distance plus.” Although you do spend most workdays apart from your significant other, travel perks and the ability to expense flights make getting to your person easier and more cost effective. Although it varies by circumstances and project, some customers will allow you to return home every weekend that you would like to. Something that has been a great experience for my girlfriend and me is having her visit while I have been onsite. A few recent highlights are below.

Fun activities while my significant other visited me in Iowa. Left: watching Caitlin Clark play in a University of Iowa basketball game. Right: sampling cider and snacking at Wilson’s Orchard & Farm.
 

Tips for the Road

After almost two years, I wanted to share some tips I picked up along the way to mitigate risk and have a more enjoyable experience on the road:

  • Go onsite. With remote access to a PLC and HMIs readily available, even while onsite, it is tempting to stay at your cozy desk. I encourage any onsite controls engineer to go into the field or factory and be in proximity to the automation you are working on. Not only is this safer when downloading, but you will be able to identify and solve problems more easily just by seeing the equipment operate. Also, you will be able to speak with operators and managers who often have vast expertise with the equipment and the industry you are serving.
  • Save time with digital hotel keys. Most hotels offer a “digital key” option which allows you to check in on the app and use your phone as the key to enter your room. No need to stop by the front desk.
  • Use your sick days. If you are traveling and working hard, chances are you will get sick. Don’t be afraid to use your sick days. After all, a customer does not want you to spread a sickness throughout their company.
  • Be ready for anything. When you show up to a new city, you may not know anyone there. This means that you need to be prepared to handle any situation by yourself. If you drive a lot for your projects, then I recommend a roadside assistance plan to have you covered for car-related breakdowns/issues. Make sure you have a portable battery for your phone. You never want to be in an unfamiliar area with a dead phone and laptop. 
  • Be rental car aware. Rental car companies will try to get you to upgrade your car at the check in desk. Don’t fall for their sales tactics! Always take a video of your rental car, inside and out, before leaving the lot.
  • Check your hotel for bed bugs - there are good resources on how to check for bed bugs in your hotel room and mitigate your risk of getting them, as well as how to get rid of them if you do. 
  • Be careful of injury. Maybe don’t hit that deadlift PR while you are traveling, and maybe skip the double blacks on the weekend during your Colorado onsite. If you can’t walk, you likely won’t be much help onsite, let alone make it onsite.
  • Hike safely. If you are going to go for a hike in a new area, tell at least one other person exactly where you are going and how long you expect it to take, and don’t change your plans. Ever seen 127 Hours?
  • Be aware of your surroundings. Learn what parts of a city to avoid.
  • Find a hobby. Find one thing that you love to do while you are on the road that isn’t work. I keep my sanity by lifting weights, riding my bike, and catching hot yoga classes. Others play video games, run, climb, cook, etc.

Enjoyed some post-work recreation. Left: group gravel ride with another Road Warrior. Right: riding into the sunset.

  • Avoid car repair. Do not service your car while on the road unless it is necessary to keep it running safely. Mechanics are people and accidents can happen. You do not want to deal with car repairs when you are on the road if your mechanic made a mistake.
  • Don’t skimp on insurance. This can cover you in rental cars and if you are driving your personal vehicle to a project site.
  • Stay calm. You 100% will run into issues if you travel long enough. Thanks, Murphy. Be mentally prepared for this, stay calm when things go wrong, and be creative.
  • Ask questions. No question is a bad one when you are learning from someone more experienced.
  • Stay healthy. Eat healthy, sleep well, and stay active. Staying healthy will make every part of your onsite more enjoyable, and your brain will work better! I do my best to either utilize hotel gyms while onsite or join a gym.

lifting weights
Lifting some weights at a gym I joined while onsite.

  • Prioritize safety. Safety incidents do occur in industrial automation, and by being onsite more often, you put yourself at a higher risk of being involved in one. If you are ever doing something that feels unsafe, stop immediately and talk to your project manager. Nothing matters more than staying safe while onsite.

Overall: ★★★★☆

The lifestyle of a Road Warrior is intense and often demanding, but the project work and experience make it all worth it. The increased income and benefits are not a given; rather, they are earned by providing consistent onsite support, meeting customer needs, and dealing with the challenges of travel. Although I have found myself in some challenging situations, I do not regret being a Road Warrior, and I encourage anyone to take the plunge and give it a go.

Learn more about DMC’s company culture and check out our open positions!

]]>
Sam Alvares Tue, 18 Mar 2025 13:45:00 GMT f1397696-738c-4295-afcd-943feb885714:10595
//ultraskinx1.com/latest-thinking/blog/id/12642/externally-authenticated-access-to-s3-objects-over-the-internet#Comments 0 //ultraskinx1.com/DesktopModules/DnnForge%20-%20NewsArticles/RssComments.aspx?TabID=61&ModuleID=471&ArticleID=12642 //ultraskinx1.com:443/DesktopModules/DnnForge%20-%20NewsArticles/Tracking/Trackback.aspx?ArticleID=12642&PortalID=0&TabID=61 DMC, Inc. //ultraskinx1.com/latest-thinking/blog/id/12642/externally-authenticated-access-to-s3-objects-over-the-internet Amazon S3 is great for storing any type of binary data or file you can need in a centralized location in the cloud. There is a dedicated URL for each object, which can be easily shared with anyone that needs to access it.

However, say that your bucket is storing private/proprietary information. You wouldn't want just anybody to be able to retrieve that data with an HTTP request, would you? In this blog, we'll explore how we can securely and efficiently access S3 objects with either direct AWS or 3rd-party authentication/authorization.

Bucket Policies

Bucket policies are the first step to restricting public access to objects. They apply to entire buckets in S3 and can be set up to only allow certain AWS IAM users, user roles, or methods of access to retrieve objects within the bucket they're applied to.

Here's a straightforward bucket policy that restricts all access to an S3 bucket except for requests that provide IAM credentials matching a specific user:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::ACCOUNT_ID:user/USER_NAME"
            },
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}
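
With a policy like this in place, a client that holds the named IAM user's credentials can fetch objects directly. A minimal boto3 sketch (bucket, key, and credential values are placeholders) might look like this:

import boto3

# Credentials for the IAM user named in the bucket policy; in practice these would
# come from a shared credentials file, environment variables, or an assumed role.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

response = s3.get_object(Bucket="BUCKET_NAME", Key="path/to/private-object.bin")
data = response["Body"].read()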

Now, this solution works for any situation where the client fetching the S3 object is able to provide permanent IAM credentials in its request. However, this is very rarely a valid solution. What if we have an existing authentication solution which we want to use to determine access to an S3 object? Or what if we don't want/need to edit this policy whenever a new user needs access?

Accommodating 3rd-party Authentication

To abstract the IAM authentication layer entirely, we could proxy the S3 files through some intermediate compute resource within our VPC that:

  1. can authenticate a request with a 3rd party auth solution.
  2. can access the buckets directly, with its own static set of IAM credentials.

Below is a diagram of this dataflow. Only the solid lines are data transfers that contain the data of the S3 object being requested.

This is, strictly speaking, an effective solution. However, S3 objects can be very large, so passing the whole object through some intermediary compute resource may incur unacceptable memory/data transfer costs. What we'd want is a way to securely access the S3 object while still being able to pull it directly from the bucket to the end client making the request, in order to take advantage of S3's ultra-cost-efficient retrieval pricing.

Enter - the presigned URL!

Presigned URLs are used to temporarily authorize operations across AWS for anybody who has the URL. To generate a presigned URL for a specific action, your IAM policy must be authorized to perform that action. To mitigate the chance that a presigned URL falls into the wrong hands, each one is configured to only serve its purpose for a defined timeout.

Using presigned URLs generated to provide access to individual S3 objects at request time, we can extend the secure access approach above, such that:

  1. The client sends an authenticated request using the 3rd-party auth to a compute resource (Lambda works perfectly) with full read access to our S3 bucket.
  2. The compute resource authenticates this request against 3rd party auth.
  3. The compute resource generates a pre-signed URL for the S3 object requested and returns it to the client.
  4. The client fetches the S3 object using the presigned URL before it times out.

 

Notice how the file is requested directly from S3 in this setup! With that, we have a secure and performant solution.
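
As a sketch of steps 2 and 3, a Lambda handler with read access to the bucket might generate the presigned URL roughly like this. The handler name, event shape, and 5-minute timeout are assumptions for illustration; the 3rd-party auth check is represented only by a placeholder comment.

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "BUCKET_NAME"

def lambda_handler(event, context):
    # Placeholder: validate the caller's token against the 3rd-party auth provider here
    # (or let an API Gateway authorizer reject unauthenticated requests before this runs).

    key = event["queryStringParameters"]["key"]

    # Authorize a GET on this single object for a short, defined timeout (here, 5 minutes)
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=300,
    )

    # The client then fetches the object directly from S3 using this URL
    return {"statusCode": 200, "body": json.dumps({"url": url})}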

To see an example implementation of this design, see my follow-up post, Embedding private media files securely in a React frontend with Amazon S3 and AWS Lambda.

Ready to take your Application Development project to the next level? Contact us today to learn more about our solutions!

]]>
Sam Wallace Tue, 18 Mar 2025 08:30:00 GMT f1397696-738c-4295-afcd-943feb885714:12642