Engineers taking things apart to find out how they work is a longstanding (and somewhat justified) stereotype. However, trying to reverse-engineer electronics and products associated with the industrial Internet of Things (IIoT) can be difficult. In this article, we’ll go beyond buzzwords to find out what truly makes connected products and the IIoT work. Experts will comment on hardware, software, and security, in addition to explaining what engineers need to know about the plethora of options in order to form a network for connecting their products.
Reading Between the Products
The first thing to understand is that few companies offer everything from sensors and software to security in-house. This means the standards or ease-of-use across multiple platforms, whether the product is open-source or proprietary, will be important. Also, the way a product connects and communicates must work with the system and subsystems already in place – unless a new product is meant to operate completely independently.
“Systems which are designed to be modular, flexible, and open are important,” said John Harrington, VP of product management for IoT software company Kepware. “With today’s technology, you should be able to assemble multiple best-of-breed components that meet the specific needs of a business problem you are looking to solve. The connectivity between the systems is no longer a major time commitment, cost, or risk to the project.”
“Keep it simple, scalable, and use open and standard interoperability,” added Ole Borgbjerg, senior sales director for Kepware in Europe. “The IIoT era has just started, and you must be ready for adding more functionality and advanced features that will come over the next decade.”
With so many things to connect, where do you start? Larry O’Connell, product marketing director at Microsemi Corp., broke down the features engineers should look for, or ask about, when building networks into three areas: switches, determinism, and security.
Older equipment was typically daisy-chained together, put on a node, and assigned a unique address. Training was expensive because the networks were unique, and controls engineers had to know many types of networks. Then Ethernet appeared. To keep things simple, many vendors used the Transmission Control Protocol and Internet Protocol, commonly called TCP/IP. Protocols provide a common language between different layers—electronics, applications, internet, physical devices, etc.—so everything can communicate. An easy way to connect a group of these layers, commonly called a stack, was to use multiple unmanaged switches.
One helpful analogy is to compare an unmanaged switch to a power strip: it can shut everything off or on, but the flow of electricity/information cannot be directed or managed to a specific plug. This makes a stack difficult to troubleshoot, and impossible to troubleshoot or configure remotely at the level of individual plugs.
Unmanaged switches cannot be seen remotely. Think of it like a car’s brakes: if you are only given the fluid level in the reservoir, it will be difficult to find which line is leaking. A managed switch, in this simplified analogy, would be like a valve that controls which line the hydraulic fluid goes into, making troubleshooting easier. Because unmanaged switches can’t be seen in a network, engineers cannot optimize the network’s configuration to control data traffic on Ethernet. This makes a network increasingly difficult to maintain as it grows.
That being said, this is fine for small networks, such as the network you may have in your home. Consumer routers or hubs are typically unmanaged for simple plug-and-play—though most routers have the ability to be configured, and if configured, would be considered a managed switch—while large offices and manufacturers rely more on managed switches that provide more options, such as remote troubleshooting.
“Unmanaged switches served as an easy default for more than half of the ports that went into the field for the first 5 to 10 years,” said O’Connell. “There are three reasons why we are now seeing a switch to managed switches. One is education. The control engineers were becoming more networking savvy and less intimidated by working with managed switches. Two, the controls vendors are coming out with default configuration right on the box, so engineers don’t have to do much to configure or manage the switches.
“Three, there is a substantially growing segment of the market called lightly managed switches, or web-only switches, that require minimal configuration,” he continued. “Engineers are going to see more managed switches, and engineers will need to know what type of new managed or smart switch is going to work best for their application.”
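The power-strip analogy can be made concrete with a small conceptual sketch. This is not a real vendor API—the class and method names below are purely illustrative—but it shows the practical difference: an unmanaged switch exposes no per-port state, while a managed switch lets you query and change individual ports remotely.

```python
# Conceptual sketch (hypothetical classes, not a real switch API):
# an unmanaged switch is all-or-nothing, like a power strip, while a
# managed switch exposes per-port configuration and status.

class UnmanagedSwitch:
    """Forwards traffic on all ports; no per-port visibility or control."""
    def __init__(self, num_ports):
        self.powered = True
        self.num_ports = num_ports

    def port_status(self, port):
        # The whole device is either on or off -- no per-port state exists.
        return "unknown" if self.powered else "off"


class ManagedSwitch:
    """Per-port state that can be queried and changed remotely."""
    def __init__(self, num_ports):
        self.ports = {p: {"enabled": True, "vlan": 1} for p in range(num_ports)}

    def disable_port(self, port):
        self.ports[port]["enabled"] = False   # e.g., isolate a faulty drop

    def assign_vlan(self, port, vlan):
        self.ports[port]["vlan"] = vlan       # segment traffic by port

    def port_status(self, port):
        return "up" if self.ports[port]["enabled"] else "down"
```

In the brake-line analogy, `disable_port` is the valve: a remote engineer can shut off one suspect line and watch what changes, instead of guessing from the reservoir level.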
Wire vs. Wireless
Ethernet isn’t the only tool to consider. Oftentimes hybrid (wire and wireless) systems are necessary. Part of the strength of the IIoT is remote access, or monitoring. This will often lead to a cellular or other wireless option for connectivity. In addition, some manufacturers might want to incorporate many end-points over time.
Installing wires for many devices can be expensive. Existing lines can serve as the backbone of a network, with wireless data hubs—such as Bluetooth, ZigBee, or Wi-Fi—attached to them. Wireless connectivity allows IoT devices to be installed at their most effective locations, while saving the money and time of installing new wires.
“If your device or application is not moving, a wired connection is always better, and I’m a wireless guy,” said Eran Eshed, cofounder of Altair, a semiconductor company that specializes in cellular chips. “Wires are the most reliable connection. However, running wire is expensive. In some applications, the cost might offset the business case.”
“The strengths of IIoT applications are similar to those of wireless: flexibility, mobility, remote access,” Eshed added. “This is why we will see more wireless connected devices as the IIoT progresses. But wires will always have a role to play in the industry.”
Another trend in connectivity is to decentralize controls. This allows local processing. Dr. Shipeng Li, CTO of IngDan, offered the following example: “If we are using computer vision technology to process human interaction with IoT devices, we need to put at least a significant part of processing on local computers. One is to save bandwidth to the cloud, but more importantly, lower the response time significantly. However, PCs or PACs may not necessarily be the only form factor we could use to process the data—other computing forms may be more convenient or pleasing.”
Computer or machine vision uses a lot of bandwidth. Engineers should be aware that if an application needs to process large amounts of data, it might benefit a production line to decentralize the controls. System-on-a-chip (SoC) devices, field-programmable gate arrays (FPGAs), and other technologies allow for faster, more efficient processing, while the inputs and outputs can still connect back to the main PLC/PAC/etc.
“The power of an IoT system is not from a simple aggregation of a bunch of IoT devices,” noted Dr. Li, “but from the intelligence that comes from sharing data among different devices and the collaboration between them. If we want to enable natural user interactions with the IoT system, we will no doubt deal with a constant stream of big data from visual, speech, audio, and other digital sensors. On the other hand, we could not transmit everything to the cloud to process, or have enough time to process it.”
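The edge-processing idea Dr. Li describes can be sketched simply. In this hypothetical example (the function and field names are assumptions, not any particular product’s API), a local controller reduces each batch of raw sensor readings to a compact summary, so only a few numbers travel to the cloud instead of every sample.

```python
# Hypothetical edge-processing sketch: summarize raw readings locally
# and send only the summary upstream, saving bandwidth and response time.

def summarize_batch(samples):
    """Reduce a batch of raw sensor readings to a compact summary dict."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
    }

# One batch of raw temperature readings; the 35.0 spike is the anomaly
# the cloud actually needs to know about.
raw = [20.1, 20.3, 19.8, 35.0, 20.2]
summary = summarize_batch(raw)   # only this small dict goes to the cloud
```

Five floats shrink to four; with thousands of samples per second per sensor, the same pattern is what makes cloud connectivity affordable.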
Transmitting data effectively, whether locally or to the cloud, sounds like standard operating procedure. However, early Ethernet was not considered acceptable for high-precision control. Early Ethernet used half-duplex (transmitting signals in both directions, but not simultaneously), hub-based networks. This meant there was no way to prioritize data.
Anyone who has ever tried to send an e-mail over a dial-up modem while someone picked up the phone has seen the flaw in this type of system: the call doesn’t go through, and the e-mail isn’t delivered. If data is being sent from point A to point B, another user can interrupt the line, and the sender has to keep resending data until it finally arrives uninterrupted. This will not work for the modern IIoT.
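The keep-resending behavior on shared half-duplex Ethernet was formalized as binary exponential backoff: after each collision, a sender waits a random number of slot times before retrying, and the random range doubles with each attempt. The sketch below shows why this is unacceptable for control loops—the delay is unpredictable by design.

```python
# Sketch of Ethernet-style binary exponential backoff: after the n-th
# collision, wait a random number of slot times in [0, 2**n - 1].
# The randomness is exactly what makes delivery time non-deterministic.

import random

def backoff_slots(attempt, max_exponent=10):
    """Pick a random backoff delay (in slot times) for a given retry attempt."""
    k = min(attempt, max_exponent)       # classic Ethernet caps the exponent
    return random.randint(0, 2 ** k - 1)
```

After ten collisions the wait can be anywhere from 0 to 1,023 slot times—fine for e-mail, useless for a motion controller that needs data every millisecond.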
“Over the last 10 years we’ve had faster-speed Ethernet that is bi-directional, full duplex, and supporting quality of service (QoS) that allows you to prioritize traffic,” said Microsemi’s O’Connell. “However, there are a large number of applications today where even fast Ethernet with QoS isn’t enough. There is a movement to create a suite of standards from the IEEE called time-sensitive networking, commonly called TSN. It started in broadcasting, then automotive, and now it is moving into industrial automation.
“The suite of standards is for tight precision around managing data traffic on networks to guarantee bandwidth around data,” he continues. “This is all done to simply transmit data effectively and guarantee when you send data that the data will be received within a predictable time. This has been done on the proprietary level for about the last 10 years. The new standards from IEEE will greatly help consolidate a fragmented industry around one technology. You’ll see this consolidation happen over the next couple years as the standards get ratified in IEEE.” With this in mind, engineers will want to ask about prioritizing data when developing a system to send and deliver important information first.
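The core QoS mechanism O’Connell describes—tagging traffic with a priority class and serving higher classes first—can be sketched with a simple priority queue. This is a conceptual model, not a switch implementation; in this sketch a lower number drains first, and TSN’s contribution is to add timing guarantees on top of this kind of prioritization.

```python
# Minimal sketch of a QoS egress queue: frames carry a priority tag,
# and higher-priority traffic (lower number here) always leaves first,
# so control data is not stuck behind bulk transfers.

import heapq
import itertools

class PriorityEgressQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO order per class

    def enqueue(self, frame, priority):
        # In this sketch, lower number = served first.
        heapq.heappush(self._heap, (priority, next(self._seq), frame))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = PriorityEgressQueue()
q.enqueue("bulk file chunk", priority=7)
q.enqueue("motion control command", priority=0)
q.enqueue("sensor reading", priority=3)
print(q.dequeue())   # the motion control command leaves first
```

Prioritization alone bounds *ordering*, not *latency*; that gap between QoS and guaranteed delivery times is what the TSN standards address.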
While the IIoT grows and forms a larger foundation, companies are concerned about flexibility. Some companies prefer to start with a small project requiring minimal installation and effort that will offer the greatest value or return—the low-hanging-fruit IIoT projects. According to Kepware’s Harrington, “The trend I am seeing is the desire to take an agile approach to the implementation of software. With an agile approach, you define a very specific problem or user story you want to solve, then you take a short period of time—two to six weeks, a sprint—to implement the hardware and software to solve the use case.
“This approach is in contrast to the typical Manufacturing Execution System (MES) or Enterprise Resource Planning (ERP) implementation, where you take many months or years to perform plant or corporate-wide technology rollouts,” he continues. “IIoT technologies are allowing this to happen at this fast pace at a reasonable price point, and allowing the solution to scale as more user stories are implemented in a factory.”
Once everything is effectively connected and the network is remotely manageable with the right switches and determinism, engineers must protect the network. Segmenting a network with firewalls is one cost-effective way to protect against outside threats. “Efforts to secure these networks on a broader scale typically involve costly network topology changes or network downtime,” said O’Connell. “This negatively impacts revenue, productivity, and even safety in certain situations. We expect to see a trend towards centralized security orchestration with distributed execution, based on minimal software and hardware upgrades.”
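Firewall segmentation boils down to a default-deny rule set between network zones. The sketch below is illustrative only—the zone names are made up, though the ports are the real well-known defaults for EtherNet/IP (TCP 44818) and OPC UA (TCP 4840). A flow passes only if it is explicitly listed, so a compromised office PC cannot reach the control cell.

```python
# Hedged sketch of default-deny segmentation between network zones.
# Zone names are hypothetical; the ports are the standard defaults for
# EtherNet/IP (44818) and OPC UA (4840).

ALLOWED_FLOWS = {
    # (source zone, destination zone, destination TCP port)
    ("scada", "cell", 44818),    # SCADA server -> PLCs via EtherNet/IP
    ("historian", "cell", 4840), # historian -> machine OPC UA servers
}

def permits(src_zone, dst_zone, dst_port):
    """Default-deny: a flow passes only if it is explicitly allowed."""
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS
```

The value of the default-deny shape is that forgetting a rule fails *closed*: a missing entry blocks traffic and gets noticed, whereas a forgotten block rule in a default-allow design silently leaves a door open.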
Whether the change is big or small, nothing carries a 100% secure stamp, and probably never will. Security is a constantly moving target. Standards are difficult to write: if a standard is too general, it may not have much value; if it is too specific, it may make it easier for hackers to identify software built to it and learn how to break in. This is one benefit of using proprietary security. Overall, if you are willing to make yourself vulnerable through connectivity, the payoffs must be worth it. When selecting security services, it is important that the cost of security—or of the lack of it—doesn’t offset the savings from becoming connected.
Just like with a physical security system, reducing entry points, installing locks, and monitoring the remaining entries has proven effective. But it can be easy to forget to lock a door, or perhaps a door needs to be left unlocked for a contractor. Similar concerns exist with the IIoT. Encryption, limiting access points, segmenting the network, putting up firewalls, and monitoring access points all sound easy on paper. However, when trying to cover access points and security for every employee and contractor, network security can quickly become a complex task.
“A controls network was often separated from business networks and the outside world,” said Harrington. “This is no longer the case. We leverage various protocols like OPC UA (Open Platform Communications Unified Architecture, a machine-to-machine communication protocol) and HTTPS (Hypertext Transfer Protocol Secure, for secure communications over a computer network) to move data through the firewall to a node on the other side in a secure and authenticated fashion.” This type of secure remote access, commonly called tunneling, is something you may have done without knowing it. If you’ve ever used a virtual private network (VPN), you were accessing data through a secure “tunnel.”
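The ingredients of such a tunnel—encryption, peer authentication, and hostname checking—can be seen in miniature with Python’s standard `ssl` module. This is a generic TLS client context, not Kepware’s or any vendor’s implementation; the settings shown are in fact the library’s defaults, spelled out explicitly to make the security properties visible.

```python
# Sketch of the "secure tunnel" ingredients using Python's stdlib ssl
# module: encrypt traffic, require a verifiable certificate from the
# peer, and check that the certificate matches the hostname.

import ssl

def make_client_context():
    ctx = ssl.create_default_context()        # trusts the system CA store
    # These are already the defaults; shown explicitly for clarity.
    ctx.verify_mode = ssl.CERT_REQUIRED       # peer must present a valid cert
    ctx.check_hostname = True                 # cert must match the hostname
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse obsolete protocols
    return ctx
```

A socket wrapped with this context refuses to connect to an endpoint that cannot authenticate itself, which is the property that turns a plain connection into a tunnel worth trusting.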
“You could talk about security forever, but what it comes down to is the fact that, currently, you still need strong IT skills to secure a network well,” said O’Connell. “This is where the partnership between the controls engineers and the IT engineers has to happen. There is reluctance to go full IT, as an IT engineer may not know what the controls engineer is concerned about securing or accessing.”
O’Connell continued: “What we are going to see next is a migration from IT-centric security policies that work well, but are a little hard to maintain, to a more cloud-based or central security policy engine. This can allow for more sophisticated and flexible security options. For example, you could pass down policies through the network for a defined area. By having this type of flexibility, people interacting with the devices in a defined area will not have to be technology savvy to accomplish tasks and won’t jeopardize security.”
There are other benefits to moving towards more cloud-based services. “There is a growing interest in taking advantage of newer technologies that—until now—have only been used outside the industrial space,” said Borgbjerg. “Cloud-based enterprise platforms that store huge amounts of data, and even augmented reality (AR), which has typically been used in the consumer space, are being adapted to provide better solutions to manage and service industrial assets.”
“I also believe that technologies that enable more advanced analytics of industrial data—like monitoring your production line, machine, or water pump station as a digital twin on your tablet or another mobile device—will change the way that users of industrial assets work in the future,” he added. “And it will be simpler and faster to make decisions when you have the right information available.”