Cyber Safety Cage for Networks
Objective
We propose a solution based on machine learning and test generation, leveraging machine-learning expertise from UIUC and testing and verification expertise from KTH. Unlike previous approaches, we focus on explainable AI in our safety cage, so that the cage itself and its effects on network traffic can be inspected and validated. Lightweight techniques ensure that our safety cage can be embedded in programmable networks or operating system kernels. The machine-learning component will learn behavioural models that have their roots in formal modelling (access policies, protocol states, Petri nets) and are therefore inherently readable by humans. Test-case generation will then validate diverse traces against the model and expose potentially malicious behaviour, covering both positive and negative outcomes.
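As a rough illustration of this workflow, the sketch below (in Python, with entirely hypothetical state names, traces, and helper functions, not the project's actual implementation) shows how a trace of network events could be replayed against a learned, human-readable protocol state machine, with any deviation reported as an explainable finding.

```python
# Minimal sketch (hypothetical): replaying a network trace against a learned,
# human-readable protocol state machine. Model format, event names, and traces
# are illustrative assumptions only.

from typing import Dict, List, Tuple

# Behavioural model learned from benign traffic: (state, event) -> next state.
# Here: a simplified TCP handshake.
MODEL: Dict[Tuple[str, str], str] = {
    ("CLOSED", "SYN"): "SYN_SENT",
    ("SYN_SENT", "SYN_ACK"): "ESTABLISHED",
    ("ESTABLISHED", "DATA"): "ESTABLISHED",
    ("ESTABLISHED", "FIN"): "CLOSED",
}

def check_trace(trace: List[str], start: str = "CLOSED") -> List[str]:
    """Replay a trace against the model and report deviations as findings."""
    state, findings = start, []
    for step, event in enumerate(trace):
        nxt = MODEL.get((state, event))
        if nxt is None:
            # The violated transition names both the state and the event,
            # which keeps the finding explainable to a human operator.
            findings.append(f"step {step}: event '{event}' not allowed in state {state}")
        else:
            state = nxt
    return findings

if __name__ == "__main__":
    benign = ["SYN", "SYN_ACK", "DATA", "FIN"]
    suspicious = ["SYN", "DATA", "FIN"]   # data sent before the handshake completes
    print(check_trace(benign))       # []  -> conforms to the learned model
    print(check_trace(suspicious))   # one finding, pointing at the violated transition
```

In the same spirit, a test-case generator could enumerate both conforming and deviating traces from such a model, giving the positive and negative validation outcomes mentioned above.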
Background
Industrial robots usually operate within a “safety cage” to ensure that a robot does not harm workers. We need the same type of security, simple and explainable, for IT systems. Novel mechanisms, such as hardware-accelerated programmable networks or kernel extensions, make it possible to embed this type of security directly at the network level.
Crossdisciplinary collaboration
The project is a collaboration between the University of Illinois at Urbana-Champaign and the KTH Royal Institute of Technology. KTH will combine its experience in testing and verification with UIUC’s expertise in machine learning.
Watch the recorded presentation from the Digitalize in Stockholm 2022 event.
Contacts
Cyrille Artho
Associate Professor, Division of Theoretical Computer Science at KTH, Digital Futures Faculty
+46 8 790 68 61, artho@kth.se
Roberto Guanciale
Associate Professor, Division of Theoretical Computer Science at KTH, Digital Futures fellow, Digital Futures Faculty
+46 8 790 69 37, robertog@kth.se
Reyhaneh Jabbarvand
Assistant Professor, University of Illinois at Urbana-Champaign
reyhaneh@illinois.edu