OpenS3 Workshop November 4, 2021
Building Intelligent Trustworthy Computing Systems: Challenges and Opportunities
Cybersecurity and trust in computer systems have become indispensable, as information societies increasingly depend on digital technologies and services. Emerging technologies such as AI and IoT are being rapidly integrated into the cyber world, adding further complexity as well as security and privacy vulnerabilities that enlarge the attack surface of our computing systems.
In this workshop, top experts will present their views and research results on various aspects of building trustworthy systems, from hardware-assisted security to the marriage of AI and security.
9:30 – 9:45
Welcome and Opening Remarks
Prof. Ahmad-Reza Sadeghi, TU Darmstadt
9:45 – 10:30
Open Attestation & Authentication Infrastructure for Non-Centralized Trustworthy Systems
Prof. Yuanyuan Zhang,
Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
10:30 – 11:15
Prof. Dr. Srdjan Capkun,
ETH Zurich and Director of the Zurich Information Security and Privacy Center (ZISC), Switzerland
11:15 – 11:25
Break
11:25 – 12:10
How Secure are Trusted Execution Environments? Finding and Exploiting Memory Corruption Errors in Enclave Code
Prof. Lucas Davi,
Department of Computer Science at University of Duisburg-Essen, Germany
Trusted execution environments (TEEs) such as Intel’s Software Guard Extensions enforce strong isolation of security-critical code and data. While previous work has focused on side-channel attacks, this talk investigates memory corruption attacks such as return-oriented programming in the context of TEEs. We will demonstrate how an attacker can exploit TEE enclaves and steal secret information. In addition, we will examine the host-to-enclave boundary and its susceptibility to memory corruption attacks, and show how analysis approaches can be developed to detect vulnerable enclave code.
12:15 – 13:00
Reverse Engineering of Neural Network Architectures Through Side-channel Information
Prof. Stjepan Picek,
Radboud University, The Netherlands
Machine learning has become mainstream across industries. Numerous examples demonstrate its value for security applications, but a growing body of work also shows how machine learning algorithms themselves can be attacked.
In this talk, we start by discussing how to reverse engineer a neural network using side-channel information such as timing and electromagnetic emanations. To this end, we consider multilayer perceptrons and convolutional neural networks as the machine learning architectures of choice and assume a non-invasive, passive attacker capable of measuring those kinds of leakages. Our experiments show that a side-channel attacker is capable of obtaining the following information: the activation functions used in the architecture, the number of layers and the number of neurons per layer, the number of output classes, and the weights of the neural network.
Afterward, we discuss how to use the knowledge about the neural network to guess the inputs to the neural network. Finally, we conclude with several interesting challenges when considering implementation attacks on neural networks.
13:00 – 14:00
Lunch Break
14:00 – 14:45
Threats to Electric Mobility and How to Establish Trust
Prof. Christoph Krauß,
Darmstadt University of Applied Sciences, Germany
In this talk, I present security and privacy threats to electric mobility, as well as possible security solutions to establish trust. First, I present the current state of the art, including the actors involved, their communication relationships, and the protocols used. Then, I discuss possible threats and shortcomings in terms of security and privacy. Finally, I present trusted computing-based solutions to protect against selected threats.
14:45 – 15:30
Prof. Simone Fischer-Hübner,
Computer Science Department at Karlstad University, Sweden
15:30 – 16:00
Break
16:00 – 16:45
Is Differential Privacy what you want to protect privacy in ML?
Prof. Florian Kerschbaum,
David R. Cheriton School of Computer Science and Executive Director of the Waterloo Cybersecurity and Privacy Institute, University of Waterloo, Canada
In this talk, we will look at two notions of privacy in machine learning: differential privacy and empirical privacy based on attack evaluation. We will see how these two notions compare by identifying edge cases. I will show that differential privacy does not necessarily protect against privacy attacks such as membership inference attacks. I will also show that if one wants to protect against membership inference attacks, differential privacy is not necessarily the method of choice. This leaves open the question of whether differential privacy is what you want to protect privacy in machine learning.
16:45 – 17:30
Secure Code Execution on Untrusted Remote Devices
Prof. Gene Tsudik,
Distinguished Professor of Computer Science at the University of California, Irvine (UCI), USA
Our society is increasingly reliant upon a wide range of Cyber-Physical Systems (CPS), Internet-of-Things (IoT), embedded, and so-called “smart” devices. They often perform safety-critical functions in numerous settings, e.g., home, office, medical, automotive, and industrial. Some devices are small, cheap, specialized sensors and/or actuators. They tend to have meager resources and run simple software, sometimes on “bare metal”. If such devices are left unprotected, the consequences of forged sensor readings or ignored actuation commands can be catastrophic, particularly in safety-critical settings. This prompts three questions: (1) How to trust data produced by a simple remote embedded device? (2) How to ascertain that this data was produced via execution of the expected software? And (3) Is it possible to attain (1) and (2) under the assumption that all software on the remote device might be modified or compromised?
In this talk, we answer these questions by describing APEX: a (Verified) Architecture for Proofs of Execution, the first result of its kind for low-end embedded systems. This work has a range of applications, especially to authenticated sensing and trustworthy actuation. APEX incurs low overhead, making it affordable even for the lowest-end embedded devices, and it is publicly available.
17:30 – 18:00
Closing Remarks
Prof. Ahmad-Reza Sadeghi, TU Darmstadt