Virtual OpenS3 Workshop November 4, 2021
Building Intelligent Trustworthy Computing Systems: Challenges and Opportunities
Cybersecurity and trust in computer systems have become indispensable, as information societies increasingly depend on digital technologies and services. Emerging technologies such as AI and IoT are being rapidly integrated into the cyber world, adding further complexity as well as security and privacy vulnerabilities that present a large attack surface on our computing systems.
In this online workshop, top experts will present their views and research results on various aspects of building trustworthy systems, from hardware-assisted security to the marriage of AI and security.
9:30 – 9:45
Prof. Ahmad-Reza Sadeghi, TU Darmstadt
9:45 – 10:30
Open Attestation & Authentication Infrastructure for Non-Centralized Trustworthy Systems
Prof. Yuanyuan ZHANG,
Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
The trust in an enclave application originates from trust in the underlying confidential computing platform. Attestation provides the means for service users to assure themselves that the enclave app in use is running on an authentic trusted computing platform. Processor chip manufacturers play an essential role in these trust systems. For example, until 2018, Intel had not delegated its attestation service to any third party. Centralized attestation has two drawbacks: on the one hand, it burdens the Intel Attestation Service (IAS); on the other, it requires enclave developers to submit their code to Intel for security review. A variety of open confidential computing platforms are emerging, such as Keystone and CURE, and centralized attestation is no longer suitable for this next generation of trustworthy platforms. The diverse participants, including chip manufacturers, enclave developers, and cloud infrastructure providers, bring about the need for new paradigms for enclave attestation.
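The attestation flow referred to above can be sketched in a few lines. The toy example below is an illustration of the general idea only, not Intel's or Keystone's actual protocol: all names are hypothetical, and an HMAC over a shared key stands in for the real public-key signature issued by a hardware-protected, manufacturer-certified key. A platform "signs" a measurement of the enclave together with a verifier-chosen nonce, and the verifier accepts only a known-good measurement:

```python
import hashlib
import hmac
import os

# Illustrative stand-in for the hardware-protected platform attestation key.
PLATFORM_KEY = os.urandom(32)

def measure(enclave_code: bytes) -> bytes:
    """Measurement: a hash of the enclave's initial code and data."""
    return hashlib.sha256(enclave_code).digest()

def generate_quote(enclave_code: bytes, nonce: bytes) -> tuple:
    """Platform binds (measurement, nonce); HMAC models a real signature."""
    m = measure(enclave_code)
    sig = hmac.new(PLATFORM_KEY, m + nonce, hashlib.sha256).digest()
    return m, sig

def verify_quote(expected_measurement: bytes, nonce: bytes, quote: tuple) -> bool:
    """Verifier checks the signature and that the measurement is known-good."""
    m, sig = quote
    expected_sig = hmac.new(PLATFORM_KEY, m + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected_sig) and m == expected_measurement

# Usage: the service user sends a fresh nonce, the platform returns a quote.
code = b"enclave binary"
nonce = os.urandom(16)
quote = generate_quote(code, nonce)
assert verify_quote(measure(code), nonce, quote)          # authentic enclave
assert not verify_quote(measure(b"tampered"), nonce, quote)  # wrong measurement
```

The nonce prevents replay of old quotes; the open question the talk raises is who holds and certifies the signing key when there is no single central authority.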
10:30 – 11:15
Sovereign Smartphone: To Enjoy Freedom We Have to Control Our Phones
Prof. Dr. Srdjan Capkun,
ETH Zurich and Director of the Zurich Information Security and Privacy Center (ZISC), Switzerland
The majority of smartphones run either iOS or Android. This has created two distinct ecosystems largely controlled by Apple and Google—they dictate which applications can run, how they run, and what kind of phone resources they can access. Barring some exceptions in Android, where different phone manufacturers may have influence, users, developers, and governments are left with little to no choice. Specifically, users need to entrust their security and privacy to OS vendors and accept the functionality constraints they impose. Given the wide use of Android and iOS, immediately leaving these ecosystems is not practical, except in niche application areas. In this talk, I will draw attention to this problem and explain why it is an undesirable situation. As an alternative, I will advocate for the development of a new smartphone architecture that securely transfers control back to the users while maintaining compatibility with the rich existing smartphone ecosystems.
11:15 – 11:25
11:25 – 12:10
How Secure are Trusted Execution Environments? Finding and Exploiting Memory Corruption Errors in Enclave Code
Prof. Lucas Davi,
Department of Computer Science at University of Duisburg-Essen, Germany
Trusted execution environments (TEEs) such as Intel's Software Guard Extensions enforce strong isolation of security-critical code and data. While previous work has focused on side-channel attacks, this talk will investigate memory corruption attacks, such as return-oriented programming, in the context of TEEs. We will demonstrate how an attacker can exploit TEE enclaves and steal secret information. In addition, we will investigate the host-to-enclave boundary and its susceptibility to memory corruption attacks, and show how analysis approaches can be developed to detect vulnerable enclave code.
12:15 – 13:00
Reverse Engineering of Neural Network Architectures Through Side-channel Information
Prof. Stjepan Picek,
Radboud University, The Netherlands
Machine learning has become mainstream across industries. Numerous examples demonstrate its validity for security applications, but recent work also shows how machine learning algorithms themselves can be attacked.
In this talk, we start by discussing how to reverse engineer a neural network by using side-channel information such as timing and electromagnetic emanations. To this end, we consider multilayer perceptrons and convolutional neural networks as the machine learning architectures of choice and assume a non-invasive and passive attacker capable of measuring those kinds of leakages. Our experiments show that a side-channel attacker is capable of obtaining the following information: the activation functions used in the architecture, the number of layers and neurons in the layers, the number of output classes, and weights in the neural network.
Afterward, we discuss how to use the knowledge about the neural network to guess the inputs to the neural network. Finally, we conclude with several interesting challenges when considering implementation attacks on neural networks.
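To make the attack model concrete, the following minimal simulation shows the core of a correlation-based side-channel recovery under a Hamming-weight leakage model. This is an illustrative sketch of the general technique with simulated noisy traces, not the speaker's measurement setup: the attacker observes leakage of a secret weight multiplied by known inputs and picks the candidate whose predicted leakage correlates best with the traces:

```python
import random

def hw(v: int) -> int:
    """Hamming weight (number of set bits), a common power-leakage model."""
    return bin(v).count("1")

def recover_weight(inputs, traces, candidates):
    """Correlation analysis: choose the candidate weight whose predicted
    leakage best matches the measured traces."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a) ** 0.5
        vb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (va * vb) if va and vb else 0.0
    return max(candidates,
               key=lambda w: pearson([hw(w * x) for x in inputs], traces))

rng = random.Random(1)
SECRET_WEIGHT = 11                      # hypothetical quantized weight
inputs = [rng.randrange(256) for _ in range(500)]
# Simulated traces: leakage of the multiply operation, plus Gaussian noise.
traces = [hw(SECRET_WEIGHT * x) + rng.gauss(0, 1.0) for x in inputs]
# Hamming weight is shift-invariant, so only odd candidates are distinguishable.
candidates = range(1, 256, 2)
assert recover_weight(inputs, traces, candidates) == SECRET_WEIGHT
```

The same correlate-and-rank principle, applied to electromagnetic traces of real hardware, underlies the layer-by-layer recovery of weights and architecture parameters discussed in the talk.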
13:00 – 14:00
14:00 – 14:45
Threats to electric mobility and how to establish trust
Prof. Christoph Krauß,
Darmstadt University of Applied Sciences, Germany
In this talk, I present security and privacy threats to electric mobility, as well as possible security solutions to establish trust. First, I present the current state of the art, including the actors involved, their communication relationships, and the protocols used. Then, I discuss possible threats and shortcomings in terms of security and privacy. Finally, I present some trusted-computing-based solutions to protect against selected threats.
14:45 – 15:30
Hardware-assisted run-time protection
Prof. N. Asokan,
David R. Cheriton School of Computer Science and Executive director of the Waterloo Cybersecurity and Privacy Institute, University of Waterloo, Canada
Run-time attacks are a prominent attack vector for compromising systems written in memory-unsafe languages like C and C++. Over the last decade there have been significant advances by both researchers and practitioners in understanding and defending against run-time attacks. As defenses are gradually being deployed, more sophisticated attacks, like data-oriented attacks, will become increasingly prevalent.
Defenses against run-time attacks must consider how to trade off security, performance, and deployability. Fine-grained software-only defenses are effective, but can be prohibitively expensive. Hardware-based defenses can be effective and efficient, but deploying new hardware extensions is difficult. In this talk, I will describe two attempts from our recent work to provide run-time protection, especially against data-oriented attacks. The first, HardScope, is a hardware solution for enforcing lexical scope for variables at run-time; it consists of a small set of proposed processor extensions as well as associated compiler instrumentation. The second, PARTS and PACStack, are software solutions that make use of an existing hardware-assisted mechanism in ARM processors for pointer authentication (PA); they consist of a set of techniques that use PA in new ways to thwart run-time attacks.
I will also briefly touch on other emerging hardware security extensions and potential research directions in exploring how best to use them.
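The PA mechanism that PARTS and PACStack build on can be illustrated with a toy software simulation. The sketch below is illustrative only: real ARM PA uses dedicated instructions (e.g., `pacia`/`autia`) and a keyed cipher in hardware, not HMAC, and stores the authentication code in the unused upper bits of a 64-bit pointer. Here a return address is signed with the stack pointer as a context modifier, so a corrupted pointer fails authentication:

```python
import hashlib
import hmac
import os

PA_KEY = os.urandom(16)  # models the per-process PA key held by hardware

def pac(pointer: int, modifier: int) -> int:
    """Compute a 16-bit pointer authentication code (PAC)."""
    msg = pointer.to_bytes(8, "little") + modifier.to_bytes(8, "little")
    tag = hmac.new(PA_KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(tag[:2], "little")

def sign(pointer: int, modifier: int) -> int:
    """Embed the PAC in the pointer's unused top bits (pacia-style)."""
    return (pac(pointer, modifier) << 48) | pointer

def auth(signed: int, modifier: int) -> int:
    """Strip and check the PAC; raise on mismatch (autia-style fault)."""
    pointer = signed & ((1 << 48) - 1)
    if (signed >> 48) != pac(pointer, modifier):
        raise ValueError("pointer authentication failure")
    return pointer

ret_addr = 0x0000_7FFF_DEAD_BEEF  # return address (fits in 48 bits)
sp = 0x0000_7FFF_FFFF_F000        # stack pointer used as the modifier
signed = sign(ret_addr, sp)
assert auth(signed, sp) == ret_addr       # legitimate return succeeds
try:
    auth(signed ^ 0x10, sp)               # attacker redirects the pointer
    raise AssertionError("tampering not detected")
except ValueError:
    pass
```

Binding the PAC to the stack pointer means a signed pointer cannot simply be replayed in a different stack frame, which is the kind of reuse-attack hardening the talk's techniques develop further.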
15:30 – 16:00
16:00 – 16:45
Is Differential Privacy what you want to protect privacy in ML?
Prof. Florian Kerschbaum,
David R. Cheriton School of Computer Science, University of Waterloo, Canada
In this talk, we will look at two notions of privacy in machine learning: differential privacy and empirical privacy based on attack evaluation. We will see how these two notions compare, identifying edge cases. I will show that differential privacy does not necessarily protect against privacy attacks, such as membership inference attacks. I will also show that if one wants to protect against membership inference attacks, differential privacy is not necessarily the method of choice. This leaves open the question of whether differential privacy is what you want to protect privacy in machine learning.
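As background for the comparison, here is a minimal sketch of the Laplace mechanism, the standard way to achieve epsilon-differential privacy for a numeric query with bounded sensitivity (illustrative only, not the evaluation methodology from the talk):

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: random.Random) -> float:
    """Release true_value with epsilon-DP by adding Laplace(0, sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

rng = random.Random(0)
true_count = 100   # e.g., how many training records satisfy some predicate
epsilon = 0.1      # smaller epsilon -> more noise -> stronger formal guarantee
samples = [laplace_mechanism(true_count, 1.0, epsilon, rng)
           for _ in range(100_000)]
mean = sum(samples) / len(samples)
# The mechanism is unbiased; a single release hides whether any one record
# is present (neighboring counts 100 vs. 101 yield similar distributions).
assert abs(mean - true_count) < 1.0
```

The talk's tension lives precisely here: the formal guarantee bounds distinguishability of neighboring datasets, which is not the same quantity a concrete membership inference attack measures.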
16:45 – 17:30
Stakeholder perspectives on privacy-preserving machine learning
Prof. Simone Fischer-Hübner,
Computer Science Department at Karlstad University, Sweden
Privacy-enhancing technologies based on homomorphic encryption, functional encryption, multi-party computation, or differential privacy can be used as building blocks for implementing privacy-preserving machine learning, which can then be outsourced to third-party cloud servers while preventing the leakage of personal data.
The EU H2020 project PAPAYA has researched and developed a Platform for Privacy-preserving Data Analytics for medical and telecom use cases. This talk discusses the stakeholder and user perspectives and requirements that we elicited for these use cases to make PAPAYA's technology trustworthy.
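One of the building blocks mentioned above, multi-party computation, can be illustrated with a minimal additive secret-sharing sketch (illustrative only; PAPAYA's actual platform is considerably more involved). Each data holder splits its input into random shares that sum to the input; parties sum the shares they hold, and only the combined total is revealed:

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret: int, n_parties: int, rng: random.Random) -> list:
    """Split a secret into n additive shares that sum to it mod PRIME.
    Any subset of fewer than n shares is uniformly random."""
    shares = [rng.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares: list) -> int:
    """Each party sums the i-th share of every input; combining the
    per-party sums reveals only the total, never individual inputs."""
    party_sums = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(party_sums) % PRIME

rng = random.Random(42)
inputs = [12, 7, 30]  # e.g., per-hospital patient counts (hypothetical)
shared = [share(x, 3, rng) for x in inputs]
assert aggregate(shared) == sum(inputs)  # total recovered, inputs stay hidden
```

The same principle lets a model be trained or evaluated on joint data no single party is willing to disclose, which is the setting the medical and telecom use cases motivate.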
17:30 – 18:00
Prof. Ahmad-Reza Sadeghi, TU Darmstadt