Distinguished Lecture Series

15.12.2021 - 16:00

Herbert Bos

A Stab in the Dark: Blind Attacks on the Linux Kernel

Join: https://tuwien.zoom.us/j/97853818273


Bio: Herbert Bos is a full professor at the Vrije Universiteit Amsterdam and co-leads the VUSec Systems Security research group with Cristiano Giuffrida and Erik van der Kouwe.
He obtained an ERC Starting Grant to work on reverse engineering and an NWO VICI grant to work on vulnerability detection. These and other systems security topics are still close to his heart. Other research interests include OS design, networking, and dependable systems.
Herbert moved to the Vrije Universiteit after approximately four years at the Universiteit Leiden. Before that, he obtained his Ph.D. from the Cambridge University Computer Laboratory, followed by a brief stint at KPN Research (now TNO Telecom).

Abstract: Unlike what we see in the movies, attacks on high-value targets are not easy. While there are still plenty of vulnerabilities in the software, exploitation is difficult due to multiple layers of protection. For instance, since data pages are no longer executable on modern systems, attackers cannot inject their own malicious code to execute and are forced to reuse snippets of code already present in the victim software instead. By stringing together such existing snippets of benign software, attackers can still create their malicious payloads. However, to do so, they need to know where all these snippets are in memory, which is made difficult by the randomized address spaces of today's software.
Researchers have shown that in some cases it is possible to attack "blind"—without knowing anything about the target software. Unfortunately for the attacker, these blind attacks induce thousands of crashes, making them applicable only in cases where (a) the software can handle crashes, and (b) nobody raises an alarm when thousands of processes crash. Such cases are rare. In particular, they do not apply to truly high-value targets such as the Linux kernel, where even a single crash is fatal.
If software exploitation is difficult, what about hardware attacks such as Spectre? Here, too, exploitation is tough due to powerful mitigations in hardware and software. In the case of the Linux kernel, developers have gone through the kernel code with a fine-tooth comb to eliminate the known Spectre "gadgets" and neutralize possible attacks.
However, even if traditional software exploitation and speculative execution attacks are difficult in isolation, I will show that we can combine them to create very powerful blind attacks, even against the Linux kernel. In particular, software vulnerabilities make Spectre attacks possible in code that we previously considered safe. In return, speculative execution makes it possible for an attacker to probe blindly in the victim's address space without ever crashing.
Such a symbiotic combination of attack vectors makes the development of mitigations much harder: we can no longer limit ourselves to a single threat, say, memory errors or speculative execution, but must also consider their combinations.


12.01.2022 - 16:00

Yang Zhang

Quantifying Privacy Risks of Machine Learning Models

Join: https://tuwien.zoom.us/j/91926100499


Bio: Yang Zhang is a faculty member at the CISPA Helmholtz Center for Information Security, Germany. Previously, he was a group leader at CISPA. He obtained his Ph.D. degree from the University of Luxembourg in November 2016. Yang's research interests lie at the intersection of privacy and machine learning. Over the years, he has published multiple papers at top venues in computer science, including WWW, CCS, NDSS, and USENIX Security. His work received the NDSS 2019 Distinguished Paper Award.

Abstract: Machine learning has made tremendous progress during the past decade. While it continues to improve our daily lives, recent research shows that machine learning models are vulnerable to various privacy attacks. In this talk, I will cover three of our recent works on quantifying the privacy risks of machine learning models. First, I will talk about recent developments in membership inference. Second, I will discuss data reconstruction attacks against online learning. Finally, I will present link stealing attacks against graph neural networks.
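
To make the first topic concrete, below is a minimal sketch of one classic membership-inference baseline, confidence thresholding: models tend to be more confident on examples they were trained on. The types, names, and threshold here are illustrative assumptions, not from the talk, and the attacks covered in the talk are considerably more sophisticated.

```typescript
// Minimal sketch of a confidence-threshold membership-inference baseline.
// All names and the threshold value are hypothetical.

type Example = { features: number[]; label: number };
type Model = (features: number[]) => number[]; // returns per-class probabilities

// A model tends to be more confident on its training data. If the
// probability assigned to the true label exceeds a threshold, guess "member".
function guessMembership(model: Model, ex: Example, threshold = 0.9): boolean {
  const confidenceOnTrueLabel = model(ex.features)[ex.label];
  return confidenceOnTrueLabel > threshold;
}

// In practice the threshold is calibrated on "shadow models" trained on data
// the attacker controls, so member and non-member confidence distributions
// can be compared directly.
```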


09.03.2022 - 16:00

Dan Boneh

Join: https://tuwien.zoom.us/j/92744131357


Bio: Dan Boneh heads the applied cryptography group and co-directs the computer security lab at Stanford University. His research focuses on applications of cryptography to computer security. His work includes cryptosystems with novel properties, web security, security for mobile devices, and cryptanalysis. He is the author of over a hundred publications in the field and is a Packard and Alfred P. Sloan fellow. He is a recipient of the 2014 ACM Prize in Computing and the 2013 Gödel Prize. In 2011, Dr. Boneh received the Ishii Award for industry education innovation. Dan Boneh received his Ph.D. from Princeton University and joined Stanford University in 1997.


13.04.2022 - 16:00

Elissa Redmiles

Join: https://tuwien.zoom.us/j/97654713549


Bio: Dr. Elissa M. Redmiles is a faculty member and research group leader of the Safety & Society group at the Max Planck Institute for Software Systems. She additionally serves as a consultant and researcher at multiple institutions, including Microsoft Research and Facebook. Dr. Redmiles uses computational, economic, and social science methods to understand users' security, privacy, and online safety-related decision-making processes. Her work has been featured in popular press publications such as Scientific American, Wired, Business Insider, Newsweek, Schneier on Security, and CNET, and has been recognized with multiple Distinguished Paper Awards at USENIX Security as well as research awards including a Facebook Research Award and the John Karat Usable Privacy and Security Research Award. Dr. Redmiles received her B.S. (cum laude), M.S., and Ph.D. in Computer Science from the University of Maryland.


Past Lectures

24.11.2021 - 16:00

Roger Wattenhofer

Cascade: Asynchronous Proof-of-Stake

Resources: [video]


Bio: Roger Wattenhofer is a full professor at the Information Technology and Electrical Engineering Department, ETH Zurich, Switzerland. He received his doctorate in Computer Science from ETH Zurich. He also worked for some years at Microsoft Research in Redmond, Washington, at Brown University in Providence, Rhode Island, and at Macquarie University in Sydney, Australia.
Roger Wattenhofer’s research interests span a variety of algorithmic and systems aspects of computer science and information technology, e.g., distributed systems, positioning systems, wireless networks, mobile systems, social networks, financial networks, and deep neural networks. He publishes in different communities: distributed computing (e.g., PODC, SPAA, DISC), networking and systems (e.g., SIGCOMM, SenSys, IPSN, OSDI, MobiCom), algorithmic theory (e.g., STOC, FOCS, SODA, ICALP), and more recently also machine learning (e.g., NeurIPS, ICLR, ACL, AAAI). His work has received multiple awards, e.g., the Prize for Innovation in Distributed Computing for his work on distributed approximation. He published the book “Blockchain Science: Distributed Ledger Technology”, which has been translated into Chinese, Korean, and Vietnamese.

Abstract: Nakamoto’s Bitcoin protocol has taught the world how to achieve trust without a designated trusted party. However, Bitcoin’s proof-of-work solution comes with serious costs and compromises. While solving the energy problem of proof-of-work, proof-of-stake introduces problems of its own. We relax the usual notion of consensus to extract the requirements necessary for an efficient cryptocurrency. Towards this end, we introduce a blockchain design called Cascade that is consensusless, asynchronous, scalable, deterministic, and efficient. While Cascade has its own limitations, it should serve as a good basis for discussing better blockchain designs.
Cascade is joint work with Jakub Sliwinski.


13.10.2021 - 16:00

Bryan Ford

Digital Personhood: Towards Technology that Securely Serves People

Resources: [video] [slides]


Bio: Prof. Bryan Ford leads the Decentralized/Distributed Systems (DEDIS) research laboratory at the Swiss Federal Institute of Technology in Lausanne (EPFL). Ford's research focuses on decentralized systems, security and privacy, digital democracy, and blockchain technology. Since earning his Ph.D. at MIT, Ford has held faculty positions at Yale University and EPFL. His awards include the Jay Lepreau Best Paper Award, the NSF CAREER award, and the AXA Research Chair. Inventions he is known for include parsing expression grammars, delegative or liquid democracy, and scalable sharded blockchains. He has advised numerous companies and governments, including serving on the US DARPA Information Science and Technology (ISAT) Study Group and on the Swiss Federal E-voting Experts Dialog.

Abstract: Internet technologies have often been called “democratizing” by virtue of giving anyone a voice in countless online forums. Technology cannot actually be “democratizing” by democratic principles, however, unless it serves everyone, offers everyone not just a voice but an equal voice, and is accountable to and ultimately governed by the people it serves. Today's technology offers not democracy but guardianship, subjecting our online lives to the arbitrary oversight of unelected employees, committees, platforms, and algorithms, which serve profit motives or special interests over our broader interests. Can we build genuinely democratizing technology that serves its users inclusively, equally, and securely?
A necessary first step is digital personhood: enabling technology to distinguish between real people and fake accounts such as sock puppets, bots, or deep fakes. Digital identity approaches undermine privacy and threaten our effective voice and freedoms, however, both in existing real-world democracies and in online forums that we might wish would embody democratic ideals. An emerging ecosystem of “proof of personhood” schemes attempts to give willing participants exactly one credential each while avoiding the privacy risks of digital identity. Proof of personhood schemes may operate in the physical world or online, building on security foundations such as in-person events, biometrics, social trust networks, and Turing tests. We will explore the promise and challenges of secure digital personhood, and the tradeoffs of different approaches along the key metrics of security, privacy, inclusion, and equality. We will cover further security challenges such as resisting collusion, coercion, or vote-buying. Finally, we will outline a few applications of secure digital personhood, both already prototyped and envisioned.


12.05.2021 - 16:00

Thorsten Holz

Fuzz Testing and Beyond

Resources: [video]


Bio: Thorsten Holz is a professor in the Faculty of Electrical Engineering and Information Technology at Ruhr-University Bochum, Germany. His research interests include technical aspects of secure systems, with a specific focus on systems security. Currently, his work concentrates on reverse engineering, automated vulnerability detection, and studying the latest attack vectors. He received the Dipl.-Inform. degree in Computer Science from RWTH Aachen, Germany (2005), and the Ph.D. degree from the University of Mannheim (2009). Prior to joining Ruhr-University Bochum in April 2010, he was a postdoctoral researcher in the Automation Systems Group at the Technical University of Vienna, Austria. In 2011, Thorsten received the Heinz Maier-Leibnitz Prize from the German Research Foundation (DFG), and in 2014 an ERC Starting Grant. Furthermore, he is Co-Spokesperson of the Cluster of Excellence "CASA - Cyber Security in the Age of Large-Scale Adversaries" (with C. Paar and E. Kiltz).

Abstract: In this talk, I will provide an overview of our recent progress in randomized testing and present some of the methods we have developed over the past years. These include fuzzing of OS kernels and hypervisors, grammar-based fuzzing of complex interpreters, and fuzz testing of stateful systems. As part of this work, we have already found hundreds of software bugs, some of them in well-known programs and systems. In addition, I will also talk about a method to prevent fuzz testing and conclude with an outlook on open challenges that still need to be solved.
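
For readers unfamiliar with fuzz testing, the core loop can be captured in a few lines: mutate an input, run the target, record what breaks. The sketch below is a toy mutation-based fuzzer; thrown exceptions stand in for crashes and the target function is a hypothetical stand-in. The techniques in the talk (coverage feedback, grammars, stateful harnesses) go far beyond it.

```typescript
import { randomInt } from "node:crypto";

// Toy mutation-based fuzzer: flip random bytes in a seed input and record
// inputs that make the target throw (standing in for a crash).
function fuzz(
  target: (input: Buffer) => void,
  seed: Buffer,
  iterations = 10000
): Buffer[] {
  const crashes: Buffer[] = [];
  for (let i = 0; i < iterations; i++) {
    const mutated = Buffer.from(seed); // copy the seed
    const flips = 1 + randomInt(4);    // mutate a few random bytes
    for (let j = 0; j < flips; j++) {
      mutated[randomInt(mutated.length)] = randomInt(256);
    }
    try {
      target(mutated);
    } catch {
      crashes.push(mutated);           // save the failing input for triage
    }
  }
  return crashes;
}

// Example: fuzz a (hypothetical) parser.
// const crashes = fuzz((buf) => JSON.parse(buf.toString()), Buffer.from('{"a":1}'));
```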


14.04.2021 - 16:00

Sarah Meiklejohn

Privacy and Verifiability in Certificate Transparency


Bio: Sarah Meiklejohn is a Professor of Cryptography and Security in the Computer Science department at University College London (UCL). She is affiliated with the Information Security Group, and is also a member of the Open Music Initiative, a fellow of the Alan Turing Institute, and an Associate Director of the Initiative for Cryptocurrencies and Contracts (IC3).

From November 2019 to December 2020, she was a visiting researcher at Google UK, working with the Certificate Transparency / TrustFabric team. As of December 2020 she is a Staff Research Scientist there.

Sarah Meiklejohn received a PhD in Computer Science from the University of California, San Diego under the joint supervision of Mihir Bellare and Stefan Savage. During her PhD, she spent the summers of 2011 and 2013 at MSR Redmond, working in the cryptography group with Melissa Chase. She obtained an Sc.M. in Computer Science from Brown University under the guidance of Anna Lysyanskaya in 2009, and an Sc.B. in Mathematics from Brown in 2008.

Abstract: In recent years, there has been increasing recognition of the benefits of having services provide auditable logs of data, as demonstrated by the deployment of Certificate Transparency and the development of other transparency projects. Despite their success, extending the current form of these projects can yield improved guarantees in terms of verifiability, efficiency, and privacy. This talk touches on these considerations by discussing efficient solutions for verifiability, in the form of a gossip protocol and a new verifiable data structure, and the difficulties of achieving privacy-preserving auditing.
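
At the heart of Certificate Transparency is an append-only Merkle tree: the log can prove that a certificate is included under a published root hash with a logarithmic-size proof. The sketch below shows only the verification idea, assuming a perfect binary tree; the actual RFC 6962 definitions handle partial trees and carry additional context, and the talk's new verifiable data structure goes further still.

```typescript
import { createHash } from "node:crypto";

const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

// RFC 6962-style hashing: domain-separate leaves and interior nodes.
const leafHash = (leaf: Buffer) =>
  sha256(Buffer.concat([Buffer.from([0x00]), leaf]));
const nodeHash = (l: Buffer, r: Buffer) =>
  sha256(Buffer.concat([Buffer.from([0x01]), l, r]));

// Verify a Merkle inclusion proof: hash the leaf, then fold in each sibling
// hash, using the leaf's index to decide whether the sibling sits on the
// left or the right at each level. A sketch for perfect trees only.
function verifyInclusion(
  leaf: Buffer,
  index: number,
  proof: Buffer[],
  root: Buffer
): boolean {
  let hash = leafHash(leaf);
  for (const sibling of proof) {
    hash = index % 2 === 1 ? nodeHash(sibling, hash) : nodeHash(hash, sibling);
    index = Math.floor(index / 2);
  }
  return hash.equals(root);
}
```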


10.03.2021 - 16:00

Bart Preneel

Proximity tracing with Coronalert: lessons learned

Resources: [video]


Bio: Prof. Bart Preneel is a full professor at the KU Leuven, where he heads the COSIC research group. His main research interests are cryptography, information security and privacy. He has served as president of the IACR (International Association for Cryptologic Research) and is co-founder and chairman of the Board of the information security cluster LSEC. He is a member of the Advisory group of ENISA, of the Board of the Cyber Security Coalition Belgium and of the Academia Europaea. He received the RSA Award for Excellence in the Field of Mathematics (2014), the IFIP TC11 Kristian Beckman award (2015) and the ESORICS Outstanding Research Award (2017). In 2015 he was elected as fellow of the IACR. He frequently consults for industry and government about security and privacy technologies.

Abstract: The corona pandemic is the first major pandemic in times of big data, AI, and smart devices. Some nations have deployed these technologies on a large scale to support a trace/quarantine/test/isolate strategy in order to contain the pandemic. However, serious concerns have been raised w.r.t. the privacy implications of some solutions, which make them incompatible with the privacy and human rights protected by EU law. This talk focuses on the proximity tracing solution developed by the DP-3T (Decentralized Privacy-Preserving Proximity Tracing) consortium. This approach has been rolled out in more than 40 countries and states, with the support of Google and Apple. We will provide some details on the experience with the Coronalert app in Belgium, which is connected to the European Federated Gateway Service; at this moment the gateway covers 11 EU countries and more than 40 million users. The talk will discuss the lessons learned from this large-scale deployment, in which the principles of privacy-by-design and data minimization have played a central role.
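
The privacy-by-design core of DP-3T fits in a few lines: each phone holds a secret day key that evolves through a one-way hash and broadcasts short-lived ephemeral IDs derived from it, so publishing one day key after a positive test exposes only that day's IDs and nothing about earlier days. The sketch below follows this skeleton but simplifies the whitepaper's design, which expands a PRF output with AES in counter mode; here each ephemeral ID is derived directly with HMAC, and names and sizes are illustrative.

```typescript
import { createHash, createHmac } from "node:crypto";

// Simplified DP-3T-style key schedule: the secret day key evolves through a
// one-way hash, so revealing SK_t says nothing about previous days' keys.
const nextDayKey = (sk: Buffer): Buffer =>
  createHash("sha256").update(sk).digest();

// Derive a day's ephemeral Bluetooth IDs from the day key. The actual design
// expands a PRF output with AES-CTR; HMAC with a counter is used here purely
// to keep the sketch short.
function ephemeralIds(dayKey: Buffer, count: number): Buffer[] {
  const ids: Buffer[] = [];
  for (let i = 0; i < count; i++) {
    ids.push(
      createHmac("sha256", dayKey)
        .update(`broadcast key ${i}`)
        .digest()
        .subarray(0, 16) // 16-byte EphID, broadcast in rotation during the day
    );
  }
  return ids;
}
```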


10.02.2021 - 16:00

Henry Corrigan-Gibbs

SafetyPin: Encrypted Backups with Human-Memorable Secrets

Resources: [video]


Bio: Henry Corrigan-Gibbs (he/him) is an assistant professor in MIT's Department of Electrical Engineering and Computer Science and a principal investigator in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Henry builds computer systems that provide strong security and privacy properties using ideas from the domains of cryptography, computer security, and computer systems. Henry completed his Ph.D. in the Applied Cryptography Group at Stanford, where he was advised by Dan Boneh. After that, he was a postdoc in Bryan Ford's research group at EPFL in Lausanne, Switzerland.

For their research efforts, Henry and his collaborators have received the Best Young Researcher Paper Award three times (at Eurocrypt in 2020, at the Theory of Cryptography Conference in 2019, and at Eurocrypt in 2018), the 2016 Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, and the 2015 IEEE Security and Privacy Distinguished Paper Award. Henry's work has been cited by the IETF and NIST, and his libprio library for privacy-preserving telemetry data collection ships today with the Firefox browser.

Abstract: This talk will present the design and implementation of SafetyPin, a system for encrypted mobile-device backups. Like existing cloud-based mobile-backup systems, including those of Apple and Google, SafetyPin requires users to remember only a short PIN and defends against brute-force PIN-guessing attacks using hardware security protections. Unlike today's systems, SafetyPin splits trust over a cluster of hardware security modules (HSMs) in order to provide security guarantees that scale with the number of HSMs. In this way, SafetyPin protects backed-up user data even against an attacker that can adaptively compromise many of the system's constituent HSMs. SafetyPin provides this protection without sacrificing scalability or fault tolerance. Decentralizing trust while respecting the resource limits of today's HSMs requires a synthesis of systems-design principles and new cryptographic tools. We evaluate SafetyPin on a cluster of 100 low-cost HSMs and show that a SafetyPin-protected recovery takes 1.01 seconds. To process 1B recoveries a year, we estimate that a SafetyPin deployment would need 3,100 low-cost HSMs.
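
The trust-splitting idea at the core of the design can be illustrated with the simplest possible scheme: an n-of-n XOR secret sharing of the backup encryption key, where compromising all but one share reveals nothing about the key. This is only a sketch of the principle; SafetyPin's actual protocol additionally provides PIN-based recovery, brute-force resistance, fault tolerance, and scalability, which plain XOR sharing does not.

```typescript
import { randomBytes } from "node:crypto";

// Illustrative n-of-n XOR secret sharing: the key is recoverable only when
// every share is combined, so an attacker holding all but one share learns
// nothing. A sketch of the trust-splitting principle only.
function splitKey(key: Buffer, numShares: number): Buffer[] {
  const shares: Buffer[] = [];
  const last = Buffer.from(key); // running XOR accumulator
  for (let i = 0; i < numShares - 1; i++) {
    const share = randomBytes(key.length);
    for (let b = 0; b < key.length; b++) last[b] ^= share[b];
    shares.push(share); // one share per HSM
  }
  shares.push(last);    // final share makes the XOR of all shares equal key
  return shares;
}

function combineShares(shares: Buffer[]): Buffer {
  const key = Buffer.alloc(shares[0].length);
  for (const share of shares)
    for (let b = 0; b < key.length; b++) key[b] ^= share[b];
  return key;
}
```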


13.01.2021 - 16:00

Andrei Sabelfeld

SandTrap: Securing JavaScript-driven Trigger-Action Platforms

Resources: [slides]


Bio: Andrei Sabelfeld is a Professor in the Department of Computer Science and Engineering at Chalmers University of Technology in Gothenburg, Sweden. Before joining Chalmers as faculty, he was a Research Associate at Cornell University in Ithaca, NY, USA. Andrei Sabelfeld's research ranges from foundations to practice in a range of topics in computer security and privacy. Today, he leads a group of researchers at Chalmers engaged in a number of internationally visible projects on software security, web security, IoT security, and applied cryptography.

Abstract: Trigger-Action Platforms (TAPs) seamlessly connect a wide variety of otherwise unconnected devices and services, ranging from IoT devices to cloud services and social networks. While enabling novel and exciting applications, TAPs raise critical security and privacy concerns because a TAP is effectively a "person-in-the-middle" between trigger and action services. Third-party code, routinely deployed as "apps" on TAPs, further exacerbates these concerns.

This talk focuses on JavaScript-driven TAPs. We show that the popular IFTTT and Zapier platforms and an open-source alternative, Node-RED, are susceptible to various attacks, ranging from massively exfiltrating data from unsuspecting users to taking over the entire platform. We report on the changes made by the platforms in response to our findings and present an empirical study to assess the security implications.

Motivated by the need for a secure yet flexible way to integrate third-party JavaScript apps, we propose SandTrap, a sandboxing approach that allows for isolating apps while letting them communicate via clearly defined interfaces. We present a formalization for a core language that soundly and transparently enforces fine-grained allowlist policies at module-, API-, value-, and context-level. We develop a novel proxy-based JavaScript monitor that encompasses a powerful policy generation mechanism and enables us to instantiate SandTrap to IFTTT, Zapier, and Node-RED. We illustrate on a set of benchmarks how SandTrap enforces a variety of policies while incurring a tolerable runtime overhead.
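
To give a flavor of proxy-based, API-level allowlisting, the sketch below wraps a module in a JavaScript Proxy that exposes only allowlisted members to an untrusted app. This is a deliberately minimal illustration, not SandTrap's implementation; the actual monitor also enforces value- and context-level policies and proxies objects recursively.

```typescript
// Minimal flavor of proxy-based, API-level allowlisting: an untrusted app
// sees a module only through a Proxy that exposes allowlisted members and
// rejects everything else.
function sandbox<T extends object>(mod: T, allowlist: Set<string>): T {
  return new Proxy(mod, {
    get(target, prop, receiver) {
      if (typeof prop === "string" && !allowlist.has(prop)) {
        throw new Error(`policy violation: access to "${prop}" is not allowed`);
      }
      return Reflect.get(target, prop, receiver);
    },
  });
}

// Example: a hypothetical app may read files but not delete them.
// import * as fs from "node:fs";
// const safeFs = sandbox(fs, new Set(["readFileSync"]));
// safeFs.readFileSync("config.json"); // allowed
// safeFs.unlinkSync("config.json");   // throws: policy violation
```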