Distinguished Lecture Series

05.07.2024 - 10:00

Sven Bugiel

Access Control in Mobile Software Stacks: Can we do fundamentally better?

Location: TU Wien, FAV Hörsaal 3 (Favoritenstr. 9-11)

Or join online: https://tuwien.zoom.us/j/61494248456


Bio: Sven Bugiel is a security researcher focusing on (mobile) operating system security and trusted computing. In the past, he worked in particular on mandatory access control systems for the Android OS and on integrating hardware security building blocks into mobile operating systems. This interest has extended to object-capability systems and the development of new confidential computing solutions. More recently, he has also worked on the intersection of those topics with human-centered studies, authentication, and data science. Sven has been a tenured faculty member at the CISPA Helmholtz Center for Information Security in Saarbrücken, Germany, since the end of 2021.

Abstract: A cornerstone of mobile privacy and security is the permission system that enables users to selectively grant or revoke apps’ access to data. This pivotal role has earned permissions a lot of attention from the research community over the last 15 years, which has identified their shortcomings and suggested improvements. In this talk, we briefly recap the access control model of the permission system “under the hood” and then take a step back to ask whether we can do fundamentally better at the system design level. Central to this question is the existence of an ambient authority as the root of many problems, and how we can get rid of it. As food for thought, we base this discussion on recent research that proposes object capabilities as an alternative access control model for Android, and on a look at Google Fuchsia, Google’s latest operating system, which is capability-based. We present early results showing that even Fuchsia’s design is not yet a sufficient solution, and we discuss the challenges of such a paradigm shift in access control for (mobile) software stacks.
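
To make the contrast concrete, the following minimal Python sketch (all names hypothetical; this is neither Android nor Fuchsia code) shows why ambient authority is problematic: once a permission is granted, any code in the process can exercise it, whereas in an object-capability model only code that has explicitly been handed an unforgeable reference can touch the resource.

    # Minimal sketch of ambient authority vs. object capabilities.
    # All names are hypothetical; this is neither Android nor Fuchsia code.

    GRANTED_PERMISSIONS = {"READ_CONTACTS"}   # ambient, process-wide state

    def read_contacts_ambient():
        # Once granted, ANY code in the process can call this, including
        # third-party libraries the user never intended to trust.
        if "READ_CONTACTS" not in GRANTED_PERMISSIONS:
            raise PermissionError("READ_CONTACTS not granted")
        return ["alice", "bob"]

    class ContactsCapability:
        # An unforgeable reference: holding the object IS the permission.
        def __init__(self, contacts):
            self._contacts = list(contacts)
        def read(self):
            return list(self._contacts)

    def app_component(contacts_cap=None):
        # Without being handed a capability, this code cannot even name
        # the contacts store: there is no ambient fallback to reach for.
        return contacts_cap.read() if contacts_cap else []

    print(read_contacts_ambient())                        # reachable from anywhere
    print(app_component(ContactsCapability(["alice"])))   # explicitly granted
    print(app_component())                                # no capability, no access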


Past Lectures

24.06.2024 - 16:00

Konrad Rieck

On Challenges in Defending Against Code Stylometry


Bio: Konrad Rieck is a Professor of Computer Science at TU Berlin, where he heads the Chair of Machine Learning and Security within the Berlin Institute for the Foundations of Learning and Data. Additionally, he is a Guest Professor at TU Wien. Previously, he worked at TU Braunschweig, the University of Göttingen, and the Fraunhofer Institute FIRST. His research interests revolve around computer security and machine learning. His group develops novel methods for detecting computer attacks, analyzing malicious software, and discovering security vulnerabilities. Moreover, the group explores the security and privacy of learning algorithms. Konrad is also interested in efficient algorithms for analyzing structured data, such as strings, trees, and graphs. His Erdős number is 3 (Müller → Jagota → Erdős) and his Bacon number is ∞. He is a very distant academic relative of Carl Friedrich Gauß, although this doesn’t help when solving math problems.

Abstract: Source code often contains subtle stylistic patterns that can be used to identify its developer, an approach known as code stylometry. While a line of research has shown that code stylometry can recognize one programmer among hundreds of others, defenses against this approach have received little attention so far. In this talk, we address this research gap from two perspectives. First, we introduce a method for automatically imitating programming styles through semantic-preserving transformations. This method allows us to mislead identification and thereby protect developers’ privacy. Second, however, we prove that true anonymity cannot be achieved in this way and that stylistic patterns remain in source code under realistic conditions. Our results thus underscore the need for raising awareness and for further research into protecting developers’ privacy.
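
As a toy illustration of what a semantic-preserving transformation looks like, the following Python snippet uses the standard ast module to consistently rename identifiers, changing a stylistic surface feature without changing behavior; the transformations in the talk target much richer stylistic patterns.

    # Toy semantic-preserving transformation: consistently renaming
    # identifiers changes a stylistic surface feature (naming style)
    # without changing program behavior.
    import ast, textwrap

    source = textwrap.dedent("""
        def total(values):
            result = 0
            for v in values:
                result += v
            return result
    """)

    RENAMES = {"values": "xs", "result": "acc", "v": "item"}

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            node.id = RENAMES.get(node.id, node.id)
            return node
        def visit_arg(self, node):
            node.arg = RENAMES.get(node.arg, node.arg)
            return node

    tree = Renamer().visit(ast.parse(source))
    print(ast.unparse(tree))  # same semantics, different "style" (Python 3.9+)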


26.03.2024 - 13:00

Reiner Hähnle

Context-aware Trace Contracts


Bio: Reiner Hähnle is Professor in Software Engineering at the Computer Science Department of TU Darmstadt. He has wide-ranging interests in the formal foundations of software design, of programming languages, and of quality assurance by verification. He is co-initiator of the KeY project that maintains the well-known, eponymous Java verification tool and he is co-designer of the active object language ABS. He is co-founder of the Tableaux and IJCAR conference series and currently SC Chair of FASE. Notably, he was the first ever Wine Chair of an international Computer Science conference at ECOOP 2014.

Abstract: The behavior of concurrent, asynchronous procedures depends in general on the call context, because of the global protocols that govern scheduling. This context cannot be specified with the state-based Hoare-style contracts common in deductive verification. Recent work generalized state-based contracts to trace contracts, which make it possible to specify the internal behavior of a procedure, such as calls or state changes, but not its call context. In this talk we discuss a program logic of context-aware trace contracts for specifying the global behavior of asynchronous programs. We also provide a sound proof system that addresses two challenges: first, to observe the program state not merely at the end points of a procedure, we introduce the novel concept of an observation event; second, to combat the combinatorial explosion of possible call sequences of procedures, we adapt Liskov’s principle of behavioral subtyping to the analysis of asynchronous calls. This is joint work with Eduard Kamburjan (U Oslo) and Marco Scaletta (TU Darmstadt).
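
Schematically, and with notation simplified rather than taken from the paper, the generalization can be pictured as follows:

    % A Hoare-style state contract constrains only the end-point states of m:
    \{P\}\; m \;\{Q\}
    % A context-aware trace contract additionally constrains the event traces
    % surrounding m, where \theta_pre (the call context) and \theta_post are
    % trace formulas over events such as calls, state changes, and the
    % observation events introduced in the talk:
    \langle \theta_{\mathit{pre}} \rangle\; \{P\}\; m \;\{Q\}\; \langle \theta_{\mathit{post}} \rangle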


13.12.2023 - 13:00

Frank Leymann

Post-Quantum Security


Bio: Frank Leymann is the first Kurt Gödel Visiting Professor and an honorary professor at TU Wien. He studied Mathematics, Physics, and Astronomy at the University of Bochum, Germany. After receiving his master’s degree in 1982, he completed his PhD in Mathematics in 1984. Afterwards, he joined IBM Research and Development and worked for two decades in the IBM Software Group.
In 2004, Frank Leymann was appointed as a full professor of computer science at the University of Stuttgart, where he founded the Institute of Architecture of Application Systems and serves as its director. His research interests encompass middleware in general, pattern languages, and cloud computing, with a current strong focus on quantum computing.
Frank is an elected member of the Academy of Europe (Academia Europaea). He has published numerous papers in journals and proceedings, co-authored four textbooks, and holds more than 70 patents, especially in the areas of workflow management and transaction processing. He has served on the steering, program, and organization committees of many international conferences and is an (associate) editor of several journals.
From 2006 to 2011, he was a member of the scientific directorate of Schloss Dagstuhl (Leibniz Center of Computer Science). In 2019, he was appointed as a Fellow at the Center of Integrated Quantum Science and Technology (IQST), and in 2020 he was appointed as Member of the Expert Council for Quantum Computing of the German Government.

Abstract: We recall the underpinnings of classical encryption, factorization and elliptic curves, and their relation to discrete logarithms. After very briefly sketching the key resources of quantum computing, we show how Shor’s algorithm solves the discrete logarithm problem. Thus, quantum computing jeopardizes today’s cryptographic infrastructure.
Lattice-based cryptography is introduced, and a brief overview of Dilithium and Kyber is given. These two algorithms are believed to be quantum-safe, i.e., they promise to resist attacks by quantum (as well as classical) algorithms. While Dilithium and Kyber are already being standardized, a broad understanding of the above security threats is still missing in industry. A sketch of the activities of major industry players closes the talk.
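
For reference, the problem that Shor’s algorithm solves in polynomial time can be stated in one line:

    % Discrete logarithm problem (DLP): in a cyclic group G generated by g,
    % given h = g^x for a uniformly random exponent x, recover x:
    \text{given } g,\; h = g^{x} \in G, \quad \text{find } x = \log_g h .
    % Diffie-Hellman rests on the DLP in finite fields, elliptic-curve schemes
    % on the DLP in curve groups, and RSA on factoring; Shor's algorithm
    % solves all of these in quantum polynomial time.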


15.09.2023 - 14:00

Marco Mellia

Data, AI and Cybersecurity - a possible cocktail?


Bio: Marco Mellia is a full professor at Politecnico di Torino, Italy, where he is the coordinator of the SmartData@PoliTO center on Big Data, Machine Learning and Data Science. His research interests are in the areas of Internet monitoring, users’ characterisation, cyber security, and big data analytics applied to different domains. He has co-authored over 250 papers published in international journals and presented at leading conferences. He won the IRTF Applied Networking Research Prize at IETF-88 and best paper awards at IEEE P2P’12, ACM CoNEXT’13, and IEEE ICDCS’15. He is a Fellow of the IEEE and Editor-in-Chief of the Proceedings of the ACM on Networking.

Abstract: Modern Artificial Intelligence technologies, led by Deep Learning, have gained unprecedented momentum over the past decade. Following this wave of “AI summer”, the network research community has also embraced AI/ML algorithms to address many problems related to network operations, management and cybersecurity.
This talk will give an overview of some recent results in applying AI-based solutions to automatically process traffic traces and detect novel attacks, prevent cybersquatting attacks, support forensic investigations, and open new opportunities to protect users from possible abuses.


28.06.2023 - 16:00

Kenneth Paterson

Cryptography in the Wild

Resources: [video] [slides]


Bio: Kenneth Paterson is a Professor of Computer Science at ETH Zurich, where he leads the Applied Cryptography Group and is currently the head of department. He was Program Chair for Eurocrypt 2011 and Editor-in-Chief of the Journal of Cryptology from 2017 to 2020. He co-founded the Real World Cryptography series of conferences. His research has won best paper awards at conferences including ACM CCS 2016 and 2022, IEEE S&P 2022 and 2023, NDSS 2012, CHES 2018, and IMC 2018. He was made a Fellow of the IACR in 2017. In 2022, he won the Golden Owl best teaching award of the Department of Computer Science at ETH.

Abstract: In this talk I’ll discuss a research theme that has emerged in the last few years, namely the analysis of deployed cryptographic systems. There is a small but dedicated group of researchers who do this kind of work. I’ll reflect on how we conduct this kind of research, why we do it, and what we can learn from it about how developers use (and abuse) cryptography.


21.06.2023 - 16:00

Christian Cachin

Consensus in blockchains: Overview and recent results

Resources: [video] [slides]


Bio: Christian Cachin is a professor of computer science at the University of Bern, where he has been leading the Cryptology and Data Security Research Group since 2019. Prior to that, he worked at IBM Research - Zurich for more than 20 years. He has held visiting positions at MIT and EPFL and has taught at several universities during his career in industrial research. He graduated with a Ph.D. in Computer Science from ETH Zurich in 1997. He is an IACR Fellow, ACM Fellow, and IEEE Fellow, a recipient of multiple IBM Outstanding Technical Achievement Awards, and served as President of the International Association for Cryptologic Research (IACR) from 2014 to 2019.
With a background in cryptography, he is interested in all aspects of security in distributed systems and especially in cryptographic protocols, consistency, consensus, blockchains, and cloud-computing security. He is known for developing cryptographic protocols, particularly for achieving consensus and for executing distributed cryptographic operations over the Internet. In the area of cloud computing, he has contributed to standards in storage security and developed protocols for key management.
He has co-authored a textbook on distributed computing titled Introduction to Reliable and Secure Distributed Programming. While at IBM Research he made essential contributions to the development of Hyperledger Fabric, a blockchain platform aimed at business use.

Abstract: Reaching consensus despite faulty or corrupted nodes is a central question in distributed computing; it has received renewed attention in recent years because of its importance for cryptocurrencies and blockchain networks. Modern consensus protocols in this space have relied on a number of different methods for the nodes to influence protocol decisions. These methods include (1) traditional voting, where each node has one vote, (2) weighted voting, where voting power is proportional to stake in an underlying asset, and (3) proof-of-X, which demonstrates a cryptographically verifiable investment of a resource X, such as storage space, time waited, or computational work.
This talk will give an overview of blockchain consensus methods and then highlight recent work on constructing new consensus protocols and analyzing existing ones.


14.06.2023 - 16:00

Katharina Krombholz

Towards Understandable Privacy and Security Guarantees - The Human Factors Perspective

Resources: [video]


Bio: Katharina Krombholz is a tenured faculty member at the CISPA Helmholtz Center for Information Security in Saarbrücken, Germany, where she leads the usable security research group. She is PC co-chair of SOUPS 23 and 24 and active in both the security & privacy and the human-computer interaction communities. Katharina Krombholz obtained her PhD from TU Wien and has been a visiting researcher and faculty member at various institutions around the world, including LUMS in Lahore and NII in Japan.

Abstract: Due to the digitization of everyday things, humans and their surroundings are exposed to visible and invisible computers that continuously collect and share data. As a result, it is almost impossible for users and bystanders to understand these complex data sharing models along with their implications for privacy.
In this talk, I will present current trends in human-centric privacy research along with a series of lessons learned to make privacy understandable and effective for everyone.


31.05.2023 - 16:30

Mooly Sagiv

Scaling Formal Verification to Realistic Code with Applications to DeFi Verification

Resources: [slides]


Bio: Mooly Sagiv is a Full Professor and Chair of Software Systems at the School of Computer Science, Tel Aviv University, Israel. He is a recipient of a 2013 ERC Advanced Grant. He is also the CEO of Certora, a startup company providing formal verification of smart contracts. He characterizes his research in the following way:
"My research focuses on easing the task of developing reliable and efficient software systems. I am particularly interested in static program analysis which combines two disciplines: automated theorem proving and abstract interpretation. In the next decade, I am hoping to develop useful techniques in order to change the ways modern software is built. I am particularly interested in proof automation, given a program and a requirement, automatically prove or disprove that all executions of the program satisfying the requirements. This problem is in general undecidable and untractable.
I am interested in developing practical solutions to proof-automation by: (i) exploring modularity of the system and (ii) relying on semi-automatic and interactive processes, where the user manually and interactively guides the proof automation, and (iii) simplifying the verification task by using domain-specific abstractions expressed in a decidable logic.
I am applying these techniques to verify safety and liveness of distributed systems."

Abstract: Deductive verification tools like Dafny and Viper compile a program into an SMT formula and then utilize SMT solvers to find potential bugs or prove their absence. These tools are used to reason about small programs. However, the techniques do not scale, due to the inherent complexity of SMT solving and the need to specify exact procedure behavior. Furthermore, common coding patterns such as nonlinear expressions, unbounded data structures, and indirect storage complicate SMT reasoning.
We present the Certora Prover, a tool that checks the semantics of executable code against its intended behavior, written in CVL, a high-level declarative language for relational specifications. Developer-written specifications in CVL have prevented billion-dollar mistakes and improved code security. The Certora Prover has been used to secure 50% of the total value locked in the Ethereum blockchain, and specifications are written both by Solidity developers and by external security experts.
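
The workflow that such tools automate (compile code and specification into a logical formula, then ask an SMT solver for a counterexample) can be illustrated with a toy example using the z3 Python bindings; this is neither CVL nor the Certora Prover, just the underlying pattern:

    # Toy illustration of SMT-based checking (not CVL / the Certora Prover):
    # encode a code fragment and its spec, then ask z3 for a counterexample.
    # pip install z3-solver
    from z3 import Ints, Solver, Not, Implies, And, sat

    balance, amount, new_balance = Ints("balance amount new_balance")

    # "Program": a withdraw that forgets to check the balance first.
    program = new_balance == balance - amount

    # "Spec": non-negative inputs must leave a non-negative balance.
    spec = Implies(And(balance >= 0, amount >= 0), new_balance >= 0)

    solver = Solver()
    solver.add(program, Not(spec))     # search for a spec-violating execution
    if solver.check() == sat:
        print("counterexample:", solver.model())   # e.g., amount > balance
    else:
        print("verified: no violating execution exists")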


03.05.2023 - 16:00

Konrad Rieck

When Papers Choose their Reviewers: Adversarial Machine Learning in Conference Management Systems

Resources: [video] [slides]


Bio: Konrad Rieck is a professor at TU Berlin, where he leads the Chair of Machine Learning and Security as part of the Berlin Institute for the Foundations of Learning and Data. Previously, he held academic positions at TU Braunschweig, the University of Göttingen, and Fraunhofer Institute FIRST. His research focuses on the intersection of computer security and machine learning. He has published over 100 papers in this area and serves on the PCs of the top security conferences (system security circus). He has been awarded the CAST/GI Dissertation Award, a Google Faculty Award, and an ERC Consolidator Grant.

Abstract: The number of papers submitted to scientific conferences is steadily rising in many disciplines. To handle this growth, systems for automatic paper-reviewer assignments are increasingly used during the reviewing process. These systems employ statistical topic models to characterize the papers' content and automate their assignment to reviewers. In this talk, we investigate the security of this automation and introduce a new attack that modifies a given paper so that it selects its own reviewers. Our attack is based on a novel optimization strategy that fools the topic model with unobtrusive changes to the paper's content. In an empirical evaluation with a (simulated) conference, our attack successfully selects and removes reviewers, while the tampered papers remain plausible and often indistinguishable from innocuous submissions.
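
The automation under attack can be pictured in a few lines: papers and reviewers are both represented as topic distributions, and papers go to the most similar reviewers. The toy sketch below (made-up numbers, not the attacked system) shows how nudging a paper's topic vector flips its assignment:

    # Toy sketch of automated paper-reviewer assignment by topic
    # similarity (made-up numbers; real systems infer topics with
    # statistical topic models such as LDA).
    from math import sqrt

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

    # Topic mixtures over (crypto, systems, ML).
    reviewers = {"alice": [0.8, 0.1, 0.1], "bob": [0.1, 0.1, 0.8]}
    paper     = [0.6, 0.3, 0.1]    # the submitted paper
    tampered  = [0.1, 0.2, 0.7]    # the same paper after adversarial edits

    for version in (paper, tampered):
        best = max(reviewers, key=lambda r: cosine(version, reviewers[r]))
        print(best)                # assignment flips from "alice" to "bob"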


17.04.2023 - 11:00

Alessandro Abate

Logic meets Learning - Formal Synthesis with Neural Templates

Resources: [slides]


Bio: Alessandro Abate is Professor of Verification and Control in the Department of Computer Science at the University of Oxford, where he is also Deputy Head of Department. Earlier, he did research at Stanford University and at SRI International, and was an Assistant Professor at the Delft Center for Systems and Control, TU Delft. He received MS and PhD degrees from the University of Padova and UC Berkeley. His research interests lie in the formal verification and control of stochastic hybrid systems and in their applications to cyber-physical systems, particularly those involving safety criticality and energy. He blends in techniques from machine learning and AI, such as Bayesian inference, reinforcement learning, and game theory.

Abstract: I shall present recent work on CEGIS, a "counterexample-guided inductive synthesis" framework for sound synthesis tasks that are relevant for dynamical models, control problems, and software programs. The inductive synthesis framework comprises the interaction of two components: a learner and a verifier. The learner trains a neural template on finite samples. The verifier soundly validates the candidates trained by the learner by means of calls to a satisfiability-modulo-theories (SMT) solver. Whenever a candidate is not valid, SMT-generated counterexamples are passed back to the learner for further training.
I shall elucidate the ins & outs of the CEGIS framework, and display its workings on a few problems: synthesis of Lyapunov functions and of barrier certificates; hybridisation of nonlinear dynamics for safety verification; synthesis of digital controllers for continuous plants; and an application in real-time autonomy.
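
A minimal CEGIS loop, shown below as a hedged sketch: in the talk the learner trains a neural template, while here, for brevity, both learner and verifier are realized with the z3 SMT solver on a toy synthesis task.

    # Minimal CEGIS sketch: a learner proposes candidates, a verifier
    # searches for counterexamples, and counterexamples refine the learner.
    # Toy task: find an integer c such that  x*x + c >= 2*x  for ALL
    # integers x (any c >= 1 works, since x*x - 2*x + 1 = (x - 1)**2 >= 0).
    # pip install z3-solver
    from z3 import Int, Solver, Not, unsat

    x, c = Int("x"), Int("c")
    prop = lambda cand, xv: xv * xv + cand >= 2 * xv

    counterexamples = []
    candidate = 0                                  # learner's initial guess
    for _ in range(20):
        verifier = Solver()
        verifier.add(Not(prop(candidate, x)))      # any x violating the property?
        if verifier.check() == unsat:
            print("synthesized c =", candidate)    # sound: verified for all x
            break
        counterexamples.append(verifier.model()[x].as_long())
        learner = Solver()                         # new candidate consistent with
        for xv in counterexamples:                 # every counterexample seen so far
            learner.add(prop(c, xv))
        learner.check()
        candidate = learner.model()[c].as_long()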


19.09.2022 - 14:00

Alejandro Russo

Calculating Sensitivity by Parametricity


Bio: Alejandro Russo is a professor at Chalmers University of Technology / Göteborg University working at the intersection of functional languages, security, privacy, and systems. His research ranges from foundational aspects of security to practical ones. Prof. Russo has worked at prestigious research institutions like Stanford University, where he was appointed visiting associate professor from 2013 to 2015.

Abstract: The work on Fuzz pioneered the use of functional programming languages whose types allow reasoning about the sensitivity of programs. Fuzz and subsequent work (e.g., DFuzz and Duet) use advanced technical devices like linear types, modal types, and partial evaluation. These features usually require the design of a new programming language from scratch ‒ a major task in its own right! While these features are part of the classical toolbox of programming languages, they may be unfamiliar to non-experts in programming languages. In this work, we take a different direction. We present the novel idea of applying parametricity, i.e., the well-known abstract uniformity property enjoyed by polymorphic functions, to compute the sensitivity of functions. A direct consequence of our result is that calculating the sensitivity of functions can be reduced to mere type-checking in a programming language with support for polymorphism. We formalize our main result in a calculus, prove its soundness, and implement a software library in the programming language Haskell ‒ in which we reason about the sensitivity of classical examples. We also show that, thanks to type inference, our approach supports a limited form of sensitivity inference ‒ something that, to the best of our knowledge, has not been explored before. Our library is implemented in 365 lines of code.
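
For reference, the notion of sensitivity at the heart of this line of work:

    % c-sensitivity (as in Fuzz and its successors): for metric spaces
    % (X, d_X) and (Y, d_Y), a function f : X -> Y is c-sensitive iff
    d_Y\bigl(f(x),\, f(x')\bigr) \;\le\; c \cdot d_X(x, x') \qquad \text{for all } x, x' \in X.
    % Sensitivity bounds are the key ingredient when calibrating noise for
    % differential privacy, which is what makes computing them worthwhile.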


15.06.2022 - 16:15

Arthur Gervais

How Dark is the Forest? On Blockchain Extractable Value in Decentralized Finance

Resources: [video]


Bio: Arthur Gervais is a Lecturer (equivalent to Assistant Professor) at Imperial College London. He is passionate about information security and has worked on blockchain since 2012, with a recent focus on Decentralized Finance (DeFi). He was a co-instructor of the first DeFi MOOC, which attracted over 3,000 students in fall 2021 (https://defi-learning.org/). The course will run again in fall 2022.

Abstract: Permissionless blockchains such as Bitcoin have excelled at financial services. Yet, opportunistic traders extract monetary value from the mesh of decentralized finance (DeFi) smart contracts through so-called blockchain extractable value (BEV). The recent emergence of centralized BEV relayers portrays BEV as a positive additional revenue source. However, because BEV was quantitatively shown to deteriorate the blockchain's consensus security, BEV relayers endanger ledger security by incentivizing rational miners to fork the chain. For example, a rational miner with a 10% hashrate will fork Ethereum if a BEV opportunity exceeds 4x the block reward.
In this talk, we quantify the BEV danger by deriving the USD extracted from sandwich attacks, liquidations, and decentralized exchange arbitrage. We estimate that over 32 months, BEV yielded 540.54M USD in profit, divided among 11,289 addresses, when capturing 49,691 cryptocurrencies and 60,830 on-chain markets. The highest BEV instance we find amounts to 4.1M USD, 616.6x the Ethereum block reward. Moreover, while the practitioners’ community has discussed the existence of generalized trading bots, we are, to our knowledge, the first to provide a concrete algorithm. Our algorithm can replay unconfirmed transactions without needing to understand the victim transactions’ underlying logic, and we estimate that it would have yielded a profit of 57,037.32 ETH (35.37M USD) over 32 months of past blockchain data.
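
The abstract's own figures can be cross-checked with a few lines of arithmetic (all numbers below are taken from the abstract; nothing else is assumed):

    # Cross-checking the figures quoted above.
    top_bev_usd = 4.1e6    # highest single BEV instance, in USD
    reward_mult = 616.6    # the same instance, in multiples of the block reward
    fork_mult   = 4        # forking threshold for a miner with 10% hashrate

    implied_block_reward_usd = top_bev_usd / reward_mult
    print(f"implied block reward: ~{implied_block_reward_usd:,.0f} USD")   # ~6,650
    # The top instance exceeds the forking threshold by two orders of magnitude:
    print(f"threshold exceeded {reward_mult / fork_mult:.0f}x over")       # ~154x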

Relevant papers:

  • Quantifying Blockchain Extractable Value: How dark is the forest? PDF
  • High-Frequency Trading on Decentralized Exchanges PDF
  • On the Just-In-Time Discovery of Profit-Generating Transactions in DeFi Protocols PDF


13.04.2022 - 16:00

Elissa Redmiles

Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps

Resources: [video] [slides]


Bio: Dr. Elissa M. Redmiles is a faculty member and research group leader of the Safety & Society group at the Max Planck Institute for Software Systems. She additionally serves as a consultant and researcher at multiple institutions, including Microsoft Research and Facebook. Dr. Redmiles uses computational, economic, and social science methods to understand users' security, privacy, and online safety-related decision-making processes. Her work has been featured in popular press publications such as Scientific American, Wired, Business Insider, Newsweek, Schneier on Security, and CNET and has been recognized with multiple Distinguished Paper Awards at USENIX Security and research awards including a Facebook Research Award and the John Karat Usable Privacy and Security Research Award. Dr. Redmiles received her B.S. (Cum Laude), M.S., and Ph.D. in Computer Science from the University of Maryland.

Abstract: At the beginning of the pandemic, contact tracing apps proliferated as a potential solution to scaling infection tracking and response. While significant focus was put on developing privacy protocols for these apps, relatively little attention was given to understanding why, and why not, users might adopt them. Yet, for these technological solutions to benefit public health, users must be willing to adopt them. In this talk I showcase the value of taking a descriptive-ethics approach to setting best practices in this new domain. Descriptive ethics, introduced by the field of moral philosophy, determines best practices by learning directly from the user -- observing people’s preferences and inferring best practice from that behavior -- instead of exclusively relying on experts’ normative decisions. This talk presents an empirically validated framework of users’ decision inputs for adopting COVID-19 contact tracing apps, including app accuracy, privacy, benefits, and mobile costs. Using predictive models of users’ likelihood to install COVID-19 apps based on quantifications of these factors, I show how high the bar is for achieving adoption. I conclude by discussing a large-scale field study in which we put our survey and experimental results into practice to help the state of Louisiana advertise its COVID-19 app through a series of randomized controlled Google Ads experiments.


09.03.2022 - 16:00

Dan Boneh

How to Commit to a Private Function

Resources: [video] [slides]


Bio: Dan Boneh heads the applied cryptography group and co-directs the computer security lab at Stanford University. His research focuses on applications of cryptography to computer security. His work includes cryptosystems with novel properties, web security, security for mobile devices, and cryptanalysis. He is the author of over a hundred publications in the field and is a Packard and an Alfred P. Sloan fellow. He is a recipient of the 2014 ACM Prize and the 2013 Gödel Prize. In 2011, Dr. Boneh received the Ishii Award for industry education innovation. He received his Ph.D. from Princeton University and joined Stanford University in 1997.

Abstract: A cryptographic commitment scheme lets one party commit to some data while keeping the data secret. The committer can later open the commitment (uniquely) to reveal the committed data. Commitment schemes are a fundamental tool in cryptography and have been studied for over four decades. In this talk we will generalize this basic concept and, in particular, develop ways to commit to a secret function. The commitment reveals nothing about the function; however, the committer can later "open" the function at any point, namely, efficiently prove that for a given pair (x, y) the function evaluates to y at the point x. We will discuss some societal applications of this concept, as well as the beautiful algebraic questions that come up when constructing it. The talk will be self-contained. This is joint work with Wilson Nguyen and Alex Ozdemir.
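
Abstracting away the construction, the object built in the talk has roughly the following interface (standard commitment syntax, simplified; the concrete scheme is the subject of the talk):

    % Simplified interface of a (hiding) functional commitment scheme:
    \mathsf{Commit}(f) \to c, \qquad
    \mathsf{Open}(f, x) \to (y, \pi), \qquad
    \mathsf{Verify}(c, x, y, \pi) \to \{0, 1\}
    % Hiding: c reveals nothing about f.
    % Correctness: Verify accepts when y = f(x).
    % Binding: for fixed c and x, no efficient adversary can produce
    % accepting proofs for two different values y \neq y'.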


09.02.2022 - 16:00

Franziska Roesner

Attacking the Brain: Security and Privacy Case Studies in Online Advertising, Misinformation, and Augmented Reality

Resources: [video]


Bio: Franziska (Franzi) Roesner is an Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she co-directs the Security and Privacy Research Lab. Her research focuses broadly on computer security and privacy for end users of existing and emerging technologies. Her work has studied topics including online tracking and advertising, security and privacy for sensitive user groups, security and privacy in emerging augmented reality (AR) and IoT platforms, and online mis/disinformation. She is the recipient of a Consumer Reports Digital Lab Fellowship, an MIT Technology Review "Innovators Under 35" Award, an Emerging Leader Alumni Award from the University of Texas at Austin, a Google Security and Privacy Research Award, and an NSF CAREER Award. She serves on the USENIX Security and USENIX Enigma Steering Committees. She received her PhD from the University of Washington in 2014 and her BS from UT Austin in 2008. Her website is at https://www.franziroesner.com.

Abstract: People who use modern technologies are inundated with content and information from many sources, including advertisements on the web, posts on social media, and (looking to the future) content in augmented or virtual reality. While these technologies are transforming our lives and communications in many positive ways, they also come with serious risks to users’ security, privacy, and the trustworthiness of content they see: the online advertising ecosystem tracks individual users and may serve misleading or deceptive ads, social media feeds are full of potential mis/disinformation, and emerging augmented reality technologies can directly modify users’ perceptions of the physical world in undesirable ways. In this talk, I will discuss several lines of research from our lab that explore these issues from a broad computer security and privacy perspective, leveraging methodologies ranging from qualitative user studies to systematic measurement studies to system design and evaluation. What unites these efforts is a key question: how are our brains "under attack" in today's and tomorrow's information environments, and how can we design platforms and ecosystems more robust to these risks?


12.01.2022 - 16:00

Yang Zhang

Quantifying Privacy Risks of Machine Learning Models

Resources: [video]


Bio: Yang Zhang is a faculty member at the CISPA Helmholtz Center for Information Security, Germany. Previously, he was a group leader at CISPA. He obtained his Ph.D. degree from the University of Luxembourg in November 2016. Yang's research interests lie at the intersection of privacy and machine learning. Over the years, he has published multiple papers at top venues in computer science, including WWW, CCS, NDSS, and USENIX Security. His work received the NDSS 2019 Distinguished Paper Award.

Abstract: Machine learning has made tremendous progress during the past decade. While it continues to improve our daily lives, recent research shows that machine learning models are vulnerable to various privacy attacks. In this talk, I will cover three of our recent works on quantifying the privacy risks of machine learning models. First, I will talk about recent developments in membership inference. Second, I will discuss data reconstruction attacks against online learning. Finally, I will present link stealing attacks against graph neural networks.
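
The simplest membership-inference baseline already conveys the risk: models tend to be more confident on their training members than on unseen inputs, so thresholding the top softmax probability leaks membership. A toy sketch (made-up numbers; the talk covers far stronger attacks):

    # Baseline membership inference: models are typically more confident
    # on training members than on unseen inputs, so thresholding the top
    # softmax probability already leaks membership. Stronger attacks use,
    # e.g., shadow models; all numbers below are made up.

    def infer_membership(confidences, threshold=0.9):
        """confidences: the model's top softmax probability per query."""
        return [conf >= threshold for conf in confidences]

    member_confs     = [0.99, 0.97, 0.95]   # queries that were training members
    non_member_confs = [0.71, 0.88, 0.62]   # fresh, unseen queries

    print(infer_membership(member_confs))       # [True, True, True]
    print(infer_membership(non_member_confs))   # [False, False, False]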


15.12.2021 - 16:00

Herbert Bos

A Stab in the Dark: Blind Attacks on the Linux Kernel

Resources: [video]


Bio: Herbert Bos is full professor at the Vrije Universiteit Amsterdam and co-leads the VUSec Systems Security research group with Cristiano Giuffrida and Erik van der Kouwe.
He obtained an ERC Starting Grant to work on reverse engineering and an NWO VICI grant to work on vulnerability detection. These and other systems security topics are still close to his heart. Other research interests include OS design, networking, and dependable systems.
Herbert moved to the Vrije Universiteit Amsterdam after approximately four years at Universiteit Leiden. Before that, he obtained his Ph.D. from the Cambridge University Computer Laboratory, followed by a brief stint at KPN Research (now TNO Telecom).

Abstract: Unlike what we see in the movies, attacks on high-value targets are not easy. While there are still plenty of vulnerabilities in the software, exploitation is difficult due to multiple layers of protection. For instance, since data pages are no longer executable on modern systems, attackers cannot inject their own malicious code to execute and are forced to reuse snippets of code already present in the victim software instead. By stringing together such existing snippets of benign software, attackers can still create their malicious payloads. However, to do so, they need to know where all these snippets are in memory, which is made difficult by the randomized address spaces of today's software.
Researchers have shown that in some cases it is possible to attack "blind"—without knowing anything about the target software. Unfortunately for the attacker, these blind attacks induce thousands of crashes, making them applicable only in cases where (a) the software can handle crashes, and (b) nobody raises an alarm when thousands of processes crash. Such cases are rare. In particular, they do not apply to truly high-value targets such as the Linux kernel, where even a single crash is fatal.
If software exploitation is difficult, what about hardware attacks such as Spectre? Here also, exploitation is really tough due to powerful mitigations in hardware and software. In the case of the Linux kernel, developers have gone through the kernel code with a fine-tooth comb to eliminate the known Spectre "gadgets" and neutralize possible attacks.
However, even if traditional software exploitation and speculative execution attacks are difficult individually, I will show that we can combine them to create very powerful blind attacks, even on the Linux kernel. In particular, the software vulnerabilities make Spectre attacks possible in code that we previously considered safe. In return, speculative execution makes it possible for an attacker to probe blindly in the victim's address space without ever crashing.
Such a symbiotic combination of attack vectors makes the development of mitigations much harder: we can no longer limit ourselves to the threat of, say, memory errors or speculative execution in isolation, but have to consider their combinations as well.


24.11.2021 - 16:00

Roger Wattenhofer

Cascade: Asynchronous Proof-of-Stake

Resources: [video]


Bio: Roger Wattenhofer is a full professor at the Information Technology and Electrical Engineering Department, ETH Zurich, Switzerland. He received his doctorate in Computer Science from ETH Zurich. He also worked for several years at Microsoft Research in Redmond, Washington, at Brown University in Providence, Rhode Island, and at Macquarie University in Sydney, Australia.
Roger Wattenhofer’s research interests span a variety of algorithmic and systems aspects of computer science and information technology, e.g., distributed systems, positioning systems, wireless networks, mobile systems, social networks, financial networks, and deep neural networks. He publishes in different communities: distributed computing (e.g., PODC, SPAA, DISC), networking and systems (e.g., SIGCOMM, SenSys, IPSN, OSDI, MobiCom), algorithmic theory (e.g., STOC, FOCS, SODA, ICALP), and more recently also machine learning (e.g., NeurIPS, ICLR, ACL, AAAI). His work has received multiple awards, e.g., the Prize for Innovation in Distributed Computing for his work on distributed approximation. He published the book “Blockchain Science: Distributed Ledger Technology”, which has been translated into Chinese, Korean, and Vietnamese.

Abstract: Nakamoto’s Bitcoin protocol has taught the world how to achieve trust without a designated trusted party. However, Bitcoin’s proof-of-work solution comes with serious costs and compromises. And while proof-of-stake solves the energy problem of proof-of-work, it introduces some problems of its own. We relax the usual notion of consensus to extract the requirements necessary for an efficient cryptocurrency. Towards this end, we introduce a blockchain design called Cascade that is consensusless, asynchronous, scalable, deterministic, and efficient. While Cascade has its own limitations, it should serve as a nice basis for discussing better blockchain designs.
Cascade is joint work with Jakub Sliwinski.


13.10.2021 - 16:00

Bryan Ford

Digital Personhood: Towards Technology that Securely Serves People

Resources: [video] [slides]


Bio: Prof. Bryan Ford leads the Decentralized/Distributed Systems (DEDIS) research laboratory at the Swiss Federal Institute of Technology in Lausanne (EPFL). Ford's research focuses on decentralized systems, security and privacy, digital democracy, and blockchain technology. Since earning his Ph.D. at MIT, Ford has held faculty positions at Yale University and EPFL. His awards include the Jay Lepreau Best Paper Award, the NSF CAREER award, and the AXA Research Chair. Inventions he is known for include parsing expression grammars, delegative or liquid democracy, and scalable sharded blockchains. He has advised numerous companies and governments, including serving on the US DARPA Information Science and Technology (ISAT) Study Group and on the Swiss Federal E-voting Experts Dialog.

Abstract: Internet technologies have often been called “democratizing” by virtue of giving anyone a voice in countless online forums. Technology cannot actually be “democratizing” by democratic principles, however, unless it serves everyone, offers everyone not just a voice but an equal voice, and is accountable to and ultimately governed by the people it serves. Today's technology offers not democracy but guardianship, subjecting our online lives to the arbitrary oversight of unelected employees, committees, platforms, and algorithms, which serve profit motives or special interests over our broader interests. Can we build genuinely democratizing technology that serves its users inclusively, equally, and securely?
A necessary first step is digital personhood: enabling technology to distinguish between real people and fake accounts such as sock puppets, bots, or deep fakes. However, digital identity approaches undermine privacy and threaten our effective voice and freedoms, both in existing real-world democracies and in online forums that we might wish to embody democratic ideals. An emerging ecosystem of “proof of personhood” schemes attempts to give willing participants exactly one credential each while avoiding the privacy risks of digital identity. Proof of personhood schemes may operate in the physical world or online, building on security foundations such as in-person events, biometrics, social trust networks, and Turing tests. We will explore the promise and challenges of secure digital personhood and the tradeoffs of different approaches along the key metrics of security, privacy, inclusion, and equality. We will cover further security challenges such as resisting collusion, coercion, or vote-buying. Finally, we will outline a few applications of secure digital personhood, both already prototyped and envisioned.


12.05.2021 - 16:00

Thorsten Holz

Fuzz Testing and Beyond

Resources: [video]


Bio: Thorsten Holz is a professor in the Faculty of Electrical Engineering and Information Technology at Ruhr-University Bochum, Germany. His research interests include technical aspects of secure systems, with a specific focus on systems security. Currently, his work concentrates on reverse engineering, automated vulnerability detection, and studying the latest attack vectors. He received the Dipl.-Inform. degree in Computer Science from RWTH Aachen, Germany (2005) and the Ph.D. degree from the University of Mannheim (2009). Prior to joining Ruhr-University Bochum in April 2010, he was a postdoctoral researcher in the Automation Systems Group at the Technical University of Vienna, Austria. In 2011, Thorsten received the Heinz Maier-Leibnitz Prize from the German Research Foundation (DFG), and in 2014 an ERC Starting Grant. Furthermore, he is Co-Spokesperson of the Cluster of Excellence "CASA - Cyber Security in the Age of Large-Scale Adversaries" (with C. Paar and E. Kiltz).

Abstract: In this talk, I will provide an overview of our recent progress in randomized testing and present some of the methods we have developed in the past years. These include fuzzing of OS kernels and hypervisors, grammar-based fuzzing of complex interpreters, and fuzz testing of stateful systems. As part of this work, we have already found hundreds of software bugs, some of which are related to well-known programs and systems. In addition, I will also talk about a method to prevent fuzz testing and conclude with an outlook on open challenges that still need to be solved.
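
At its core, mutational fuzz testing fits in a few lines; the sketch below (with a hypothetical buggy target) is only the skeleton that the methods in the talk build on with coverage feedback, grammars, and state handling:

    # Skeleton of a mutational fuzzer: mutate a seed, run the target,
    # report crashes. The talk's fuzzers add what actually makes this
    # effective: coverage feedback, grammars, and state handling.
    import random

    def target(data: bytes):
        # Hypothetical buggy parser standing in for a real target.
        if data and data[0] == 0xFF:
            raise RuntimeError("crash: malformed header handling")

    def mutate(seed: bytes) -> bytes:
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):
            data[random.randrange(len(data))] = random.randrange(256)
        return bytes(data)

    seed = b"\x00\x01\x02\x03hello"
    for _ in range(100_000):
        sample = mutate(seed)
        try:
            target(sample)
        except Exception as crash:
            print(f"input {sample!r} triggered: {crash}")
            break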


14.04.2021 - 16:00

Sarah Meiklejohn

Privacy and Verifiability in Certificate Transparency


Bio: Sarah Meiklejohn is a Professor in Cryptography and Security at University College London (UCL), in the Computer Science department. She is affiliated with the Information Security Group, and is also a member of the Open Music Initiative, a fellow of the Alan Turing Institute, and an Associate Director of the Initiative for Cryptocurrencies and Contracts (IC3).

From November 2019 to December 2020, she was a visiting researcher at Google UK, working with the Certificate Transparency / TrustFabric team. As of December 2020 she is a Staff Research Scientist there.

Sarah Meiklejohn received a PhD in Computer Science from the University of California, San Diego under the joint supervision of Mihir Bellare and Stefan Savage. During her PhD, she spent the summers of 2011 and 2013 at MSR Redmond, working in the cryptography group with Melissa Chase. She obtained an Sc.M. in Computer Science from Brown University under the guidance of Anna Lysyanskaya in 2009, and an Sc.B. in Mathematics from Brown in 2008.

Abstract: In recent years, there has been increasing recognition of the benefits of having services provide auditable logs of data, as demonstrated by the deployment of Certificate Transparency and the development of other transparency projects. Despite their success, extending the current form of these projects can yield improved guarantees in terms of verifiability, efficiency, and privacy. This talk touches on these considerations by discussing efficient solutions for verifiability, in the form of a gossip protocol and a new verifiable data structure, and the difficulties of achieving privacy-preserving auditing.
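
The verifiable data structure underlying Certificate Transparency is a Merkle tree: a logarithmic number of sibling hashes proves that a certificate is included under a signed root. A minimal sketch (the real log additionally domain-separates leaf and interior hashes, per RFC 6962):

    # Minimal Merkle tree: the verifiable data structure behind CT logs.
    # An inclusion proof is the list of sibling hashes on the leaf-to-root
    # path; a verifier recomputes the root from the leaf and the proof.
    import hashlib

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def merkle_root(leaves):
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])   # pad odd levels (simplification)
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    leaves = [b"cert-A", b"cert-B", b"cert-C", b"cert-D"]
    root = merkle_root(leaves)

    # Inclusion proof for b"cert-A": its siblings along the path to the root.
    proof = [h(b"cert-B"), h(h(b"cert-C") + h(b"cert-D"))]
    acc = h(b"cert-A")
    for sibling in proof:
        acc = h(acc + sibling)            # cert-A is a left child at each level
    print(acc == root)                    # True: inclusion verified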


10.03.2021 - 16:00

Bart Preneel

Proximity tracing with Coronalert: lessons learned

Resources: [video]


Bio: Prof. Bart Preneel is a full professor at the KU Leuven, where he heads the COSIC research group. His main research interests are cryptography, information security and privacy. He has served as president of the IACR (International Association for Cryptologic Research) and is co-founder and chairman of the Board of the information security cluster LSEC. He is a member of the Advisory group of ENISA, of the Board of the Cyber Security Coalition Belgium and of the Academia Europaea. He received the RSA Award for Excellence in the Field of Mathematics (2014), the IFIP TC11 Kristian Beckman award (2015) and the ESORICS Outstanding Research Award (2017). In 2015 he was elected as fellow of the IACR. He frequently consults for industry and government about security and privacy technologies.

Abstract: The corona pandemic is the first major pandemic in times of big data, AI, and smart devices. Some nations have deployed these technologies at large scale to support a trace/quarantine/test/isolate strategy in order to contain the pandemic. However, serious concerns have been raised w.r.t. the privacy implications of some solutions, making them incompatible with the privacy and human rights protected by EU law. This talk focuses on the proximity tracing solution developed by the DP-3T (Decentralized Privacy-Preserving Proximity Tracing) consortium. This app has been rolled out in more than 40 countries and states, with the support of Google and Apple. We will provide some details on the experience with the Coronalert app in Belgium, which is connected to the European Federated Gateway Service, currently covering 11 EU countries and more than 40 million users. The talk will discuss the lessons learned from this large-scale deployment, in which the principles of privacy by design and data minimization have played a central role.
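
The cryptographic core of DP-3T's low-cost design is compact enough to sketch: each phone hash-chains a daily secret key and broadcasts short-lived ephemeral IDs derived from it; a diagnosed user uploads only day keys, and matching happens locally on each phone. The Python sketch below simplifies parameters and uses HMAC as a stand-in for the spec's AES-CTR PRG:

    # Simplified sketch of DP-3T's low-cost design: a hash-chained daily
    # key, ephemeral broadcast IDs derived from it, and local matching.
    # Real parameters, encodings, and the PRG (AES-CTR) differ; HMAC is
    # used here only as a stand-in.
    import hashlib, hmac, os

    def next_day_key(sk: bytes) -> bytes:
        return hashlib.sha256(sk).digest()          # SK_t = H(SK_{t-1})

    def ephemeral_ids(sk: bytes, n: int = 4):
        prg_key = hmac.new(sk, b"broadcast key", hashlib.sha256).digest()
        return [hmac.new(prg_key, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
                for i in range(n)]                  # EphIDs broadcast over BLE

    sk = os.urandom(32)
    sk = next_day_key(sk)

    heard = set(ephemeral_ids(sk))      # EphIDs my phone observed nearby

    # A diagnosed user publishes only day keys; each phone matches locally,
    # so the server never learns who met whom.
    published = sk
    print(any(eid in heard for eid in ephemeral_ids(published)))   # True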


10.02.2021 - 16:15

Henry Corrigan-Gibbs

SafetyPin: Encrypted Backups with Human-Memorable Secrets

Resources: [video]


Bio: Henry Corrigan-Gibbs (he/him) is an assistant professor in MIT's department of electrical engineering and computer science and is a principal investigator in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Henry builds computer systems that provide strong security and privacy properties using ideas from the domains of cryptography, computer security, and computer systems. Henry completed his PhD in the Applied Cryptography Group at Stanford, where he was advised by Dan Boneh. After that, he was a postdoc in Bryan Ford's research group at EPFL in Lausanne, Switzerland.

For their research efforts, Henry and his collaborators have received the Best Young Researcher Paper Award three times (at Eurocrypt in 2020, the Theory of Cryptography Conference in 2019 and at Eurocrypt in 2018), the 2016 Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, and the 2015 IEEE Security and Privacy Distinguished Paper Award. Henry's work has been cited by IETF and NIST, and his libprio library for privacy-preserving telemetry data collection ships today with the Firefox browser.

Abstract: This talk will present the design and implementation of SafetyPin, a system for encrypted mobile-device backups. Like existing cloud-based mobile-backup systems, including those of Apple and Google, SafetyPin requires users to remember only a short PIN and defends against brute-force PIN-guessing attacks using hardware security protections. Unlike today's systems, SafetyPin splits trust over a cluster of hardware security modules (HSMs) in order to provide security guarantees that scale with the number of HSMs. In this way, SafetyPin protects backed-up user data even against an attacker that can adaptively compromise many of the system's constituent HSMs. SafetyPin provides this protection without sacrificing scalability or fault tolerance. Decentralizing trust while respecting the resource limits of today's HSMs requires a synthesis of systems-design principles and new cryptographic tools. We evaluate SafetyPin on a cluster of 100 low-cost HSMs and show that a SafetyPin-protected recovery takes 1.01 seconds. To process 1B recoveries a year, we estimate that a SafetyPin deployment would need 3,100 low-cost HSMs.
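
The trust-splitting idea can be conveyed with plain additive secret sharing: each HSM holds one share of the backup key, and any proper subset of shares is uniformly random. This toy n-of-n sketch omits everything that makes SafetyPin work in practice (PIN verification, guessing limits, and tolerance of HSM failure or compromise):

    # Toy illustration of splitting trust across HSMs with additive secret
    # sharing: each HSM stores one share; any proper subset of shares is
    # uniformly random and reveals nothing about the key. SafetyPin's real
    # protocol is considerably more refined.
    import os

    def xor_all(chunks):
        acc = bytes(len(chunks[0]))
        for c in chunks:
            acc = bytes(a ^ b for a, b in zip(acc, c))
        return acc

    def split(secret: bytes, n: int):
        shares = [os.urandom(len(secret)) for _ in range(n - 1)]
        shares.append(bytes(a ^ b for a, b in zip(secret, xor_all(shares))))
        return shares

    backup_key = os.urandom(32)
    shares = split(backup_key, n=5)              # one share per HSM
    print(xor_all(shares) == backup_key)         # all five recover the key: True
    print(xor_all(shares[:4]) == backup_key)     # four shares reveal nothing: False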


13.01.2021 - 16:00

Andrei Sabelfeld

SandTrap: Securing JavaScript-driven Trigger-Action Platforms

Resources: [slides]


Bio: Andrei Sabelfeld is a Professor in the Department of Computer Science and Engineering at Chalmers University of Technology in Gothenburg, Sweden. Before joining Chalmers as faculty, he was a Research Associate at Cornell University in Ithaca, NY, USA. Andrei Sabelfeld's research spans foundations to practice across a range of topics in computer security and privacy. Today, he leads a group of researchers at Chalmers engaged in a number of internationally visible projects on software security, web security, IoT security, and applied cryptography.

Abstract: Trigger-Action Platforms (TAPs) seamlessly connect a wide variety of otherwise unconnected devices and services, ranging from IoT devices to cloud services and social networks. While enabling novel and exciting applications, TAPs raise critical security and privacy concerns because a TAP is effectively a "person-in-the-middle" between trigger and action services. Third-party code, routinely deployed as "apps" on TAPs, further exacerbates these concerns.

This talk focuses on JavaScript-driven TAPs. We show that the popular IFTTT and Zapier platforms and an open-source alternative, Node-RED, are susceptible to various attacks, ranging from massively exfiltrating data from unsuspecting users to taking over the entire platform. We report on the changes made by the platforms in response to our findings and present an empirical study to assess the security implications.

Motivated by the need for a secure yet flexible way to integrate third-party JavaScript apps, we propose SandTrap, a sandboxing approach that allows for isolating apps while letting them communicate via clearly defined interfaces. We present a formalization for a core language that soundly and transparently enforces fine-grained allowlist policies at module-, API-, value-, and context-level. We develop a novel proxy-based JavaScript monitor that encompasses a powerful policy generation mechanism and enables us to instantiate SandTrap to IFTTT, Zapier, and Node-RED. We illustrate on a set of benchmarks how SandTrap enforces a variety of policies while incurring a tolerable runtime overhead.
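
SandTrap's monitor is built on JavaScript proxies; the enforcement idea translates to a few lines of Python, given here as a language-neutral analogue with a made-up policy API rather than SandTrap's actual interface:

    # Language-neutral analogue of proxy-based allowlist enforcement.
    # The actual SandTrap monitor uses JavaScript proxies; the policy
    # API below is made up for illustration.
    import math

    class AllowlistProxy:
        def __init__(self, target, allowed):
            self._target = target
            self._allowed = set(allowed)

        def __getattr__(self, name):
            # API-level policy: only allowlisted members are reachable.
            # A full monitor would also wrap returned objects and check
            # call arguments (module-, value- and context-level policies).
            if name not in self._allowed:
                raise PermissionError(f"policy violation: '{name}' denied")
            return getattr(self._target, name)

    safe_math = AllowlistProxy(math, allowed={"sqrt", "pi"})
    print(safe_math.sqrt(2))    # permitted by the policy
    try:
        safe_math.exp(1)        # not allowlisted: blocked by the monitor
    except PermissionError as err:
        print(err)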