15.06.2022 - 16:15
How Dark is the Forest? On Blockchain Extractable Value in Decentralized Finance
Bio: Arthur Gervais is a Lecturer (equivalent to Assistant Professor) at Imperial College London. He is passionate about information security and has worked on blockchain technology since 2012, with a recent focus on Decentralized Finance (DeFi). He is a co-instructor of the first DeFi MOOC, which attracted over 3,000 students in Fall 2021 (https://defi-learning.org/). The course will run again in fall 2022.
Abstract: Permissionless blockchains such as Bitcoin have excelled at financial services. Yet, opportunistic traders extract monetary value from the mesh of decentralized finance (DeFi) smart contracts through so-called blockchain extractable value (BEV). The recent emergence of centralized BEV relayers portrays BEV as a positive additional revenue source. Because BEV was quantitatively shown to deteriorate the blockchain's consensus security, however, BEV relayers endanger ledger security by incentivizing rational miners to fork the chain. For example, a rational miner with a 10% hashrate will fork Ethereum if a BEV opportunity exceeds 4x the block reward.
In this talk, we quantify the BEV danger by deriving the USD extracted from sandwich attacks, liquidations, and decentralized exchange arbitrage. We estimate that over 32 months, BEV yielded 540.54M USD in profit, divided among 11,289 addresses, across 49,691 cryptocurrencies and 60,830 on-chain markets. The highest single BEV instance we find amounts to 4.1M USD, 616.6x the Ethereum block reward. Moreover, while the practitioner community has discussed the existence of generalized trading bots, we are, to our knowledge, the first to provide a concrete algorithm. Our algorithm can replace unconfirmed transactions without needing to understand the victim transactions' underlying logic, and we estimate it would have yielded a profit of 57,037.32 ETH (35.37M USD) over 32 months of past blockchain data.
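The ratios quoted above can be sanity-checked with back-of-the-envelope arithmetic (illustrative only, not part of the paper's methodology): the per-block reward and ETH price implied by the abstract's figures fall out directly.

```python
# Back-of-the-envelope check of the abstract's figures (illustrative only).
highest_bev_usd = 4.1e6          # largest single BEV instance, in USD
reward_multiple = 616.6          # quoted multiple of the Ethereum block reward
implied_block_reward_usd = highest_bev_usd / reward_multiple  # ~6,649 USD

bot_profit_eth = 57_037.32       # generalized-bot profit in ETH
bot_profit_usd = 35.37e6         # the same profit in USD
implied_eth_price_usd = bot_profit_usd / bot_profit_eth       # ~620 USD/ETH

print(round(implied_block_reward_usd), round(implied_eth_price_usd))
```

Both implied values are plausible averages for the 2018-2021 window the 32-month dataset covers.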
13.04.2022 - 16:00
Learning from the People: Responsibly Encouraging Adoption of Contact Tracing Apps
Bio: Dr. Elissa M. Redmiles is a faculty member and research group leader of the Safety & Society group at the Max Planck Institute for Software Systems. She additionally serves as a consultant and researcher at multiple institutions, including Microsoft Research and Facebook. Dr. Redmiles uses computational, economic, and social science methods to understand users' security, privacy, and online safety-related decision-making processes. Her work has been featured in popular press publications such as Scientific American, Wired, Business Insider, Newsweek, Schneier on Security, and CNET and has been recognized with multiple Distinguished Paper Awards at USENIX Security and research awards including a Facebook Research Award and the John Karat Usable Privacy and Security Research Award. Dr. Redmiles received her B.S. (Cum Laude), M.S., and Ph.D. in Computer Science from the University of Maryland.
Abstract: At the beginning of the pandemic, contact tracing apps proliferated as a potential solution to scaling infection tracking and response. While significant focus was put on developing privacy protocols for these apps, relatively little attention was given to understanding why users might, or might not, adopt them. Yet, for these technological solutions to benefit public health, users must be willing to adopt them. In this talk I showcase the value of taking a descriptive ethics approach to setting best practices in this new domain. Descriptive ethics, introduced by the field of moral philosophy, determines best practices by learning directly from the user -- observing people’s preferences and inferring best practice from that behavior -- instead of exclusively relying on experts' normative decisions. This talk presents an empirically validated framework of users' decision inputs to adopting COVID-19 contact tracing apps, including app accuracy, privacy, benefits, and mobile costs. Using predictive models of users' likelihood to install COVID apps based on quantifications of these factors, I show how high the bar is for achieving adoption. I conclude by discussing a large-scale field study in which we put our survey and experimental results into practice to help the state of Louisiana advertise its COVID app through a series of randomized controlled Google Ads experiments.
09.03.2022 - 16:00
How to Commit to a Private Function
Bio: Dan Boneh heads the applied cryptography group and co-directs the computer security lab at Stanford University. His research focuses on applications of cryptography to computer security. His work includes cryptosystems with novel properties, web security, security for mobile devices, and cryptanalysis. He is the author of over a hundred publications in the field and is a Packard and Alfred P. Sloan fellow. He is a recipient of the 2014 ACM Prize in Computing and the 2013 Gödel Prize. In 2011 Dr. Boneh received the Ishii award for industry education innovation. Dan Boneh received his Ph.D. from Princeton University and joined Stanford University in 1997.
Abstract: A cryptographic commitment scheme lets one party commit to some data while keeping the data secret. The committer can later open the commitment (uniquely) to reveal the committed data. Commitment schemes are a fundamental tool in cryptography and have been studied for over four decades. In this talk we will generalize this basic concept and, in particular, develop ways to commit to a secret function. The commitment reveals nothing about the function; however, the committer can later "open" the function at any point, namely, efficiently prove that for a given pair (x, y) the function evaluates to y at the point x. We will discuss some societal applications of this concept, as well as the beautiful algebraic questions that come up when constructing it. The talk will be self-contained. This is joint work with Wilson Nguyen and Alex Ozdemir.
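The basic primitive can be illustrated with a simple hash-based data commitment (a sketch only; the function commitments of the talk require heavier algebraic machinery, e.g. polynomial commitments, and the function names below are our own):

```python
import hashlib
import secrets

def commit(data: bytes):
    # Commitment = SHA-256(randomness || data). The 32 bytes of
    # randomness hide the data; the hash binds the committer to it.
    r = secrets.token_bytes(32)
    return hashlib.sha256(r + data).digest(), r

def open_commitment(commitment: bytes, r: bytes, data: bytes) -> bool:
    # Anyone holding the opening (r, data) can verify the commitment.
    return hashlib.sha256(r + data).digest() == commitment
```

A sealed-bid auction is the classic use: each bidder publishes commit(bid) up front and reveals (r, bid) only after all commitments are in.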
09.02.2022 - 16:00
Attacking the Brain: Security and Privacy Case Studies in Online Advertising, Misinformation, and Augmented Reality
Bio: Franziska (Franzi) Roesner is an Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she co-directs the Security and Privacy Research Lab. Her research focuses broadly on computer security and privacy for end users of existing and emerging technologies. Her work has studied topics including online tracking and advertising, security and privacy for sensitive user groups, security and privacy in emerging augmented reality (AR) and IoT platforms, and online mis/disinformation. She is the recipient of a Consumer Reports Digital Lab Fellowship, an MIT Technology Review "Innovators Under 35" Award, an Emerging Leader Alumni Award from the University of Texas at Austin, a Google Security and Privacy Research Award, and an NSF CAREER Award. She serves on the USENIX Security and USENIX Enigma Steering Committees. She received her PhD from the University of Washington in 2014 and her BS from UT Austin in 2008. Her website is at https://www.franziroesner.com.
Abstract: People who use modern technologies are inundated with content and information from many sources, including advertisements on the web, posts on social media, and (looking to the future) content in augmented or virtual reality. While these technologies are transforming our lives and communications in many positive ways, they also come with serious risks to users’ security, privacy, and the trustworthiness of content they see: the online advertising ecosystem tracks individual users and may serve misleading or deceptive ads, social media feeds are full of potential mis/disinformation, and emerging augmented reality technologies can directly modify users’ perceptions of the physical world in undesirable ways. In this talk, I will discuss several lines of research from our lab that explore these issues from a broad computer security and privacy perspective, leveraging methodologies ranging from qualitative user studies to systematic measurement studies to system design and evaluation. What unites these efforts is a key question: how are our brains "under attack" in today's and tomorrow's information environments, and how can we design platforms and ecosystems more robust to these risks?
12.01.2022 - 16:00
Quantifying Privacy Risks of Machine Learning Models
Bio: Yang Zhang is a faculty member at CISPA Helmholtz Center for Information Security, Germany. Previously, he was a group leader at CISPA. He obtained his Ph.D. degree from the University of Luxembourg in November 2016. Yang's research interests lie at the intersection of privacy and machine learning. Over the years, he has published multiple papers at top venues in computer science, including WWW, CCS, NDSS, and USENIX Security. His work received the NDSS 2019 Distinguished Paper Award.
Abstract: Machine learning has made tremendous progress during the past decade. While it continues to improve our daily lives, recent research shows that machine learning models are vulnerable to various privacy attacks. In this talk, I will cover three of our recent works on quantifying the privacy risks of machine learning models. First, I will talk about recent developments in membership inference. Second, I will discuss data reconstruction attacks against online learning. Finally, I will present link stealing attacks against graph neural networks.
15.12.2021 - 16:00
A Stab in the Dark: Blind Attacks on the Linux Kernel
Bio: Herbert Bos is full professor at the Vrije Universiteit Amsterdam and co-leads the VUSec Systems Security research group with Cristiano Giuffrida and Erik van der Kouwe.
He obtained an ERC Starting Grant to work on reverse engineering and an NWO VICI grant to work on vulnerability detection. These and other systems security topics are still close to his heart. Other research interests include OS design, networking, and dependable systems.
Herbert moved to The Netherlands after approximately four years at the Universiteit Leiden. Before that he obtained his Ph.D. from the Cambridge University Computer Laboratory, followed by a brief stint at KPN Research (now TNO Telecom).
Abstract: Unlike what we see in the movies, attacks on high-value targets are not easy. While there are still plenty of vulnerabilities in the software, exploitation is difficult due to multiple layers of protection. For instance, since data pages are no longer executable on modern systems, attackers cannot inject their own malicious code to execute and are forced to reuse snippets of code already present in the victim software instead. By stringing together such existing snippets of benign software, attackers can still create their malicious payloads. However, to do so, they need to know where all these snippets are in memory, which is made difficult by the randomized address spaces of today's software.
Researchers have shown that in some cases it is possible to attack "blind"—without knowing anything about the target software. Unfortunately for the attacker, these blind attacks induce thousands of crashes, making them applicable only in cases where (a) the software can handle crashes, and (b) nobody raises an alarm when thousands of processes crash. Such cases are rare. In particular, they do not apply to truly high-value targets such as the Linux kernel, where even a single crash is fatal.
If software exploitation is difficult, what about hardware attacks such as Spectre? Here also, exploitation is really tough due to powerful mitigations in hardware and software. In the case of the Linux kernel, developers have gone through the kernel code with a fine-tooth comb to eliminate the known Spectre "gadgets" and neutralize possible attacks.
However, even if traditional software exploitation and speculative execution attacks are difficult in isolation, I will show that we can combine them to create very powerful blind attacks, even on the Linux kernel. In particular, software vulnerabilities make Spectre attacks possible in code that we previously considered safe. In return, speculative execution makes it possible for an attacker to probe blindly in the victim's address space without ever crashing.
Such a symbiotic combination of attack vectors makes the development of mitigations much harder: we can no longer limit ourselves to the threat of, say, memory errors or speculative execution alone, but must also consider their combinations.
24.11.2021 - 16:00
Cascade: Asynchronous Proof-of-Stake
Bio: Roger Wattenhofer is a full professor at the Information Technology and Electrical Engineering Department, ETH Zurich, Switzerland. He received his doctorate in Computer Science from ETH Zurich. He also worked some years at Microsoft Research in Redmond, Washington, at Brown University in Providence, Rhode Island, and at Macquarie University in Sydney, Australia.
Roger Wattenhofer’s research interests span a variety of algorithmic and systems aspects of computer science and information technology, e.g., distributed systems, positioning systems, wireless networks, mobile systems, social networks, financial networks, and deep neural networks. He publishes in different communities: distributed computing (e.g., PODC, SPAA, DISC), networking and systems (e.g., SIGCOMM, SenSys, IPSN, OSDI, MobiCom), algorithmic theory (e.g., STOC, FOCS, SODA, ICALP), and more recently also machine learning (e.g., NeurIPS, ICLR, ACL, AAAI). His work has received multiple awards, e.g., the Prize for Innovation in Distributed Computing for his work on distributed approximation. He published the book “Blockchain Science: Distributed Ledger Technology“, which has been translated into Chinese, Korean, and Vietnamese.
Abstract: Nakamoto’s Bitcoin protocol has taught the world how to achieve trust without a designated trusted party. However, Bitcoin’s proof-of-work solution comes with serious costs and compromises. While proof-of-stake solves the energy problem of proof-of-work, it introduces some problems of its own. We relax the usual notion of consensus to extract the requirements necessary for an efficient cryptocurrency. Towards this end, we introduce a blockchain design called Cascade that is consensusless, asynchronous, scalable, deterministic, and efficient. While Cascade has its own limitations, it should serve as a good basis for discussing better blockchain designs.
Cascade is joint work with Jakub Sliwinski.
13.10.2021 - 16:00
Digital Personhood: Towards Technology that Securely Serves People
Bio: Prof. Bryan Ford leads the Decentralized/Distributed Systems (DEDIS) research laboratory at the Swiss Federal Institute of Technology in Lausanne (EPFL). Ford's research focuses on decentralized systems, security and privacy, digital democracy, and blockchain technology. Since earning his Ph.D. at MIT, Ford has held faculty positions at Yale University and EPFL. His awards include the Jay Lepreau Best Paper Award, the NSF CAREER award, and the AXA Research Chair. Inventions he is known for include parsing expression grammars, delegative or liquid democracy, and scalable sharded blockchains. He has advised numerous companies and governments, including serving on the US DARPA Information Science and Technology (ISAT) Study Group and on the Swiss Federal E-voting Experts Dialog.
Abstract: Internet technologies have often been called “democratizing” by virtue of giving anyone a voice in countless online forums. Technology cannot actually be “democratizing” by democratic principles, however, unless it serves everyone, offers everyone not just a voice but an equal voice, and is accountable to and ultimately governed by the people it serves. Today's technology offers not democracy but guardianship, subjecting our online lives to the arbitrary oversight of unelected employees, committees, platforms, and algorithms, which serve profit motives or special interests over our broader interests. Can we build genuinely democratizing technology that serves its users inclusively, equally, and securely?
A necessary first step is digital personhood: enabling technology to distinguish between real people and fake accounts such as sock puppets, bots, or deep fakes. Digital identity approaches, however, undermine privacy and threaten our effective voice and freedoms, both in existing real-world democracies and in online forums that we might wish to embody democratic ideals. An emerging ecosystem of “proof of personhood” schemes attempts to give each willing participant exactly one credential while avoiding the privacy risks of digital identity. Proof of personhood schemes may operate in the physical world or online, building on security foundations such as in-person events, biometrics, social trust networks, and Turing tests. We will explore the promise and challenges of secure digital personhood, and the tradeoffs of different approaches along the key metrics of security, privacy, inclusion, and equality. We will cover further security challenges such as resisting collusion, coercion, or vote-buying. Finally, we will outline a few applications of secure digital personhood, both already prototyped and envisioned.
12.05.2021 - 16:00
Fuzz Testing and Beyond
Bio: Thorsten Holz is a professor in the Faculty of Electrical Engineering and Information Technology at Ruhr-University Bochum, Germany. His research interests include technical aspects of secure systems, with a specific focus on systems security. Currently, his work concentrates on reverse engineering, automated vulnerability detection, and studying the latest attack vectors. He received the Dipl.-Inform. degree in Computer Science from RWTH Aachen, Germany (2005) and the Ph.D. degree from the University of Mannheim (2009). Prior to joining Ruhr-University Bochum in April 2010, he was a postdoctoral researcher in the Automation Systems Group at the Technical University of Vienna, Austria. In 2011, Thorsten received the Heinz Maier-Leibnitz Prize from the German Research Foundation (DFG) and in 2014 an ERC Starting Grant. Furthermore, he is Co-Spokesperson of the Cluster of Excellence "CASA - Cyber Security in the Age of Large-Scale Adversaries" (with C. Paar and E. Kiltz).
Abstract: In this talk, I will provide an overview of our recent progress in randomized testing and present some of the methods we have developed in the past years. These include fuzzing of OS kernels and hypervisors, grammar-based fuzzing of complex interpreters, and fuzz testing of stateful systems. As part of this work, we have already found hundreds of software bugs, some of which are related to well-known programs and systems. In addition, I will also talk about a method to prevent fuzz testing and conclude with an outlook on open challenges that still need to be solved.
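The core loop of randomized testing can be sketched as a toy mutational fuzzer (far simpler than the coverage-guided, grammar-based, and stateful systems in the talk; all names below are illustrative):

```python
import random

def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    # Flip a few random bits of a seed input to produce a new test case.
    buf = bytearray(seed)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)
    return bytes(buf)

def fuzz(target, seeds, iterations=1000):
    # Repeatedly feed mutated inputs to the target, recording any input
    # that makes it "crash" (here modeled as raising an exception).
    crashes = []
    for _ in range(iterations):
        inp = mutate(random.choice(seeds))
        try:
            target(inp)
        except Exception as exc:
            crashes.append((inp, exc))
    return crashes
```

Real fuzzers such as the ones discussed in the talk add feedback (e.g. code coverage) to decide which mutants to keep as new seeds, which is what makes them effective on deep targets.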
14.04.2021 - 16:00
Privacy and Verifiability in Certificate Transparency
Bio: Sarah Meiklejohn is a Professor in Cryptography and Security at University College London (UCL), in the Computer Science department. She is affiliated with the Information Security Group, and is also a member of the Open Music Initiative, a fellow of the Alan Turing Institute, and an Associate Director of the Initiative for Cryptocurrencies and Contracts (IC3).
From November 2019 to December 2020, she was a visiting researcher at Google UK, working with the Certificate Transparency / TrustFabric team. Since December 2020, she has been a Staff Research Scientist there.
Sarah Meiklejohn received a PhD in Computer Science from the University of California, San Diego under the joint supervision of Mihir Bellare and Stefan Savage. During her PhD, she spent the summers of 2011 and 2013 at MSR Redmond, working in the cryptography group with Melissa Chase. She obtained an Sc.M. in Computer Science from Brown University under the guidance of Anna Lysyanskaya in 2009, and an Sc.B. in Mathematics from Brown in 2008.
Abstract: In recent years, there has been increasing recognition of the benefits of having services provide auditable logs of data, as demonstrated by the deployment of Certificate Transparency and the development of other transparency projects. Despite their success, extending the current form of these projects can yield improved guarantees in terms of verifiability, efficiency, and privacy. This talk touches on these considerations by discussing efficient solutions for verifiability, in the form of a gossip protocol and a new verifiable data structure, and the difficulties of achieving privacy-preserving auditing.
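The kind of verifiable data structure underlying such auditable logs can be sketched as a toy Merkle tree with inclusion proofs (illustrative only; Certificate Transparency's actual tree, specified in RFC 6962, handles odd levels differently and adds consistency proofs between log versions):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _level_up(level):
    # Pad odd levels by duplicating the last node (toy scheme; RFC 6962
    # instead promotes unpaired nodes), then hash adjacent pairs.
    if len(level) % 2:
        level = level + [level[-1]]
    return [h(b"\x01" + level[i] + level[i + 1])
            for i in range(0, len(level), 2)]

def merkle_root(leaves):
    level = [h(b"\x00" + leaf) for leaf in leaves]  # domain-separated leaves
    while len(level) > 1:
        level = _level_up(level)
    return level[0]

def inclusion_proof(leaves, index):
    # Collect the sibling hash at every level on the path to the root.
    proof, level = [], [h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])
        level, index = _level_up(level), index // 2
    return proof

def verify_inclusion(root, leaf, index, proof):
    # An auditor needs only the root, the leaf, and O(log n) siblings.
    node = h(b"\x00" + leaf)
    for sibling in proof:
        node = h(b"\x01" + sibling + node) if index % 2 else h(b"\x01" + node + sibling)
        index //= 2
    return node == root
```

The log publishes only the root; anyone can then verify that a certificate is included without downloading the whole log.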
10.03.2021 - 16:00
Proximity Tracing with Coronalert: Lessons Learned
Bio: Prof. Bart Preneel is a full professor at the KU Leuven, where he heads the COSIC research group. His main research interests are cryptography, information security and privacy. He has served as president of the IACR (International Association for Cryptologic Research) and is co-founder and chairman of the Board of the information security cluster LSEC. He is a member of the Advisory group of ENISA, of the Board of the Cyber Security Coalition Belgium and of the Academia Europaea. He received the RSA Award for Excellence in the Field of Mathematics (2014), the IFIP TC11 Kristian Beckman award (2015) and the ESORICS Outstanding Research Award (2017). In 2015 he was elected as fellow of the IACR. He frequently consults for industry and government about security and privacy technologies.
Abstract: The corona pandemic is the first major pandemic in times of big data, AI, and smart devices. Some nations have deployed these technologies at large scale to support a trace/quarantine/test/isolate strategy in order to contain the pandemic. However, serious concerns have been raised w.r.t. the privacy implications of some solutions, which makes them incompatible with the privacy and human rights protected by EU law. This talk focuses on the proximity tracing solution developed by the DP-3T (Decentralized Privacy-Preserving Proximity Tracing) consortium. This approach has been rolled out in more than 40 countries and states, with the support of Google and Apple. We will provide some details on the experience with the Coronalert app in Belgium, which is connected to the European Federated Gateway Service, currently spanning 11 EU countries and more than 40 million users. The talk will discuss the lessons learned from this large-scale deployment, in which the principles of privacy-by-design and data minimization have played a central role.
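The privacy-by-design idea at the heart of decentralized proximity tracing can be sketched in a few lines: each device keeps a daily secret key and broadcasts short-lived identifiers derived from it with a PRF. This is a simplified illustration only; the deployed DP-3T and Google/Apple protocols differ in their exact key schedules and derivation functions, and the names below are our own:

```python
import hashlib
import hmac

def next_day_key(sk: bytes) -> bytes:
    # The next day's key is a hash of the current one: revealing one
    # day's key lets others derive later days, but never earlier ones.
    return hashlib.sha256(sk).digest()

def ephemeral_ids(day_key: bytes, n: int, size: int = 16) -> list:
    # Derive n short-lived broadcast identifiers from the day key,
    # using HMAC-SHA256 as a PRF. Without the day key, an observer
    # cannot link identifiers across rotations.
    blocks_needed = (n * size + 31) // 32
    stream = b"".join(
        hmac.new(day_key, i.to_bytes(4, "big"), hashlib.sha256).digest()
        for i in range(blocks_needed))
    return [stream[i * size:(i + 1) * size] for i in range(n)]
```

A diagnosed user uploads only their day keys; other devices locally re-derive the identifiers and check them against what they overheard, so no contact graph ever leaves the phones.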
10.02.2021 - 16:15
SafetyPin: Encrypted Backups with Human-Memorable Secrets
Bio: Henry Corrigan-Gibbs (he/him) is an assistant professor in MIT's Department of Electrical Engineering and Computer Science and a principal investigator in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Henry builds computer systems that provide strong security and privacy properties using ideas from the domains of cryptography, computer security, and computer systems. Henry completed his PhD in the Applied Cryptography Group at Stanford, where he was advised by Dan Boneh. After that, he was a postdoc in Bryan Ford's research group at EPFL in Lausanne, Switzerland.
For their research efforts, Henry and his collaborators have received the Best Young Researcher Paper Award three times (at Eurocrypt in 2020, the Theory of Cryptography Conference in 2019 and at Eurocrypt in 2018), the 2016 Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, and the 2015 IEEE Security and Privacy Distinguished Paper Award. Henry's work has been cited by IETF and NIST, and his libprio library for privacy-preserving telemetry data collection ships today with the Firefox browser.
Abstract: This talk will present the design and implementation of SafetyPin, a system for encrypted mobile-device backups. Like existing cloud-based mobile-backup systems, including those of Apple and Google, SafetyPin requires users to remember only a short PIN and defends against brute-force PIN-guessing attacks using hardware security protections. Unlike today's systems, SafetyPin splits trust over a cluster of hardware security modules (HSMs) in order to provide security guarantees that scale with the number of HSMs. In this way, SafetyPin protects backed-up user data even against an attacker that can adaptively compromise many of the system's constituent HSMs. SafetyPin provides this protection without sacrificing scalability or fault tolerance. Decentralizing trust while respecting the resource limits of today's HSMs requires a synthesis of systems-design principles and new cryptographic tools. We evaluate SafetyPin on a cluster of 100 low-cost HSMs and show that a SafetyPin-protected recovery takes 1.01 seconds. To process 1B recoveries a year, we estimate that a SafetyPin deployment would need 3,100 low-cost HSMs.
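The trust-splitting idea can be illustrated with generic k-of-n Shamir secret sharing, under which compromising fewer than k share-holders reveals nothing about the secret (a sketch only; SafetyPin's actual protocol layers PIN hardening, HSM attestation, and more on top of such primitives):

```python
import secrets

P = 2**127 - 1  # prime modulus; all share arithmetic is in GF(P)

def split(secret: int, n: int, k: int):
    # Shamir k-of-n sharing: pick a random degree-(k-1) polynomial f
    # with f(0) = secret; share i is the point (i, f(i)). Any k shares
    # reconstruct the secret, while any k-1 reveal nothing about it.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

Mapping the sketch to the talk: distributing shares of a backup key across many HSMs is what lets security guarantees scale with the cluster size, since an attacker must compromise at least k devices to learn anything. (Requires Python 3.8+ for the modular inverse via `pow(den, -1, P)`.)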
13.01.2021 - 16:00
Bio: Andrei Sabelfeld is a Professor in the Department of Computer Science and Engineering at Chalmers University of Technology in Gothenburg, Sweden. Before joining Chalmers as faculty, he was a Research Associate at Cornell University in Ithaca, NY, USA. Andrei Sabelfeld's research spans foundations to practice across a range of topics in computer security and privacy. Today, he leads a group of researchers at Chalmers engaged in a number of internationally visible projects on software security, web security, IoT security, and applied cryptography.
Abstract: Trigger-Action Platforms (TAPs) seamlessly connect a wide variety of otherwise unconnected devices and services, ranging from IoT devices to cloud services and social networks. While enabling novel and exciting applications, TAPs raise critical security and privacy concerns because a TAP is effectively a "person-in-the-middle" between trigger and action services. Third-party code, routinely deployed as "apps" on TAPs, further exacerbates these concerns.