Publications

Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX

Machine learning has made enormous progress in recent years and is now being used in many real-world applications. Nevertheless, even …

EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy

EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy. …

Fast Differentiable Clipping-Aware Normalization and Rescaling

Rescaling a vector δ⃗ ∈ ℝ^n to a desired length is a common operation in many areas such as data science and machine learning. When the …
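The common operation the abstract refers to, rescaling a perturbation to a desired norm before any clipping is taken into account, can be sketched in NumPy; the function name and interface are illustrative, not the paper's API:

```python
import numpy as np

def rescale(delta, target_norm, ord=2):
    # Naive rescaling: scale `delta` so its `ord`-norm equals
    # `target_norm`. This ignores subsequent clipping of x + delta
    # to a valid input range, which is the case the paper addresses.
    norm = np.linalg.norm(delta, ord=ord)
    if norm == 0:
        return delta  # the zero vector cannot be rescaled
    return delta * (target_norm / norm)

delta = np.array([3.0, 4.0])
print(np.linalg.norm(rescale(delta, 1.0)))  # 1.0
```

When the perturbed input is afterwards clipped to a valid range (e.g. [0, 1] for images), this naive scaling generally undershoots the desired norm, which motivates the clipping-aware variant in the paper.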

Modeling patterns of smartphone usage and their relationship to cognitive health

The ubiquity of smartphone usage in many people’s lives makes it a rich source of information about a person’s mental and …

Accurate, reliable and fast robustness evaluation

Throughout the past five years, the susceptibility of neural networks to minimal adversarial perturbations has moved from a peculiar …

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks

Modern neural networks are highly non-robust against adversarial manipulation. A significant amount of work has been invested in …

On Evaluating Adversarial Robustness

Correctly evaluating defenses against adversarial examples has proven to be extremely difficult. Despite the significant amount of …

Generalisation in humans and deep neural networks

We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different …

Adversarial Vision Challenge

The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and …

Towards the first adversarially robust neural network model on MNIST

Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most …

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much …

Foolbox: A Python toolbox to benchmark the robustness of machine learning models

Even today’s most advanced machine learning models are easily fooled by almost imperceptible perturbations of their inputs. Foolbox is a …