Jamie Hayes


Email: j.hayes at cs.ucl.ac.uk

Extensions and limitations of randomized smoothing for robustness guarantees [pdf soon]
J Hayes 06-2020 CVPR (workshop track)
A classifier's resistance to adversarial examples is evaluated in terms of different divergence measures.
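
For background, the randomized smoothing construction this line refers to is usually written as follows (standard material, not taken from the paper itself): a base classifier f is smoothed with Gaussian noise of scale σ, and the smoothed classifier returns the class most likely under that noise.

```latex
% Standard randomized smoothing construction (background only);
% the divergence-based analysis applies to classifiers of this form.
g(x) = \arg\max_{c} \; \Pr_{\delta \sim \mathcal{N}(0,\,\sigma^2 I)}\big[ f(x + \delta) = c \big]
```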

A framework for robustness certification of smoothed classifiers using f-divergences [pdf]
K Dvijotham, J Hayes, B Balle, Z Kolter, C Qin, A Gyorgy, K Xiao, S Gowal, P Kohli 05-2020 ICLR
Robustness properties of smoothed classifiers are certified under threat models expressed as f-divergence constraints on the smoothing distribution.
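
Roughly, the certification problem has the following shape (notation mine, a hedged paraphrase rather than the paper's exact statement): P is the smoothing distribution at the clean input, any admissible perturbation induces a distribution Q within an f-divergence ball around P, and the prediction c_A is certified if its probability cannot be pushed below one half anywhere in that ball.

```latex
% Hedged sketch of the certification condition for predicted class c_A:
\min_{Q \,:\, D_f(Q \,\|\, P) \le \varepsilon} \;
  \mathbb{E}_{z \sim Q}\big[\mathbf{1}\{f(z) = c_A\}\big] \;>\; \tfrac{1}{2}
```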

Towards transformation-resilient provenance detection of digital media [pdf update soon]
J Hayes, K Dvijotham, Y Chen, S Dieleman, P Kohli, N Casagrande 09-2019
Adversarial training is used to create a zero-bit watermarking scheme that is robust to a range of attacks.

LOGAN: Membership inference attacks against generative models [pdf] [code]
J Hayes, L Melis, G Danezis, E De Cristofaro 07-2019 PETS
Generative learning can be as bad as discriminative learning when it comes to privacy.
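
A minimal sketch of the intuition (my own simplification, not the paper's exact attack): records the generative model was trained on tend to receive higher scores from a discriminator trained against it, so ranking candidates by score and flagging the top fraction yields a membership guess. `discriminator_score` below is a hypothetical stand-in for such a discriminator.

```python
import numpy as np

def membership_attack(candidates, discriminator_score, n_members):
    """Rank candidate records by discriminator score and flag the
    top-scoring ones as predicted training-set members."""
    scores = np.array([discriminator_score(x) for x in candidates])
    top = np.argsort(scores)[::-1][:n_members]
    predicted = np.zeros(len(candidates), dtype=bool)
    predicted[top] = True
    return predicted

# Toy usage with a made-up scoring function standing in for a real
# discriminator trained against the target generative model.
rng = np.random.default_rng(0)
candidates = rng.normal(size=(100, 8))
fake_score = lambda x: float(x.sum())   # hypothetical placeholder
print(membership_attack(candidates, fake_score, n_members=10).sum())
```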

A note on hyperparameters in black-box adversarial examples [pdf] [code]
J Hayes 12-2018
The performance of a few methods to craft black-box adversarial examples is measured.

Evading classifiers in discrete domains with provable optimality guarantees [pdf] [code]
B Kulynych, J Hayes, N Samarin, C Troncoso 12-2018 NeurIPS (workshop track)
Given a classifier and a discrete input, the smallest perturbation required to cause a different classification is found.
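
A rough sketch of the underlying idea (a simplification, not the paper's algorithm): treat discrete inputs as nodes of a graph whose edges are atomic transformations with costs, and run uniform-cost search from the original input until the classifier's label changes; the first such node found is a minimum-cost adversarial example.

```python
import heapq

def minimal_evasion(x0, classifier, neighbors):
    """Uniform-cost search over discrete transformations.
    classifier(x) returns a label; neighbors(x) yields (cost, x') pairs.
    Returns a minimum-cost input whose label differs from classifier(x0)."""
    target = classifier(x0)
    frontier = [(0.0, x0)]
    seen = {x0}
    while frontier:
        cost, x = heapq.heappop(frontier)
        if classifier(x) != target:
            return cost, x
        for step_cost, nxt in neighbors(x):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (cost + step_cost, nxt))
    return None

# Toy usage: strings classified by length parity, edits append a character.
clf = lambda s: len(s) % 2
nbrs = lambda s: [(1.0, s + ch) for ch in "ab"]
print(minimal_evasion("spam", clf, nbrs))
```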

Contamination attacks in multi-party machine learning [pdf]
J Hayes, O Ohrimenko 12-2018 NeurIPS
Learning to predict which party supplied an input in multi-party machine learning can mitigate contamination attacks.
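
A hedged sketch of the mitigation's premise (a simplification, not the paper's protocol; data and party labels are synthetic): if an auxiliary classifier can predict which party contributed a point better than chance, contributions are distinguishable, and that signal can be used to audit or down-weight a contaminating party.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic multi-party data: three parties with slightly shifted feature means.
X = np.concatenate([rng.normal(loc=mu, size=(200, 10)) for mu in (0.0, 0.3, 0.6)])
party = np.repeat([0, 1, 2], 200)

X_tr, X_te, p_tr, p_te = train_test_split(X, party, random_state=0)
aux = LogisticRegression(max_iter=1000).fit(X_tr, p_tr)
# Accuracy well above 1/3 means party membership is predictable from inputs,
# which is the signal the mitigation relies on.
print("party-prediction accuracy:", aux.score(X_te, p_te))
```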

On visible adversarial perturbations & digital watermarking [pdf]
J Hayes 06-2018 CVPR (workshop track)
Visible adversarial examples can be neutralized by saliency methods, though this is not a panacea.

Learning universal adversarial perturbations with generative models [pdf] [code] [slides]
J Hayes 05-2018 DLS (IEEE S&P workshop)
A generative model learns a single adversarial perturbation that can be applied to any input from a target distribution.
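
A minimal sketch of what "universal" means here, in my own notation (training of the generator is omitted): one fixed perturbation, constrained to an L-infinity ball, is added to every input from the target distribution, and the result is clipped back to the valid pixel range.

```python
import numpy as np

def apply_universal(images, delta, eps=8 / 255):
    """Add one fixed perturbation to a whole batch of images in [0, 1]."""
    delta = np.clip(delta, -eps, eps)          # enforce the L_inf budget
    return np.clip(images + delta, 0.0, 1.0)   # keep pixels valid

rng = np.random.default_rng(0)
batch = rng.random((16, 32, 32, 3))            # stand-in for real inputs
delta = rng.uniform(-1, 1, size=(32, 32, 3)) * (8 / 255)
adv = apply_universal(batch, delta)
print(adv.shape, np.abs(adv - batch).max() <= 8 / 255 + 1e-9)
```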

Generating steganographic images via adversarial training [pdf] [code]
J Hayes, G Danezis 12-2017 NeurIPS
Adversarial learning is applied to the problem of information hiding in digital images.

AnNotify: A private notification service [pdf]
A Piotrowska, J Hayes, N Gelernter, G Danezis, A Herzberg 10-2017 WPES
A private notification service based on Bloom filters and sharding.
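
A minimal Bloom filter sketch to illustrate the primitive (not AnNotify's actual construction, which adds sharding and privacy machinery on top):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item; false positives
    are possible, false negatives are not."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("notification-42")
print("notification-42" in bf, "notification-43" in bf)
```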

The Loopix anonymity system [pdf]
A Piotrowska, J Hayes, T Elahi, S Meiser, G Danezis 08-2017 USENIX Security
A low-latency anonymous communication system based on mix networks.

Website fingerprinting defenses at the application layer [pdf] [code]
G Cherubin, J Hayes, M Juarez 07-2017 PETS
A server-side website fingerprinting defense based on randomization of object sizes.
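
A rough sketch of the server-side idea (parameters are illustrative, not the paper's): pad each served object with a random amount of filler so that observed sizes no longer identify the page.

```python
import random

def pad_object(payload: bytes, max_pad: int = 4096) -> bytes:
    """Append a random amount of padding so the on-the-wire size is noisy.
    The receiving side would strip it using a length header or marker."""
    pad_len = random.randint(0, max_pad)
    return payload + b"\x00" * pad_len

obj = b"<html>...</html>"
sizes = {len(pad_object(obj)) for _ in range(5)}
print(sizes)   # the same object now shows up with varying sizes
```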

TASP: Towards anonymity sets that persist [pdf]
J Hayes, C Troncoso, G Danezis 10-2016 WPES
A global adversary observing the ingress/egress of an anonymity system can link communications by intersecting the sets of users who were online at the same times. By pre-learning communication patterns, we can group people intelligently into communicating batches, which extends the time until everyone is doomed.
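
For intuition, a toy version of the intersection attack the paper defends against (my own simplification): the adversary records who was online each time the target communicated and intersects those sets until few candidates remain.

```python
def intersection_attack(rounds):
    """Each round is the set of users online when the target communicated.
    Repeated intersection narrows down the target's likely contact."""
    candidates = set(rounds[0])
    for online in rounds[1:]:
        candidates &= set(online)
    return candidates

rounds = [{"alice", "bob", "carol"},
          {"bob", "carol", "dave"},
          {"bob", "erin"}]
print(intersection_attack(rounds))   # {'bob'}
```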

k-fingerprinting: a robust scalable website fingerprinting technique [pdf] [code] [slides]
J Hayes, G Danezis 08-2016 USENIX Security
Random forests and nearest neighbour classifiers are used to learn a robust website fingerprinting model.
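
A hedged sketch of the pipeline (features and data below are synthetic placeholders, not real traffic): a random forest is trained on per-trace features, each trace's fingerprint is the vector of leaf indices it falls into, and test traces are matched to training traces by how many trees agree on the leaf.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for per-trace traffic features (packet counts, timings, ...).
X_train = rng.normal(size=(300, 20)) + np.repeat(np.arange(3), 100)[:, None] * 0.5
y_train = np.repeat(np.arange(3), 100)      # three "websites"
X_test = rng.normal(size=(30, 20)) + np.repeat(np.arange(3), 10)[:, None] * 0.5
y_test = np.repeat(np.arange(3), 10)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
train_leaves = forest.apply(X_train)        # fingerprint = leaf index per tree
test_leaves = forest.apply(X_test)

# Match each test fingerprint to the training fingerprint with the most
# trees agreeing on the leaf (i.e. the smallest Hamming distance).
agreement = (test_leaves[:, None, :] == train_leaves[None, :, :]).sum(axis=2)
pred = y_train[agreement.argmax(axis=1)]
print("accuracy:", (pred == y_test).mean())
```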

Traffic confirmation attacks despite noise [pdf] [slides]
J Hayes 02-2016 NDSS (workshop track)
A hashing algorithm based on projecting network traffic flows is used to link communicating parties in anonymous communication systems.
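
A hedged sketch of the flow-hashing idea (my own notation; bin counts and hash length are illustrative): each flow is reduced to per-time-bin byte counts, projected with a shared random matrix, and the signs of the projection form a compact hash, so the two ends of the same connection should produce nearby hashes despite noise.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BINS, HASH_BITS = 60, 32
projection = rng.normal(size=(N_BINS, HASH_BITS))   # shared random projection

def flow_hash(volumes):
    """Hash a flow's per-time-bin byte counts: centre, project, take signs."""
    v = np.asarray(volumes, dtype=float)
    v = v - v.mean()
    return (v @ projection > 0).astype(np.uint8)

def hamming(h1, h2):
    return int(np.sum(h1 != h2))

# The same flow seen at both ends of a connection, with a little noise,
# versus an unrelated flow of similar overall volume.
sent = rng.poisson(100, size=N_BINS).astype(float)
received = sent + rng.normal(0, 2, size=N_BINS)
unrelated = rng.poisson(100, size=N_BINS).astype(float)

print(hamming(flow_hash(sent), flow_hash(received)),   # usually a few bits
      hamming(flow_hash(sent), flow_hash(unrelated)))  # usually ~half the bits
```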

Guard sets for onion routing [pdf] [slides]
J Hayes 07-2015 PETS
The Tor entry guard system is re-designed to provide better security properties.

An introduction to the dynamics of real and complex quadratic polynomials [pdf]
J Hayes 07-2011
Masters thesis on the properties of quadratics (e.g. Mandelbrot set and Julia sets).
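
For a flavour of the subject matter, a tiny escape-time test for the Mandelbrot set (standard material, not specific to the thesis): c belongs to the set exactly when the orbit of 0 under z ↦ z² + c stays bounded.

```python
def in_mandelbrot(c: complex, max_iter: int = 200) -> bool:
    """Escape-time test: iterate z -> z**2 + c from z = 0 and check
    whether the orbit stays within radius 2 for max_iter steps."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0), in_mandelbrot(-1), in_mandelbrot(1))  # True True False
```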