Author Archives: calzavara

Article accepted at CoSe


Our latest work on adversarial machine learning has been accepted for publication in Computers & Security. We propose resilience, a new formal notion of security for classifiers deployed in adversarial settings, which mitigates significant shortcomings of the traditional robustness notion. We then propose an algorithm to soundly verify resilience for tree-based classifiers such as random forests, and we experimentally prove its effectiveness…
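To give a flavor of what sound verification means in this setting, here is a toy sketch (not the paper's algorithm) of checking robustness for a single decision tree under an L∞ perturbation budget: whenever a feature value lies within eps of a split threshold, the attacker can force either branch, so the check follows both; the prediction is provably stable only if every reachable leaf agrees. The tree encoding and the stump below are hypothetical.

```python
# Toy sound robustness check for one decision tree (illustrative sketch).
# Convention: an internal node sends x left iff x[feature] <= threshold.
# Under an L∞ budget eps, the left branch is reachable iff x[f]-eps <= t,
# and the right branch iff x[f]+eps > t; near the threshold both are.

def reachable_labels(node, x, eps):
    """Collect the labels of all leaves an eps-bounded attacker can reach."""
    if "label" in node:                      # leaf node
        return {node["label"]}
    f, t = node["feature"], node["threshold"]
    out = set()
    if x[f] - eps <= t:                      # left branch reachable
        out |= reachable_labels(node["left"], x, eps)
    if x[f] + eps > t:                       # right branch reachable
        out |= reachable_labels(node["right"], x, eps)
    return out

def is_robust(tree, x, eps):
    """Sound check: robust iff all reachable leaves predict the same label."""
    return len(reachable_labels(tree, x, eps)) == 1

# Hypothetical stump: predict 1 iff feature 0 exceeds 0.5.
tree = {"feature": 0, "threshold": 0.5,
        "left": {"label": 0}, "right": {"label": 1}}
print(is_robust(tree, [0.9], 0.1))   # True: attacker cannot cross 0.5
print(is_robust(tree, [0.55], 0.1))  # False: attacker can flip the branch
```

The check is sound but incomplete: following both branches over-approximates what the attacker can do, so "not robust" answers may be spurious, while "robust" answers are guaranteed.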

Paper accepted at USENIX Security


Glad to announce that our paper on client-side web security inconsistencies has been accepted at USENIX Security and will be presented in August 2022! Read how web application security crumbles when the same page grants different levels of protection to different clients, leading to the “security lottery” phenomenon 🙂 The paper is available here.

Article accepted at JCS


Our work “Certifying machine learning models against evasion attacks by program analysis” has been accepted for publication in the Journal of Computer Security! This is a significantly extended version of prior work published at ESORICS 2020, where we focused only on decision tree models. In this version, we extend the approach to other classes of machine learning models and we leverage…

New journal article accepted!


Our article on “secure feature partitioning” has been accepted at the EURASIP Journal on Information Security! We discuss how to improve the robustness of machine learning models by training ensembles of classifiers on disjoint sets of features. This provides state-of-the-art security against attackers constrained by the L0-distance. More information in our article.
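The intuition behind feature partitioning can be sketched in a few lines (a minimal illustration, not our actual implementation): split the feature space into disjoint subsets, train one classifier per subset, and classify by majority vote. An L0-bounded attacker who perturbs at most k features can corrupt at most k of the base classifiers, so a sufficiently large ensemble out-votes the attack. The partition count and the round-robin split below are arbitrary choices for the example.

```python
# Toy feature-partitioning ensemble with majority voting (illustrative).
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

N_PARTS = 5  # hypothetical number of disjoint feature subsets

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Disjoint partition of the feature indices (round-robin for simplicity).
parts = [list(range(i, X.shape[1], N_PARTS)) for i in range(N_PARTS)]

# One base classifier per feature subset; each sees only its own features,
# so flipping one feature can affect at most one base classifier.
models = [DecisionTreeClassifier(random_state=0).fit(X_tr[:, p], y_tr)
          for p in parts]

def predict(x):
    """Majority vote over the per-partition classifiers."""
    votes = [int(m.predict(x[p].reshape(1, -1))[0])
             for m, p in zip(models, parts)]
    return Counter(votes).most_common(1)[0][0]

acc = sum(predict(x) == t for x, t in zip(X_te, y_te)) / len(y_te)
print(f"ensemble accuracy: {acc:.2f}")
```

With 5 partitions, the majority vote tolerates up to 2 corrupted base classifiers, i.e. any L0 attack touching features from at most 2 subsets.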

Two more papers coming


I’m glad to announce that two papers were recently accepted and will be available online soon. “The Remote on the Local: Exacerbating Web Attacks Via Service Workers Caches” identifies a new attack on service worker caches, quantifies its prevalence in the wild, and proposes countermeasures. “AMEBA: An Adaptive Approach to the Black-Box Evasion of Machine Learning Models” identifies a trade-off…