Research

My research broadly focuses on formal methods, software security and their intersection. I have worked on many different topics since my PhD and I’m always eager to investigate new research areas. Below you can find the research areas I have contributed to over the last few years. The list is not exhaustive, but it is representative of what I have been working on recently.

Client-side web security: web security is a rich and varied research area. I’ve worked extensively on the analysis of client-side security mechanisms, i.e., web application defenses enforced on the browser side. I was one of the first researchers to investigate the security guarantees of Content Security Policy (CSP), and I enjoy measuring the effectiveness of deployed client-side defenses in the wild. More recently, I’ve developed an interest in client-side web security inconsistencies, i.e., inappropriate configurations of client-side defenses that may void their intended security guarantees, as illustrated by the sketch below.
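
To give a concrete flavor of such an inconsistency, here is a minimal, purely illustrative sketch (the policies and the check are hypothetical, not taken from any specific paper or measurement of mine): a CSP whose script-src directive allows 'unsafe-inline' still ships a policy, yet loses the script-injection mitigation that CSP is meant to provide.

```python
# Purely illustrative sketch: a weak CSP configuration that voids the intended
# script-injection mitigation, plus a naive detector for it.

# Restricts script execution to the page's own origin: injected inline
# <script> payloads are blocked by the browser.
strict_policy = "script-src 'self'"

# Adding 'unsafe-inline' re-enables inline scripts, so the policy no longer
# mitigates script injection even though a CSP header is deployed.
weak_policy = "script-src 'self' 'unsafe-inline'"

def allows_inline_scripts(policy: str) -> bool:
    """Flag policies whose script-src directive whitelists 'unsafe-inline'.

    Deliberately simplified: it ignores default-src fallbacks, nonces and
    hashes, which a real analysis would have to take into account.
    """
    for directive in policy.split(";"):
        tokens = directive.split()
        if tokens and tokens[0] == "script-src" and "'unsafe-inline'" in tokens[1:]:
            return True
    return False

print(allows_inline_scripts(strict_policy))  # False
print(allows_inline_scripts(weak_policy))    # True
```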

Web sessions: virtually all the web applications we use on a daily basis feature a password-protected private area. Upon login, a web session starts and the user gets authenticated access to the web application. Unfortunately, web sessions are fragile and can be attacked in many ways. Over the last few years I’ve analyzed the security of web sessions with respect to different threat models, and I’ve proposed defenses designed to address their shortcomings. Most of the proposed solutions have also been formally verified and proved correct.
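
As a simple illustration of the kind of defenses involved, several classic session attacks (network sniffing, cookie theft via script injection, cross-site request forgery) are mitigated by appropriate cookie attributes. The sketch below is a minimal, hypothetical example of a hardened session cookie; the cookie name and token value are made up and not taken from any specific system I analyzed.

```python
# Minimal, hypothetical example of a hardened session cookie.
from http import cookies

session = cookies.SimpleCookie()
session["sessionid"] = "opaque-random-token"   # hypothetical session identifier
session["sessionid"]["secure"] = True          # only sent over HTTPS (network attackers)
session["sessionid"]["httponly"] = True        # not readable from JavaScript (script injection)
session["sessionid"]["samesite"] = "Lax"       # restricts cross-site requests (CSRF)

# Emits a Set-Cookie header carrying the attributes above.
print(session.output())
```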

Adversarial machine learning: machine learning is now phenomenally popular and has found applications in many different settings, including security-critical ones. However, researchers have identified several vulnerabilities in machine learning algorithms and models, which may lead to exploitation. Over the last few years, I’ve extensively studied evasion attacks against machine learning models, i.e., malicious perturbations of inputs that lead the model to mispredict. In my research, I’ve proposed techniques to improve and verify the security of tree-based models against evasion attacks.
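
The sketch below is a toy, hand-crafted example of what an evasion attack looks like against a single decision stump; the feature values, threshold and perturbation are made up for illustration. Real attacks target full tree ensembles and craft perturbations under a distance budget, which is what my verification work reasons about.

```python
# Toy evasion attack against a hand-written decision stump.
# All numbers are made up for illustration.

def stump_predict(x):
    """Toy 'malware detector': flags the input when feature 0 exceeds 0.5."""
    return "malicious" if x[0] > 0.5 else "benign"

original = [0.51, 0.2]           # correctly classified
print(stump_predict(original))   # -> malicious

# Evasion: a small perturbation of feature 0 crosses the decision threshold,
# so the perturbed input is (wrongly) classified as benign.
evasive = [original[0] - 0.02, original[1]]
print(stump_predict(evasive))    # -> benign
```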