Our latest work on adversarial machine learning has been accepted for publication in Computers & Security. We propose resilience, a new formal security notion for classifiers deployed in adversarial settings, which mitigates significant shortcomings of the traditional robustness notion. We also design an algorithm to soundly verify resilience for tree-based classifiers such as random forests, and we experimentally demonstrate the effectiveness of our proposal.
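For readers unfamiliar with the robustness notion that resilience builds on, the sketch below shows the idea in the simplest possible setting: a pointwise robustness check for a decision stump under L-infinity perturbations. This is purely illustrative and is not the verification algorithm from the paper; the stump, the feature values, and the threshold are all invented for the example.

```python
# Illustrative sketch (NOT the paper's algorithm): pointwise robustness of a
# toy decision stump under L-infinity perturbations of radius eps.
# Robustness asks: does the classifier keep its prediction on x for every
# perturbed x' with max_i |x'_i - x_i| <= eps?

def stump_predict(x, feature, threshold):
    """Toy decision stump: class 0 if x[feature] <= threshold, else class 1."""
    return 0 if x[feature] <= threshold else 1

def is_robust(x, eps, feature, threshold):
    """Exact robustness check for the stump: the prediction is stable iff the
    whole perturbation interval [x[feature]-eps, x[feature]+eps] stays on one
    side of the threshold."""
    lo, hi = x[feature] - eps, x[feature] + eps
    return hi <= threshold or lo > threshold

x = [0.5, 0.2]
print(is_robust(x, eps=0.1, feature=0, threshold=0.7))  # True: [0.4, 0.6] is below 0.7
print(is_robust(x, eps=0.3, feature=0, threshold=0.7))  # False: [0.2, 0.8] crosses 0.7
```

Robustness as checked here is a property of the classifier at individual test points; resilience, as proposed in the paper, is a stronger notion designed to address the shortcomings of this pointwise view.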