Recent advances in adversarial attack methodologies are reshaping the landscape of machine learning security, particularly in the text and computer vision domains. New strategies such as PivotAttack improve query efficiency in hard-label text attacks through an inside-out approach that minimizes search-space traversal. For black-box models, the Contract And Conquer method guarantees that an adversarial example is found within a fixed number of iterations, strengthening robustness testing. Meanwhile, novel white-box attacks leveraging SHAP values prove effective at inducing misclassifications, particularly where traditional gradient-based methods falter. The introduction of motion-aware attack frameworks for event cameras underscores the urgent need to address vulnerabilities in safety-critical applications such as autonomous driving. Collectively, these developments signal a shift toward more systematic and efficient adversarial strategies, responding to pressing commercial concerns about the reliability and security of deployed AI systems.
Existing hard-label text attacks often rely on inefficient "outside-in" strategies that traverse vast search spaces. We propose PivotAttack, a query-efficient "inside-out" framework. It employs a Mult...
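The abstract above is truncated, and PivotAttack's actual procedure is not shown here. As a generic, self-contained illustration of the hard-label text-attack setting it describes (the attacker sees only the predicted label, never scores or gradients), the sketch below greedily swaps words from a small synonym table until the black-box label flips. The toy classifier, word lists, and `attack` helper are all hypothetical, not the paper's method.

```python
# Toy hard-label text attack: try candidate word swaps and stop as soon as
# the black-box label flips. Only label queries are used, no scores.
BAD = {"terrible", "awful", "boring"}

def classify(text):
    # Hard-label black box: returns 1 (negative review) or 0 (positive).
    return int(any(tok in BAD for tok in text.split()))

# Hypothetical synonym candidates for substitution.
SYNONYMS = {"terrible": ["poor"], "awful": ["mediocre"], "boring": ["slow"]}

def attack(text):
    tokens = text.split()
    target = 1 - classify(text)          # the label we want to reach
    for i, tok in enumerate(list(tokens)):
        for cand in SYNONYMS.get(tok, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            if classify(" ".join(trial)) == target:
                return " ".join(trial)   # label flipped: done
            tokens = trial               # keep the swap, continue greedily
    return " ".join(tokens)

print(attack("the movie was terrible and boring"))
```

Real hard-label text attacks must also preserve semantics and grammaticality; the point here is only the query model, where every probe costs one label lookup.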
Black-box adversarial attacks are widely used as tools to test the robustness of deep neural networks against malicious perturbations of input data aimed at a specific change in the output of the mode...
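The abstract above is truncated, so the specifics of the method are not reproducible here. As a minimal, hedged sketch of the decision-based black-box setting it describes, the code below finds an adversarial example for a toy linear model using only hard-label queries: it first samples a point with a different label, then binary-searches along the segment back toward the original input to shrink the perturbation. All names and the toy model are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "black box": the attacker may only query hard labels, never gradients.
w = rng.normal(size=10)

def query_label(x):
    return int(w @ x > 0)

x_orig = rng.normal(size=10)
y_orig = query_label(x_orig)

# Step 1: random search for any starting point with a different label.
while True:
    x_adv = rng.normal(size=10) * 3.0
    if query_label(x_adv) != y_orig:
        break

# Step 2: binary search along the segment from x_orig to x_adv to shrink
# the perturbation while keeping the adversarial label (label queries only).
lo, hi = 0.0, 1.0  # fraction of the way from x_orig to x_adv; hi stays adversarial
for _ in range(40):
    mid = (lo + hi) / 2
    if query_label(x_orig + mid * (x_adv - x_orig)) != y_orig:
        hi = mid
    else:
        lo = mid
x_adv = x_orig + hi * (x_adv - x_orig)

print(query_label(x_adv) != y_orig, float(np.linalg.norm(x_adv - x_orig)))
```

The binary search maintains the invariant that the point at fraction `hi` is always adversarial, which is why the query budget (here 40 probes) directly bounds how close the final example sits to the decision boundary.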
Deep neural networks have demonstrated excellent performance in SAR target detection tasks but remain susceptible to adversarial attacks. Existing SAR-specific attack methods can effectively deceive d...
Although infrared pedestrian detectors have been widely deployed in visual perception tasks, their vulnerability to physical adversarial attacks is becoming increasingly apparent. Existing physical at...
While the rapid development of facial recognition algorithms has enabled numerous beneficial applications, their widespread deployment has raised significant concerns about the risks of mass surveilla...
This demonstration presents Digital-Physical Adversarial Attacks (DiPA), a new class of practical adversarial attacks against pervasive camera-based authentication systems, where an attacker displays ...
State-of-the-art crowd counting and localization methods are built on two main paradigms: density maps and point regression. Given the field's security ramifications, there is active interest in mod...
Although text-to-image diffusion models exhibit remarkable generative power, concept erasure techniques are essential for their safe deployment to prevent the creation of harmful content. This has fos...
The paper introduces a white-box attack on computer vision models using SHAP values. It demonstrates how adversarial evasion attacks can compromise the performance of deep learning models by reducing ...
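The abstract above is truncated, so the paper's exact SHAP-based procedure is not shown. As a generic illustration of attribution-guided white-box evasion, the sketch below flips a toy linear classifier by perturbing only its single most influential feature. For a linear model with a zero baseline, the exact Shapley value of feature `i` is simply `w[i] * x[i]`, which is what the sketch ranks by; the model, names, and step size are illustrative assumptions, not the paper's attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: score = w @ x + b. White-box, so w and b are known.
w = rng.normal(size=20)
b = 0.0
x = rng.normal(size=20)

def predict(v):
    return 1 if w @ v + b > 0 else -1

# Exact Shapley values for a linear model (zero baseline): w[i] * x[i].
attr = w * x
score = w @ x + b

# Minimal-L0 evasion: pick the most influential feature by |attribution|
# and push it just past the decision boundary (1% overshoot).
i = int(np.argmax(np.abs(attr)))
x_adv = x.copy()
x_adv[i] -= (score / w[i]) * 1.01

print(predict(x), predict(x_adv))  # labels differ: the attack succeeded
```

Changing feature `i` by `-1.01 * score / w[i]` maps the score to `-0.01 * score`, so the sign (and hence the predicted label) is guaranteed to flip while only one input coordinate is modified.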
Event cameras have been widely adopted in safety-critical domains such as autonomous driving, robotics, and human-computer interaction. A pressing challenge arises from the vulnerability of deep neura...