Security research is currently focused on vulnerabilities arising from the integration of advanced technologies, particularly in cryptography, machine learning, and decentralized systems. Work on post-quantum cryptography is driving the development of frameworks for evaluating the performance of new algorithms, which are essential for safeguarding digital communications against quantum threats. Concurrently, the adoption of machine learning models in software security has revealed issues such as data leakage that can inflate performance metrics, necessitating more robust evaluation methods. In decentralized settings such as federated learning, researchers are uncovering new attack vectors, including layer-specific vulnerabilities that can compromise model integrity. The emergence of multimodal large language models has likewise introduced novel attack strategies, such as image-based prompt injection, highlighting the need for adaptive defenses. Collectively, these efforts mark a shift toward a more comprehensive view of security across diverse technological landscapes, aiming to harden systems against increasingly sophisticated threats.
Advances in quantum computing threaten digital communication security by undermining the foundations of current public-key cryptography through Shor's quantum algorithm. This has driven the developmen...
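The shape of such a performance-evaluation framework can be sketched as a small timing harness. Everything below is an assumption for illustration: the harness times a classical toy Diffie-Hellman key exchange as a stand-in, since no real post-quantum scheme (Kyber, Dilithium, etc.) is in the standard library; only the benchmarking pattern, per-operation wall-clock timing, carries over.

```python
import time
import secrets

# Toy stand-in for a key-exchange scheme (NOT a PQC algorithm): classical
# Diffie-Hellman over RFC 3526 group 14. Only the benchmarking harness
# pattern is the point here.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF05"
    "98DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB"
    "9ED529077096966D670C354E4ABC9804F1746C08CA18217C32905E462E36CE3B"
    "E39E772C180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFFFFFFFFFF",
    16,
)
G = 2

def keygen():
    sk = secrets.randbits(256)
    return sk, pow(G, sk, P)

def shared(sk, peer_pk):
    return pow(peer_pk, sk, P)

def bench(fn, iters=20):
    """Mean wall-clock seconds per call of fn()."""
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
assert shared(sk_a, pk_b) == shared(sk_b, pk_a)  # sanity: both sides agree
print(f"keygen: {bench(keygen) * 1e3:.3f} ms/op")
print(f"shared: {bench(lambda: shared(sk_a, pk_b)) * 1e3:.3f} ms/op")
```

Swapping the `keygen`/`shared` callables for a real candidate's keygen/encapsulation/decapsulation is the kind of comparison such frameworks automate.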
Searchable Symmetric Encryption (SSE) allows users to search over encrypted data stored on untrusted servers, like cloud providers. While SSE hides the content of queries and documents, it still leaks...
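The leakage that remains even when content is hidden can be made concrete with a minimal sketch. The scheme below is an assumed toy construction, not a deployed SSE design: the client indexes documents under deterministic HMAC tokens, so the server never sees plaintext keywords, yet it still observes which token is queried (search pattern) and which document ids match (access pattern).

```python
import hmac
import hashlib

KEY = b"client-secret-key"  # held only by the client

def token(keyword: str) -> bytes:
    """Deterministic search token; the server sees only this opaque value."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

# Client-side index build: opaque token -> ids of matching documents.
docs = {1: ["invoice", "tax"], 2: ["tax", "travel"], 3: ["travel"]}
encrypted_index = {}
for doc_id, keywords in docs.items():
    for kw in keywords:
        encrypted_index.setdefault(token(kw), []).append(doc_id)

def server_search(tok: bytes):
    # Access-pattern leakage: the untrusted server learns exactly which
    # document ids satisfy each query, even without the keyword itself.
    return encrypted_index.get(tok, [])

print(server_search(token("tax")))   # → [1, 2]
print(token("tax") == token("tax"))  # repeated queries are linkable: True
```

Correlating these matched-id sets across queries is the starting point for the leakage-abuse attacks that motivate this line of work.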
Website fingerprinting (WF) attacks on Tor can infer user destinations from encrypted traffic metadata. However, their real-world effectiveness remains debated due to laboratory settings that fail to ...
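The core of a closed-world WF attack can be sketched in a few lines. All specifics here are illustrative assumptions: the traces and site names are synthetic, and the classifier is a nearest-centroid rule over two crude metadata features (trace length and fraction of incoming packets), whereas real attacks use far richer features and learned models.

```python
def features(trace):
    """trace: list of signed packet sizes (+ outgoing, - incoming)."""
    incoming = sum(1 for p in trace if p < 0)
    return (len(trace), incoming / len(trace))

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Synthetic labeled traces captured per monitored site.
training = {
    "site-a": [[+500, -1500, -1500, -1500], [+500, -1500, -1500]],
    "site-b": [[+500, +500, -1500], [+500, +500, +500, -1500]],
}
centroids = {
    site: tuple(
        sum(features(t)[i] for t in traces) / len(traces) for i in range(2)
    )
    for site, traces in training.items()
}

def classify(trace):
    """Guess the destination site from encrypted-traffic metadata only."""
    f = features(trace)
    return min(centroids, key=lambda s: dist(centroids[s], f))

print(classify([+500, -1500, -1500, -1500, -1500]))  # → site-a
```

The closed-world setup baked into `training` (the victim visits only monitored sites) is exactly the laboratory assumption whose real-world validity the abstract questions.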
Transaction processing systems underpin modern commerce, finance, and critical infrastructure, yet their security has never been studied across the full evolutionary arc of these systems. Over five de...
The rapid evolution of GUI-enabled agents has rendered traditional CAPTCHAs obsolete. While previous benchmarks like OpenCaptchaWorld established a baseline for evaluating multimodal agents, recent ad...
Machine learning models are increasingly used for software security tasks. These models are commonly trained and evaluated on large Internet-derived datasets, which often contain duplicated or highly ...
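The inflation mechanism is simple to demonstrate: with a naive split, copies of one duplicated sample land on both sides, so the model is "tested" on training data. The sketch below deduplicates by content hash before splitting; the corpus is a made-up toy, and real pipelines additionally need near-duplicate detection (e.g. MinHash over shingles), which is omitted.

```python
import hashlib

def dedup_then_split(samples, test_fraction=0.2):
    """Drop exact duplicates by content hash, then split train/test."""
    seen, unique = set(), []
    for s in samples:
        h = hashlib.sha256(s.encode()).hexdigest()
        if h not in seen:  # keep only the first copy of each sample
            seen.add(h)
            unique.append(s)
    cut = int(len(unique) * (1 - test_fraction))
    return unique[:cut], unique[cut:]

corpus = ["strcpy(buf, in)", "memcpy(a, b, n)", "strcpy(buf, in)",
          "gets(line)", "sprintf(s, fmt)"]
train, test = dedup_then_split(corpus)
assert not set(train) & set(test)  # no sample appears on both sides
print(train, test)
```

Running the same split without the dedup step would leave `"strcpy(buf, in)"` in both partitions, which is precisely the leakage that inflates reported metrics.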
Graph Neural Networks (GNNs) have achieved remarkable results in various tasks. Recent studies reveal that graph backdoor attacks can poison the GNN model to predict test nodes with triggers attached ...
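The data-level poisoning step behind such attacks can be sketched without any GNN training. Everything below is a conceptual illustration, not a specific published attack: a fixed trigger subgraph (here a small clique) is attached to chosen victim nodes, whose labels are flipped to the attacker's target class so the trained model associates the trigger with that class.

```python
def attach_trigger(adj, labels, victims, target_label, trigger_size=3):
    """Mutate adjacency dict and labels in place; return trigger node ids."""
    next_id = max(adj) + 1
    trigger = list(range(next_id, next_id + trigger_size))
    # Fixed-pattern trigger: a small clique of new nodes.
    for u in trigger:
        adj[u] = {v for v in trigger if v != u}
    for v in victims:
        adj[v] |= {trigger[0]}     # wire each victim to the trigger
        adj[trigger[0]] |= {v}
        labels[v] = target_label   # flip the victim's label
    return trigger

# Tiny synthetic graph: nodes 0-2 on a path, all labeled "benign".
adj = {0: {1}, 1: {0, 2}, 2: {1}}
labels = {0: "benign", 1: "benign", 2: "benign"}
trig = attach_trigger(adj, labels, victims=[2], target_label="malicious")
print(trig, labels[2])  # → [3, 4, 5] malicious
```

At inference time, attaching the same clique to any test node steers a backdoored model toward `target_label`, which is the behavior defenses must detect.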
Federated learning (FL) enables distributed model training across edge devices while preserving data locality. This decentralized approach has emerged as a promising solution for collaborative learnin...
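The aggregation step at the heart of FL can be sketched as federated averaging: each device trains locally and sends only weights, which the server averages weighted by local dataset size. The client updates and sample counts below are synthetic placeholders, and weights are flat lists rather than real model tensors.

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of client weight vectors; raw data never leaves devices."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

updates = [[0.0, 2.0], [1.0, 0.0], [2.0, 1.0]]  # three edge devices
sizes = [10, 10, 20]                             # local sample counts
print(fed_avg(updates, sizes))  # → [1.25, 1.0]
```

Because the server consumes whatever vectors clients submit, a malicious participant can craft its update to bias specific coordinates, which is the opening that the layer-specific attacks mentioned above exploit.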
Multimodal Large Language Models (MLLMs) integrate vision and text to power applications, but this integration introduces new vulnerabilities. We study Image-based Prompt Injection (IPI), a black-box ...
Large Language Models (LLMs) have emerged as a transformative and disruptive technology, enabling a wide range of applications in natural language processing, machine translation, and beyond. However,...