Recent research on AI privacy increasingly focuses on hardening large vision-language models (LVLMs) and large language models (LLMs) against data leaks and privacy violations. New methods, such as neuron-level gradient gating, aim to improve a model's ability to refuse sensitive queries while preserving overall performance, addressing vulnerabilities that become critical when these models are deployed in sensitive fields like healthcare and finance. Additionally, the introduction of benchmarks like VLM-GEOPRIVACY highlights the need for models to respect contextual integrity when disclosing location information, revealing a gap between model capabilities and human privacy expectations. Meanwhile, privacy attacks on ostensibly secure LLM insight systems underscore the inadequacy of current heuristic protections, prompting calls for more robust design principles. Overall, the field is shifting toward a more nuanced understanding of the balance between utility and privacy, emphasizing strategies that safeguard sensitive information without compromising model effectiveness.
Large Vision-Language Models (LVLMs) have shown remarkable potential across a wide array of vision-language tasks, leading to their adoption in critical domains such as finance and healthcare. However...
Vision-language models (VLMs) have demonstrated strong performance in image geolocation, a capability further sharpened by frontier multimodal large reasoning models (MLRMs). This poses a significant ...
As AI assistants become widely used, privacy-aware platforms like Anthropic's Clio have been introduced to generate insights from real-world AI use. Clio's privacy protections rely on layering multipl...
A deep learning model usually has to sacrifice some utility when it acquires other abilities or characteristics, and privacy preservation exhibits exactly this trade-off with utility. The loss d...
Prior approaches for membership privacy preservation usually update or retrain all weights in neural networks, which is costly and can lead to unnecessary utility loss or even more serious misalignmen...
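The neuron-level alternative alluded to above can be sketched as gradient gating: zero the gradient everywhere except on a small set of selected neurons, so only those weights move during the privacy update while the rest of the network stays frozen. The NumPy sketch below is a minimal illustration of that idea, not the actual method from any of these papers; the selection heuristic (largest per-neuron gradient norm) and all function names are assumptions.

```python
import numpy as np

def select_neurons(grads, k):
    """Pick the k neurons (rows) with the largest gradient norm.

    One plausible selection heuristic for illustration only; the real
    criterion used by neuron-level methods is an assumption here.
    """
    norms = np.linalg.norm(grads, axis=1)
    mask = np.zeros(grads.shape[0], dtype=bool)
    mask[np.argsort(norms)[-k:]] = True
    return mask

def gated_step(weights, grads, mask, lr=0.1):
    """Apply a gradient step only to masked neurons; others stay frozen."""
    gated = grads * mask[:, None]  # zero the gradient for unselected neurons
    return weights - lr * gated

# Toy layer: 4 neurons, 3 inputs each.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
G = rng.standard_normal((4, 3))

mask = select_neurons(G, k=1)   # gate the update to a single neuron
W_new = gated_step(W, G, mask)

print(np.allclose(W_new[~mask], W[~mask]))  # True: frozen neurons untouched
print(np.allclose(W_new[mask], W[mask]))    # False: selected neuron updated
```

Because only the masked rows change, the bulk of the network retains its original weights, which is the intuition behind avoiding the unnecessary utility loss that full retraining incurs.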