Recent research on privacy in AI increasingly focuses on hardening large vision-language models (LVLMs) and large language models (LLMs) against data leaks and privacy violations. New methods such as neuron-level gradient gating aim to improve a model's ability to refuse sensitive queries while preserving overall performance, addressing vulnerabilities that arise when these models are deployed in sensitive fields like healthcare and finance. The introduction of benchmarks like VLM-GEOPRIVACY highlights the need for models to respect contextual integrity when disclosing location information, revealing a gap between model capabilities and human privacy expectations. Meanwhile, privacy attacks on ostensibly secure LLM insight systems demonstrate the inadequacy of current heuristic protections, prompting calls for more principled defenses. Overall, the field is moving toward a more nuanced treatment of the utility-privacy trade-off, seeking strategies that safeguard sensitive information without compromising model effectiveness.
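
Neuron-level gradient gating is mentioned above only in passing, so the following is a minimal sketch of the general idea under stated assumptions, not the published method: identify a small set of neurons implicated in privacy-sensitive behavior, then mask gradients so that refusal fine-tuning updates only those neurons. All identifiers here (`select_privacy_neurons`, `gate_gradients`, the calibration gradients) are hypothetical.

```python
# Minimal sketch of neuron-level gradient gating, assuming a PyTorch setup.
import torch
import torch.nn as nn

def select_privacy_neurons(layer: nn.Linear, calib_grads: torch.Tensor, k: int) -> torch.Tensor:
    """Score output neurons by gradient magnitude on a (hypothetical)
    privacy calibration set and keep the top-k as the gated subset."""
    scores = calib_grads.abs().sum(dim=1)           # one score per output neuron
    mask = torch.zeros(layer.out_features, dtype=torch.bool)
    mask[torch.topk(scores, k).indices] = True
    return mask

def gate_gradients(layer: nn.Linear, neuron_mask: torch.Tensor) -> None:
    """Zero the weight gradients of all non-selected neurons, so refusal
    fine-tuning updates only the gated rows (bias left ungated for brevity)."""
    def hook(grad: torch.Tensor) -> torch.Tensor:
        gated = grad.clone()
        gated[~neuron_mask] = 0.0                   # rows correspond to output neurons
        return gated
    layer.weight.register_hook(hook)

# Usage: gate one layer, then fine-tune on refusal data as usual.
layer = nn.Linear(4096, 4096)
calib_grads = torch.randn(4096, 4096)               # stand-in for real calibration gradients
gate_gradients(layer, select_privacy_neurons(layer, calib_grads, k=64))

opt = torch.optim.SGD(layer.parameters(), lr=1e-3)
loss = layer(torch.randn(8, 4096)).pow(2).mean()    # placeholder for a refusal loss
loss.backward()                                     # hook zeroes non-gated weight rows
opt.step()                                          # weight updates touch only the 64 gated rows
```

Gating the gradient rather than the weights leaves the forward pass untouched, which is one plausible way to make refusal behavior selective without degrading general capability; the actual selection criterion and gating mechanism in the cited work may differ.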