Recent research on large language model (LLM) security increasingly focuses on identifying and mitigating vulnerabilities that malicious actors could exploit. One significant concern is covert attacks such as steganographic finetuning, in which harmful content is embedded within seemingly benign outputs. This has prompted a shift toward understanding how different safety mechanisms interact, with proposals like the Disentangled Safety Hypothesis highlighting the need for more nuanced defenses against jailbreak attacks. In parallel, work on multi-tenant LLM serving systems aims to close timing side channels that could leak sensitive information across tenants, while new watermarking and functional-fingerprinting frameworks are emerging to protect model intellectual property. As LLMs become integral to critical applications, the focus is shifting toward comprehensive risk assessment and treatment strategies that cover both model behavior and broader system vulnerabilities, ensuring that security measures do not compromise performance or usability.
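To make one of these directions concrete, the sketch below illustrates detection for a green-list token watermark in the spirit of Kirchenbauer et al.'s scheme, one common approach behind the watermarking frameworks mentioned above. It is a minimal illustration, not any particular framework's implementation: the seeding strategy, the `gamma` value, and the function names are assumptions chosen for readability.

```python
import random
from math import sqrt

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green" at each step


def green_list(prev_token: int, vocab_size: int, gamma: float = GAMMA) -> set[int]:
    """Pseudorandomly partition the vocabulary, keyed on the previous token.

    A deployed scheme would use a keyed cryptographic hash here;
    random.Random seeded on the token id is for illustration only.
    """
    rng = random.Random(prev_token)
    return set(rng.sample(range(vocab_size), int(gamma * vocab_size)))


def watermark_zscore(tokens: list[int], vocab_size: int, gamma: float = GAMMA) -> float:
    """One-proportion z-test: count tokens that fall in their step's green
    list and compare against the gamma * n hits expected in unwatermarked text."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(
        1
        for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab_size, gamma)
    )
    return (hits - gamma * n) / sqrt(n * gamma * (1 - gamma))
```

On ordinary text the score hovers near zero, since each token lands in its green list with probability roughly `gamma`; text generated by a sampler biased toward the same green lists yields a large positive z-score, which is what lets a detector flag watermarked output without access to the model itself.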