Recent advances in large language models (LLMs) focus on improving structural understanding and user engagement, addressing key limitations in current applications. One notable development introduces a specialized token that encapsulates graph structures, improving comprehension and reasoning on graph-related tasks, with potential benefits for data analysis and knowledge representation. Concurrently, iterative improvement processes in social chat applications are yielding measurable gains in user engagement and steerability, which are crucial for retaining users on competitive platforms. Techniques that enhance inter-head interactions in attention mechanisms are also being explored, enabling more efficient training and lower memory usage, which matters for deploying LLMs in resource-constrained environments. Strategies that infuse randomness into prompts are likewise being tested to boost output diversity, a key requirement for creative applications. Collectively, these efforts reflect a concerted push toward making LLMs more versatile, efficient, and usable in real-world scenarios.
Large language models show great potential for understanding unstructured data, but still struggle with graphs, owing to structural hallucination. Existing approaches mainly eithe...
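As a rough illustration of the graph-token idea (not this paper's actual method), one common approach pools node embeddings into a single vector that can be prepended to an LLM's token sequence. The sketch below is a minimal, untrained version with hypothetical names; real systems learn the encoder end to end.

```python
import numpy as np

def graph_token(node_embeddings: np.ndarray, adjacency: np.ndarray) -> np.ndarray:
    """Collapse a graph into a single 'graph token' embedding.

    A minimal sketch: one round of neighbor averaging (message passing)
    followed by mean pooling over nodes.
    """
    # Row-normalize the adjacency matrix (self-loops avoid empty rows).
    adj = adjacency + np.eye(adjacency.shape[0])
    adj = adj / adj.sum(axis=1, keepdims=True)
    # One message-passing step: each node mixes in its neighbors' features.
    mixed = adj @ node_embeddings
    # Mean-pool all node states into one fixed-size vector.
    return mixed.mean(axis=0)

# Toy graph: 3 nodes with 4-dim features, edges 0-1 and 1-2.
feats = np.arange(12, dtype=float).reshape(3, 4)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
tok = graph_token(feats, A)
print(tok.shape)  # (4,)
```

The resulting vector has the same dimensionality as a token embedding, so it can occupy one position in the model's input sequence.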
This report presents CharacterFlywheel, an iterative flywheel process for improving large language models (LLMs) in production social chat applications across Instagram, WhatsApp, and Messenger. Start...
In large language models built upon the Transformer architecture, recent studies have shown that inter-head interaction can enhance attention performance. Motivated by this, we propose Multi-head Expl...
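One published form of inter-head interaction is "talking-heads" attention, which mixes attention logits across heads with a learned matrix. The numpy sketch below shows that general idea under that assumption; it is not the mechanism from this abstract, and all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def talking_heads_attention(q, k, v, mix):
    """Attention with inter-head mixing of logits.

    q, k, v: (heads, seq, dim); mix: (heads, heads) learned matrix.
    After computing per-head logits, each head's scores become a
    learned combination of every head's scores.
    """
    h, n, d = q.shape
    logits = q @ k.transpose(0, 2, 1) / np.sqrt(d)    # (h, n, n)
    # Mix across the head axis: mixed[i] = sum_j mix[i, j] * logits[j]
    mixed = np.einsum('ij,jnm->inm', mix, logits)
    weights = softmax(mixed, axis=-1)
    return weights @ v                                 # (h, n, d)

rng = np.random.default_rng(0)
h, n, d = 4, 5, 8
q, k, v = (rng.normal(size=(h, n, d)) for _ in range(3))
mix = np.eye(h) + 0.1 * rng.normal(size=(h, h))        # near-identity mixing
out = talking_heads_attention(q, k, v, mix)
print(out.shape)  # (4, 5, 8)
```

Setting `mix` to the identity matrix recovers standard independent-head attention, which makes the mixing matrix a drop-in extension.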
Reasoning can significantly enhance the performance of Large Language Models. While recent studies have exploited behavior-related prompt adjustments to enhance reasoning, these designs remain largely...
Large language models (LLMs) achieve strong capabilities by scaling model capacity and training data, yet many real-world deployments rely on smaller models trained or adapted from low-resource data. ...
Reinforcement learning (RL)-based enhancement of large language models (LLMs) often leads to reduced output diversity, undermining their utility in open-ended tasks like creative writing. Current meth...
Large language models (LLMs) are known to produce outputs with limited diversity. In this work, we study whether infusing random concepts in the prompts can improve the diversity of the generated outp...
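The core idea of infusing random concepts into prompts can be sketched very simply. The concept pool and wrapper phrasing below are illustrative assumptions, not the paper's method; real setups might sample from a large word list or an embedding-space neighborhood.

```python
import random

# Hypothetical concept pool (illustrative only).
CONCEPTS = ["lighthouse", "origami", "thunderstorm", "clockwork", "moss"]

def infuse_random_concept(prompt: str, rng: random.Random) -> str:
    """Prepend a randomly sampled concept to nudge the model toward
    more varied outputs. A minimal sketch of the idea under study."""
    concept = rng.choice(CONCEPTS)
    return f"Incorporate the concept '{concept}'. {prompt}"

rng = random.Random(42)
base = "Write a short story about a journey."
prompts = [infuse_random_concept(base, rng) for _ in range(5)]
for p in prompts:
    print(p)
```

Because each request carries a different sampled concept, repeated calls with the same base prompt are steered toward different regions of the model's output distribution.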