43 papers - avg viability 5.7
Current research in generative models focuses on improving sample quality and diversity while correcting inherent biases and inefficiencies. Recent work refines generative outputs without extensive noise injection or complex resampling, improving fidelity and coverage in high-dimensional data. Techniques such as Condition-Degradation Guidance and instance-aware discretization give finer control over output semantics and adapt the generative process to input complexity. Reinforcement learning is also being integrated so that generative models can optimize non-differentiable rewards, which are common in real-world applications. These developments matter for fields from urban planning to robotics, where high-quality, diverse synthetic data can improve model training and decision-making. As the field matures, the emphasis is shifting toward generative systems that are not only efficient but also robust and adaptable to practical constraints.
Mix-GRM enhances generative reward models with a modular framework and verifiable reinforcement learning, outperforming existing baselines on current benchmarks.
A novel guidance method for text-to-image models that enhances compositional accuracy by using strategically degraded conditions.
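The paper's exact formulation is not given here, but degraded-condition guidance can be read as a variant of classifier-free guidance in which the unconditional branch is replaced by a deliberately degraded condition, so the extrapolation pushes the sample toward the parts of the condition the degradation destroys. A minimal sketch under that assumption (`eps_model`, `degrade`, and the toy denoiser are all hypothetical, not the paper's):

```python
import numpy as np

def degrade(cond, keep=0.5, rng=None):
    """Hypothetical degradation: randomly zero out a fraction of condition entries."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(cond.shape) < keep
    return cond * mask

def guided_eps(eps_model, x, cond, w=5.0):
    """CFG-style guidance: extrapolate away from the degraded-condition prediction.

    eps_full sees the intact condition, eps_weak the degraded one; the guided
    prediction amplifies whatever the degradation removed.
    """
    eps_full = eps_model(x, cond)
    eps_weak = eps_model(x, degrade(cond))
    return eps_weak + w * (eps_full - eps_weak)

# Toy denoiser standing in for a real text-to-image noise predictor.
toy_model = lambda x, c: x * 0.1 + c.mean()
x = np.zeros(4)
c = np.ones(8)
out = guided_eps(toy_model, x, c, w=2.0)
```

At `w=1` the expression collapses to the fully conditioned prediction; larger `w` increasingly over-weights the information the degradation destroyed, which is where a compositional-accuracy gain would plausibly come from.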
TDM-R1 is a reinforcement learning method that improves few-step text-to-image models with non-differentiable rewards, achieving state-of-the-art performance and scaling effectively to strong generative models.
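TDM-R1's training recipe is not spelled out above, but the generic trick for optimizing a non-differentiable reward is a score-function (REINFORCE-style) estimator: the reward only scales the gradient of the policy's log-likelihood, so it is never differentiated itself. A toy illustration with a softmax policy over discrete actions (the reward and policy here are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
K, target = 4, 2          # 4 actions; reward favors action 2
logits = np.zeros(K)

def reward(a):
    # Black-box, non-differentiable reward (stand-in for e.g. a preference score).
    return 1.0 if a == target else 0.0

for step in range(500):
    p = np.exp(logits - logits.max()); p /= p.sum()
    a = rng.choice(K, p=p)
    # Score-function estimator: grad_logits log p(a) = onehot(a) - p,
    # weighted by the (non-differentiable) reward.
    grad = (np.eye(K)[a] - p) * reward(a)
    logits += 0.1 * grad

p = np.exp(logits - logits.max()); p /= p.sum()
# The policy concentrates on the rewarded action without ever
# backpropagating through the reward function.
```

In the few-step text-to-image setting, the "action" would be a sampled image and the log-likelihood term the denoiser's own sampling distribution, but the structure of the update is the same.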
Builds a tool to automatically decompose data into reusable components for recombination and synthesis using diffusion models.
Bi-stage Flow Refinement (BFR) framework offers state-of-the-art bias correction for generative models, improving image quality with minimal computational overhead.
Progressively refines dataset quality with generative models to improve downstream training outcomes.
A novel approach to VAE encoder distillation that enhances high-resolution image reconstruction from low-resolution training data.
An instance-aware discretization framework that enhances the performance of diffusion models by adapting timestep allocations based on input-dependent priors.
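The actual input-dependent prior is not described here, but one simple reading of instance-aware discretization is warping a uniform timestep grid per input: for a fixed step budget, harder inputs get more steps near the low-noise end where fine detail is resolved. A minimal sketch under that assumption (the power-law warp and `complexity` score are hypothetical):

```python
import numpy as np

def instance_schedule(n_steps, complexity):
    """Warp a uniform timestep grid with a power law.

    complexity in [0, 1]; higher values concentrate steps near t=0,
    spending more of the fixed budget on detail-heavy denoising.
    """
    u = np.linspace(1.0, 0.0, n_steps)   # uniform grid, t runs 1 -> 0
    gamma = 1.0 + 2.0 * complexity       # hypothetical warp strength
    return u ** gamma

sched = instance_schedule(8, complexity=0.9)
```

At `complexity=0` the schedule stays uniform; as complexity grows, intermediate timesteps shift toward zero, so more denoising steps land in the low-noise regime while the endpoints are preserved.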
A unified training approach for latent diffusion models that simplifies the training pipeline and improves performance across modalities.
A novel multi-patch transformer architecture for diffusion models that significantly reduces computational cost while maintaining high generative performance.