Recent research in optimization increasingly targets efficiency and robustness across applications, particularly in machine learning and combinatorial problems. Techniques such as stochastic generative optimization use language models to streamline complex system tuning, while algorithms grounded in fractional calculus address imbalanced datasets and improve performance in areas such as financial fraud detection. Dynamic momentum recalibration methods revisit gradient descent to balance noise suppression with signal preservation in deep learning. Hybrid evaluation strategies in genetic programming tackle real-world scheduling problems in satellite operations, trading off computational cost against solution quality. Attention mechanisms integrated into mixed-integer linear programming allow more expressive representations than traditional formulations. Overall, the field is moving toward adaptive, context-aware optimization strategies that address both theoretical limitations and practical challenges in domains ranging from healthcare to space technology.
Optimizing complex systems, ranging from LLM prompts to multi-turn agents, traditionally requires labor-intensive manual iteration. We formalize this challenge as a stochastic generative optimization ...
The Uncertain Agile Earth Observation Satellite Scheduling Problem (UAEOSSP) is a novel combinatorial optimization problem and a practical engineering challenge that aligns with the current demands of...
Mixed-integer linear programming (MILP), a widely used modeling framework for combinatorial optimization, is central to many scientific and engineering applications, yet remains computationally chall...
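For readers unfamiliar with the modeling framework, the sketch below shows a generic MILP, minimize c^T x subject to A x <= b with integer, non-negative x, solved with SciPy's milp interface; the coefficients are illustrative assumptions, not taken from the paper.

    import numpy as np
    from scipy.optimize import Bounds, LinearConstraint, milp

    # Generic MILP: minimize c^T x  subject to  A x <= b,  x integer,  x >= 0.
    c = np.array([-1.0, -2.0])                  # maximize x1 + 2*x2 by minimizing -objective
    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([4.0, 6.0])

    res = milp(
        c=c,
        constraints=LinearConstraint(A, ub=b),  # A x <= b
        integrality=np.ones_like(c),            # every variable is integer
        bounds=Bounds(0, np.inf),               # x >= 0
    )
    print(res.x, res.fun)                       # optimal integer solution and objective value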
Standard Gradient Descent and its modern variants assume local, Markovian weight updates, making them highly susceptible to noise and overfitting. This limitation becomes critically severe in extremel...
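As a concrete reading of "local, Markovian weight updates": each step of plain gradient descent depends only on the current weights and the current gradient, with no memory of earlier iterates. A minimal sketch on an assumed toy quadratic loss:

    import numpy as np

    def gd_step(w, grad_fn, lr=0.1):
        # Markovian / memoryless: the next iterate is a function of the
        # current weights and current gradient only -- no history is kept.
        return w - lr * grad_fn(w)

    # Toy loss L(w) = 0.5 * ||w||^2, so grad L(w) = w (assumed for illustration).
    w = np.array([1.0, -2.0])
    for _ in range(10):
        w = gd_step(w, grad_fn=lambda v: v)
    print(w)  # shrinks toward the minimizer at the origin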
Stochastic Gradient Descent (SGD) and its momentum variants form the backbone of deep learning optimization, yet the underlying dynamics of their gradient behavior remain insufficiently understood. In...
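For reference, one common momentum variant maintains a velocity that accumulates past gradients before they reach the weights; a minimal heavy-ball-style sketch (a generic illustration, not the paper's analysis):

    import numpy as np

    def sgd_momentum_step(w, v, grad, lr=0.01, beta=0.9):
        # The velocity is an exponentially weighted accumulation of past
        # gradients; the weights move along this smoothed direction rather
        # than the raw stochastic gradient.
        v = beta * v + grad
        w = w - lr * v
        return w, v

    w, v = np.array([1.0, -2.0]), np.zeros(2)
    for _ in range(100):
        g = w + 0.01 * np.random.randn(2)   # noisy gradient of an assumed toy quadratic
        w, v = sgd_momentum_step(w, v, g)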
The orienteering problem with time windows and variable profits (OPTWVP) is common in many real-world applications and involves continuous time variables. Current approaches fail to develop an efficie...
Modern neural network optimization relies heavily on architectural priors, such as Batch Normalization and Residual connections, to stabilize training dynamics. Without these, or in low-data regimes with ...
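As a concrete example of the architectural priors mentioned, the block below combines Batch Normalization with a residual (skip) connection in PyTorch; this is a generic illustration, not the architecture studied in the paper.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # Two of the priors named above: BatchNorm layers normalize
        # activations batch-wise, and the skip connection preserves an
        # identity path for gradients.
        def __init__(self, dim):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(dim, dim),
                nn.BatchNorm1d(dim),
                nn.ReLU(),
                nn.Linear(dim, dim),
                nn.BatchNorm1d(dim),
            )

        def forward(self, x):
            return torch.relu(x + self.body(x))

    block = ResidualBlock(16)
    out = block(torch.randn(8, 16))   # batch of 8, feature dimension 16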
Machine learning is increasingly used to improve decisions within branch-and-bound algorithms for mixed-integer programming. Many existing approaches rely on deep learning, which often requires very l...
Stochastic gradient methods are central to large-scale learning, yet their analysis typically treats mini-batch gradients as unbiased estimators of the population gradient. In high-dimensional setting...
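The unbiasedness assumption in question can be stated and checked numerically: with uniform mini-batch sampling, the expected mini-batch gradient equals the full-data gradient. A small sketch on an assumed least-squares objective:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, batch = 1000, 5, 32
    X, y = rng.normal(size=(n, d)), rng.normal(size=n)
    w = rng.normal(size=d)

    def grad(idx):
        # gradient of 0.5 * mean((X[idx] @ w - y[idx])**2) over the rows in idx
        Xi, yi = X[idx], y[idx]
        return Xi.T @ (Xi @ w - yi) / len(idx)

    full = grad(np.arange(n))
    mini = np.mean(
        [grad(rng.choice(n, batch, replace=False)) for _ in range(5000)], axis=0
    )
    # With uniform sampling, the average of many mini-batch gradients
    # approaches the full gradient (unbiasedness); the abstract asks how
    # far this picture carries in high dimensions.
    print(np.linalg.norm(mini - full))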
Multi-objective combinatorial optimization seeks Pareto-optimal solutions over exponentially large discrete spaces, yet existing methods sacrifice generality, scalability, or theoretical guarantees. W...
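For context, a solution is Pareto-optimal when no other feasible solution is at least as good in every objective and strictly better in at least one; a minimal sketch for an assumed bi-objective minimization problem:

    import numpy as np

    def pareto_front(points):
        # Keep the points not dominated by any other point
        # (minimization in every objective).
        pts = np.asarray(points, dtype=float)
        keep = []
        for i, p in enumerate(pts):
            dominated = np.any(
                np.all(pts <= p, axis=1) & np.any(pts < p, axis=1)
            )
            if not dominated:
                keep.append(i)
        return pts[keep]

    # Toy objective vectors (illustrative): [3, 3] is dominated by [2, 2].
    print(pareto_front([[1.0, 4.0], [2.0, 2.0], [3.0, 3.0], [4.0, 1.0]]))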