Recent work in neural network optimization targets efficiency and computational cost, two central obstacles to deploying deep learning models. Hierarchical zeroth-order optimization replaces backpropagated gradients with estimates built from function queries alone, lowering query complexity while maintaining accuracy. Spiking layer-adaptive pruning optimizes spiking neural networks by balancing per-layer connectivity against performance, making them more viable in energy-constrained environments. Differentiable methods for discovering winning lottery tickets learn sparse subnetwork masks directly during training, reaching high sparsity with minimal accuracy loss. The PRISM framework accelerates matrix function computations in training without requiring explicit spectral bounds. Finally, the SCORE method rethinks layer stacking by applying recurrent updates, improving convergence speed and reducing parameter counts. Together, these directions point toward more efficient, scalable, and deployable neural network architectures across a range of applications.
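To make the zeroth-order idea concrete, here is a minimal two-point gradient estimator in NumPy. It is a generic (non-hierarchical) sketch with illustrative names, not code from the work summarized above: the gradient is approximated purely from function evaluations along random directions, so no backpropagation is needed.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_queries=20, rng=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Averages n_queries random-direction finite differences
    (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u, which in expectation
    recovers the true gradient. Only function queries are used.
    """
    rng = np.random.default_rng(rng)
    g = np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_queries

# Usage: minimize a simple quadratic by zeroth-order gradient descent.
f = lambda x: np.sum(x ** 2)
gen = np.random.default_rng(0)
x = np.ones(5)
for _ in range(200):
    x -= 0.05 * zo_gradient(f, x, rng=gen)
```

Query complexity is the key cost here: each gradient estimate costs `2 * n_queries` function evaluations, which is exactly what hierarchical schemes aim to reduce.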
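The differentiable-mask idea behind lottery-ticket discovery can also be sketched briefly. The toy example below, written with made-up names and a plain linear model rather than any cited method, learns weights jointly with a relaxed binary mask `sigmoid(s)`; a sparsity penalty pushes mask entries toward zero, and thresholding at the end yields the sparse subnetwork.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_masked_regression(X, y, lam=0.05, lr=0.1, steps=500, seed=0):
    """Jointly learn weights w and a relaxed binary mask sigmoid(s).

    The effective weights are w * sigmoid(s); the penalty
    lam * mean(sigmoid(s)) encourages sparsity, and both w and the
    mask logits s are trained by plain gradient descent.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.standard_normal(d) * 0.1
    s = np.zeros(d)                       # mask logits; sigmoid(0) = 0.5
    for _ in range(steps):
        m = sigmoid(s)
        resid = X @ (w * m) - y
        g_eff = 2.0 / n * X.T @ resid     # grad of MSE wrt effective weights
        w -= lr * g_eff * m               # chain rule through w * m
        s -= lr * (g_eff * w * m * (1 - m) + lam / d * m * (1 - m))
    return w, sigmoid(s)                  # threshold m > 0.5 for the "ticket"

# Toy problem: only the first two of ten features carry signal.
gen = np.random.default_rng(1)
X = gen.standard_normal((200, 10))
y = 3 * X[:, 0] - 2 * X[:, 1]
w, m = train_masked_regression(X, y)
mse = float(np.mean((X @ (w * m) - y) ** 2))
```

This relaxation makes subnetwork selection differentiable end to end, which is the key contrast with iterative magnitude pruning, where the mask is recomputed by retraining rounds.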