74 papers - avg viability 5.7
Federated learning research currently centers on data privacy, model robustness, and adaptability in heterogeneous environments. Recent work emphasizes improving model performance while minimizing communication cost, as in frameworks that use lightweight learnable prompts or adaptive sampling to gain efficiency without sacrificing accuracy. Memory-centric collaboration and differential-privacy mechanisms are gaining traction, enabling more secure and effective knowledge sharing among clients. Frameworks that mitigate asynchronous data drift and class imbalance are also emerging, keeping federated systems robust in dynamic real-world settings. The focus is shifting toward flexible, scalable solutions that integrate readily into existing infrastructure, opening commercial applications in sectors such as healthcare, finance, and personalized services, where data sensitivity and model accuracy are paramount. Overall, the field is moving toward practical implementations that balance privacy, performance, and operational efficiency.
FedBPrompt enhances federated person re-identification by using learnable visual prompts to improve feature discrimination across decentralized data.
FairFAL is an adaptive federated active learning framework that enhances performance in class-imbalanced and non-IID settings.
HeteroFedSyn is a framework for differentially private tabular data synthesis in heterogeneous federated settings, enabling secure data sharing for various tasks.
A federated learning framework that uses flow-matching generation to protect client privacy and resist poisoning attacks, improving both accuracy and robustness over standard baselines.
A memory-centric social machine learning framework enabling privacy-preserving collaboration among heterogeneous agents by sharing abstracted knowledge instead of model parameters.
DriftGuard is a federated learning framework that efficiently adapts to asynchronous data drift by separating global and local parameters, reducing retraining costs by up to 83% while maintaining high accuracy.
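The global/local parameter split DriftGuard describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter names, the drift signal, and the plain gradient-step adaptation rule are all assumptions.

```python
# Hypothetical sketch of separating globally aggregated parameters from
# client-local ones, so drift adaptation only retrains the local part.

def split_params(params, local_keys):
    """Partition a flat parameter dict into shared and client-local parts."""
    local = {k: v for k, v in params.items() if k in local_keys}
    shared = {k: v for k, v in params.items() if k not in local_keys}
    return shared, local

def adapt_to_drift(local, drift_grads, lr=0.1):
    """On detected drift, take a gradient step on the local parameters only;
    the shared (federated) part is untouched, avoiding full retraining."""
    return {k: v - lr * drift_grads.get(k, 0.0) for k, v in local.items()}

params = {"backbone.w": 1.0, "head.w": 0.5, "head.b": 0.0}
shared, local = split_params(params, local_keys={"head.w", "head.b"})
new_local = adapt_to_drift(local, {"head.w": 0.2})
```

Only `head.*` moves under drift; the shared backbone stays aligned with the global model, which is where the claimed retraining savings come from.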
FedAOT is a defense mechanism for Byzantine-robust federated learning that dynamically weights client updates to improve model resilience against adversarial attacks.
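Dynamic weighting of client updates can be illustrated with a generic robust-aggregation rule: weight each update by its inverse distance to the coordinate-wise median. This is a stand-in heuristic, not FedAOT's actual weighting scheme.

```python
import statistics

def robust_aggregate(updates):
    """Down-weight outlier client updates by their distance to the
    coordinate-wise median (illustrative rule, not FedAOT's)."""
    n = len(updates[0])
    median = [statistics.median(u[i] for u in updates) for i in range(n)]
    weights = []
    for u in updates:
        dist = sum((a - b) ** 2 for a, b in zip(u, median)) ** 0.5
        weights.append(1.0 / (1.0 + dist))
    total = sum(weights)
    weights = [w / total for w in weights]
    return [sum(w * u[i] for w, u in zip(weights, updates)) for i in range(n)]

honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]]
byzantine = [[10.0, -10.0]]  # a single malicious update
agg = robust_aggregate(honest + byzantine)
```

The malicious update gets a small weight, so the aggregate stays near the honest consensus, whereas a plain mean would be dragged toward it.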
A dual-sided framework for stable personalized federated learning that enhances client specificity and global model accuracy through hierarchical alignment and adversarial knowledge transfer.
A personalized adaptive clipping framework for federated learning that significantly improves accuracy and convergence speed while maintaining differential privacy.
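Per-client adaptive clipping builds on the standard DP recipe of clipping each update to a norm bound and adding Gaussian noise scaled to that bound. The sketch below is illustrative: the adaptation rule (moving each client's bound toward its observed update norm) is an assumption, not the framework's method.

```python
import math
import random

def clip_and_noise(update, clip_norm, noise_mult, rng):
    """Clip an update to a norm bound, then add Gaussian noise scaled to
    that bound (the standard Gaussian-mechanism step)."""
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    sigma = noise_mult * clip_norm
    return [x + rng.gauss(0.0, sigma) for x in clipped]

def adapt_clip(clip_norm, observed_norm, eta=0.2):
    """Illustrative per-client adaptation: move the bound toward the
    client's typical update norm to avoid over- or under-clipping."""
    return (1 - eta) * clip_norm + eta * observed_norm

rng = random.Random(0)
# noise_mult=0 isolates the clipping behavior for inspection.
clipped = clip_and_noise([3.0, 4.0], clip_norm=1.0, noise_mult=0.0, rng=rng)
new_bound = adapt_clip(clip_norm=1.0, observed_norm=2.0)
```

A bound tuned per client clips less signal from clients with naturally large updates, which is the intuition behind the reported accuracy and convergence gains.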
A framework for efficient and accurate federated learning on resource-constrained devices by aligning client gradients.
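Client-gradient alignment can be sketched with one simple heuristic: keep, per coordinate, only the components on which all clients agree in sign and zero out conflicting ones. The paper's actual alignment procedure may differ; this is a minimal illustration of the idea.

```python
def align_gradients(client_grads):
    """Average client gradients coordinate-wise, but zero out coordinates
    where clients disagree in sign (illustrative alignment heuristic)."""
    n = len(client_grads[0])
    aligned = []
    for i in range(n):
        vals = [g[i] for g in client_grads]
        if all(v >= 0 for v in vals) or all(v <= 0 for v in vals):
            aligned.append(sum(vals) / len(vals))
        else:
            aligned.append(0.0)  # conflicting direction: drop it
    return aligned

agg = align_gradients([[0.5, -1.0, 2.0], [0.3, 1.0, 1.0]])
```

Dropping conflicting coordinates keeps the global step consistent with every client, which matters most on resource-constrained devices where wasted updates are expensive.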