Low-Rank Adaptation (LoRA) is a parameter-efficient fine-tuning technique that freezes a pre-trained model's weights and injects small trainable low-rank matrices into its layers, drastically reducing the number of parameters that must be updated during adaptation.
Low-Rank Adaptation (LoRA) is a smart way to fine-tune huge AI models without needing massive computing power. It works by only training tiny, specialized parts of the model, making the process much faster and cheaper while still achieving great results.
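To make the idea concrete, here is a minimal numeric sketch (not any particular library's API) of a LoRA-adapted linear layer. The dimensions, rank, and the scaling factor `alpha` are illustrative assumptions; the key point is that the frozen weight `W` is augmented by a low-rank product `B @ A`, and only `A` and `B` would be trained.

```python
import numpy as np

d_in, d_out, r = 768, 768, 8   # hypothetical layer size and LoRA rank
alpha = 16                     # hypothetical scaling hyperparameter
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # trainable, zero-initialized so the
                                        # adapter starts as a no-op

def lora_forward(x):
    # y = W x + (alpha / r) * B A x -- only A and B receive gradient updates
    return W @ x + (alpha / r) * (B @ (A @ x))

full_params = W.size            # parameters a full fine-tune would update
lora_params = A.size + B.size   # parameters LoRA actually trains
print(f"LoRA trains {lora_params / full_params:.1%} of the layer's parameters")
```

With rank 8 on a 768x768 layer, LoRA trains roughly 2% of the layer's parameters, which is where the compute and memory savings come from.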