Infusion: Shaping Model Behavior by Editing Training Data via Influence Functions. Infusion leverages influence functions to craft subtle training-data perturbations that reshape AI model behavior without inserting explicit training signals. Commercial viability score: 8/10 in AI Model Training.
Projected ROI: 0.5-1x at 6 months; 6-15x at 3 years. GPU-heavy products have higher costs but premium pricing. Expect break-even by 12 months, then 40%+ margins at scale.
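As a rough sanity check on those ranges, the toy unit-economics model below shows how break-even near month 12 and 40%+ margins at scale can coexist with modest early ROI. Every figure in it (costs, pricing, sales ramp) is a hypothetical assumption for illustration, not data from this analysis.

```python
# Toy unit economics for a GPU-heavy AI security SaaS.
# All numbers below are hypothetical assumptions, not figures from this analysis.
monthly_cost = 100_000   # assumed flat: GPU inference/retraining infra + team
price_per_seat = 2_000   # assumed premium pricing typical of security tooling
seats_per_month = 5      # assumed linear enterprise-sales ramp

for month in (6, 12, 24, 36):
    revenue = price_per_seat * seats_per_month * month  # cumulative seats * price
    margin = (revenue - monthly_cost) / revenue
    print(f"month {month:2d}: revenue ${revenue:,}, margin {margin:+.0%}")
# Under these assumptions, break-even lands near months 10-12 and
# margins pass 40% by year two as seats compound against flat costs.
```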
Relevant experts:
- Robert Kirk (Independent)
- Edward Grefenstette (AI Centre, UCL)
- Jakob Foerster (FLAIR, University of Oxford)
Signals:
- High Potential: 1/4 signals
- Quick Build: 2/4 signals
- Series A Potential: 4/4 signals
Sources used for this analysis:
- arXiv paper: full-text PDF analysis of the research paper
- GitHub repository: code availability, stars, and contributor activity
- Citation network: Semantic Scholar citations and co-citation patterns
- Community predictions: crowd-sourced unicorn-probability assessments
Analysis model: GPT-4o · Last scored: 4/2/2026
This research highlights the vulnerability of AI models to subtle, undetectable manipulations that can significantly alter model behavior, emphasizing the need for robust data security and interpretability solutions in AI systems.
Develop a SaaS product that monitors training-data integrity and uses AI-driven detection methods to identify potential data poisoning threats in real time.
The Infusion method could disrupt traditional security approaches that rely on detecting explicit anomalies in training data, because it demonstrates subtler but equally damaging forms of data poisoning that such detection misses.
There is an increasing need for data-integrity solutions in AI as models are deployed in mission-critical applications. Enterprises and governments, which are highly motivated to protect their AI investments, would pay for a tool that prevents subtle data poisoning attacks.
Create a security tool for AI systems that detects and mitigates subtle data poisoning attacks to ensure model integrity and robustness.
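A minimal sketch of how such a detector might work, assuming an influence-style audit: score each training example by how strongly a gradient step on it would push the model toward a known-bad probe behavior, then flag statistical outliers. The gradient-alignment proxy (a TracIn-style, Hessian-free approximation), the probe set, and the z-score threshold are all illustrative assumptions, not a published detection method.

```python
import torch

def flag_suspicious_examples(model, loss_fn, train_loader, probe_batch,
                             z_threshold=3.0):
    """Audit sketch (assumed design, not a published method): score each
    training example by how strongly training on it would push the model
    toward a known-bad probe behavior, then flag statistical outliers."""
    x_probe, y_probe = probe_batch

    # Parameter gradient that would reduce loss on the probe behavior.
    probe_loss = loss_fn(model(x_probe), y_probe)
    g_probe = torch.autograd.grad(probe_loss, list(model.parameters()))

    scores = []
    for x, y in train_loader:  # assumed: one example (or tiny batch) per step
        train_loss = loss_fn(model(x), y)
        g_train = torch.autograd.grad(train_loss, list(model.parameters()))
        # Alignment of this example's gradient with the probe gradient:
        # high alignment means a step on it also advances the probe behavior.
        scores.append(sum((a * b).sum() for a, b in zip(g_train, g_probe)).item())

    scores = torch.tensor(scores)
    z = (scores - scores.mean()) / scores.std()    # standardize scores
    return (z > z_threshold).nonzero(as_tuple=True)[0]  # outlier indices
```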
The paper introduces a framework called Infusion, which uses scalable influence-function approximations to compute small perturbations to training data. These perturbations induce specific behaviors in the trained model by steering the parameter updates the edited examples produce, without inserting explicit examples of the target behavior.
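A minimal sketch of the core idea, under a simplifying assumption: drop the inverse-Hessian term of the full influence function and instead nudge a training input so that the gradient step it induces aligns with the gradient that reduces loss on a target behavior. The function name, step sizes, and perturbation budget below are illustrative assumptions, not the paper's actual algorithm.

```python
import torch

def infuse_perturbation(model, loss_fn, x_train, y_train,
                        x_target, y_target, step_size=0.005, steps=10):
    """First-order sketch (assumed simplification of the paper's method):
    edit a training input so a gradient step on it moves the model toward
    lower loss on (x_target, y_target), omitting the inverse-Hessian term."""
    x = x_train.clone().detach().requires_grad_(True)

    # Parameter gradient of the target loss (held fixed during the edit).
    target_loss = loss_fn(model(x_target), y_target)
    g_target = torch.autograd.grad(target_loss, list(model.parameters()))

    for _ in range(steps):
        # Parameter gradient of the training loss, as a function of x.
        train_loss = loss_fn(model(x), y_train)
        g_train = torch.autograd.grad(train_loss, list(model.parameters()),
                                      create_graph=True)

        # Training on x moves parameters along -g_train; for that step to
        # also reduce the target loss, maximize <g_train, g_target>.
        alignment = sum((gt * gq).sum() for gt, gq in zip(g_train, g_target))
        grad_x, = torch.autograd.grad(alignment, x)

        with torch.no_grad():
            x += step_size * grad_x.sign()  # small signed edit to the input
            # Assumed perturbation budget keeps the edit visually subtle.
            x.copy_(torch.clamp(x, x_train - 0.03, x_train + 0.03))
    return x.detach()
```

The paper's "scalable influence-function approximations" presumably reinstate the inverse-Hessian factor this sketch omits, and would need a separate treatment of discrete token spaces for language models.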
Infusion was tested on vision and language tasks using CIFAR-10 and GPT-Neo models, demonstrating that minimal, subtle edits to a small fraction of the training data can induce substantial behavior changes in AI models, with the effects transferring across architectures.
Key challenges include the reliance on accurate influence-function estimates, scalability to large datasets, and the handling of discrete token spaces in language models. Furthermore, real-world application requires comprehensive validation to avoid unintended side effects.