AI Gamestore: Scalable, Open-Ended Evaluation of Machine General Intelligence with Human Games proposes leveraging human-designed games to assess machine general intelligence. Commercial viability score: 7/10 in AI Evaluation Frameworks.
6mo ROI: 2-4x
3yr ROI: 10-20x
Lightweight AI tools can reach profitability quickly: at a $500/mo average contract, 20 customers means $10K MRR by month 6, and 200+ customers by year 3.
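As a quick sanity check on that arithmetic, a minimal sketch in Python; the contract size and customer counts are the illustrative assumptions above, not reported figures:

```python
# Hypothetical MRR projection using the assumptions above:
# $500/mo average contract, 20 customers at 6 months, 200+ at 3 years.
AVG_CONTRACT_USD = 500

def mrr(customers: int, avg_contract: float = AVG_CONTRACT_USD) -> float:
    """Monthly recurring revenue for a given customer count."""
    return customers * avg_contract

print(f"6mo MRR: ${mrr(20):,.0f}")   # $10,000
print(f"3yr MRR: ${mrr(200):,.0f}")  # $100,000
```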
Authors: Lance Ying (MIT), Ryan Truong (Harvard University), Prafull Sharma (MIT), Kaiya Ivy Zhao (MIT)
Signal scores: High Potential 2/4 · Quick Build 4/4 · Series A Potential 2/4
Sources used for this analysis:
- arXiv Paper: full-text PDF analysis of the research paper
- GitHub Repository: code availability, stars, and contributor activity
- Citation Network: Semantic Scholar citations and co-citation patterns
- Community Predictions: crowd-sourced unicorn probability assessments

Analysis model: GPT-4o · Last scored: 4/2/2026
This research offers a novel and scalable method for evaluating AI against human general intelligence by using a diverse set of human-designed games, which are grounded in real-world cognitive skills.
To productize, AI Gamestore could offer a subscription-based API for AI companies to test their models' general intelligence by playing various human games, providing comprehensive scorecards and analytics.
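To make the productization concrete, here is a sketch of what such a subscription API could look like from a customer's side; the base URL, endpoints, auth scheme, and response fields are all hypothetical, not something the paper specifies:

```python
# Hypothetical client for a subscription-based game-evaluation API.
# The base URL, auth scheme, and response schema are illustrative assumptions.
import requests

API_BASE = "https://api.example-gamestore.ai/v1"  # placeholder URL

def submit_evaluation(api_key: str, model_endpoint: str,
                      game_suite: str = "general-100") -> dict:
    """Queue an evaluation run of a model against a suite of human games."""
    resp = requests.post(
        f"{API_BASE}/evaluations",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model_endpoint": model_endpoint, "game_suite": game_suite},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"evaluation_id": "...", "status": "queued"}

def fetch_scorecard(api_key: str, evaluation_id: str) -> dict:
    """Retrieve per-game scores and human-relative analytics once finished."""
    resp = requests.get(
        f"{API_BASE}/evaluations/{evaluation_id}/scorecard",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```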
AI Gamestore could replace traditional benchmarks that measure narrow AI tasks, offering a richer, more comprehensive evaluation method that aligns more closely with human-like cognitive abilities.
There is a growing need in the AI industry for robust evaluation mechanisms that go beyond narrow tasks and assess broader cognitive capabilities. This platform addresses that gap in the AI evaluation market, appealing to AI research labs and developers.
A commercial platform that offers AI developers scalable and comprehensive evaluation metrics for their models, focusing on general intelligence capabilities through game performance.
AI Gamestore uses a framework that sources and adapts various human-designed games from digital platforms. It then evaluates AI models by comparing their performance to human players across these games, providing insights into machine general intelligence.
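The core of that comparison can be sketched as human-normalized scoring per game, aggregated across the suite; the game names, raw scores, and human baselines below are invented for illustration and are not numbers from the paper:

```python
# Minimal sketch of human-relative scoring across a game suite.
# Game names, model scores, and human baselines are made up for illustration.
from statistics import mean

def human_normalized_score(model_score: float, human_mean: float) -> float:
    """1.0 means human-level performance on a game; below 1.0, the model lags."""
    return model_score / human_mean if human_mean else 0.0

# (model raw score, mean human score) per game -- illustrative numbers only
results = {
    "match-three-puzzle": (42.0, 60.0),
    "word-ladder":        (18.0, 20.0),
    "platformer-demo":    (3.0, 25.0),
}

per_game = {g: human_normalized_score(m, h) for g, (m, h) in results.items()}
print(per_game)
print(f"aggregate (mean over games): {mean(per_game.values()):.2f}")
```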
The platform was tested with 100 games sourced from the Apple App Store and Steam. It evaluated vision-language models and compared their performance with human players, highlighting areas where AI still lags behind human cognition.
The platform is limited by the challenges of adapting commercially produced games, IP restrictions, and ensuring the novelty of games to prevent benchmark saturation.