LLM-as-Judge Framework for Evaluating Tone-Induced Hallucination in Vision-Language Models introduces a new benchmark and evaluation framework for measuring tone-induced hallucination in vision-language models under graded prompt intensity. Commercial viability score: 6/10 in Vision-Language Models.
Use This Via API or MCP
This route is the stable paper-level surface for citations, viability, references, and downstream handoffs. Use it as the proof layer behind Signal Canvas, workspace creation, and launch-pack generation.
Page Freshness
Canonical route: /paper/llm-as-judge-framework-for-evaluating-tone-induced-hallucination-in-vision-language-models
Page-specific freshness sourced from this paper's evidence receipt and score bundle.
Agent Handoff
Canonical ID llm-as-judge-framework-for-evaluating-tone-induced-hallucination-in-vision-language-models | Route /paper/llm-as-judge-framework-for-evaluating-tone-induced-hallucination-in-vision-language-models
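Both the REST and MCP examples below resolve through this canonical ID. A minimal Python sketch of building the REST handoff URL — the endpoint pattern is taken from the curl example on this page, while the helper and constant names are hypothetical:

```python
# Hypothetical helper for constructing the agent-handoff URL; the
# endpoint pattern comes from the page's curl example, but the
# function and constant names are illustrative assumptions.
BASE = "https://sciencetostartup.com/api/v1/agent-handoff/paper/"
CANONICAL_ID = (
    "llm-as-judge-framework-for-evaluating-tone-induced-hallucination"
    "-in-vision-language-models"
)

def handoff_url(paper_ref: str) -> str:
    """Return the REST agent-handoff URL for a canonical paper ref."""
    return BASE + paper_ref

print(handoff_url(CANONICAL_ID))
```

The returned URL is the same one the curl example below requests.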
REST example
curl https://sciencetostartup.com/api/v1/agent-handoff/paper/llm-as-judge-framework-for-evaluating-tone-induced-hallucination-in-vision-language-models

MCP example
{
"tool": "get_paper",
"arguments": {
"arxiv_id": "2604.18803"
}
}

source_context
{
"surface": "paper",
"mode": "paper",
"query": "LLM-as-Judge Framework for Evaluating Tone-Induced Hallucination in Vision-Language Models",
"normalized_query": "2604.18803",
"route": "/paper/llm-as-judge-framework-for-evaluating-tone-induced-hallucination-in-vision-language-models",
"paper_ref": "llm-as-judge-framework-for-evaluating-tone-induced-hallucination-in-vision-language-models",
"topic_slug": null,
"benchmark_ref": null,
"dataset_ref": null
}

Paper proof page receipt window
/buildability/llm-as-judge-framework-for-evaluating-tone-induced-hallucination-in-vision-language-models
Subject: LLM-as-Judge Framework for Evaluating Tone-Induced Hallucination in Vision-Language Models
Verdict
Watch
Verdict is Watch because viability or proof quality is intermediate and should be re-evaluated before execution.
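Pairing the 6/10 viability score with the Watch verdict suggests a simple tiering of the 0-10 score. A hypothetical sketch — the thresholds and the "Build"/"Pass" tier names are illustrative assumptions, not the site's published rule; only the fact that 6/10 maps to "Watch" comes from this page:

```python
def verdict(viability_score: float) -> str:
    """Map a 0-10 commercial viability score to a verdict tier.

    Hypothetical thresholds: scores of 8+ would merit "Build",
    intermediate scores (5-8) "Watch", and lower scores "Pass".
    """
    if viability_score >= 8:
        return "Build"
    if viability_score >= 5:
        return "Watch"
    return "Pass"
```

Under these assumed cutoffs, this paper's 6.0 lands in the intermediate "Watch" tier, consistent with the verdict shown above.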
Time to first demo
Insufficient data
No first-demo timestamp, owner estimate, or elapsed demo receipt is attached to this surface.
Structured compute envelope
Insufficient data
No data, compute, hardware, memory, latency, dependency, or serving requirement receipt is attached.
Constellation, claims, and market context stay visible on the paper proof page even when commercialization rails are held back for incomplete proof receipts.
Research neighborhood
Interactive research-neighborhood graph (renders after page load).
Dimensions overall score 6.0
Visual citation anchors from the paper document graph.
References are not available from the internal index yet.
Receipt path
/buildability/llm-as-judge-framework-for-evaluating-tone-induced-hallucination-in-vision-language-models
Paper ref
llm-as-judge-framework-for-evaluating-tone-induced-hallucination-in-vision-language-models
arXiv id
2604.18803
Generated at
2026-04-22T20:31:38.578Z
Evidence freshness
fresh
Last verification
2026-04-22T20:31:38.578Z
Sources
3
References
0
Coverage
50%
Lineage hash
8a284ccd286b8f327035b2bc00000432e61c638a897623be530e2f7db2d08902
Canonical opportunity-kernel lineage hash.
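The 64-hex-character lineage hash above is consistent in length with a SHA-256 digest. A hedged sketch of how such a hash might be derived from a canonical serialization of the opportunity kernel — the input structure and serialization choices are assumptions, as the page does not specify them:

```python
import hashlib
import json

def lineage_hash(kernel: dict) -> str:
    """Hypothetical lineage hash: SHA-256 over a canonical JSON
    serialization (sorted keys, compact separators) of the
    opportunity kernel. The actual input fields are unknown."""
    canonical = json.dumps(kernel, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Sorted keys and compact separators make the serialization deterministic, so the same kernel always yields the same hash.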
External signature
unsigned_external
No founder, registry, pilot, or production-adoption signature is attached to this receipt.
Verification
not_verified
Verification is blocked until an external signature is provided.
References pending verification (3 sources attached).
repo_url
references
This equation defines the score or evaluation function that determines model quality.
Page and bbox are available; crop image is pending.
Hallucinated = 1 if score ≥ 3, else 0
Page and bbox are available; crop image is pending.
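The threshold fragment above implies a simple cutoff decision rule for the judge. A hypothetical sketch — the function name and default are illustrative; only the "score ≥ 3 means hallucinated" rule comes from the fragment:

```python
def is_hallucinated(judge_score: int, threshold: int = 3) -> bool:
    """Binary decision rule recovered from the equation fragment:
    a response is flagged as hallucinated when the LLM judge's
    score meets or exceeds the cutoff (default 3)."""
    return judge_score >= threshold
```

Scores below the cutoff are treated as non-hallucinated, so the judge's graded output collapses to a binary label for evaluation.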
No public competitor map is available for this paper yet.