Evaluating LLM Alignment With Human Trust Models | ScienceToStartup