LLM Performance Researcher
Company: Endeavor AI, Inc
Location: San Francisco, CA 94112
Description:
Full-time | San Francisco or NYC
At Endeavor, we're rebuilding ERP from first principles for $1B+ manufacturing and distribution companies. These companies run on PDFs, spreadsheets, and semi-structured chaos - and we're building LLM-powered systems to parse, match, and reason through all of it with human-level reliability.
We're looking for a researcher with deep experience in LLM performance on document tasks - especially extraction, entity linking, and record matching. You've likely published papers on it. You've probably run head-to-head evals on OpenAI, Claude, and open-source models. You're fluent in both academic benchmarks and in the weird, grimy failure modes that only show up in production.
Your work will directly improve the core performance of our agentic ERP. You'll prototype new techniques, run structured evals, improve few-shot + tool-augmented performance, and help shape how LLMs interface with structured business systems.
What You'll Do
- Design and run experiments to improve extraction, normalization, and matching across real-world documents
- Evaluate LLM performance on noisy, multi-format inputs like scanned PDFs, OCR output, and Excel sheets
- Improve model accuracy and reliability in the face of rare formats, abbreviations, bad formatting, and domain-specific vocab
- Build and own our eval infrastructure for matching, linking, extraction, and schema alignment tasks
- Work with the Applied AI Researcher and Backend Engineers to deploy improvements into production
- Contribute to long-term strategy around fine-tuning, retrieval augmentation, tool use, or structured memory (if and when needed)
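The eval work described above can be sketched concretely. Below is a minimal, illustrative example of a field-level extraction eval for noisy documents: it normalizes OCR-style whitespace and casing before comparing predicted records to gold records, then reports per-field exact-match accuracy. All names here (`normalize`, `field_accuracy`, the sample fields) are hypothetical, not Endeavor's actual API.

```python
import re

def normalize(value: str) -> str:
    """Collapse whitespace and lowercase so OCR noise doesn't mask true matches."""
    return re.sub(r"\s+", " ", value).strip().lower()

def field_accuracy(gold: list[dict], pred: list[dict]) -> dict[str, float]:
    """Per-field exact-match accuracy across a batch of extracted records."""
    hits: dict[str, int] = {}
    totals: dict[str, int] = {}
    for g, p in zip(gold, pred):
        for field, gold_val in g.items():
            totals[field] = totals.get(field, 0) + 1
            if normalize(p.get(field, "")) == normalize(gold_val):
                hits[field] = hits.get(field, 0) + 1
    return {f: hits.get(f, 0) / totals[f] for f in totals}

gold = [{"po_number": "PO-1042", "vendor": "Acme Corp"}]
pred = [{"po_number": "po-1042", "vendor": "ACME  Corp."}]
print(field_accuracy(gold, pred))
# po_number matches after normalization; vendor misses on the trailing period
```

Even a sketch this small surfaces a real design question: how much normalization is "free" (whitespace, case) versus how much hides genuine model errors (dropped punctuation in a legal entity name) — exactly the kind of judgment this role owns.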
Who You Are
- Have deep experience with document understanding and information extraction using LLMs
- Have worked on schema alignment, record linking, or entity resolution at scale
- Have published papers on LLM performance (e.g. extraction, evals, few-shot prompting, matching)
- Understand both academic benchmarks and real-world weirdness
- Know how to make evals meaningful, tight, and fast to iterate on
- Want to work in a setting where research turns into production code fast
- Have a PhD or equivalent research background in NLP, ML, or similar (but we care more about what you've done than what your title says)
Nice to Have
- Experience with post-OCR workflows or noisy document normalization
- Deep intuition for failure modes in enterprise-scale matching/linking systems
- An obsession with eval quality and reproducibility
- Comfort implementing papers and benchmarking models at scale
- Past work in procurement, invoicing, logistics, or any doc-heavy vertical