
Foundation models and the future of market research

11 min read · By Charity June Editorial

Large language models compress text well and synthesize narratives quickly. They do not automatically understand filings the way a skeptical analyst does. The shift for many research shops is not “AI replaces the associate”; it is “the associate spends less time finding passages and more time verifying numbers, stress-testing conclusions, and documenting uncertainty.” That reinvestment is where quality lives—or dies.

Retrieval-augmented generation (RAG) versus fine-tuning

For episodic documents—10-Ks, transcripts, prospectuses—RAG with a curated corpus often beats opaque fine-tunes because you can cite sources and refresh the corpus without retraining weights. Fine-tuning still helps for stable house style, formatting, or domain phrasing, but it is a poor substitute for up-to-date facts. Many teams combine both: retrieval for fidelity, light tuning for tone and templates.
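The retrieve-then-cite shape is easy to sketch. Here is a minimal, hypothetical example using plain term overlap as the relevance score; a production system would use BM25 or embeddings, but the structure (rank a curated corpus, return passages with their source strings attached) is the same. The corpus entries and source labels are invented for illustration.

```python
from collections import Counter

def tokenize(text):
    # Lowercase and strip trailing punctuation; deliberately crude.
    return [t.lower().strip(".,()?") for t in text.split()]

def overlap_score(query, passage):
    # Count shared terms between query and passage.
    q = Counter(tokenize(query))
    p = Counter(tokenize(passage["text"]))
    return sum(min(q[t], p[t]) for t in q)

def retrieve(query, corpus, k=2):
    # Rank the whole corpus and keep the top-k passages,
    # each carrying its citation string.
    ranked = sorted(corpus, key=lambda p: overlap_score(query, p), reverse=True)
    return ranked[:k]

corpus = [
    {"source": "10-K 2023, p. 41",
     "text": "Inventory rose 18 percent on longer supplier lead times."},
    {"source": "Q2 call transcript",
     "text": "Management guided gross margin to 42 percent for the full year."},
    {"source": "Prospectus, p. 7",
     "text": "The offering consists of 10 million primary shares."},
]

hits = retrieve("What did management say about gross margin guidance?", corpus, k=1)
for h in hits:
    print(f"[{h['source']}] {h['text']}")
```

Because every returned passage carries its source string, refreshing facts means swapping documents in the corpus, not retraining anything.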

Where models help most today

Cross-sectional search across exhibits, first drafts of meeting summaries, multilingual triage, and scenario prompts (“What changes to working capital assumptions would invalidate this thesis?”) save hours. Assistants wired into internal note stores can reduce repeated work—provided permissions and retention policies are explicit.

Numeric and table risk

Models hallucinate plausible figures. Production workflows treat any unsourced number as wrong until reconciled against the primary document or data vendor. Pipelines that parse tables with deterministic code, then let the model narrate verified values, outperform “model reads PDF end-to-end” stacks in reliability studies. The extra engineering pays for itself the first time a wrong margin does not reach a client deck.
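A minimal sketch of that split, with an invented exhibit table: deterministic code parses the table, and a reconciliation pass flags any figure in the model's draft that does not match a parsed value within tolerance. The table contents, metric names, and tolerance are assumptions for illustration.

```python
import csv
import io
import re

# Hypothetical exhibit table, already extracted by a deterministic parser.
TABLE_CSV = """metric,fy2023
revenue,1240.0
gross_margin_pct,41.8
"""

def parse_table(raw):
    # Deterministic parse: no model involved at this stage.
    rows = csv.DictReader(io.StringIO(raw))
    return {r["metric"]: float(r["fy2023"]) for r in rows}

def reconcile(draft, table, tol=0.05):
    """Flag every number in the draft that matches no table value
    within the given tolerance. An empty list means the narrative
    only uses verified figures."""
    values = table.values()
    flags = []
    for num in re.findall(r"\d+(?:\.\d+)?", draft):
        x = float(num)
        if not any(abs(x - v) <= tol for v in values):
            flags.append(x)
    return flags

table = parse_table(TABLE_CSV)
draft = "Revenue of 1240.0 with gross margin of 43.8 percent."
print(reconcile(draft, table))  # the unsupported 43.8 is flagged
```

Anything the check flags is treated as wrong until an analyst traces it back to the primary document, which is exactly the "unsourced number is wrong" default described above.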


Dashboards built on verified metrics scale; narratives built on loose numbers do not. (Photo: Unsplash)

Evaluation habits that scale

Maintain a golden set of questions with reference answers your analysts trust. Track precision on citations, numeric match rate after reconciliation, and human edit distance on drafts. Review failures in a weekly forum; pattern failures become tickets. Smaller models with tight tools often beat raw frontier access on total cost and stability—especially for retail-facing surfaces where latency and support load matter.
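Two of those metrics are cheap to compute. The sketch below, with an invented golden-set case, scores citation precision (share of cited sources that appear in the trusted reference set) and uses `difflib` similarity as a proxy for human edit distance between the model draft and the analyst's published version. The case data and field names are assumptions.

```python
import difflib

def citation_precision(cited, valid):
    # Fraction of cited sources that are in the trusted set.
    if not cited:
        return 0.0
    return sum(1 for c in cited if c in valid) / len(cited)

def edit_distance_ratio(draft, final):
    # 1.0 means the analyst rewrote everything; 0.0 means no edits.
    return 1.0 - difflib.SequenceMatcher(None, draft, final).ratio()

golden = [{
    "cited": ["10-K 2023, p. 41", "unattributed blog post"],
    "valid": {"10-K 2023, p. 41", "Q2 call transcript"},
    "draft": "Inventory rose on lead times.",
    "final": "Inventory rose 18 percent on longer supplier lead times.",
}]

for case in golden:
    print(f"citation precision: {citation_precision(case['cited'], case['valid']):.2f}")
    print(f"edit distance:      {edit_distance_ratio(case['draft'], case['final']):.2f}")
```

Tracked weekly over the same golden set, these numbers turn "the model feels better" into a trend line, and recurring failures into tickets.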

On the horizon

Expect better-calibrated uncertainty, standardized attribution strings in summaries, and safer agentic flows with rollback and transaction limits. Those improvements help everyday investors as soon as they show up in products, not just at elite research desks.
