How We Score

Transparent methodology for evaluating LLM APIs. Rankings are based on documented facts, not undisclosed criteria.

Our Principles

Official Sources Only

All provider data comes from official documentation and pricing pages. We don't rely on third-party benchmarks or unverified claims.

Regular Verification

Every provider entry shows a "Last checked" date. Pricing and limits change — we verify against official sources and update accordingly.

No Fabricated Benchmarks

We don't publish our own performance benchmarks. We link to official provider comparisons and public evaluations when available.

Transparent Affiliation

We disclose affiliate relationships. A commission from a featured provider doesn't affect our editorial judgment or scoring methodology.

Scoring Criteria

  • Clarity of Pricing (20%): Simple, predictable pricing. No hidden fees or usage tiers that surprise you. Clear per-token or per-minute rates.
  • Usefulness of Free Tier (20%): Generous enough for real development work, not just toy examples. Reasonable rate limits that don't block prototyping.
  • Developer Experience (15%): Clear documentation, good SDKs, sensible API design, helpful error messages, and responsive support.
  • Speed to First Call (10%): How quickly can a developer go from zero to a working API call? Good docs, clear auth, and working examples.
  • Model Quality (15%): Output quality for real tasks. Based on documented capabilities and public evaluations, not internal testing.
  • Breadth of Modalities (10%): Support for text, image, audio, video, and other modalities. Single-provider multimodal convenience.
  • Value for Solo Builders (10%): Cost-effectiveness at small-to-medium scale. Does the pricing make sense for indie hackers and small teams?
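The overall score is a weighted sum of the criteria above. A minimal sketch of that arithmetic, assuming sub-scores on a 0–10 scale (the weights mirror the table; the example provider's sub-scores are hypothetical):

```python
# Weights from the scoring criteria table; they must sum to 1.0.
WEIGHTS = {
    "pricing_clarity": 0.20,
    "free_tier": 0.20,
    "developer_experience": 0.15,
    "speed_to_first_call": 0.10,
    "model_quality": 0.15,
    "modalities": 0.10,
    "solo_builder_value": 0.10,
}

def overall_score(subscores: dict[str, float]) -> float:
    """Weighted sum of 0-10 sub-scores, one per criterion."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[c] * subscores[c] for c in WEIGHTS)

# Hypothetical provider sub-scores (0-10 scale):
example = {
    "pricing_clarity": 9, "free_tier": 7, "developer_experience": 8,
    "speed_to_first_call": 9, "model_quality": 8, "modalities": 6,
    "solo_builder_value": 8,
}
print(round(overall_score(example), 2))  # 7.9
```

Because the weights sum to 100%, the overall score stays on the same 0–10 scale as the sub-scores.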

Separate Ranking Profiles

We use different scoring profiles for different categories to avoid apples-to-oranges comparisons.

🏅 Free Tier Ranking

  • Free tier generosity: 30%
  • Rate limits: 25%
  • Model quality: 20%
  • Developer experience: 15%
  • Speed to first call: 10%

💰 Pay-As-You-Go Ranking

  • Cost per token: 25%
  • Model quality: 25%
  • Pricing clarity: 20%
  • Modalities: 15%
  • Developer experience: 15%

📦 Subscription Value Ranking

  • Value for money: 30%
  • Breadth of access: 25%
  • Predictability: 20%
  • Modalities included: 15%
  • Model quality: 10%
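Separate profiles matter because the same providers can rank differently under different weights. A short sketch, using the free-tier and pay-as-you-go weights above (provider names and sub-scores are hypothetical):

```python
# Two of the weight profiles listed above.
FREE_TIER_PROFILE = {
    "free_tier_generosity": 0.30, "rate_limits": 0.25,
    "model_quality": 0.20, "developer_experience": 0.15,
    "speed_to_first_call": 0.10,
}
PAYG_PROFILE = {
    "cost_per_token": 0.25, "model_quality": 0.25,
    "pricing_clarity": 0.20, "modalities": 0.15,
    "developer_experience": 0.15,
}

def rank(providers: dict[str, dict[str, float]],
         profile: dict[str, float]) -> list[str]:
    """Order provider names by weighted score under one profile, best first."""
    score = lambda subs: sum(w * subs[c] for c, w in profile.items())
    return sorted(providers, key=lambda name: score(providers[name]), reverse=True)

# Hypothetical sub-scores (0-10): A has a generous free tier, B a strong paid offering.
providers = {
    "ProviderA": {"free_tier_generosity": 9, "rate_limits": 8, "model_quality": 6,
                  "developer_experience": 7, "speed_to_first_call": 8,
                  "cost_per_token": 5, "pricing_clarity": 7, "modalities": 4},
    "ProviderB": {"free_tier_generosity": 4, "rate_limits": 5, "model_quality": 9,
                  "developer_experience": 8, "speed_to_first_call": 6,
                  "cost_per_token": 8, "pricing_clarity": 8, "modalities": 9},
}

print(rank(providers, FREE_TIER_PROFILE))  # ['ProviderA', 'ProviderB']
print(rank(providers, PAYG_PROFILE))       # ['ProviderB', 'ProviderA']
```

The flip in ordering is exactly the apples-to-oranges problem the separate profiles are meant to avoid.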

Data Sources

  • Primary: Official provider documentation and pricing pages (linked in each provider entry)
  • Provider Inventory: Manually curated from public lists of free LLM APIs, then verified
  • Public Benchmarks: Provider websites, the Hugging Face Open LLM Leaderboard, and other documented evaluations when clearly cited
  • Updates: Verified monthly or when providers announce pricing changes
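The monthly verification rule can be sketched as a simple staleness check. The field names and the 30-day window are assumptions for illustration, not the site's actual implementation:

```python
from datetime import date, timedelta

def needs_reverification(last_checked: date, today: date,
                         max_age_days: int = 30) -> bool:
    """Flag a provider entry whose 'Last checked' date is older than the window."""
    return today - last_checked > timedelta(days=max_age_days)

print(needs_reverification(date(2024, 1, 1), date(2024, 3, 1)))   # True: stale
print(needs_reverification(date(2024, 3, 1), date(2024, 3, 15)))  # False: fresh
```

A check like this is how the "Last checked" date shown on each provider entry can drive the monthly update cycle.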

Affiliation Disclosure

API Scout may earn commissions when readers use links to featured providers. This doesn't affect our rankings or methodology — we score providers based on the criteria above, not affiliate relationships. We clearly label featured recommendations and always disclose when a link may earn us a commission.

Use the Methodology with the Live Pages

The scoring rules make the most sense when you compare them against the actual ranking pages and the raw table view.

  • Compare: See the full provider table after reading how the weighting works.
  • Free-tier ranking: See how the separate free-tier model changes the winners.
  • Paid ranking: Use this when value, predictability, and long-term spend matter more than experimentation.