
How We Score AI Tools: Our Editorial Criteria Explained

Transparency matters. Here's the exact 12-point rubric our editors use when reviewing every tool on PromptBulletin.


PromptBulletin Editors

Editorial Team

5 min · April 19, 2026

Why Methodology Transparency Matters

Most AI tool directories are pay-to-rank: a tool's position in a list correlates more with its affiliate commission rate than with its actual quality. We built PromptBulletin out of frustration with that. Our editorial scores are completely independent of commercial relationships.

Every tool reviewed on PromptBulletin goes through the same 12-point rubric. We score across four categories: Output Quality, Ease of Use, Value, and Reliability. Each category has three sub-criteria, each scored 1-10, and the final score is a weighted average.

Output Quality (35% of score)

What we test

We run every tool through a standardized set of test prompts specific to its category. Writing tools get 20 prompts ranging from simple blog intros to complex technical copy. Code tools get 15 real debugging tasks from open-source repos.

- Output Quality: 35%
- Ease of Use: 30%
- Value: 25%
- Reliability: 10%

Ease of Use (30% of score)

Onboarding speed, interface clarity, and learning curve are tested by having three team members with varying technical backgrounds use the tool for the first time. We measure time-to-first-useful-output, number of support docs consulted, and subjective confusion rating.

Value for Money (25% of score)

We compare each tool's pricing against comparable alternatives and against the value delivered at each tier. A tool that costs $100/mo but saves a full-time employee 10 hours a week scores better on value than a $10/mo tool that saves 30 minutes.
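The comparison in that example can be made concrete as hours saved per dollar. This is our own illustrative sketch, assuming roughly 4.33 weeks per month; it is not a formula PromptBulletin publishes.

```python
# Assumed conversion: 52 weeks / 12 months per year.
WEEKS_PER_MONTH = 52 / 12

def hours_saved_per_dollar(hours_per_week, monthly_price):
    """Monthly hours saved divided by monthly subscription cost."""
    return hours_per_week * WEEKS_PER_MONTH / monthly_price

expensive = hours_saved_per_dollar(10, 100)  # $100/mo, saves 10 h/week
cheap = hours_saved_per_dollar(0.5, 10)      # $10/mo, saves 30 min/week

print(f"{expensive:.2f} vs {cheap:.2f} hours saved per dollar")
```

On this measure the $100/mo tool delivers roughly twice the time saved per dollar, which is why raw sticker price alone is a poor proxy for value.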

Reliability (10% of score)

Uptime, consistency of outputs between sessions, and customer support responsiveness. We test tools for at least 30 days before publishing a review. Any tool that changes its pricing or features dramatically after review will have its score updated.

Score Updates and Re-Reviews

AI tools move fast. A 9/10 tool today might be a 7/10 in six months. We schedule re-reviews for every major tool at least once per year, and immediately when a tool makes significant changes.

Community upvotes and reviews factor into the displayed rating but are kept separate from the editor score. Both are shown transparently so you can weigh them appropriately.

Tags: Editorial · Methodology · Transparency · Reviews
The PromptBulletin editorial team is a group of AI researchers, engineers, and writers dedicated to unbiased, rigorous AI tool reviews.