Monthly Benchmark

April 2026 Reliability Baseline

This report establishes the first monthly baseline across 22 platforms using methodology v1.2. It is designed for shortlist refinement, pilot planning, and risk-aware procurement review.

Published: 2026-04-04 | Protocol: v1.2 | Universe: 22 platforms.

Verification Summary

What to Check for April 2026

This page provides a concise, evidence-first guide for April 2026. The focus is actionable verification steps and real-world checks, so procurement decisions rest on repeatable evidence rather than promotional claims:

  • Run a short pilot in a test account (3 sessions), capturing browser versions, proxy settings, and checkout eligibility responses.
  • Document failures with timestamps and screenshots, and use them to decide whether to proceed with annual commitments.
  • Close each pilot with a brief case note and a go/no-go recommendation.
  • Share the evidence pack with procurement and ops for reproducible validation.
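The capture-and-share loop above can be sketched as a small evidence-pack serializer. This is a minimal illustration, not part of the protocol: the `PilotObservation` record, its field names, and the go/no-go rule (any recorded failure blocks a go) are all hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class PilotObservation:
    """One timestamped check from a pilot session (all names hypothetical)."""
    workflow: str
    passed: bool
    browser_version: str
    proxy_profile: str
    notes: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def evidence_pack(platform: str, observations: list[PilotObservation]) -> str:
    """Serialize pilot observations plus a go/no-go flag for review.

    The go/no-go rule here is a deliberately strict placeholder:
    any failed observation yields a no-go.
    """
    failures = [o for o in observations if not o.passed]
    return json.dumps({
        "platform": platform,
        "sessions": len(observations),
        "failures": len(failures),
        "go": not failures,
        "observations": [asdict(o) for o in observations],
    }, indent=2)
```

The pack is plain JSON so it can be attached to a procurement ticket and re-validated later without special tooling.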

Protocol Context

How This Benchmark Was Run

  • Methodology version: v1.2 reliability-weighted protocol
  • Critical run requirement: 3 repeated clean runs per critical workflow
  • Core scoring weights: profile integrity 38%, API reliability 26%, operational cost 21%, team governance 15%
  • Publication blockers: critical drift, unresolved connection leaks, missing no-buy criteria
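As an illustration, the core scoring weights combine into a single 0-to-10 score as a weighted sum. Only the four weights come from the protocol table; the function name, key names, and example inputs are hypothetical.

```python
# Weights from the v1.2 protocol table (they sum to 1.0).
WEIGHTS = {
    "profile_integrity": 0.38,
    "api_reliability": 0.26,
    "operational_cost": 0.21,
    "team_governance": 0.15,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0-10) into a single 0-10 score."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical platform: strong profile integrity, weaker governance.
example = {
    "profile_integrity": 9.0,
    "api_reliability": 8.0,
    "operational_cost": 7.5,
    "team_governance": 6.0,
}
print(round(weighted_score(example), 2))
```

Because the weights sum to 1.0, the combined score stays on the same 0-to-10 scale as the inputs.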

Headline Findings

April 2026 Summary

  • 8 platforms with Level A evidence in baseline flows
  • 9 platforms with Level B evidence and explicit caveats
  • 5 platforms with no-buy blockers in at least one critical flow
  • 14 profiles showing measurable governance drift risk under team scaling

Top-Line Matrix

Reliability Bands by Evidence Level

  • Band A, score 8.5 to 10.0: Level A evidence with stable repeated runs. Eligible for controlled procurement after local validation.
  • Band B, score 7.0 to 8.4: Level B evidence with caveat-bound reliability. Pilot-first; avoid annual commitment until caveats are closed.
  • Band C, score 0.0 to 6.9: Level C evidence or unresolved blockers. No-buy for critical workflows at the current baseline state.

This table is directional. Always verify with your own stack, proxy profile, and team operating model.
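The band thresholds above can be expressed as a minimal classifier, assuming scores on the same 0-to-10 scale; the function name is illustrative.

```python
def reliability_band(score: float) -> str:
    """Map a 0-10 reliability score to the bands defined in the matrix."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("score must be between 0.0 and 10.0")
    if score >= 8.5:
        return "A"  # eligible for controlled procurement after local validation
    if score >= 7.0:
        return "B"  # pilot-first; avoid annual commitment until caveats close
    return "C"      # no-buy for critical workflows at current baseline state
```

The checks run top-down, so the three ranges are exhaustive and non-overlapping over valid scores.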

Primary Drift Patterns

  • Worker and main-thread identity mismatch under extended sessions.
  • Connection narrative instability after proxy pool rotation.
  • Governance drift when seat count increased without role policy hardening.

Decision Impact

  • Budget-first stacks require stronger rollback discipline before annual contracts.
  • API-heavy teams should prioritize lifecycle reliability over entry pricing.
  • Promo claims should be treated as secondary until evidence reaches stable Level A/B.

Action Path

What to Do After Reading This Report

Step 1: Map your shortlisted candidates against this month's baseline bands.
Step 2: Run the Fingerprint Readiness Score to pre-check pilot risk.
Step 3: Execute the ops SOP gates and lock your no-buy criteria.
Step 4: Generate a reusable evidence pack for approval traceability.
Step 5: Only then move through the compare and promo routes for procurement action.
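The gating logic behind steps 1 through 3 can be sketched as a single decision function. This is a simplified, hypothetical rendering of the SOP gates: the blocker-override rule, return values, and function name are illustrations, not the benchmark's actual procedure.

```python
def procurement_action(score: float, open_blockers: int) -> str:
    """Map a 0-10 reliability score and open no-buy blockers to an action.

    Assumption (hypothetical): any open blocker overrides the score,
    mirroring the 'lock no-buy criteria' gate in step 3.
    """
    if open_blockers > 0:
        return "no-buy"                  # blockers override any band
    if score >= 8.5:
        return "controlled-procurement"  # Band A, after local validation
    if score >= 7.0:
        return "pilot-first"             # Band B, caveats still open
    return "no-buy"                      # Band C
```

Keeping the blocker check first means a high score can never mask an unresolved critical issue.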

Evidence Governance

Traceability Links