How we measure the numbers we publish.
Every quantified claim on this site lives here with the sample it came from, the definition behind it, the window it covers, and the caveat that keeps it honest. If a number on the site is not listed here, tell us and we will either add it or retire it.
20+ hours a week returned
Where it appears: Hero, homepage description, meta description
Definition: Hours of recurring operational work removed from a customer team per week after an Autoolize-built agent or pipeline ships to production.
How we measure: Measured per engagement by timing the replaced workflow before and after rollout. Before: a stopwatch sample over 1–2 weeks of representative volume. After: a log-derived count of automated actions, multiplied by the timed per-unit cost (a sketch of this arithmetic follows this entry). Per-engagement numbers vary; "20+" is the lower bound seen across shipped builds, not a ceiling.
Caveat: Not a universal guarantee. Some engagements return less, some return substantially more. We quote the specific per-engagement number in each audit and post-ship report.
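As a sketch of that after-rollout arithmetic, assuming a per-unit time from the stopwatch sample and an action count from production logs (all figures below are placeholders, not from a real engagement):

```python
def weekly_hours_returned(actions_per_week: int, minutes_per_action: float) -> float:
    """Hours of manual work removed per week: log-derived action count
    multiplied by the stopwatch-timed cost of one manual unit."""
    return actions_per_week * minutes_per_action / 60

# Placeholder inputs: 400 automated actions/week, 3.5 timed minutes each.
print(weekly_hours_returned(400, 3.5))  # ~23.3 hours/week
```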
15+ years of senior engineering experience
Where it appears: Hero lede, FAQ, Proof bar
Definition: Cumulative production-software engineering experience of the senior engineer owning the engagement, not a team-size statement.
How we measure: Every Autoolize engagement is owned end to end by an engineer with at least 15 years of production software experience before their first day on AI work. If that is not the case on a specific engagement, we say so up front in the audit.
Caveat: Seniority alone does not guarantee outcomes. We pair it with typed pipelines, eval suites, and measurable payback windows (see below).
10+ countries we deliver across
Where it appears: Proof bar, Footer
Definition: Count of distinct countries where Autoolize has delivered at least one paid engagement.
How we measure: Counted from our engagement records; each country contributes once, regardless of client count (a toy illustration follows this entry). The number grows over time as we take on new geographies.
Caveat: Regulated industries and data-residency-sensitive workloads require an explicit conversation about applicable law before scoping. We default to deploying into the customer's own cloud account.
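A toy illustration of the counting rule, assuming each engagement record carries a country field (records and field names are invented):

```python
# Two German clients still contribute one country between them.
engagements = [
    {"client": "A", "country": "DE"},
    {"client": "B", "country": "DE"},
    {"client": "C", "country": "SE"},
]
countries_delivered = len({e["country"] for e in engagements})  # -> 2
```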
40+ agents and workflows in production
Where it appears: Proof bar
Definition: An agent or workflow is "in production" when it has served real customer-facing or operational traffic within the last 30 days; internal demos do not count.
How we measure: Counted from our internal catalogue of shipped work across all engagements (a sketch of the 30-day rule follows this entry). Includes custom agents, document-extraction pipelines, internal copilots, and retainer builds. Updated as new work ships and as retired workflows roll off.
Caveat: A complex multi-step agent and a small Zapier replacement each count as one; this is a breadth signal, not a complexity ranking.
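A sketch of the 30-day production rule, assuming the catalogue records when each workflow last served real traffic (entries and field names are illustrative):

```python
from datetime import datetime, timedelta, timezone

def in_production(last_served: datetime, now: datetime) -> bool:
    """True if the workflow served real customer-facing or operational
    traffic within the last 30 days; internal demos never enter the list."""
    return now - last_served <= timedelta(days=30)

now = datetime.now(timezone.utc)
catalogue = [
    ("invoice-extraction", now - timedelta(days=3)),   # counts
    ("legacy-triage-bot",  now - timedelta(days=90)),  # rolled off
]
production_count = sum(in_production(ts, now) for _, ts in catalogue)  # -> 1
```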
Under 60 days median payback
Where it appears: Proof bar, Pricing
Definition: Number of days from engagement kickoff until cumulative labour savings (or new revenue, where measured) equal the engagement cost.
How we measure: Per-engagement math: payback days = (fixed-scope build cost ÷ measured weekly savings) × 7. Reported as the median across shipped builds that have run in production for at least 90 days (a worked sketch follows this entry). The sample skews toward inbound-triage and document-extraction engagements, which tend to have fast payback.
Caveat: Greenfield copilots and retainer work have longer, less linear payback curves; we say so before scoping. Retainer value accrues as the portfolio of maintained workflows grows, not as a single ROI line.
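The per-engagement math and the median, sketched with placeholder figures (real numbers are quoted per engagement):

```python
from statistics import median

def payback_days(build_cost: float, weekly_savings: float) -> float:
    """Days until cumulative measured savings equal the fixed-scope cost."""
    return build_cost / weekly_savings * 7

# Placeholder (build cost, measured weekly savings) pairs, restricted to
# builds with at least 90 days in production.
shipped = [(12_000, 1_800), (20_000, 2_500), (9_000, 1_500)]
print(median(payback_days(c, s) for c, s in shipped))  # ~46.7 days
```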
99.4% agent uptime
Where it appears: HeroPanel chip
Definition: Percentage of minutes in a calendar month during which an Autoolize-operated agent successfully serves requests (p95 latency within its SLO, error rate under its threshold).
How we measure: Computed from OpenTelemetry span data aggregated per agent across all currently live retainer-maintained agents (a sketch of the good-minutes computation follows this entry). The 99.4% figure is a rolling 90-day average across that fleet, not a single-agent cherry-pick.
Caveat: We do not quote 99.9% or 99.99% because we do not operate our own infrastructure at those tiers; we ride the SLAs of Claude, OpenAI, AWS, and Cloudflare. When upstreams are down, so are we. We design for graceful degradation and queue-and-replay rather than dual-provider failover, unless that is explicitly scoped.
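A sketch of the good-minutes computation, assuming per-minute p95 latency and error-rate aggregates have already been rolled up from span data; the thresholds and sample values are assumptions, not our real SLOs:

```python
P95_SLO_MS = 1_000       # placeholder per-agent latency SLO
ERROR_THRESHOLD = 0.05   # placeholder per-agent error-rate threshold

# Per-minute aggregates for one agent: (p95 latency in ms, error rate).
minutes = [(450, 0.002), (480, 0.001), (2_100, 0.000), (430, 0.060)]

good = sum(1 for p95, err in minutes
           if p95 <= P95_SLO_MS and err < ERROR_THRESHOLD)
uptime_pct = 100 * good / len(minutes)  # 50.0 for this toy sample
```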
Typical agent cost — single-digit cents per run
Where it appears: HeroPanel scenarios ($0.0023, $0.0041, $0.0089, $0.0017)
Definition: Per-run fully loaded inference cost for a given agent: input tokens, output tokens, retrieval embeddings, and model routing combined. Quoted per scenario, not as a site-wide average.
How we measure: Measured directly from provider-billed token counts and embedding API calls on real production traffic (a sketch of the per-run sum follows this entry). The numbers shown in the HeroPanel scenarios are representative per-agent figures from recent engagements, not fabricated benchmarks.
Caveat: Cost depends heavily on context-window shape and caching ratio. Agents with large RAG payloads cost more; agents with aggressive prompt caching cost less. Your actual figure is a scope-time conversation, not a marketing headline.
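A sketch of the fully loaded per-run sum; the per-million-token prices are placeholders, not current provider rates, and model-routing overhead is folded into them for simplicity:

```python
INPUT_PER_M = 3.00    # $ per 1M input tokens (placeholder)
OUTPUT_PER_M = 15.00  # $ per 1M output tokens (placeholder)
EMBED_PER_M = 0.10    # $ per 1M embedding tokens (placeholder)

def cost_per_run(input_tok: int, output_tok: int, embed_tok: int) -> float:
    """Fully loaded inference cost for one agent run, in dollars."""
    return (input_tok * INPUT_PER_M
            + output_tok * OUTPUT_PER_M
            + embed_tok * EMBED_PER_M) / 1_000_000

print(round(cost_per_run(500, 120, 300), 4))  # -> 0.0033
```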
Questions about a specific number?
Email hello@autoolize.com or book a call — we’ll walk through the engagement, sample, and math behind any figure we publish.