What SaaS Idea Validation with Cortex AIF Returns

Paste a 2–3 sentence description of your SaaS — what it does, who it's for, how it's priced. Cortex AIF classifies it as SaaS, applies SaaS-weighted scoring across 16 analytical blocks, and returns one of three verdicts: GO, CONDITIONAL_GO, or NO_GO.

For SaaS ideas specifically, the pipeline calculates LTV/CAC, gross margin, multi-source TAM, churn risk, MRR projections, competitive density, and pricing viability.

Every number is tagged: VERIFIED, ESTIMATED, CONTRADICTED, or UNVERIFIED. Blocks where evidence was insufficient run at 0.3x weight — the system does not fill gaps with invented scores.
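
The reduced-weight rule can be sketched in a few lines. This is a minimal illustration: the function name and tag-list shape are invented for this sketch, and only the tag names and the 0.3x factor come from this page, not from the actual Cortex AIF code.

```python
REDUCED_WEIGHT_FACTOR = 0.3  # applied when a block's evidence is insufficient

def effective_weight(base_weight: float, tags: list[str]) -> float:
    """Scale a block's weight down instead of inventing a score.

    tags: evidence tags for the block's metrics, e.g. "VERIFIED",
    "ESTIMATED", "CONTRADICTED", "UNVERIFIED".
    """
    if not tags or all(t == "UNVERIFIED" for t in tags):
        # No usable evidence: the block still runs, but at 0.3x weight.
        return base_weight * REDUCED_WEIGHT_FACTOR
    return base_weight

# A block whose every metric is UNVERIFIED runs at reduced weight;
# a block with any verified or estimated evidence keeps full weight.
assert abs(effective_weight(10.0, ["UNVERIFIED", "UNVERIFIED"]) - 3.0) < 1e-9
assert effective_weight(10.0, ["VERIFIED", "ESTIMATED"]) == 10.0
```

The point of the rule is visible in the code: a data gap lowers influence rather than producing a fabricated score.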

Example: AI Code Review SaaS — GO with Conditions (May 2026)

Input (134 characters):

"AI-powered code review tool for solo developers, $19/mo, web-based, Stripe billing, GitHub integration, runs in browser as PR-comments bot."

Tier: Full Investor Report ($97). Real analysis output from Project #312, May 2026.

Verdict

GO — score 7.42/10
Logic: 7.42 ≥ 7.0 threshold → GO. Formula, not opinion.
Confidence: 58% — moderate. The system does not claim certainty it doesn't have.

Unit economics (calculated against market benchmarks)

LTV/CAC: 4.0x ✅ above 3.0 threshold
Gross margin: 95% ✅ software typical
Price point: $19/mo — within verified market range for solo-dev tools
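
The thresholds above reduce to simple checks. A sketch, with function names and the example LTV/CAC figures invented for illustration; the 3.0 threshold and 95% margin are from this report:

```python
def ltv_cac_ok(ltv: float, cac: float, threshold: float = 3.0) -> bool:
    """True when lifetime value covers acquisition cost at the 3.0x bar."""
    return cac > 0 and ltv / cac >= threshold

def gross_margin(revenue: float, cogs: float) -> float:
    """Fraction of revenue left after cost of goods sold."""
    return (revenue - cogs) / revenue

# A 4.0x ratio clears the 3.0 threshold; a 2.0x ratio does not.
assert ltv_cac_ok(ltv=200.0, cac=50.0)
assert not ltv_cac_ok(ltv=100.0, cac=50.0)

# At $19/mo with ~$0.95 of serving cost, margin lands at the
# software-typical 95% reported above.
assert abs(gross_margin(19.0, 0.95) - 0.95) < 1e-9
```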

Score breakdown — formula-visible

Financial: 9.5/10 × weight 10.0 = 95.0 contribution
Legal: 7.0/10 × weight 5.0 = 35.0 contribution
Tech: 7.0/10 × weight (contributes proportionally)

5 blocks ran at reduced weight 0.3x — insufficient public evidence for channels, exit, marketing, risk, team. Shown explicitly, not hidden.
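
The formula-visible aggregation can be sketched as a weighted average plus fixed thresholds. The 7.0 GO cutoff and the Financial/Legal weights come from this report; the CONDITIONAL_GO band and the weight-sum normalization are placeholders, not the pipeline's actual formula:

```python
def aggregate(blocks: list[tuple[float, float]]) -> float:
    """blocks: (score_out_of_10, weight) pairs. Contribution = score x weight,
    normalized by total weight (one plausible normalization)."""
    total = sum(score * weight for score, weight in blocks)
    weight_sum = sum(weight for _, weight in blocks)
    return total / weight_sum

def verdict(score: float) -> str:
    if score >= 7.0:   # threshold from the report: 7.42 >= 7.0 -> GO
        return "GO"
    if score >= 5.0:   # placeholder band for CONDITIONAL_GO
        return "CONDITIONAL_GO"
    return "NO_GO"

# Financial 9.5 x 10.0 and Legal 7.0 x 5.0 are from the report;
# the Tech weight of 5.0 is assumed for the sketch.
blocks = [(9.5, 10.0), (7.0, 5.0), (7.0, 5.0)]
assert verdict(aggregate(blocks)) == "GO"
assert verdict(4.91) == "NO_GO"   # the NO_GO score seen in other runs
```

The verdict is a pure function of the aggregate score, which is the "formula, not opinion" claim in code form.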

Honesty signal on a passing verdict

killer_question_source = rejected_fabrication

The pipeline generated an analytical stress-test question, audited its own output for quality and manipulative language, and rejected it as untrustworthy — falling back to a baseline question instead.

This happened on a GO verdict. The system applies the same anti-fabrication audit whether the idea passes or fails. Most AI tools are more confident when they should be less. Cortex AIF is not.

GO conditions — the formula's contract, not a blessing

Condition 1 (30-day): Trial-to-paid conversion ≥ 5% at $19/mo
Condition 2 (60-day): LTV/CAC > 3 with CAC < $50

These are formula-derived, not advisory. If the conditions aren't met, the math that produced GO no longer holds.
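
The two conditions can be written as a single check. Parameter names are illustrative; the 5%, 3.0x, and $50 thresholds are from the report:

```python
def conditions_hold(trial_to_paid: float, ltv: float, cac: float) -> bool:
    """True only while both GO conditions from the report are met."""
    cond_30d = trial_to_paid >= 0.05            # >= 5% trial-to-paid at $19/mo
    cond_60d = cac < 50.0 and ltv / cac > 3.0   # LTV/CAC > 3 with CAC < $50
    return cond_30d and cond_60d

# Both conditions met: the GO math still holds.
assert conditions_hold(trial_to_paid=0.06, ltv=180.0, cac=45.0)
# Conversion below 5%: the GO math no longer holds.
assert not conditions_hold(trial_to_paid=0.03, ltv=180.0, cac=45.0)
```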

Analysis time: ~25 minutes ($97 Investor tier, deep research mode). Report length: 21,977 characters. Full audit trail in aif_cross_data.

The Same Idea, Five Verdicts

The AI code review SaaS above was run through the pipeline 5 times over several weeks. Verdicts ranged from NO_GO (4.91) to GO (7.42).

This is not a bug. This is honest behavior.

Cortex AIF pulls live market data on every run. Competitor funding rounds, new market research reports, pricing page changes, and category shifts move the underlying data — and the scores move with them.

What this means for you

If you run an idea and get CONDITIONAL_GO — address the conditions, then re-run. A changed market position, a validated pricing benchmark, or a new competitor entering the space will change the score.

If you get NO_GO — the current data doesn't support the idea. That can change. Or it can confirm that the market isn't there.

What this does not mean

The score is not arbitrary. The formula is fixed. The weights are fixed. What changes is the live data the formula processes — the same way a financial model produces different outputs when market inputs change.

Cortex AIF does not remember your previous runs and adjust to please you. Every run starts from live data.

Who Uses SaaS Idea Validation

Solo Developers and Indie Hackers

You have 3 ideas and 4 weekends. Which one actually pencils out?

Run each through the same formula. The block scores show exactly where each idea is strong and where it fails — LTV/CAC, market size, competitive density, technical complexity. Pick the one with the highest aggregate score and the most addressable weakness pattern.

$19 per run. 8–10 minutes. Comparable verdicts across all three ideas.
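
Because every idea runs through the same formula, picking a winner reduces to a sort by aggregate score. A tiny sketch with hypothetical idea names and scores:

```python
# (idea, aggregate score) pairs from three hypothetical runs
runs = [("invoice_tool", 5.80), ("code_review_bot", 7.42), ("api_monitor", 4.91)]

# Highest aggregate score first
ranked = sorted(runs, key=lambda r: r[1], reverse=True)
assert ranked[0][0] == "code_review_bot"
```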

Dev Tools Founders

You're building for developers. The market is skeptical, vocal, and allergic to marketing. Before you build:

The pipeline checks TAM from independent sources (not your own market sizing). It estimates LTV/CAC against actual pricing in your category. It scores competitive density against existing GitHub integrations, VS Code extensions, and browser-based tools.

If the math says GO — you get conditions. If it says NO_GO — you save a year.

VC Scouts — Dev Tools Triage

You receive pitch decks from dev tools founders weekly. Reading each properly takes hours.

Run the idea description through Cortex AIF first ($19, 8–10 minutes). Sort by score. Read the high-score decks. Your follow-up questions land on the specific dimensions the report flagged — not on generic discovery that any founder can script around.

What This Analysis Does Not Cover

Cortex AIF does not:

What Cortex AIF does: takes your SaaS description, pulls live market data, computes LTV/CAC and TAM from independent sources, scores 16 blocks by formula, and tells you GO/CONDITIONAL_GO/NO_GO with the conditions attached — in 8–10 minutes for $19.

How This Compares

Method                     | Time      | Cost      | SaaS metrics                               | Verdict
Cortex AIF Hypothesis      | 8–10 min  | $19       | LTV/CAC, TAM multi-source, churn estimate  | GO/CONDITIONAL_GO/NO_GO by formula
Cortex AIF Investor        | 20–30 min | $97       | All above + deep research, 22K-char report | GO/CONDITIONAL_GO/NO_GO + conditions
YC application             | Hours     | Free      | Self-assessed                              | Honest only if you are
ChatGPT analysis           | Seconds   | $0–$20/mo | None (no live data)                        | Encouraging text
Lean canvas                | 30 min    | Free      | Self-filled assumptions                    | No verdict, no verification
Founder community feedback | Days      | Free      | Opinions                                   | Social bias, optimism pressure

Frequently Asked Questions

What SaaS metrics does Cortex AIF check?

LTV/CAC ratio (threshold: above 3.0), gross margin (SaaS benchmark: 70%+), TAM from multiple independent sources, competitive density, pricing viability, and channel acquisition cost estimates. Each metric is tagged VERIFIED, ESTIMATED, CONTRADICTED, or UNVERIFIED.

What does GO mean for a SaaS idea?

Score 7.0 or above across 16 blocks. Not "build unconditionally" — every GO includes formula-bound conditions. Example from Project #312: trial-to-paid conversion above 5% within 30 days, LTV/CAC above 3 with CAC below $50 within 60 days.

Why do same inputs return different verdicts across runs?

Live market data changes. The same AI code review SaaS returned verdicts ranging from NO_GO (4.91) to GO (7.42) across 5 runs over several weeks — reflecting real market changes, not randomness. The formula is fixed; the live inputs move.

How is SaaS validation different from general idea validation?

SaaS triggers specific pipeline behaviors: LTV/CAC calculation, churn risk estimation, MRR modeling, GitHub/API integration complexity scoring, and SaaS-specific weights on financial and technical dimensions.

What if the pipeline cannot verify my market size?

Blocks with insufficient evidence run at 0.3x weight instead of fabricating scores. The output shows which blocks ran at reduced weight. You see where evidence was thin — not a padded score hiding data gaps.

Case study data (AI Code Review SaaS, Project #312) from Cortex AIF pipeline run, May 2026. Score variance across 5 runs (Projects #306–#322) reflects live market data changes between runs. Audit trail preserved in aif_cross_data for the lifetime of the project record.