Ascent Performance Analytics (APA)¶
Comprehensive Business Plan¶
Version: 1.0
Date: March 2026
Prepared by: Atlas, Director of Research & Intelligence, Vivere Vitalis LLC
Classification: Internal — Strategic Planning Document
This document synthesizes all existing APA + BioThread research into a single actionable business plan. Assumptions are flagged explicitly. Numbers derived from secondary research should be stress-tested against direct customer conversations before capital commitment.
Table of Contents¶
- Executive Summary
- Market Analysis
- Product Architecture
- Development Environment Setup
- Developer Account & Platform Setup
- Phased Milestone Plan
- Cost Model
- Revenue Projections
- Risk Analysis
- Go-to-Market Strategy
1. Executive Summary¶
Vision¶
A world where every serious athletic program — not just the ones with seven-figure budgets — has access to AI-powered performance intelligence that protects athletes and drives competitive advantage.
Mission¶
Build the performance analytics platform that fills the $10K–$50K/year gap in sports tech: delivering 80% of enterprise-grade analytics at a price that NCAA Division II programs, D1 mid-majors, and training academies can actually justify.
The Problem¶
Sports performance analytics has a catastrophic pricing cliff. Enterprise tools (Catapult, Hudl, Kinexon) run $50K–$250K/year. Budget tools (TrainHeroic, TeamBuildr) cost $2K–$8K/year but lack real analytics. The middle is functionally empty.
The result: ~700+ NCAA programs, dozens of semi-pro clubs, and hundreds of training academies are managing athlete performance on Excel, Google Forms, and disconnected apps. Coaches spend 6+ hours on the floor and have no automated insight delivery. They're flying blind on injury risk, recovery status, and training load — not because they don't care, but because there's no tool built for their budget.
The Solution¶
Ascent Performance Analytics (APA) is a SaaS platform that delivers:
- Unified athlete dashboard (training load, wellness, recovery, injury log)
- Multi-source data ingestion (wearables-agnostic: Garmin, WHOOP, Apple Health, Polar, manual entry)
- AI-powered insights layer (fatigue prediction, injury risk flags, load recommendations)
- BioThread integration (GAP Score / CNS fatigue monitoring as premium add-on)
- Modern UX designed for coaches who are on the floor, not behind a desk
Target Market¶
Primary ICP: NCAA D1 mid-major and FCS programs (161 schools), D2 programs (292 schools)
Secondary ICP: Training academies, select D3 programs, USL soccer clubs
Future expansion: Commercial gyms, personal trainers (1.3M+ NSCA/NASM certified in the US)
Revenue Model¶
Monthly SaaS subscription with annual contract option:
- Starter: $500/month — single team, core dashboards, manual data entry
- Pro: $1,000/month — multi-team, AI insights, wearable integrations, custom reports
- Elite: $2,000/month — full suite, API access, white-label option, BioThread included
- BioThread Add-On: $175/month — available on Pro tier; GAP Score, CNS fatigue monitoring
Financial Snapshot¶
| Milestone | Target Date | Revenue |
|---|---|---|
| First 10 paying customers | Month 6 (Oct 2026) | ~$60K ARR |
| Break-even on operating costs | Month 9 (Jan 2027) | ~$120K ARR |
| 50 customers | Month 12 (Apr 2027) | ~$500K ARR |
| $1M ARR | ~Month 20 (Dec 2027) | 100+ customers |
| $2M ARR | ~Month 30 (Oct 2028) | 200+ customers |
Why Now¶
- NCAA December 2025 biometric consent guidance is formalizing performance tech as a budget line item
- AI tooling costs have collapsed — a solo operator + AI agent team can build what required a 20-person engineering team in 2020
- The mid-market gap has been validated by market research and competitor pricing analysis
- Wearable API availability (Garmin, WHOOP, Polar, Apple Health) has matured for third-party integration
- Sports analytics market growing at 15.7% CAGR ($2.29B in 2025 → $4.75B by 2030, MarketsandMarkets)
2. Market Analysis¶
2.1 Total Addressable Market (TAM)¶
The global sports analytics market is estimated at $2.29B–$5.7B in 2025 (range across MarketsandMarkets, Grand View Research, SkyQuest), growing at 15.7–24.3% CAGR. The discrepancy in estimates reflects differing scope definitions; the conservative MarketsandMarkets figure ($2.29B, 15.7% CAGR to $4.75B by 2030) is used as the baseline for this plan.
APA targets a specific sub-segment: team performance analytics software for mid-market sports organizations. This is distinct from broadcast analytics, fan engagement analytics, and professional league data products that dominate the broader market TAM.
NCAA Program Universe (2025–26 season):
| Division | Schools | Sport Teams | Notes |
|---|---|---|---|
| Division I | 361 | ~6,000+ | Mix of P5 giants and mid-majors/FCS |
| Division II | 292 | ~5,021 | Primary ICP |
| Division III | 422 | ~8,157 | Budget-constrained; secondary ICP |
| Total | 1,075 | ~19,000+ | Contract unit = institution, not team |
Source: NCAA.org, NCSA Sports 2025
Semi-Pro and Minor League:
| League | Teams | Notes |
|---|---|---|
| USL Championship | ~25 | Highest semi-pro soccer; independent clubs |
| USL League One | ~25 | One tier below Championship |
| USL League Two | ~90 | Development; very tight budgets |
| NAHL (hockey) | 32 | Tier II junior; budget-constrained |
| ECHL (hockey) | 28 | Professional but minor league |
| Independent baseball leagues | ~100 | Excludes MLB affiliates |
| Total addressable | ~200–250 | Budget and staff constraints limit addressable % |
Training Academies (estimate):
- Elite prep academies (soccer, baseball, multi-sport): ~500–2,000 in the US (estimate)
- College combine/sports performance training centers: ~300–500 (estimate)
- Addressable: ~500–800 with sufficient budget and athlete volume
Future Expansion — Commercial Gyms and Personal Trainers:
- NSCA-certified strength coaches: ~60,000 members
- NASM-certified personal trainers: ~170,000 active certifications
- Commercial gyms with performance focus (not general fitness): ~5,000–10,000 (estimate)
- This segment requires a fundamentally different product tier (individual athlete vs. team) and is a Phase 8+ initiative
2.2 Serviceable Addressable Market (SAM)¶
Applying budget and readiness filters to the raw program counts:
| Segment | Total Orgs | Budget-Qualified | Addressable SAM |
|---|---|---|---|
| D1 mid-major + FCS | ~161 | 120 | 120 schools |
| D1 Power/SEC/ACC (top end) | 65 | 65 | 65 schools (compete with enterprise) |
| Division II | 292 | 200 | 200 schools |
| Division III (larger budgets) | ~100 of 422 | 60 | 60 schools |
| USL Championship / League One | ~50 | 30 | 30 clubs |
| Training academies | ~800 | 200 | 200 facilities |
| Total SAM | | | ~675 organizations |
SAM revenue potential at average $600/month per customer:
675 × $600 × 12 = ~$4.9M total SAM ARR
2.3 Serviceable Obtainable Market (SOM)¶
Realistic 3-year capture rate given APA's resources, sales motion, and product maturity:
| Segment | SAM | Year 1 (3%) | Year 2 (7%) | Year 3 (12%) |
|---|---|---|---|---|
| D1 mid-major + FCS | 120 | 4 | 8 | 14 |
| D1 Power | 65 | 0 | 1 | 3 |
| Division II | 200 | 8 | 18 | 28 |
| Division III | 60 | 1 | 3 | 6 |
| USL / semi-pro | 30 | 0 | 2 | 4 |
| Training academies | 200 | 2 | 8 | 15 |
| Total customers | 675 | ~15 | ~40 | ~70 |
SOM ARR projection:
- Year 1: ~$110K ARR (avg $610/month, accounting for starter-heavy mix)
- Year 2: ~$450K ARR (mix shifting to Pro as customers expand)
- Year 3: ~$1.1M ARR (with BioThread add-on revenue + some Elite contracts)
Note: These are conservative SOM estimates. The combined APA + BioThread + licensing projections in Section 8 show a more aggressive but achievable path to $1M ARR by Month 20.
2.4 Competitive Landscape¶
Enterprise Tier ($50K–$250K/year) — Indirect Competition¶
| Company | Core Product | Annual Price | Key Weakness |
|---|---|---|---|
| Catapult | GPS hardware + analytics platform | $50K–$150K+ | Hardware lock-in; price kills D1/D2 mid-market |
| Hudl | Video analysis + performance | $50K–$100K+ | Video-first, not physiology; expensive |
| Kinexon | Real-time IoT player tracking | $200K+ | Ultra-enterprise; NBA/NFL only |
| Smartabase / Kinduct | Data aggregation + analytics | $30K–$80K | Complex implementation; enterprise IT requirements |
| STATSports | GPS hardware + PlayerTek SaaS | $30–$60/athlete/month | Hardware-required; team-level cost is high |
Mid-Market ($10K–$50K/year) — Direct Competition¶
The gap is real and currently unoccupied by any credible player. No tool exists that delivers true AI-powered analytics to NCAA programs at mid-market SaaS pricing. This is APA's lane.
Budget Tier (<$10K/year) — Adjacent Competition / Market Below¶
| Company | Core Product | Annual Price | Why They're Not the Answer |
|---|---|---|---|
| TrainHeroic | Workout programming + basic tracking | $1,920–$5,000/yr | Programming delivery; no real analytics layer |
| TeamBuildr | S&C programming + AMS | $1,800–$6,000/yr | AMS module exists but no AI, no unified view |
| TrueCoach | Personal trainer client management | $1,200–$3,600/yr | Individual trainers; not team-scale |
| Exercise.com | All-in-one platform | Varies | Jack of all trades; analytics are superficial |
| CoachMePlus | Athlete management system | $5K–$15K/yr | Closer competitor; limited AI; dated UX |
Positioning Map¶
Annual Cost
$250K │ Kinexon
│
$150K │ Catapult
│
$100K │ Hudl, Smartabase
│
$50K │ STATSports
│
│ ════════════════ THE GAP ═══════════════
$24K │ >>> APA ELITE ($2K/month) <<<
$12K │ >>> APA PRO ($1K/month) <<<
$6K │ >>> APA STARTER ($500/month) <<<
│ ════════════════════════════════════════
$5K │ TeamBuildr (top tier)
$2K │ TrainHeroic, TrueCoach
└────────────────────────────────────────
Budget Mid-Market Enterprise
(Excel) (APA's zone) (pro sports)
Key Differentiators APA Must Build¶
- Hardware-agnostic data ingestion — works with whatever devices teams already own; no hardware lock-in
- AI insights layer — not just dashboards; automated recommendations, fatigue flags, injury risk models
- Unified data view — one platform connecting wellness surveys, training load, wearable data, injury log
- Compliance-ready — NCAA biometric consent templates built in; reduces procurement friction
- Modern UX — designed for coaches on phones and tablets, not enterprise IT administrators
- BioThread CNS GAP Score — proprietary dual-index recovery metric unavailable anywhere else at this price
2.5 Why Now — Market Timing Factors¶
1. AI Cost Collapse (2023–2026)
The cost of building a capable ML analytics layer has dropped 10x since 2022. LLM APIs, pre-trained models, and AI agent tooling make it possible for a solo operator + AI team to build what previously required a 20-person engineering org. This is the core reason APA is viable now when it wasn't 3 years ago.
2. NCAA Biometric Consent Guidance (December 2025)
The NCAA's CSMAS approved new performance technology guidance requiring informed consent for biometric tracking. This is a tailwind: it's creating formal budget line items and procurement processes around performance tech at programs that previously used ad-hoc tools. APA can be the compliant solution that makes this easy.
3. Wearable API Maturity
Garmin Health API (free for approved developers), WHOOP Developer Platform (currently free), Polar Open AccessLink (free), and Apple HealthKit (available via Expo/React Native) have all reached maturity. The integration surface exists and is accessible. Three years ago, several of these were closed or nascent.
4. The TrainHeroic Ceiling
TrainHeroic's 2021 acquisition by Peaksware (for $12M per reports) validated the market, but TrainHeroic hasn't meaningfully expanded its analytics capabilities. The product has hit its ceiling as a programming-delivery tool. The coaching community is actively looking for what comes next.
5. Post-COVID Athletic Department Technology Investment
Athletic departments that were forced to go remote in 2020–2021 upgraded their technology infrastructure. The muscle memory for software purchasing is there in a way it wasn't pre-pandemic.
6. Sports Analytics Market Growth
15.7–24.3% CAGR creates expansion demand. New programs are adding performance staff; existing programs are formalizing their analytics requirements. The market is growing into APA's lane.
3. Product Architecture¶
3.1 Tech Stack Recommendation¶
The right stack for APA optimizes for: (1) shared codebase across mobile and web, (2) ability to ship fast with an AI agent team, (3) cost efficiency at early scale, (4) headroom to grow.
Frontend — Web Dashboard¶
Next.js 14+ (App Router)
- React-based, familiar to any React developer (or AI coding agent)
- App Router enables server-side rendering and streaming — critical for real-time dashboard performance
- Edge deployment on Vercel for low-latency global access
- Ideal for: team management console, analytics dashboards, reporting, admin panel
Frontend — Mobile Apps¶
React Native with Expo (Managed Workflow)
- Single codebase for iOS and Android
- Expo Managed Workflow handles 90% of native configuration automatically
- react-native-healthkit package provides Apple HealthKit access (iOS only)
- react-native-health-connect provides Android Health Connect access
- Expo EAS Build for cloud-based iOS/Android builds (no Mac required for CI builds)
- Expo OTA updates for hot-patching without App Store review cycles
- Ideal for: athlete-facing wellness check-in app, coach mobile dashboard, real-time alerts
Why Expo over Flutter: React Native shares code with the Next.js web dashboard (shared components, hooks, API clients). Flutter requires a fully separate codebase for web. For a solo operator with an AI team, React + React Native is the right choice. React Native is also the dominant framework in sports tech apps.
Backend — API Server¶
Node.js with Express or NestJS
- TypeScript across the stack for type safety
- NestJS recommended for structured, scalable architecture as the codebase grows
- REST API for CRUD operations; WebSockets for real-time dashboard updates
- Rate limiting, auth middleware, API key management for third-party integrations
- Deployed on Railway or Render (more predictable costs than AWS at early scale)
Backend — ML/Analytics Engine¶
Python (FastAPI) — separate microservice
- Python is the native language of the ML ecosystem (scikit-learn, PyTorch, pandas, numpy)
- FastAPI provides async performance and auto-generated API docs
- Runs independently from the Node.js API server; called via internal HTTP
- BioThread GAP Score calculation engine lives here
- ML models for injury risk, training load recommendations
- Deployed on Railway or Google Cloud Run (scale-to-zero at low traffic)
Database¶
Supabase (PostgreSQL)
- Managed PostgreSQL with built-in auth, real-time subscriptions, and Row Level Security
- Storage for CSV imports and file uploads
- Postgres TimescaleDB extension for time-series wearable data (if needed at scale)
- Pro plan: $25/month; covers 8GB database, 250GB bandwidth, 50GB storage
- Auth module replaces building custom auth (huge time savings)
- Direct SQL access and REST/GraphQL API built-in
- Real-time subscriptions for live dashboard updates
Infrastructure¶
- Hosting: Vercel (Next.js web app, generous free tier to start) + Railway (Node.js API + Python ML service)
- CDN: Vercel Edge Network (included) for global performance
- Monitoring: Sentry (error tracking, $26/month Pro), Vercel Analytics
- Email: Resend ($20/month for transactional email) or SendGrid
- Background Jobs: Railway cron jobs or Supabase Edge Functions for scheduled data syncs
- File Storage: Supabase Storage for CSV uploads (included in Supabase plan)
Full Stack Reference¶
┌─────────────────────────────────────────────────────────┐
│ CLIENT LAYER │
│ Next.js Web Dashboard React Native Mobile (Expo) │
│ (coaches, admin, reports) (athletes, coaches on mobile)│
└────────────────┬────────────────────────────────────────┘
│ HTTPS / WebSocket
┌────────────────▼────────────────────────────────────────┐
│ API GATEWAY LAYER │
│ Node.js / NestJS REST API (TypeScript) │
│ Auth (Supabase Auth) | Rate Limiting | Logging │
└────────┬──────────────────┬──────────────────┬──────────┘
│ │ │
┌────────▼───────┐ ┌───────▼───────┐ ┌──────▼──────────┐
│ Supabase DB │ │ Python ML │ │ Data Ingestion │
│ (PostgreSQL) │ │ Service │ │ Service │
│ Athlete data │ │ (FastAPI) │ │ Wearable APIs │
│ Team configs │ │ AI insights │ │ CSV parsers │
│ Session logs │ │ GAP Score │ │ Webhook handlers│
└────────────────┘ └───────────────┘ └─────────────────┘
3.2 Data Aggregation Layer¶
This is the hardest engineering problem in APA. Getting clean, normalized, comparable data from 5+ different wearable brands with different data schemas, sync frequencies, and API contracts is where most sports analytics platforms either fail or lock themselves to a single vendor.
The Core Challenge¶
Every wearable vendor uses different: - Data schemas (Garmin calls it "stressScore"; WHOOP calls it "strain"; Apple calls it "heartRateVariabilitySDNN") - Sampling rates (WHOOP syncs nightly; Garmin syncs on connection; Apple Watch syncs continuously) - Units of measurement (different HRV calculation methods across vendors) - Auth flows (OAuth2 with different scopes, token expiry, refresh patterns) - Rate limits (Garmin: 1000 calls/day per app; WHOOP: undisclosed; Apple HealthKit: local-device only)
Architecture Solution: Normalized Data Layer (NDL)¶
The NDL is an abstraction layer that translates vendor-specific data into APA's canonical schema before it ever touches the database. Think of it as a universal translator for biometric data.
Canonical Athlete Metric Schema:
{
"athlete_id": "uuid",
"recorded_at": "ISO8601 timestamp",
"source": "garmin|whoop|apple_health|polar|manual",
"metric_type": "hrv_rmssd|rhr|sleep_duration|sleep_quality|training_load|wellness_score|...",
"value": 42.5,
"unit": "ms|bpm|hours|score_0_100|...",
"confidence": 0.95,
"raw_payload": { ... } // vendor original JSON preserved
}
Ingestion Pipeline Architecture:
External Sources Connectors NDL DB
───────────────── ────────── ──────── ────
Garmin Health API ──▶ Garmin Adapter ──▶│ │
WHOOP API ──▶ WHOOP Adapter ──▶│ Schema │──▶ athlete_metrics
Apple HealthKit ──▶ Health Adapter ──▶│ Mapper │ (canonical)
Polar AccessLink ──▶ Polar Adapter ──▶│ │
CSV Upload ──▶ CSV Parser ──▶│ │
Manual Entry (app) ──▶ Direct Write ──▶│ │
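The adapter stage of the pipeline above can be sketched in Python: a vendor adapter translates an incoming payload into the canonical schema before anything touches the database. This is a minimal sketch; the Garmin-side field names (`calendarDate`, `restingHeartRate`) are illustrative placeholders, not guaranteed API fields, and the 0.85 confidence follows the trust-scoring convention described later in this section.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AthleteMetric:
    """Canonical record matching the schema above."""
    athlete_id: str
    recorded_at: str                # ISO8601 timestamp
    source: str                     # garmin|whoop|apple_health|polar|manual
    metric_type: str                # hrv_rmssd, rhr, sleep_duration, ...
    value: float
    unit: str
    confidence: float
    raw_payload: dict[str, Any] = field(default_factory=dict)

def garmin_adapter(athlete_id: str, payload: dict[str, Any]) -> list[AthleteMetric]:
    """Translate a (hypothetical) Garmin daily-summary payload into canonical metrics."""
    metrics: list[AthleteMetric] = []
    if "restingHeartRate" in payload:
        metrics.append(AthleteMetric(
            athlete_id=athlete_id,
            recorded_at=payload["calendarDate"] + "T00:00:00Z",
            source="garmin",
            metric_type="rhr",
            value=float(payload["restingHeartRate"]),
            unit="bpm",
            confidence=0.85,        # Garmin confidence per the trust-scoring table
            raw_payload=payload,    # original vendor JSON preserved
        ))
    return metrics
```

Each vendor gets its own adapter with the same output type, so the schema mapper and everything downstream stay vendor-agnostic.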
Wearable Integration Details¶
Garmin Health API
- Access: Free for approved business developers (developer.garmin.com)
- Auth: OAuth 2.0; athletes connect Garmin account to APA app
- Data available: HRV status, sleep, stress, body battery, activities, health snapshots
- Sync model: Push-based via webhook (Garmin pushes data to APA endpoint on sync)
- Key metrics: dailySummary.averageStressLevel, sleepData.sleepScore, activities.heartRateVariability
- Running dynamics: Ground Contact Time, Vertical Oscillation (for BioThread CNS proxy)
- Implementation complexity: Medium — webhook handling is non-trivial; data normalization required
WHOOP Developer Platform
- Access: Currently free (developer.whoop.com); requires personal WHOOP membership to develop
- Auth: OAuth 2.0
- Data available: Recovery score, strain, HRV, RHR, sleep performance, sleep stages
- Sync model: Pull-based API (APA polls on schedule or at athlete request)
- Key metrics: recovery.score, recovery.hrv_rmssd, recovery.resting_heart_rate, sleep.performance_percentage
- Implementation complexity: Low-Medium — clean REST API, good documentation
- Note: The $15K/year API license referenced in older research appears to be for commercial-scale enterprise integrations; developer access is currently free per official docs (2025)
Apple HealthKit (iOS via React Native)
- Access: Via react-native-healthkit npm package; requires Apple Developer Program membership ($99/year)
- Auth: On-device permission prompts (no OAuth); data stays on device, APA reads with user permission
- Data available: HRV (RMSSD), RHR, sleep stages, workouts, activity data, steps, VO2Max estimate
- Key consideration: HealthKit data is local to the device — the athlete's iPhone must have the APA app installed and permissions granted for each metric type
- Sync model: App reads local HealthKit store on launch or background fetch; syncs to APA backend
- Implementation complexity: Medium — native module required; testing requires physical iPhone
- Android equivalent: Google Health Connect (via react-native-health-connect) — similar architecture
Polar Open AccessLink API
- Access: Free with Polar Flow account (no device required for development)
- Auth: OAuth 2.0
- Data available: Training data, activity data, sleep, nightly recharge, heart rate
- Implementation complexity: Low — older but stable API; good for teams using Polar H10 straps
Manual Entry (In-App)
- Athlete wellness form: sleep hours, sleep quality (1–10), energy (1–10), muscle soreness (1–10), mood (1–10), RPE of yesterday's session
- Session log: activity type, duration, perceived effort (RPE × duration = sRPE training load proxy)
- Coach override: ability to flag data points, add context notes
- This is the MVP data source — no API integration required; available on Day 1
Data Quality and Trust Scoring¶
Not all data is created equal. A manual wellness entry has higher uncertainty than a synced WHOOP recovery score. APA must track data provenance and surface confidence levels:
- Manual entry: confidence 0.7 (self-reported, subjective)
- WHOOP sync: confidence 0.95 (validated sensor, scientific methodology)
- Garmin HRV: confidence 0.85 (good hardware, but algorithm differences from RMSSD standard)
- CSV import (legacy Catapult data): confidence 0.80 (depends on export quality)
This matters for the AI insights layer: a low-confidence data point shouldn't trigger a high-confidence injury risk alert.
The Differential Sync Problem¶
Athletes don't sync every day. Some forget for a week. A sudden gap in data looks like "no data" but might mean "athlete forgot to wear device." The ingestion layer must distinguish: - Expected gap (rest day logged) - Unexpected gap (flag for coach follow-up) - Retroactive sync (Garmin often syncs past weeks on first connection) - Duplicate data (prevent double-counting when athlete syncs from multiple sources)
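A minimal sketch of the gap-classification and dedup logic above, assuming rest days are logged as dates and that duplicate records are keyed on athlete, metric type, and timestamp (a source-priority rule, not shown, would decide which record wins):

```python
from datetime import date, timedelta

def classify_gap(last_sync: date, today: date, rest_days: set[date]) -> str:
    """Distinguish an expected gap (rest days logged) from one that needs follow-up."""
    missing = [last_sync + timedelta(days=i)
               for i in range(1, (today - last_sync).days)]
    if not missing:
        return "no_gap"
    if all(d in rest_days for d in missing):
        return "expected_gap"        # rest day logged for every missing day
    return "unexpected_gap"          # surface to coach for follow-up

def dedupe_key(metric: dict) -> tuple:
    """Dedup key: the same athlete/metric/timestamp from any source counts once,
    preventing double-counting when an athlete syncs from multiple devices."""
    return (metric["athlete_id"], metric["metric_type"], metric["recorded_at"])
```

Retroactive syncs (e.g. Garmin back-filling past weeks on first connection) are handled by the same dedup key: already-ingested days are simply upserted rather than re-counted.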
3.3 AI Insights Engine¶
The AI layer is what separates APA from dashboards. Anyone can display charts. APA delivers recommendations and predictions.
Architecture: Three-Tier AI Model¶
Tier 1: Rule-Based Thresholds (shipped at MVP)
- Fast to build; deterministic; interpretable
- Examples: "Athlete has HRV 20% below 7-day baseline → Yellow flag" / "sRPE Load >1,500 AU this week, 40% above 4-week average → Spike Alert"
- Uses: ACWR (Acute:Chronic Workload Ratio) algorithm — sports science standard for injury risk
- No ML required; pure logic in the Python service
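The Tier 1 logic really is just a few lines of pure Python. A sketch, using the standard mean-over-7-days vs. mean-over-28-days form of ACWR; the 1.5 red threshold matches the risk trigger used elsewhere in this section, while the 1.3 yellow band is an illustrative assumption:

```python
def srpe(rpe: float, minutes: float) -> float:
    """Session internal load in arbitrary units (AU): session RPE x duration."""
    return rpe * minutes

def acwr(daily_loads: list[float]) -> float:
    """Acute:Chronic Workload Ratio: 7-day mean load over 28-day mean load.
    daily_loads is oldest-first and should hold at least 28 entries."""
    acute = sum(daily_loads[-7:]) / 7
    chronic = sum(daily_loads[-28:]) / 28
    return acute / chronic if chronic else 0.0

def load_flag(daily_loads: list[float]) -> str:
    """Tier 1 rule: ACWR > 1.5 is the red zone used in the insights table."""
    ratio = acwr(daily_loads)
    if ratio > 1.5:
        return "red"
    if ratio > 1.3:       # intermediate band: illustrative assumption, not from spec
        return "yellow"
    return "green"
```

A steady four weeks at the same daily load gives a ratio of 1.0 (green); tripling the last week's load against a light month pushes it well past the red threshold.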
Tier 2: Statistical Models (Phase 3)
- scikit-learn: linear regression for load-performance correlation; logistic regression for binary injury risk classification
- XGBoost: gradient boosting for multi-variable risk prediction with tabular athlete data
- Training data requirement: 20+ athletes with 3+ months of data for basic model utility
- Requires validation against actual injury outcomes — need beta customer data
Tier 3: LLM-Powered Natural Language Insights (Phase 3+)
- OpenAI GPT-4o or Claude API for generating natural language summaries of athlete data
- "Here's Sarah's week: HRV trending down, sleep quality dropped Tuesday–Thursday, training load increased 30%. Recommendation: reduce Thursday practice intensity and monitor Friday morning HRV before deciding on Saturday game lineup."
- Coaches don't want to interpret charts; they want the insight in plain English
- Cost at scale: ~$0.005–0.02 per weekly summary per athlete; manageable at mid-market scale
What the Insights Engine Produces¶
| Insight Type | Trigger | Output |
|---|---|---|
| Fatigue Alert | sRPE load spike >30% above 4-week avg | Yellow/Red flag on athlete card |
| Recovery Flag | HRV 15%+ below personal baseline | Recovery score with recommendation |
| Injury Risk Score | ACWR >1.5 or chronic load drop >20% | Risk level + specific recommendation |
| Readiness Score | Composite of HRV, sleep, wellness | Daily team readiness dashboard |
| Weekly Load Summary | Automated weekly report | Coach email digest (AI-generated) |
| CNS Gap Score | BioThread algorithm (see 3.4) | Dual-index recovery indicator |
| Anomaly Detection | Statistical outlier vs. athlete baseline | Coach alert for unusual patterns |
Model Inputs and Feature Engineering¶
Core feature set for the ML models:
- sRPE (session RPE × duration) — session internal load
- ACWR (7-day / 28-day load ratio) — injury risk proxy
- HRV_delta (today vs. 7-day rolling average) — normalized per athlete
- Sleep_quality_score (composite of duration, efficiency, stage distribution)
- Wellness_composite (coach-defined weighted sum of self-report items)
- Days_since_last_rest — consecutive training day count
- Sport-specific load weights (contact sport vs. endurance sport differs significantly)
Critical design principle: All metrics must be normalized to the individual athlete's baseline, not population averages. An HRV of 35ms might be low for one athlete and perfectly normal for another. The insights engine requires 14–21 days of baseline data before generating reliable alerts.
3.4 BioThread Integration¶
BioThread's GAP Score (General Adaptation Profile) is APA's proprietary performance differentiator — the algorithm that no competitor at this price point has.
Concept: Standard recovery tools measure cardiovascular (CV) recovery. BioThread measures both CV recovery and CNS (central nervous system) recovery, then calculates the gap between them. When CV looks "green" but CNS is still fatigued, athletes are at highest injury risk — they feel ready but physiologically aren't.
GAP Score Formula:
GAP_Score = CV_Index - CNS_Index
CV_Index = f(HRV_RMSSD, RHR, sleep_duration, sleep_efficiency)
CNS_Index = f(GCT_delta, deep_sleep_%, LF/HF_ratio, GCT_variability)
Where:
GCT_delta = % change in Ground Contact Time vs. 3-session baseline
deep_sleep_% = deep sleep as % of total sleep (from WHOOP/Garmin/Apple)
LF/HF_ratio = sympathetic/parasympathetic balance from HRV (Polar H10 or similar)
GCT_variability = coefficient of variation in GCT across a session
Interpretation:
GAP ≈ 0: True recovery — CV and CNS aligned
GAP > 0: CNS recovering faster than CV (less common)
GAP < -10: CV looks fine but CNS is depleted ("false green") — highest risk zone
GAP < -20: Clear CNS fatigue — recommend rest or technical-only work
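A minimal scoring sketch under stated assumptions: the sub-index functions f(...) are proprietary and not specified here, so the code takes CV_Index and CNS_Index as already-computed inputs, bands the result per the interpretation table above (treating "GAP ≈ 0" as a ±5 tolerance band, which is an assumption), and carries a partial-data flag for the fallback case described below.

```python
def gap_score(cv_index: float, cns_index: float, has_gct: bool = True) -> tuple[float, bool]:
    """GAP = CV_Index - CNS_Index. When GCT data is unavailable, the CNS index is
    built from self-report proxies upstream and the score is flagged as partial."""
    return cv_index - cns_index, not has_gct

def interpret_gap(gap: float) -> str:
    """Band the GAP Score per the interpretation table above."""
    if gap < -20:
        return "cns_fatigue"     # recommend rest or technical-only work
    if gap < -10:
        return "false_green"     # CV looks fine, CNS depleted: highest risk zone
    if gap > 5:                  # tolerance band around 0 is an assumption
        return "cns_leading"     # CNS recovering faster than CV (less common)
    return "true_recovery"       # CV and CNS aligned
```

Per the fallback principle, a partial score should be rendered with an explicit "partial data" badge rather than a falsely precise number.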
APA Product Integration:
- Base tier (Starter/Pro): CV recovery only (standard HRV, sleep, wellness scores) — this is table stakes
- BioThread Add-On ($175/month): Enables the dual-index dashboard, GAP Score, CNS fatigue trend, and prescriptive CNS-specific recommendations
- Elite tier: BioThread included; adds API access to GAP Score for third-party consumption
Data Requirements for BioThread:
- GCT data: Requires Garmin running watch (Fenix, Forerunner 945+) or compatible running pod — not universal; athletes without this data see a partial GAP Score based on available proxy metrics
- Deep sleep: WHOOP, Garmin, Apple Watch (though Apple's sleep stage accuracy is lower), Oura Ring
- LF/HF ratio: Polar H10 chest strap with Polar Flow integration, or certain Garmin devices in HRV stress test mode
Fallback Scoring: When GCT data is unavailable, substitute with wellness self-report (soreness, energy) weighted higher. Flag to coach that score is partial. Don't show a falsely precise number when the data isn't there.
3.5 API Design¶
APA needs two distinct API surfaces:
Internal API (Backend → Frontend)
- RESTful JSON API following OpenAPI 3.0 spec
- Authentication: Supabase Auth JWTs with RLS policies at the database level
- Versioned: /api/v1/... to protect integrations from breaking changes
- Key endpoints:
- GET /athletes — team athlete roster with latest metrics
- GET /athletes/:id/metrics — time-series data for athlete
- POST /athletes/:id/wellness — submit daily wellness check-in
- GET /team/readiness — team readiness dashboard data
- GET /insights/recommendations — AI-generated recommendations for team
- POST /data/import — CSV upload for bulk historical data
External / Partner API (for white-label and licensing)
- Available on Elite tier and to licensed partners
- API key authentication (rotating keys, rate limited per tier)
- Webhooks for push notifications (athlete alert events)
- Key partner endpoints:
- GET /biothread/gap-score/:athlete_id — returns current GAP Score
- POST /biothread/calculate — ad-hoc GAP Score calculation from submitted data
- GET /analytics/load-summary/:team_id — team-level load analytics
- Rate limits: 1,000 calls/day (Elite); 10,000/day (licensed partner)
- SLA: 99.5% uptime; response time <500ms for GET endpoints
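The per-tier daily limits above imply a simple quota check at the gateway. A minimal in-memory sketch; in production this counter would live in Redis or Postgres so it survives restarts and works across instances:

```python
from collections import defaultdict
from datetime import date

# Calls/day per tier, per the rate limits stated above
TIER_LIMITS = {"elite": 1_000, "partner": 10_000}

class DailyQuota:
    """Minimal per-API-key daily quota (process-memory sketch only)."""

    def __init__(self) -> None:
        self._counts: dict[tuple[str, date], int] = defaultdict(int)

    def allow(self, api_key: str, tier: str, today: date) -> bool:
        """Return True and count the call, or False once the daily limit is hit
        (the gateway would then respond with HTTP 429)."""
        key = (api_key, today)
        if self._counts[key] >= TIER_LIMITS[tier]:
            return False
        self._counts[key] += 1
        return True
```

Keying the counter on (api_key, date) means quotas reset naturally at the day boundary with no cleanup job required for correctness.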
4. Development Environment Setup¶
4.1 Prerequisites — Hardware¶
Mac mini M-series (Jeff's current hardware) is ideal for this stack. The M-series chip runs iOS simulators and Android emulators efficiently; Rosetta 2 handles x86 compatibility. No additional hardware required to start development.
Physical test devices recommended (but not required at Phase 0): - iPhone (any model running iOS 16+) — for real HealthKit testing - Android phone (any running Android 9+) — for Health Connect testing
4.2 Step-by-Step Environment Setup¶
Step 1: Core Development Tools¶
# Install Homebrew (macOS package manager)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Node.js via nvm (version manager — use LTS, currently 20.x)
brew install nvm
# nvm installed via brew is not auto-loaded; add the init snippet that
# `brew install nvm` prints to ~/.zshrc, then restart the shell
nvm install --lts
nvm use --lts
# Install Python 3.11+ for ML service
brew install pyenv
pyenv install 3.11.0
pyenv global 3.11.0
# Install pnpm (faster npm alternative — recommended for monorepo)
npm install -g pnpm
# Install Git (probably already installed, verify)
git --version
Step 2: Xcode (iOS Development)¶
# Install Xcode from Mac App Store (free, ~12GB, takes 20-40 min)
# After install, install CLI tools:
xcode-select --install
# Accept license agreement
sudo xcodebuild -license accept
# Install iOS Simulator (included with Xcode)
# Verify: open Xcode → Window → Devices and Simulators
Step 3: Android Studio (Android Development)¶
# Download from: https://developer.android.com/studio
# Or via Homebrew:
brew install --cask android-studio
# After install, open Android Studio and:
# 1. Complete setup wizard (installs Android SDK)
# 2. Install an emulator image: Tools → SDK Manager → SDK Tools → Android Emulator
# 3. Create AVD: Tools → Device Manager → Create Device
# Recommended: Pixel 7, API 34 (Android 14)
# Set Android environment variables in ~/.zshrc:
export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$PATH:$ANDROID_HOME/emulator
export PATH=$PATH:$ANDROID_HOME/platform-tools
Step 4: Expo CLI and EAS CLI¶
# Install Expo CLI globally
# Note: the standalone global expo-cli is legacy; recent Expo SDKs bundle the
# CLI with the project, so `npx expo <command>` inside the app is the current path
npm install -g expo-cli
# Install EAS CLI (Expo Application Services — for builds and submissions)
npm install -g eas-cli
# Login to Expo account (create at expo.dev if needed)
eas login
Step 5: Project Scaffolding (Monorepo Structure)¶
apa/
├── apps/
│ ├── web/ # Next.js web dashboard
│ ├── mobile/ # Expo React Native app
│ └── api/ # Node.js/NestJS API server
├── packages/
│ ├── shared/ # Shared TypeScript types, utilities
│ ├── ui/ # Shared UI components (React)
│ └── analytics/ # Shared analytics logic
├── services/
│ └── ml/ # Python FastAPI ML service
├── package.json # Root pnpm workspace config
└── turbo.json # Turborepo build orchestration
Scaffold commands:
# Initialize monorepo with Turborepo
pnpm dlx create-turbo@latest apa
# Add Next.js web app
cd apps && pnpm create next-app web --typescript --tailwind --app
# Add Expo mobile app
npx create-expo-app mobile --template blank-typescript
# Add NestJS API
npx @nestjs/cli new api
# Initialize Python ML service
cd services/ml
python -m venv venv
source venv/bin/activate   # activate first so packages install into the venv
pip install fastapi uvicorn scikit-learn pandas numpy python-dotenv
Step 6: Database Setup (Supabase)¶
# Install Supabase CLI
brew install supabase/tap/supabase
# Initialize Supabase project (creates local dev environment)
supabase init
# Start local Supabase (PostgreSQL + Auth + Storage locally)
supabase start
# This starts:
# - PostgreSQL on localhost:54322
# - Supabase Studio (DB GUI) on localhost:54323
# - API gateway (Auth, Storage, REST) on localhost:54321
# Environment variables for local dev:
# SUPABASE_URL=http://localhost:54321
# SUPABASE_ANON_KEY=<from supabase start output>
Step 7: Environment Configuration¶
Create .env.local files for each service:
# apps/web/.env.local
NEXT_PUBLIC_SUPABASE_URL=http://localhost:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=<local key>
API_BASE_URL=http://localhost:3001
# apps/api/.env
DATABASE_URL=postgresql://postgres:postgres@localhost:54322/postgres
SUPABASE_SERVICE_KEY=<local service key>
ML_SERVICE_URL=http://localhost:8000
GARMIN_CLIENT_ID=<from developer.garmin.com>
WHOOP_CLIENT_ID=<from developer.whoop.com>
# services/ml/.env
DATABASE_URL=postgresql://postgres:postgres@localhost:54322/postgres
OPENAI_API_KEY=<from platform.openai.com>
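Since `python-dotenv` is already in the ML service's dependencies, the `.env` values above can be read through one small settings object so missing configuration fails loudly at startup. A stdlib-only sketch (in the real service, `load_dotenv()` would populate `os.environ` from the `.env` file first):

```python
# Minimal settings loader for services/ml — a sketch, not the final config layer.
import os
from dataclasses import dataclass
from typing import Mapping, Optional


@dataclass(frozen=True)
class Settings:
    database_url: str
    openai_api_key: Optional[str]


def load_settings(env: Optional[Mapping[str, str]] = None) -> Settings:
    """Read settings from the given mapping (defaults to os.environ)."""
    src = os.environ if env is None else env
    db_url = src.get("DATABASE_URL")
    if not db_url:
        # Fail fast rather than crash later on the first DB query
        raise RuntimeError("DATABASE_URL not set — check services/ml/.env")
    return Settings(
        database_url=db_url,
        openai_api_key=src.get("OPENAI_API_KEY"),  # optional until Phase 3
    )
```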
4.3 Local Development Workflow¶
Daily Dev Loop:
# Terminal 1: Start Supabase
supabase start
# Terminal 2: Start API server
cd apps/api && pnpm run dev
# Terminal 3: Start ML service
cd services/ml && source venv/bin/activate && uvicorn main:app --reload
# Terminal 4: Start web app
cd apps/web && pnpm run dev
# Terminal 5: Start mobile app
cd apps/mobile && npx expo start
# Then: press 'i' for iOS simulator, 'a' for Android emulator
Recommended VS Code Extensions:
- Prisma (DB schema management)
- ESLint + Prettier (code formatting)
- Tailwind CSS IntelliSense
- REST Client (API testing without Postman)
- GitHub Copilot (accelerates AI-agent coding)
4.4 Testing Strategy¶
Unit Tests:
- Vitest for TypeScript (faster than Jest); pytest for the Python ML service
- Test coverage targets: business logic 80%+, API handlers 70%+, ML models (validation scripts)
Integration Tests:
- Supertest for API endpoint testing
- Test database: Supabase local instance with seed data
Mobile Testing:
- iOS Simulator: good for UI development; cannot test real HealthKit data (HealthKit requires a physical device)
- Android Emulator: good for UI; Health Connect requires an Android 9+ device or a configured emulator
- Physical devices: required for HealthKit and wearable OAuth flows — enroll devices in the Apple Developer Program for TestFlight distribution
E2E Tests: Playwright for web dashboard (deferred to Phase 6)
Wearable API Testing:
- Garmin: sandbox environment available (developer.garmin.com/testing)
- WHOOP: use a real account with a developer app (developer.whoop.com)
- Apple HealthKit: must use a physical iPhone; no simulator support for real HealthKit reads
4.5 CI/CD Pipeline¶
Recommended Stack: GitHub Actions + Vercel + EAS
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: pnpm/action-setup@v2   # pnpm is not preinstalled on GitHub runners
      - uses: actions/setup-node@v3
        with:
          node-version: 20
          cache: pnpm
      - run: pnpm install
      - run: pnpm run test
      - run: pnpm run lint
  deploy-web:
    if: github.ref == 'refs/heads/main'
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: amondnet/vercel-action@v25   # auto-deploy to Vercel; needs VERCEL_TOKEN secret
  build-mobile:
    if: github.ref == 'refs/heads/main'
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: expo/expo-github-action@v8
        with:
          eas-version: latest
          token: ${{ secrets.EXPO_TOKEN }}
      - run: eas build --platform all --non-interactive
Deployment Targets:
- Web dashboard → Vercel (auto-deploy on merge to main)
- API server → Railway (auto-deploy via Railway GitHub integration)
- ML service → Railway or Google Cloud Run (scale-to-zero)
- Mobile builds → EAS Build (cloud; no Mac required for builds)
- Mobile updates (non-breaking) → Expo OTA Updates (instant, no App Store review)
5. Developer Account & Platform Setup¶
5.1 Apple Developer Program¶
| Item | Detail |
|---|---|
| Cost | $99 USD/year (individual or organization) |
| Process | Sign up at developer.apple.com/programs/enroll/ |
| Timeline | Instant for individuals; 2–7 days for organization enrollment (requires D-U-N-S number) |
| What's included | App Store distribution, TestFlight beta, Xcode Cloud, 100 registered test devices |
| Required for | iOS App Store publishing, TestFlight distribution, HealthKit entitlements |
Enrollment steps:
1. Sign in with Apple ID at developer.apple.com
2. Choose Individual (personal) or Organization (Vivere Vitalis LLC)
3. Organization requires a D-U-N-S number (free from Dun & Bradstreet; takes 5 business days if not already registered)
4. Pay $99 via credit card
5. Wait for activation email (24–48 hours for organizations)
App Store guidelines to know (subscriptions + health data):
- Guideline 3.1.2: Auto-renewable subscriptions require clear terms, cancellation instructions, and Apple's standard subscription disclosure language — build this into the paywall UI
- Guideline 5.1.1: Apps accessing health/biometric data must have a clear privacy policy explaining data use
- HealthKit guideline: Only request HealthKit permissions relevant to the app's function; Apple reviewers will reject apps that request unnecessary data types
- In-app purchase: Subscriptions sold through the mobile app carry Apple's standard 30% commission (15% after a subscriber's first year, or 15% across the board under the App Store Small Business Program for developers under $1M/year) — APA's B2B sales will route through the web dashboard, not in-app purchase, to avoid this fee
- Enterprise vs. consumer: If eventually doing white-label (schools getting their own branded app), the Apple Developer Enterprise Program ($299/year) allows in-house distribution without App Store review
5.2 Google Play Console¶
| Item | Detail |
|---|---|
| Cost | $25 USD one-time registration fee |
| Process | Sign up at play.google.com/console |
| Timeline | Account active within 48 hours; first app review 3–7 business days |
| What's included | App distribution, Play Store publishing, internal testing tracks |
| Revenue share | 15% on first $1M/year per app; 30% above $1M |
Health Connect integration notes:
- Android's Health Connect (successor to Google Fit) provides unified health data access
- Apps must declare android.permission.health.READ_* permissions in the manifest
- Google Play requires a privacy policy URL and clear disclosure of health data usage
- Health Connect permissions require a separate Play Store review for sensitive data types
5.3 Domain and Hosting Setup¶
Domain:
- Register ascentperformanceanalytics.com or ascentsports.io (or similar)
- Registrar: Namecheap (~$12/year) or Cloudflare Registrar (~$10/year at cost)
- DNS: Cloudflare (free tier) for fast propagation and DDoS protection
- Email: Google Workspace ($6/user/month) for the jeff@ascentperformanceanalytics.com mailbox; Resend for transactional email
Hosting accounts to create:
- Vercel (vercel.com) — free tier for web dashboard development; Pro $20/month when needed
- Railway (railway.app) — $5/month starter + usage for API and ML service
- Supabase (supabase.com) — free tier for development; Pro $25/month for production
5.4 Wearable Developer Accounts¶
| Platform | Registration | Cost | Data Available |
|---|---|---|---|
| Garmin Connect Developer | developer.garmin.com | Free (approved businesses) | HRV, sleep, stress, activities, running dynamics |
| WHOOP Developer | developer.whoop.com | Free (requires WHOOP membership) | Recovery, strain, HRV, sleep, heart rate |
| Polar Open AccessLink | polar.com/developers | Free (Polar Flow account) | Training, activity, sleep, HR |
| Apple HealthKit | Via Apple Dev Program ($99/yr) | Included | Everything in Health app on iPhone |
| Google Health Connect | Via Play Console ($25 one-time) | Included | Android equivalent of HealthKit |
| Oura Ring API | cloud.ouraring.com/docs | Free (application required) | Sleep stages, readiness, HRV, activity |
Priority order for integration: Manual entry → Apple HealthKit/Health Connect → WHOOP → Garmin → Polar → Oura
5.5 Other Accounts¶
- OpenAI API (platform.openai.com) — for LLM-powered insights; pay-as-you-go; ~$10–20/month at early scale
- Sentry (sentry.io) — error tracking; free tier covers early scale; $26/month Pro when needed
- GitHub (github.com) — version control; free for private repos
- Resend (resend.com) — transactional email; free tier (3,000 emails/month); $20/month Pro
- Stripe (stripe.com) — payment processing; 2.9% + $0.30 per transaction; no monthly fee
6. Phased Milestone Plan¶
Jeff is building with an AI coding agent team (Melody, Atlas, Quinn), not a human development team. Timelines reflect AI-assisted velocity, which is roughly 3–5x faster than traditional human development for well-scoped tasks, but subject to architectural complexity and integration surprises.
Each phase has a go/no-go gate that must be met before the next phase begins.
Phase 0: Foundation¶
Duration: 2 weeks
Start: April 2026
End: Mid-April 2026
Objectives:
- All developer accounts active
- Development environment fully operational on Mac mini
- Project repository scaffolded and CI/CD pipeline live
- Team alignment on architecture and tech stack decisions
Deliverables:
- [ ] Apple Developer Program enrollment complete
- [ ] Google Play Console account registered
- [ ] Garmin Health API developer application submitted
- [ ] WHOOP developer account created
- [ ] Supabase project created (dev + staging + prod environments)
- [ ] GitHub monorepo scaffolded (apps/web, apps/mobile, apps/api, services/ml)
- [ ] CI pipeline running (GitHub Actions → Vercel preview deploys)
- [ ] Domain registered and DNS configured
- [ ] Local dev environment verified: iOS simulator running Expo app, Android emulator running, Next.js hot reload working, API server responding, ML service health check passing
Dependencies: Apple D-U-N-S number (get this first — can take 5 days)
Go/No-Go Criteria: All developer accounts active; local dev environment running all 4 services; first Expo build deploying to iOS simulator
Phase 1: Core Platform¶
Duration: 4 weeks
Start: Mid-April 2026
End: Mid-May 2026
Objectives: Build the minimum viable web dashboard: auth, team management, athlete profiles, and manual wellness check-in. This is the skeleton everything else attaches to.
Deliverables:
- [ ] Authentication (Supabase Auth): email/password login, team invite flow, role-based access (admin, coach, athlete)
- [ ] Team management UI: create team, add athletes, manage roster
- [ ] Athlete profile: basic info, sport, position, custom fields
- [ ] Manual wellness check-in form (web): sleep hours, sleep quality, energy, soreness, mood, notes
- [ ] Training load entry: date, activity type, duration, RPE → auto-calculates sRPE
- [ ] Basic athlete dashboard: 7-day wellness trend chart, current-week load summary
- [ ] Team overview dashboard: color-coded athlete cards (green/yellow/red) based on wellness score
- [ ] Database schema v1: athletes, teams, wellness_entries, training_sessions, users
- [ ] API v1: auth endpoints, athlete CRUD, wellness entry CRUD, team management
- [ ] Mobile app v0.1: wellness check-in form (athletes submit daily via phone) — this is the mobile MVP
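The sRPE auto-calculation in the training-load deliverable is Foster's published session-RPE method: load (in arbitrary units) = RPE on the CR-10 scale × session duration in minutes. A minimal sketch — the green/yellow/red cut-offs shown are placeholder assumptions for the team overview dashboard, not validated thresholds:

```python
# Session-RPE training load and placeholder wellness banding (sketch).

def srpe_load(rpe: float, duration_min: float) -> float:
    """Internal training load in arbitrary units (AU): RPE x minutes."""
    if not 0 <= rpe <= 10:
        raise ValueError("RPE must be on the 0-10 CR-10 scale")
    return rpe * duration_min


def weekly_load(daily_loads: list) -> float:
    """Sum of daily sRPE loads for the current week."""
    return sum(daily_loads)


def wellness_status(score: float) -> str:
    """Map a 0-100 wellness composite to a card color.

    The 70/50 thresholds are assumptions to be tuned with coach feedback.
    """
    if score >= 70:
        return "green"
    if score >= 50:
        return "yellow"
    return "red"
```

Example: a 60-minute session at RPE 7 yields a load of 420 AU.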
Dependencies: Phase 0 complete; Supabase schema finalized
Go/No-Go Criteria: Coach can create team, add 5 athletes, athletes can submit wellness check-ins via mobile, coach sees team dashboard with color coding. System stable for 1 week of dog-fooding.
Phase 2: Data Integration¶
Duration: 6 weeks
Start: Mid-May 2026
End: Late June 2026
Objectives: Build the data ingestion pipeline. Connect at least 2 wearable APIs (WHOOP + Garmin recommended for Phase 2). Implement the Normalized Data Layer. Begin CSV import for teams with existing data.
Deliverables:
- [ ] Normalized Data Layer (NDL): canonical schema, schema mapper, confidence scoring
- [ ] WHOOP OAuth integration: athlete connects WHOOP account; nightly sync of recovery, HRV, sleep
- [ ] Garmin Health API integration: athlete connects Garmin account; webhook handler for activity and health pushes
- [ ] Apple HealthKit integration (iOS mobile app): reads HRV, sleep, workouts from iPhone
- [ ] CSV import tool: upload Catapult / TrainHeroic / TeamBuildr exports; field mapping UI
- [ ] Data quality indicators: flag low-confidence data, display data source on each metric
- [ ] Automated daily sync job: scheduled task syncs all connected wearable accounts nightly
- [ ] Athlete data connection UI: wizard for connecting wearable accounts (OAuth flows)
- [ ] Historical backfill: on first wearable connection, import last 90 days of data
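The NDL deliverable — per-source adapters that translate vendor payloads into one canonical record carrying a data-quality confidence score — can be sketched as below. All vendor payload field names and the per-source confidence weights are hypothetical; the real WHOOP/Garmin response shapes must come from their API documentation:

```python
# Normalized Data Layer sketch: vendor payload -> canonical metric record.
from dataclasses import dataclass


@dataclass
class CanonicalMetric:
    athlete_id: str
    metric: str        # e.g. "hrv_rmssd_ms", "sleep_duration_min"
    value: float
    source: str        # "whoop", "garmin", "manual", ...
    confidence: float  # 0.0-1.0 data-quality score


# Assumed per-source weights — to be calibrated against real data quality
SOURCE_CONFIDENCE = {"whoop": 0.9, "garmin": 0.85, "manual": 0.6}


def normalize_whoop_recovery(athlete_id: str, payload: dict) -> CanonicalMetric:
    """Map a (hypothetical) WHOOP recovery payload like {"hrv_ms": 62.0}."""
    return CanonicalMetric(
        athlete_id=athlete_id,
        metric="hrv_rmssd_ms",
        value=float(payload["hrv_ms"]),
        source="whoop",
        confidence=SOURCE_CONFIDENCE["whoop"],
    )
```

The canonical record is what the dashboards and the Phase 3 insights engine consume; the raw payload is stored alongside it (see risk T1) so records can be reprocessed if a vendor schema changes.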
Dependencies: Phase 1 complete; Garmin API approval (submit application in Phase 0; approval can take 2–4 weeks)
Go/No-Go Criteria: At least 2 wearable sources actively ingesting data for test athletes; NDL outputting canonical metrics; no data duplication errors over 1-week test period; CSV import successfully ingesting a real exported dataset
Phase 3: AI Insights Engine¶
Duration: 5 weeks
Start: Late June 2026
End: Early August 2026
Objectives: Build the analytics engine that turns data into insights. Start with rule-based alerts (ACWR, HRV thresholds), add statistical models, implement LLM-powered weekly summaries.
Deliverables:
- [ ] Athlete baseline calculation: 14-day rolling average for HRV, sleep, load (per athlete)
- [ ] ACWR implementation: acute (7-day) : chronic (28-day) workload ratio; alert at >1.3 and >1.5
- [ ] HRV deviation alert: flag when athlete HRV >15% below personal 7-day baseline
- [ ] Readiness Score: composite of HRV, sleep quality, wellness survey → single 0–100 score per athlete
- [ ] Team Readiness Dashboard: real-time team view with individual readiness scores and status flags
- [ ] Injury Risk Score v1: rule-based (ACWR + HRV + wellness composite)
- [ ] LLM Weekly Summary: GPT-4o generates plain-English weekly summary for each athlete; coach receives email digest
- [ ] Recommendation Engine v1: context-aware recommendations ("Reduce Friday intensity — 3 athletes showing overreaching signals")
- [ ] Alert system: in-app notifications + email for high-priority flags (ACWR >1.5, red status athlete)
Dependencies: Phase 2 complete with 2+ weeks of real data for baseline; OpenAI API key configured
Go/No-Go Criteria: At least 5 test athletes generating daily insights; ACWR calculation verified against manual calculation; LLM summaries reviewed and approved as accurate + useful; readiness scores correlating reasonably with self-reported wellness (informal validation)
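The two rule-based alerts above are simple enough to state exactly — acute = 7-day mean load, chronic = 28-day mean load, flags at ratios >1.3 and >1.5, and an HRV flag when today's value is more than 15% below the 7-day baseline. This sketch doubles as the "verified against manual calculation" reference in the go/no-go criteria:

```python
# ACWR and HRV-deviation alert rules as specified in the Phase 3 deliverables.
from typing import List, Optional


def acwr(daily_loads: List[float]) -> float:
    """Acute:chronic workload ratio.

    daily_loads: chronological, most recent day last; needs >= 28 days.
    """
    if len(daily_loads) < 28:
        raise ValueError("need at least 28 days of load history")
    acute = sum(daily_loads[-7:]) / 7       # 7-day mean
    chronic = sum(daily_loads[-28:]) / 28   # 28-day mean
    return acute / chronic if chronic else 0.0


def acwr_flag(ratio: float) -> Optional[str]:
    if ratio > 1.5:
        return "high-risk"
    if ratio > 1.3:
        return "elevated"
    return None


def hrv_flag(today_hrv: float, baseline_7d: float) -> bool:
    # True when today's HRV is more than 15% below the 7-day baseline
    return today_hrv < 0.85 * baseline_7d
```

Example: three steady weeks at 300 AU/day followed by a week at 600 AU/day gives a chronic load of 375 and an ACWR of 1.6 — a high-risk flag.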
Phase 4: Mobile Apps¶
Duration: 5 weeks
Start: Early August 2026
End: Mid-September 2026
Objectives: Expand the mobile app beyond wellness check-in. Build the full athlete-facing experience and coach mobile dashboard. Prepare for TestFlight and Play Store internal testing.
Deliverables:
- [ ] Athlete mobile app (iOS + Android):
  - Daily wellness check-in (improved UX from Phase 1 prototype)
  - Personal dashboard: readiness score, HRV trend, load this week
  - Wearable connection wizard (in-app OAuth for Garmin, WHOOP)
  - Push notifications: morning check-in reminder, coach messages
  - HealthKit / Health Connect read (automatic background data sync)
- [ ] Coach mobile app:
  - Team readiness overview (same as web, optimized for phone)
  - Athlete detail view
  - Push notification receipt for high-priority alerts
- [ ] TestFlight distribution: internal beta (Jeff + test coaches)
- [ ] Play Store internal testing track
- [ ] App Store Connect configuration: bundle IDs, app metadata, screenshots
- [ ] Privacy policy and terms of service pages live on web
Dependencies: Phase 3 complete; Apple Developer Program active; EAS configured for production builds
Go/No-Go Criteria: iOS and Android apps installable via TestFlight / internal track; athlete can complete full onboarding (account creation → wearable connect → first check-in) in under 5 minutes; push notifications delivering on both platforms; no critical crashes in 1 week of internal testing
Phase 5: BioThread Module¶
Duration: 4 weeks
Start: Mid-September 2026
End: Mid-October 2026
Objectives: Implement the BioThread GAP Score as a premium add-on module. This is the IP differentiator. Validate the algorithm against real athlete data from Phase 2–4.
Deliverables:
- [ ] GAP Score calculation engine (Python microservice): CV_Index + CNS_Index calculation
- [ ] Garmin running dynamics integration: GCT extraction and normalization
- [ ] Deep sleep percentage extraction from WHOOP and Garmin sleep data
- [ ] Fallback scoring logic: graceful degradation when GCT data unavailable
- [ ] BioThread dashboard (web + mobile): dual-index visualization, GAP trend, "false green" alert
- [ ] Prescriptive recommendations: CNS-specific guidance triggered by GAP score thresholds
- [ ] BioThread Add-On toggle: Stripe billing integration; teams can enable/disable per billing cycle
- [ ] Algorithm validation: compare GAP scores against coach observations in beta cohort
- [ ] BioThread onboarding guide: in-app explanation of CNS fatigue concept (for coaches unfamiliar with it)
Dependencies: Phase 4 complete; at least 10 beta athletes with Garmin or WHOOP data from prior phases; Stripe subscription management configured
Go/No-Go Criteria: GAP Score calculating correctly for at least 20 athletes; at least 3 beta coaches validating that GAP Score output correlates with their observational judgment; billing toggle working (enable/disable BioThread triggers Stripe subscription change)
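The fallback-scoring deliverable can be illustrated in isolation. This is emphatically not the GAP Score formula — the actual CV_Index/CNS_Index weighting is the proprietary IP and is not reproduced here; every weight and threshold below is a made-up placeholder. What the sketch shows is the pattern: degrade gracefully when ground contact time (GCT) is unavailable, and tag the degraded result with lower confidence.

```python
# Illustrative fallback pattern only — placeholder math, not BioThread's formula.
from typing import Optional, Tuple


def cns_index(deep_sleep_pct: float,
              gct_ms: Optional[float],
              gct_baseline_ms: Optional[float]) -> Tuple[float, float]:
    """Return (index 0-100, confidence 0-1). All weights are placeholders."""
    # Sleep component: treat ~25% deep sleep as the ceiling (assumed)
    sleep_component = max(0.0, min(100.0, deep_sleep_pct / 25.0 * 100))
    if gct_ms is None or gct_baseline_ms in (None, 0):
        # Fallback: sleep-only estimate, flagged as lower confidence
        return sleep_component, 0.5
    # GCT component: penalize contact time drifting above baseline (assumed scale)
    gct_component = max(0.0, min(100.0, 100 - (gct_ms / gct_baseline_ms - 1) * 500))
    return 0.5 * sleep_component + 0.5 * gct_component, 0.9
```

The confidence value flows into the NDL's data-quality indicators, so a coach can see when a score was computed without running dynamics.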
Phase 6: Launch Preparation¶
Duration: 4 weeks
Start: Mid-October 2026
End: Mid-November 2026
Objectives: Prepare for public launch. App Store submission, marketing website, beta program formalization, compliance documentation.
Deliverables:
- [ ] App Store submission (iOS): app review (typically 1–3 days for clean submissions)
- [ ] Google Play Store submission: production release (typically 1–7 days review)
- [ ] Marketing website live: ascentperformanceanalytics.com — product overview, pricing, use cases, social proof from beta, demo request CTA
- [ ] Demo environment: pre-loaded data set for product demos (no real athlete data)
- [ ] NCAA compliance kit: athlete biometric consent template, data processing agreement (DPA) for FERPA compliance, privacy policy tailored to NCAA requirements
- [ ] Beta customer case studies: 2–3 case studies from beta cohort documenting outcomes
- [ ] Onboarding documentation: setup guides for coaches, athlete onboarding scripts
- [ ] Customer support setup: help documentation, Intercom or Crisp chat widget
- [ ] SOC 2 readiness assessment (deferred to Year 2, but document what's needed)
- [ ] Pricing page with Stripe checkout integration
- [ ] Free trial flow: 30-day trial, no credit card required
Dependencies: Phase 5 complete; 5+ beta customers willing to provide testimonials; App Store review approved
Go/No-Go Criteria: Both apps live in respective stores; marketing website converting visitors to demo requests; pricing + checkout flow functional; at least 2 case studies published; NCAA compliance kit ready to provide to prospects
Phase 7: Go-to-Market¶
Duration: Ongoing from Mid-November 2026
Target: First 10 paying customers by January 2027
Objectives: Convert beta users to paid, begin outbound sales motion, attend NSCA 2026 conference, iterate based on customer feedback.
Deliverables (Months 1–3 of GTM):
- [ ] Beta → paid conversion: offer founding-customer pricing (10% lifetime discount for first 20 customers)
- [ ] Outbound sequence: 500-contact email + LinkedIn list of D2 S&C coaches; 3-touch sequence
- [ ] NSCA 2026 National Conference booth or sponsorship (July 6–12, 2026 confirmed dates)
- [ ] Content marketing: 2 technical blog posts/month targeting "athlete load monitoring," "HRV for coaches," "injury prevention analytics"
- [ ] Referral program: existing customers get 1 month free for each new paying customer referred
- [ ] First 10 paying customers onboarded and generating revenue
- [ ] Monthly metrics dashboard: MRR, churn, NPS, time-to-value for new customers
- [ ] Year 2 product roadmap drafted based on customer feedback
Dependencies: Phase 6 complete
Go/No-Go Criteria for Phase 7 Success: 10 paying customers by end of Month 2 (January 2027); MRR of $6,000+ (averaging $600/month per customer); NPS > 30; churn rate <5% per month
7. Cost Model¶
7.1 What Jeff Is Already Paying (Baseline)¶
Assumption: Jeff's existing Vivere Vitalis infrastructure is not allocated to APA. These are incremental costs.
Existing tools that benefit APA development but are already paid:
- Mac mini M-series: owned hardware — $0 incremental
- Claude/Codex AI agent credits: existing subscription — assume partially incremental
- OpenClaw: already operational — $0 incremental
- GitHub: already in use — $0 incremental
7.2 Phase-by-Phase Cost Breakdown¶
Phase 0: Foundation (Weeks 1–2)¶
| Item | Cost | Notes |
|---|---|---|
| Apple Developer Program | $99/year | One-time annual |
| Google Play Console | $25 one-time | Never recurring |
| Domain registration | $12/year | ascentperformanceanalytics.com |
| Cloudflare (DNS/CDN) | $0 | Free tier |
| Supabase (dev) | $0 | Free tier for dev |
| Vercel (dev) | $0 | Free tier |
| Railway (dev) | $5/month | Starter plan |
| Phase 0 Total | ~$136 upfront + $5/month |
Phase 1: Core Platform (Weeks 3–6)¶
| Item | Monthly Cost | Notes |
|---|---|---|
| Supabase Free → Pro | $0 (free tier) | Upgrade to Pro at launch |
| Railway (API + ML) | $5–$15/month | Minimal traffic; scale-to-zero |
| Vercel (web) | $0 | Free tier handles dev traffic |
| Resend (email) | $0 | Free tier (3K emails/month) |
| OpenAI API | $0 | Not yet integrated |
| Phase 1 Monthly | ~$15/month |
Phase 2: Data Integration (Weeks 7–12)¶
| Item | Monthly Cost | Notes |
|---|---|---|
| Supabase Free | $0 | Still dev; stay on free |
| Railway | $15–$25/month | More services running |
| Garmin API | $0 | Free for approved developers |
| WHOOP API | $0 | Free developer access |
| Phase 2 Monthly | ~$25/month |
Phase 3: AI Insights (Weeks 13–17)¶
| Item | Monthly Cost | Notes |
|---|---|---|
| OpenAI API | $10–$30/month | LLM summaries for beta athletes |
| Railway (ML service) | $20–$30/month | More compute for ML |
| Sentry (error tracking) | $0 | Free tier |
| Phase 3 Monthly | ~$50/month |
Phase 4–5: Mobile + BioThread (Weeks 18–26)¶
| Item | Monthly Cost | Notes |
|---|---|---|
| Expo EAS Build | $0–$29/month | Free tier: 30 builds/month; Production $29/month |
| Stripe | $0 | No monthly fee; 2.9% + $0.30/transaction |
| TestFlight | $0 | Included in Apple Dev Program |
| Phase 4–5 Monthly | ~$79/month total |
Phase 6: Launch Prep¶
| Item | Monthly Cost / One-Time | Notes |
|---|---|---|
| Supabase Pro | $25/month | Upgrade from free at launch |
| Vercel Pro | $20/month | Custom domain + analytics |
| Railway (API + ML) | $30–$50/month | Production traffic |
| Sentry Pro | $26/month | Error tracking at scale |
| Resend Pro | $20/month | Transactional email |
| Intercom / Crisp (support) | $29–$49/month | Customer support chat |
| Phase 6 Monthly Total | ~$180/month |
7.3 Operational Cost at Scale¶
| Customer Count | Monthly Rev | Infrastructure Cost | Gross Margin |
|---|---|---|---|
| 10 customers | $6,000 | $200/month | 97% |
| 25 customers | $18,000 | $350/month | 98% |
| 50 customers | $36,000 | $600/month | 98% |
| 100 customers | $72,000 | $1,200/month | 98% |
| 200 customers | $144,000 | $2,500/month | 98% |
Note: SaaS infrastructure costs scale sublinearly at this tier. Supabase, Railway, and Vercel all have generous tiered pricing. The primary cost drivers at scale are the OpenAI API for LLM summaries (~$0.02/athlete/week at current pricing) and storage for time-series athlete data.
7.4 Break-Even Analysis¶
Monthly fixed costs at launch (Phase 6+): ~$180/month infrastructure
Variable costs: ~2% of revenue (Stripe fees + API usage)
Jeff's time opportunity cost: Solo operator; no salary to model — but the business needs to cover Jeff's personal income target ($1M/year = $83K/month eventually)
| Scenario | Customers | Avg MRR/Customer | Monthly Revenue | Break-Even vs. Costs |
|---|---|---|---|---|
| Minimum viable | 1 | $500 | $500 | Covers infra; not sustainable |
| Ramen viable | 3 | $600 | $1,800 | Covers infra + tools budget |
| Self-sustaining | 10 | $650 | $6,500 | Covers full operations |
| Income-replacing | 50 | $700 | $35,000 | ~$420K ARR; viable founder salary |
| $1M ARR target | 100+ | $850 | $85,000/month | $1.02M ARR |
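The break-even table follows directly from the stated cost model — roughly $180/month fixed infrastructure plus ~2% variable costs (Stripe fees + API usage). A quick sketch to reproduce any row:

```python
# Monthly net from the launch-phase cost model: ~$180 fixed + ~2% variable.
FIXED_MONTHLY = 180.0
VARIABLE_RATE = 0.02


def monthly_net(customers: int, avg_mrr: float) -> float:
    """Monthly revenue minus variable costs minus fixed infrastructure."""
    revenue = customers * avg_mrr
    return revenue * (1 - VARIABLE_RATE) - FIXED_MONTHLY
```

For the "self-sustaining" row, 10 customers at $650 average MRR net roughly $6,190/month after costs; even a single $500 customer covers the infrastructure bill.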
7.5 Total Investment to First Paying Customer¶
| Category | Cost |
|---|---|
| Developer accounts (Apple + Google) | $124 |
| Domain + hosting setup | $12 |
| Monthly infrastructure (6 months to Phase 6 launch) | $480 (avg $80/month × 6) |
| OpenAI API during development | $120 (6 months × $20/month) |
| NSCA conference (planned Year 1 GTM) | $5,000–$8,000 |
| Marketing site design/copywriting | $0–$500 (AI-generated) |
| Legal: privacy policy, ToS, DPA (template) | $0–$500 (Clerky or similar template service) |
| Provisional patent (BioThread, optional) | $1,500–$3,000 |
| Total to first paying customer | ~$7,200–$12,700 |
This is remarkably low. The AI agent team model eliminates the primary cost driver (human developer salaries). The main investment is Jeff's time and the NSCA conference.
7.6 18-Month Financial Projection¶
| Month | Customers | MRR | Cumulative Revenue | Cumulative Cost |
|---|---|---|---|---|
| 1–4 (development) | 0 | $0 | $0 | $640 |
| 5 (beta) | 5 beta | $0 | $0 | $800 |
| 6 | 3 paid | $1,800 | $1,800 | $1,000 |
| 7 | 7 paid | $4,200 | $6,000 | $1,200 |
| 8 | 12 paid | $7,800 | $13,800 | $1,400 |
| 9 | 18 paid | $12,000 | $25,800 | $1,600 |
| 10 | 25 paid | $17,500 | $43,300 | $1,800 |
| 11 | 32 paid | $22,400 | $65,700 | $2,000 |
| 12 | 40 paid | $28,000 | $93,700 | $2,200 |
| 13 | 50 paid | $37,500 | $131,200 | $2,500 |
| 14 | 60 paid | $48,000 | $179,200 | $3,000 |
| 15 | 70 paid | $59,500 | $238,700 | $3,500 |
| 16 | 80 paid | $68,000 | $306,700 | $4,000 |
| 17 | 90 paid | $76,500 | $383,200 | $4,500 |
| 18 | 100 paid | $85,000 | $468,200 | $5,000 |
Assumptions: Average MRR per customer scales from $600 (early, Starter-heavy mix) to $850 (mature mix of Starter/Pro/Elite). Customer acquisition accelerates after NSCA conference (Month ~8 equivalent from launch). Monthly growth rate: ~25% in early months, declining to ~10% by Month 18.
18-month cumulative investment: ~$35,000 — the table's ~$5,000 of infrastructure and accounts plus one-time costs (NSCA conference, legal, patent, tooling)
18-month cumulative revenue: ~$468,000
Net at 18 months: ~+$433,000 (before Jeff's draw)
8. Revenue Projections¶
8.1 Pricing Tiers — Detailed Feature Breakdown¶
| Feature | Starter ($500/mo) | Pro ($1,000/mo) | Elite ($2,000/mo) |
|---|---|---|---|
| Athletes | Up to 30 | Up to 75 | Unlimited |
| Teams | 1 | Up to 5 | Unlimited |
| Manual wellness check-in | ✓ | ✓ | ✓ |
| Training load tracking (sRPE) | ✓ | ✓ | ✓ |
| Basic dashboards | ✓ | ✓ | ✓ |
| Rule-based alerts | ✓ | ✓ | ✓ |
| Wearable integrations | 1 source | All sources | All sources |
| ACWR / injury risk model | — | ✓ | ✓ |
| AI insights + recommendations | — | ✓ | ✓ |
| LLM weekly summaries | — | ✓ | ✓ |
| Custom reports | — | ✓ | ✓ |
| CSV import | ✓ | ✓ | ✓ |
| BioThread GAP Score | — | Add-on ($175/mo) | Included |
| API access | — | — | ✓ |
| White-label option | — | — | ✓ (+fee) |
| Priority support | — | — | ✓ |
| NCAA compliance kit | ✓ | ✓ | ✓ |
| Annual contract discount | 10% | 10% | 10% |
BioThread Add-On ($175/month):
Available on Pro tier. Adds: GAP Score, CNS fatigue dashboard, "false green" alerts, running dynamics integration, prescriptive CNS recommendations.
8.2 Customer Acquisition Timeline¶
Year 1 Customer Mix (Target: 40 customers at end of Month 12)
| Tier | Customers | MRR Contribution |
|---|---|---|
| Starter | 20 | $10,000 |
| Pro (no BioThread) | 12 | $12,000 |
| Pro + BioThread | 5 | $5,875 |
| Elite | 3 | $6,000 |
| Total | 40 | $33,875/month |
Annual: ~$406,500 ARR at end of Year 1
Year 2 Customer Mix (Target: 100 customers at end of Month 24)
| Tier | Customers | MRR Contribution |
|---|---|---|
| Starter | 35 | $17,500 |
| Pro (no BioThread) | 30 | $30,000 |
| Pro + BioThread | 20 | $23,500 |
| Elite | 10 | $20,000 |
| BioThread licensing (B2B) | 1 license | $2,500 |
| Total | 95 (+1 license) | $93,500/month |
Annual: ~$1.12M ARR at end of Year 2
Year 3 Customer Mix (Target: 200 customers + 3 licenses)
| Tier | Customers | MRR Contribution |
|---|---|---|
| Starter | 60 | $30,000 |
| Pro (no BioThread) | 60 | $60,000 |
| Pro + BioThread | 45 | $52,875 |
| Elite | 25 | $50,000 |
| Gym/PT tier (new) | 100 | $15,000 |
| BioThread licensing | 3 licenses | $7,500 |
| Total | 290 (190 teams + 100 solo) | $215,375/month |
Annual: ~$2.58M ARR at end of Year 3
8.3 Path to $1M ARR¶
$1M ARR = ~$83,333/month in recurring revenue
At the average blended price of ~$835/month per customer (Pro/Elite mix):
$83,333 / $835 = ~100 customers needed
Timeline to 100 customers (months counted from the April 2026 project start, which is how the dates below line up; the November 2026 launch falls around Month 8):
| Customer Acquisition Rate | Time to 100 Customers |
|---|---|
| Conservative (5/month avg) | Month 20 = ~Dec 2027 |
| Moderate (7/month avg) | Month 14 = ~Jun 2027 |
| Aggressive (10/month avg) | Month 10 = ~Feb 2027 |
Recommended planning assumption: moderate pace (7/month avg) = $1M ARR by approximately June–July 2027 (Month ~14 from project start, roughly 7–8 months post-launch)
Key levers to accelerate:
1. NSCA conference booth in July 2026 (should close 5–15 customers from one event)
2. Conference blanket deals (1 deal = 10–20 schools instantly)
3. BioThread word-of-mouth among S&C coaches (genuine differentiation drives organic referral)
4. Annual contract conversion (reduces churn, improves cash flow)
8.4 Expansion Revenue: Gyms and Personal Trainers¶
This is a Year 3+ initiative. The product must be meaningfully simplified for individual trainer use.
Potential product: "APA Solo" tier — individual trainer managing 10–30 athletes
Pricing: $99–$149/month
Market: 60,000 NSCA members + 170,000 NASM certified trainers → realistic addressable: 5,000–10,000 willing to pay
If 2,000 trainers subscribe at $120/month: $240K/month = $2.88M ARR from this segment alone
This segment requires:
- Self-serve onboarding (no sales team support)
- Simpler UX (trainers are not IT administrators)
- Stripe billing (no enterprise contracts)
- Content marketing acquisition (YouTube, Instagram fitness coach audience)
Do not attempt before APA core has 100 team customers. The product knowledge and brand credibility from the team market is what makes the individual trainer market trustworthy.
9. Risk Analysis¶
9.1 Technical Risks¶
Risk T1: Wearable API Instability and Breaking Changes¶
Probability: Medium-High
Impact: High — if a major API breaks, athlete data gaps appear; customer trust erodes
Detail: Garmin, WHOOP, and Apple have changed their APIs without warning before. Garmin deprecated certain Health API endpoints in 2023. Apple changes HealthKit data types periodically.
Mitigation:
- Store raw payload alongside normalized data — if schema changes, reprocess from raw
- Monitor API changelogs via RSS/GitHub
- Build graceful degradation: if one source fails, flag it to coach but don't crash the dashboard
- Never depend on a single wearable source for a critical alert
Risk T2: Data Integration Complexity Underestimated¶
Probability: High
Impact: Medium — delays Phase 2 by 2–4 weeks
Detail: Every wearable integration is a unique snowflake. OAuth flows, rate limits, data normalization edge cases, and timezone handling all take longer than expected.
Mitigation:
- Start with WHOOP (cleanest API) before Garmin (more complex webhook architecture)
- Budget 2 extra weeks as buffer in Phase 2
- Have a fallback: manual CSV import can replace broken API integrations temporarily
Risk T3: AI Model Accuracy / False Positives¶
Probability: Medium
Impact: High — a falsely flagged injury risk that pulls an athlete unnecessarily damages coach trust
Detail: Rule-based models (ACWR) have known limitations; LLM-generated summaries can hallucinate
Mitigation:
- Start with well-validated, published models (ACWR is peer-reviewed sports science)
- Mark AI recommendations as "advisory" not "definitive" in UI
- Give coaches easy "dismiss" and "feedback" buttons to improve model quality
- LLM summaries are always grounded in real data passed in context; no inference beyond the numbers provided
- Human-in-the-loop: coach must confirm before any alert is surfaced to athlete
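As a concrete example of the "well-validated, published models" point, the ACWR check is a pure rolling-average computation. A sketch assuming the common 7-day acute / 28-day chronic formulation; the 1.5 "elevated" cutoff is one heuristic from the literature, not APA's final threshold, and the flag stays advisory pending coach confirmation:

```typescript
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// `dailyLoads`: the most recent daily training loads, oldest first.
// Returns the acute:chronic workload ratio and an advisory-only flag.
function acwr(dailyLoads: number[]): { ratio: number; advisory: boolean } {
  if (dailyLoads.length < 28) throw new Error("need at least 28 days of load history");
  const chronic = mean(dailyLoads.slice(-28)); // 28-day chronic load
  const acute = mean(dailyLoads.slice(-7));    // 7-day acute load
  const ratio = chronic === 0 ? 0 : acute / chronic;
  return { ratio, advisory: ratio > 1.5 }; // assumed cutoff; surfaced only after coach review
}
```

A steady 100-unit daily load gives a ratio of 1.0; three weeks at 100 followed by a week at 200 gives 200 / 125 = 1.6 and trips the advisory flag.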
Risk T4: Scaling Database for Time-Series Data¶
Probability: Low (Year 1) / Medium (Year 2+)
Impact: Medium — slow dashboards at scale erode UX
Detail: Daily wearable metrics for 100 athletes × 5 sources × 365 days = ~182K rows/year per 100-athlete team; 100 teams = 18M rows/year
Mitigation:
- Supabase handles this comfortably at early scale
- Implement data retention policy (raw data: 2 years; aggregated summaries: indefinite)
- Consider TimescaleDB extension if query performance degrades
- Plan migration path to dedicated time-series DB (InfluxDB, TimescaleDB) at 500+ customers
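The row-count estimate above is worth keeping as a living calculation, since it drives the retention and migration decisions. A trivial sketch (1 row per athlete per source per day, matching the plan's assumption):

```typescript
// 1 row per athlete, per source, per day.
function projectedRowsPerYear(athletes: number, sources: number, days = 365): number {
  return athletes * sources * days;
}

const perTeam = projectedRowsPerYear(100, 5); // 182,500 rows/year for a 100-athlete team
const fleet = perTeam * 100;                  // 18,250,000 rows/year across 100 teams
```

The practical trigger for a TimescaleDB move is dashboard query latency, not the raw row count itself.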
9.2 Market Risks¶
Risk M1: Competitor Enters Mid-Market¶
Probability: Medium
Impact: High — if Catapult or a VC-backed startup targets the mid-market directly, APA faces a well-funded competitor
Detail: The mid-market gap is visible to anyone doing this analysis. It's a matter of when, not if, a well-capitalized player targets it.
Mitigation:
- Speed is the moat. Get to 100 customers before the next competitor launches.
- BioThread GAP Score is a defensible IP differentiator competitors can't immediately replicate
- Build switching costs: the longer a team uses APA, the more athlete history lives in the platform — migration is painful
- Community moat: if APA becomes the de facto tool for D2 S&C coaches (via NSCA relationships), brand loyalty matters
Risk M2: Price Sensitivity Higher Than Projected¶
Probability: Medium
Impact: Medium — slower sales cycle; need to lower entry price
Detail: The $400/month sweet spot for D2 schools is triangulated from secondary research; it has not been validated by direct customer conversations.
Mitigation:
- Jeff must conduct 15+ direct customer discovery calls before fixing pricing
- Offer a genuine free tier (30-day full-access trial) to reduce purchase friction
- Annual contract option de-risks the "is this worth it" conversation (a single $4,800 annual line item is often easier for an AD to approve than a recurring $400/month charge)
- If D2 schools truly can't go above $300/month, lower Starter tier to $350/month and accept longer path to $1M ARR
Risk M3: Semi-Pro Market Stays Non-Viable¶
Probability: High
Impact: Low — semi-pro was always a secondary market; college is the beachhead
Mitigation: Stay college-focused. Don't spend sales resources on USL or MiLB in Year 1.
Risk M4: NCAA Compliance Requirement Becomes a Blocker¶
Probability: Low-Medium
Impact: Medium — if compliance requirements become complex enough to require institutional legal review, sales cycles lengthen to 6+ months
Detail: The December 2025 NCAA biometric consent guidance is new; procurement teams are still figuring out what they need.
Mitigation:
- Build compliance kit into the product (consent templates, DPA, data processing documentation)
- Make the compliance conversation easy: "Here's what you need to comply — we have templates for all of it"
- Engage NACDA (National Association of Collegiate Directors of Athletics) for guidance on best practices
9.3 Operational Risks¶
Risk O1: AI Agent Team Velocity Plateau¶
Probability: Medium
Impact: High — timeline slips; delayed launch means delayed revenue
Detail: AI coding agents (Melody, etc.) are fast on well-scoped tasks but can introduce architectural debt, get stuck on ambiguous requirements, or produce inconsistent code across sessions.
Mitigation:
- Invest Phase 0 time in strong architectural decisions and code standards — this multiplies AI agent quality
- Use a monorepo with strict TypeScript types — type errors catch AI-generated inconsistencies before they ship
- Quinn (QA agent) reviews all AI-generated code before merging
- Jeff maintains hands-on review of architecture decisions even when delegating implementation to agents
- Weekly integration tests to catch compounding technical debt
Risk O2: Solo Operator Capacity Constraint¶
Probability: High
Impact: Medium — Jeff wears too many hats; product quality or sales suffer
Detail: Jeff is simultaneously CEO, product manager, sales lead, and AI-team director. Something will get dropped.
Mitigation:
- Strict phase gates prevent starting new work before current phase is stable
- Delegate implementation aggressively to AI agents; Jeff should spend time on customer conversations and strategic decisions, not writing code
- Build an async sales process (demo videos, self-serve trial) that doesn't require Jeff on every sales call
- Hire a fractional sales consultant in Year 2 if customer acquisition becomes the bottleneck
Risk O3: Data Security / Privacy Incident¶
Probability: Low
Impact: Very High — athlete biometric data breach would be catastrophic for trust and potentially create FERPA/HIPAA liability
Detail: Handling sensitive athlete health data at NCAA programs creates real compliance obligations (FERPA for student records, state biometric privacy laws)
Mitigation:
- Supabase Row Level Security (RLS) ensures teams can never see each other's data
- All API endpoints authenticated; no public data exposure
- Encrypt data at rest (Supabase handles this automatically)
- Engage a privacy attorney early to confirm FERPA compliance design ($1,000–$2,000 consultation)
- Do not store PHI (Protected Health Information); position as "performance analytics" not "health records" — different regulatory category
- Publish clear data retention policy; offer data deletion on request
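RLS itself is enforced in Postgres via `CREATE POLICY`, but the tenant-isolation predicate it encodes is simple enough to state explicitly. A TypeScript model of the rule only (table and field names are assumptions), not the enforcement mechanism:

```typescript
type SessionUser = { userId: string; teamId: string };
type AthleteRow = { athleteId: string; teamId: string; hrvMs: number };

// The predicate an RLS policy enforces server-side: a row is visible
// only when its team_id matches the requesting user's team claim.
function visibleRows(user: SessionUser, rows: AthleteRow[]): AthleteRow[] {
  return rows.filter((r) => r.teamId === user.teamId);
}
```

Because the real check lives in the database, even a buggy API endpoint cannot return another team's athletes; the application-level version here exists only to make the rule testable and explicit.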
9.4 Summary Risk Matrix¶
| Risk | Probability | Impact | Priority | Key Mitigation |
|---|---|---|---|---|
| Competitor enters mid-market | Medium | High | HIGH | Speed to market; BioThread moat; community |
| API instability | Medium-High | High | HIGH | Raw storage; graceful degradation |
| AI team velocity plateau | Medium | High | HIGH | Strong architecture; weekly integration tests |
| Price sensitivity too high | Medium | Medium | MEDIUM | Direct discovery calls; free trial |
| Data integration underestimated | High | Medium | MEDIUM | Build buffer into Phase 2; CSV fallback |
| Solo operator capacity | High | Medium | MEDIUM | Aggressive delegation; async sales |
| Data security incident | Low | Very High | HIGH | RLS; encryption; privacy legal review |
| AI false positives | Medium | High | MEDIUM | Advisory framing; coach override; validated models |
10. Go-to-Market Strategy¶
10.1 Sales Channels¶
Channel 1: Direct Outbound (Primary — Year 1)¶
Target: Head S&C Coaches, Directors of Sports Performance at D2 schools + D1 mid-major/FCS
Outreach mechanism:
1. Build a list of 500 CSCS-certified coaches at target schools (LinkedIn, NSCA member directory)
2. 3-touch email sequence + LinkedIn connection request
   - Email 1: Problem framing ("Managing athlete load across 3 different platforms?")
   - Email 2: Social proof ("How [School Name] reduced overtraining injuries with unified data")
   - Email 3: Low-friction CTA ("Would 15 minutes to see it live be worth it?")
3. Book a product demo; 15-minute Zoom with pre-loaded demo environment
4. Offer 30-day free trial (no credit card); coach implements with 5 athletes
5. Trial → paid conversion with founding customer pricing offer
Expected conversion rates:
- Cold email → demo: 2–4% (aggressive but achievable in a niche with genuine pain)
- Demo → trial: 40–60%
- Trial → paid: 30–50%
- Full funnel: 500 contacts → 20 demos → 10 trials → 4–5 customers per outreach cycle
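The funnel numbers compose multiplicatively, which makes them easy to sanity-check in code. A sketch using the upper-end rates (4% → 50% → 45%, the last being a midpoint of the 30–50% trial-to-paid range):

```typescript
// Multiply stage-by-stage conversion rates through the outbound funnel.
function funnel(contacts: number, demoRate: number, trialRate: number, paidRate: number) {
  const demos = contacts * demoRate;
  const trials = demos * trialRate;
  const customers = trials * paidRate;
  return { demos, trials, customers };
}

const f = funnel(500, 0.04, 0.5, 0.45); // demos: 20, trials: 10, customers: 4.5
```

The same function answers the downside case worth planning outreach volume around: at a 2% cold-email rate, 500 contacts yield 10 demos and roughly 2 customers.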
Cost: Primarily Jeff's time (or AI-assisted outreach tools like Apollo.io, $49/month)
Channel 2: NSCA National Conference 2026 (High-Leverage — July 6–12, 2026)¶
- NSCA Annual Conference in 2026 (confirmed July 6–12 per nsca.com)
- S&C coaches from every major NCAA program attend
- Exhibit Hall + speaking opportunities
- NSCA calls booth participation "Annual Engagement Packages" — contact jen.rutolo@nsca.com for pricing
- Budget: $5,000–$8,000 for booth space + sponsorship + travel
- Target: 100 conversations → 20 demo bookings → 5 trial starts at the conference
- Pre-conference strategy: reach out 4–6 weeks before to book 15-minute demo slots in advance
This is the single highest-leverage customer acquisition event in Year 1. Do not skip it.
Channel 3: Peer Referral Network (Organic — Year 1+)¶
S&C coaches have tight professional networks. They talk to each other at conference exhibitions, on private Facebook groups, in NSCA communities. One enthusiastic early adopter coach can directly refer 3–5 colleagues.
Referral program structure:
- Referring coach gets 1 month free per successful referral
- Referred coach gets first month free
- Net cost: ~$600 in service credits per acquired customer; referred leads are pre-qualified by peer trust and close at far higher rates than cold outbound
Channel 4: Athletic Conference Block Deals (Year 2 Strategy)¶
Athletic conferences (Big South, MIAA, RMAC, Lone Star, Mountain East) negotiate software deals on behalf of member schools. A single conference deal can bring 8–25 schools in one contract.
Approach:
- Identify conference ADs or SAAC chairs who are tech-forward
- Offer conference-wide pricing: $350/school/month (discount from Starter tier) for a 12-month commitment
- Pilot with one mid-size conference (10–15 schools); use that case study to approach others
- Timeline: 12–18 months of product maturity needed before conference procurement committees will engage
10.2 Decision-Makers¶
| Role | Title | Function | How to Reach |
|---|---|---|---|
| Primary Champion | Head S&C Coach / Director of Sports Performance | Identifies pain, evaluates product, advocates upward | LinkedIn, NSCA conference, direct email |
| Budget Gatekeeper | Athletic Director | Signs purchase order; controls annual budget | Must be sold indirectly via champion |
| Influencer | Head Athletic Trainer | If product includes injury tracking; secondary champion | NATA conference, NSCA |
| Technical Evaluator | IT/Compliance (at larger D2s) | Reviews data security, FERPA compliance | Champion introduces; prepare compliance docs |
| Sport Coach | Head Football / Basketball Coach | Political muscle for budget approval at football/basketball-heavy schools | Not a direct target; engage through champion |
Practical reality: Most D2 purchases are a two-person sale: S&C coach champions → AD approves. Build the product to make this internal sell easy. Give coaches a one-page ROI summary they can share with their AD:
- "APA consolidates 3 tools we already pay $4,200/year for into one platform at $6,000/year — net cost is $1,800 more, plus AI insights and injury prevention that we don't currently have."
10.3 Content Marketing & Thought Leadership¶
Building APA's authority in the collegiate S&C community is a Year 1 investment that pays dividends in Year 2+.
Content pillars:
1. The Science — Blog posts on ACWR, HRV, CNS fatigue (anchor BioThread here); cited research
2. Coach Stories — Case studies and interviews with S&C coaches; their workflows, wins, challenges
3. Product Education — How to interpret APA's dashboards; what the alerts mean; what to do with the data
4. Industry Commentary — NCAA rule changes, emerging sports science research, technology adoption
Content calendar (launch through Month 6):
- 2 blog posts/month: alternating Science and Coach Stories
- 1 LinkedIn post/week: single insight or data point for the S&C community
- 1 short video/month: "APA in 60 seconds" demo-style
Distribution:
- NSCA community sharing (members share relevant content within the community)
- LinkedIn organic (where D2 S&C coaches spend professional time)
- Submission to SimpliFaster (industry publication that S&C coaches read)
- Email newsletter to trial and prospect list
10.4 Beta Program Design¶
Beta cohort: 10–15 teams, April–October 2026
Structure:
- Free access to full platform (including BioThread module) during beta
- Commitment required: at minimum 10 athletes using daily check-in, at least 1 wearable connected
- Weekly 30-minute feedback call with Jeff (rotating schedule; 3–4 calls/month)
- Beta coach agreement: willing to provide a written case study / testimonial after 60 days if satisfied
- Data: beta coaches know their data is helping train and validate the AI models
Beta recruitment targets:
- 4–5 D2 schools (primary ICP)
- 2–3 D1 mid-major (FCS football or non-revenue sport)
- 2–3 training academies
- 1–2 USL clubs (exploratory; validate or kill semi-pro hypothesis)
Beta success metrics:
- Active usage: >50% of athletes submitting their wellness check-ins each week
- Net Promoter Score: >30 after 60 days
- Conversion intention: >60% of beta coaches expressing intent to pay at end of trial
10.5 Pricing Negotiation Expectations¶
What will happen in the first 50 sales conversations:
- 40% will ask for a discount
- 20% will ask for a longer free trial
- 15% will need to "run it by the AD" before committing
- 10% will ask for a feature that's not built yet
- 5% will want a multi-year deal in exchange for a lower rate
Negotiation guidelines:
- Annual contract = 10% discount (equivalent to 1.2 free months) — this is the standard offer
- Multi-year (2-year) = 15% discount — acceptable for stable, quality customers
- "Budget-constrained" (genuine D2/D3 cases) = offer Starter tier at $400/month (vs. list $500) for first 6 months as a goodwill gesture; standard price after
- Do not discount Pro or Elite tiers in Year 1 — scarcity/value perception matters
- Do not offer lifetime pricing — APA is not AppSumo
- Feature requests: add to roadmap backlog; do not custom-build for a single customer unless it's strategically important (e.g., a large D1 school that becomes a flagship reference)
For genuinely budget-constrained D3 schools: Consider a D3 tier at $250/month — basic wellness + training load, no AI insights, no wearable integration. Self-serve, no dedicated support. This keeps them in the ecosystem and creates upgrade paths as their budgets grow.
Appendix A: Key Assumptions Registry¶
| # | Assumption | Risk if Wrong | Validation Method |
|---|---|---|---|
| A1 | D2 schools will pay $400–$600/month for APA | Pricing too high → slow sales | 15 direct discovery calls before fixing pricing |
| A2 | AI agent team builds at 3–5x human velocity for well-scoped work | Timeline slips by 2x | Monitor actual vs. estimated time per phase |
| A3 | WHOOP Developer API remains free | Adds cost to BioThread integration | Monitor developer.whoop.com changelog |
| A4 | Garmin API approval granted within 4 weeks | Phase 2 delayed | Apply in Phase 0; have WHOOP as backup priority |
| A5 | Monthly churn rate stays <5% | ARR growth slower than projected | Implement NPS tracking; respond to churn signals early |
| A6 | NSCA conference yields 5+ customer starts | Year 1 revenue below projection | Have parallel outbound motion as backup |
| A7 | BioThread GAP Score correlates with coach observations | Module adoption below 40% | Validate with 20+ athletes in Phase 5 beta |
| A8 | LLM weekly summaries at <$0.02/athlete/week | AI cost erodes margins at scale | Monitor OpenAI usage; optimize prompts early |
Appendix B: Competitive Monitoring List¶
Monitor these companies quarterly for new features, pricing changes, or strategic moves:
- Catapult Sports (catapultsports.com) — watch for mid-market product announcements
- TeamBuildr (teambuildr.com) — primary budget-tier competitor; watch for AI feature additions
- TrainHeroic (trainheroic.com) — watch for analytics layer expansion
- CoachMePlus (coachmeplus.com) — closest existing mid-market attempt
- Kitman Labs (kitmanlabs.com) — AI-powered injury prevention; potential future overlap
- XPS Network (xpsnetwork.com) — team management platform, European-heavy
- Any new NSCA sponsor companies (indicator of new entrants targeting this market)
Appendix C: Source References¶
- NCAA.org — Program counts by division (2025-26)
- NCSA Sports 2025 — Confirmed division counts
- MarketsandMarkets — Sports analytics market size ($2.29B in 2025, $4.75B by 2030, 15.7% CAGR)
- Grand View Research — Sports analytics market ($5.677B in 2025, 18.5% CAGR to 2033)
- Knight Commission on Intercollegiate Athletics — Athletic department financial data
- NCAA CSMAS December 2025 Guidance — Biometric consent requirements
- developer.garmin.com — Free API access for approved developers
- developer.whoop.com — Free developer platform access (2025)
- polar.com/developers — Polar Open AccessLink API
- expo.dev/pricing — EAS Build pricing
- developer.apple.com — Apple Developer Program ($99/year)
- nextnative.dev — Google Play Console ($25 one-time) confirmation
- nsca.com/events — NSCA 2026 National Conference dates (July 6–12, 2026)
- supabase.com/pricing — Supabase Pro plan ($25/month)
- MARKET_VALIDATION.md — Atlas prior research (2026-03-19)
- RESEARCH_BIOTHREAD_APA_FIT.md — Atlas BioThread analysis (2026-03-18)
- COMPETITIVE.md — APA competitive landscape
- BRIEF.md — APA + BioThread product briefs
Document prepared by Atlas, Director of Research & Intelligence, Vivere Vitalis LLC
Version 1.0 — March 2026
Next review: May 2026 (post-Phase 0 completion, adjust based on developer account experiences and any direct customer discovery conversations Jeff has conducted)