# eLab Agentic Capital Challenge
## Evaluation & Submission Guide

Version 1.0.1  
Prepared by eLab Ventures  
Purpose: structured, machine-readable diligence for agent-native startups

---

## 1. What this program is

The eLab Agentic Capital Challenge is a structured diligence process designed for AI-native and agent-augmented startups. Instead of primarily rewarding stage presence or pitch polish, eLab evaluates companies through a standardized submission package that can be reviewed by both human investors and AI agents.

Our view is simple: the next generation of venture investing will belong to firms that can properly measure agent leverage, capital efficiency, defensibility, and founder quality in a repeatable way. This approach is directly aligned with eLab's broader thesis that the opportunity is not just inventing a new capital structure, but becoming one of the first firms to systematically evaluate and support agent-augmented businesses.

This challenge also reflects eLab's current investment focus on capital-efficient seed-stage AI companies, including vertical AI, AI-native SaaS, capital-light AI services, and agentic infrastructure. 

---

## 2. What applicants submit

Each applicant submits three core components:

1. **Company submission JSON.** Complete the official eLab company schema. Submit a robust, thoughtful .json that clearly represents your business; it is a core input to evaluation.
2. **Company pitch deck.** Submit a .pdf version of your pitch deck. Human judges in the loop use it to complement the structured .json evaluation.
3. **Founder video.** Record a short video using the behavioral prompts included in the downloadable .json. Aim to address at least one behavioral prompt while explaining your business.

---

## 3. Evaluation philosophy

### 3.1 We are not only judging the product

eLab does not believe a company can be judged solely by its software or solely by its founders. The strongest investment candidates combine both:

- a product that solves an important problem
- a company architecture that is genuinely AI-forward
- a team with unusually strong judgment, execution, and resilience

For that reason, the evaluation framework combines a **Company Score** and a **Founder Quality Score (FQS)**.

### 3.2 What we care about most

The challenge is designed to favor companies that can demonstrate:

- meaningful AI or agentic contribution to the business
- credible capital efficiency
- evidence of defensibility beyond a thin wrapper
- strong founder-market fit
- real execution, not just ambition

These priorities are consistent with eLab's existing internal thinking around Agent Contribution Score, founder quality, capital-light operations, and agent-led infrastructure opportunity. 

---

## 4. High-level scoring model

Total score: **100 points**

### Company Score — 75 points total

- **Agent Contribution / Agentic Readiness** — 20 points  
  Measures the extent to which agents are primary operators rather than assistants.

- **Capital Efficiency** — 15 points  
  Measures ability to reach major milestones with relatively little capital and lean staffing.

- **Revenue & Traction** — 15 points  
  Measures commercial proof, customer pull, and speed to revenue.

- **Defensibility** — 15 points  
  Measures proprietary data, workflow lock-in, vertical depth, and resistance to commoditization.

- **Market Opportunity** — 10 points  
  Measures market size, timing, and suitability for venture-scale returns.

### Founder Quality Score (FQS) — 25 points total

- **Founder Background & Fit** — 5 points  
- **Execution Signals** — 8 points  
- **Behavioral Prompt Responses** — 6 points  
- **Founder Video Assessment** — 6 points  

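The weighting above can be sanity-checked with a short sketch. The category keys below are hypothetical labels chosen for readability; only the point values come from this guide.

```python
# Illustrative sketch of the 100-point scoring model described above.
# Category key names are assumptions; the maximum point values are
# taken directly from this guide.

COMPANY_WEIGHTS = {
    "agent_contribution": 20,
    "capital_efficiency": 15,
    "revenue_traction": 15,
    "defensibility": 15,
    "market_opportunity": 10,
}  # Company Score: 75 points total

FQS_WEIGHTS = {
    "founder_background_fit": 5,
    "execution_signals": 8,
    "behavioral_prompts": 6,
    "founder_video": 6,
}  # Founder Quality Score: 25 points total

def total_score(scores: dict) -> float:
    """Sum category scores, clamping each to its stated maximum."""
    total = 0.0
    for category, max_pts in {**COMPANY_WEIGHTS, **FQS_WEIGHTS}.items():
        raw = scores.get(category, 0.0)
        total += min(max(raw, 0.0), max_pts)
    return total

# A perfect submission earns exactly 100 points.
perfect = {**COMPANY_WEIGHTS, **FQS_WEIGHTS}
print(total_score(perfect))  # 100.0
```

Clamping each category to its maximum keeps a single inflated sub-score from distorting the total; that behavior is an assumption, not a stated rule.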
---

## 5. Company Score detail

### 5.1 Agent Contribution / Agentic Readiness — 20 points

We are looking for evidence that AI is not merely a feature layer but a core operating system for the company.

Strong submissions typically show:

- multiple workflows that are agent-owned or agent-led
- measurable workflow automation
- meaningful revenue influenced or generated by agents
- a clear explanation of where humans remain in the loop
- a compelling answer to why the company is truly agentic rather than a standard software product with AI added on

eLab's internal Agent Contribution Score framing has already defined a maturity ladder that runs from assisted to agent-native. Companies that operate mostly through human labor with AI copilots should score lower than companies where agents are primary operators and humans supervise systems and exceptions. 

### 5.2 Capital Efficiency — 15 points

We reward companies that show strong output relative to capital and headcount.

Signals include:

- revenue per employee
- burn relative to progress
- capital needed to next milestone
- runway discipline
- evidence that agentic workflows meaningfully reduce cost structure

This is consistent with eLab's draft thesis work, which emphasized lean AI-native startups and reward for capital-efficient businesses. 

### 5.3 Revenue & Traction — 15 points

We prefer evidence over optimism.

Relevant signals include:

- paying customers
- MRR / ARR
- recent revenue trend
- design partners and deployments
- contract value and pipeline quality

### 5.4 Defensibility — 15 points

We do not want generic wrappers. Strong defensibility often includes:

- proprietary data
- workflow lock-in
- vertical specialization
- regulatory or distribution advantage
- differentiated operating architecture

This reflects eLab's own stated preference for vertical AI, proprietary data, and workflow depth over easily replicated wrapper companies. 

### 5.5 Market Opportunity — 10 points

We evaluate:

- size of the market
- urgency of the problem
- clarity of the wedge
- likelihood of venture-scale outcomes

---

## 6. Founder Quality Score (FQS)

Founder evaluation remains central. The AI system is not intended to eliminate the human element. It is intended to structure it.

### 6.1 Founder Background & Fit — 5 points

We assess:

- domain experience
- technical experience
- prior company-building exposure
- role fit for the problem being solved
- why this team is uniquely positioned to win

### 6.2 Execution Signals — 8 points

We care more about action than charisma.

We score:

- speed to first product
- speed to first customer
- speed to first revenue
- release cadence
- experimentation tempo
- evidence of customer contact and iteration

### 6.3 Behavioral Prompt Responses — 6 points

Each applicant answers a required set of prompts designed to surface judgment, self-awareness, prioritization, and systems thinking.

Prompts include:

- what to do with three months of runway
- growth vs. margins tradeoff
- most likely failure mode
- why now
- founder conflict example
- customer obsession example
- what the business looks like with 90% fewer humans

We are not scoring for style. We are scoring for decision quality.

### 6.4 Founder Video Assessment — 6 points

A short founder video gives the AI system and the eLab team a structured way to evaluate:

- clarity of thought
- specificity
- conviction without hype
- domain fluency
- communication quality
- coherence of the founder's explanation

---

## 7. Founder video rubric

The founder video should be no longer than **120 seconds** and should answer this prompt:

> Explain what you are building, why customers urgently need it, why your team is uniquely suited to win, and how your company becomes dramatically more powerful through AI agents. If possible, also address one behavioral prompt in your answer.

### Video scoring categories

#### 7.1 Clarity of thought — 0 to 2 points

High score:
- answers the prompt directly
- ideas are well organized
- avoids jargon fog

Low score:
- vague, circular, or overly buzzword-heavy explanation

#### 7.2 Specificity and evidence — 0 to 1.5 points

High score:
- cites concrete facts, customers, numbers, or examples

Low score:
- mostly aspirational statements without proof

#### 7.3 Founder-market fit and fluency — 0 to 1 point

High score:
- demonstrates real understanding of the customer and workflow

Low score:
- shallow understanding or borrowed language

#### 7.4 Conviction and realism — 0 to 1 point

High score:
- strong conviction paired with realism and tradeoff awareness

Low score:
- empty confidence or exaggerated claims

#### 7.5 Communication effectiveness — 0 to 0.5 points

High score:
- easy to follow and credible

Low score:
- hard to understand or incoherent

**Important:** the video score should not penalize founders for accent, unpolished production quality, or non-native English. The goal is judgment and clarity, not media training.

---

## 8. Submission rules

### Required rules

- JSON must validate against the official schema
- all required fields must be completed
- all monetary values must be in USD
- all percentages must be numeric and expressed from 0 to 100 unless explicitly noted otherwise
- all URLs must be accessible to eLab reviewers
- founder video must be recorded within the allowed time window
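The formatting rules above can be illustrated with a small fragment. The field names here are hypothetical, shown only to demonstrate the conventions; consult the official schema for the real field set.

```python
import json

# Hypothetical field names illustrating the required-rules conventions;
# the official schema defines the actual fields.
submission_fragment = {
    "arr_usd": 480000,          # monetary values in USD
    "gross_margin_pct": 72.5,   # percentages numeric, on a 0-100 scale
    "demo_url": "https://example.com/demo",  # must be reachable by reviewers
}

print(json.dumps(submission_fragment, indent=2))
```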

### Recommended rules

- be concise and evidence-based
- avoid inflated market claims without methodology
- explain where humans are still needed
- clearly distinguish current reality from future plans

---

## 9. Review process

### Step 1 — Validation

AI agents first validate:

- schema compliance
- required fields
- numerical consistency
- missing data
- obvious contradictions
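The validation pass above can be sketched in a few lines. This is a minimal stand-in, not eLab's actual validator: the required-field list and the percentage check use hypothetical field names, and a real implementation would validate against the full official schema.

```python
import json

# Hypothetical subset of required fields, for illustration only.
REQUIRED_FIELDS = {"company_name", "arr_usd", "gross_margin_pct"}

def validate(raw: str) -> list[str]:
    """Return human-readable problems; an empty list means the checks pass."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for field in sorted(REQUIRED_FIELDS - data.keys()):
        problems.append(f"missing required field: {field}")
    pct = data.get("gross_margin_pct")
    if isinstance(pct, (int, float)) and not 0 <= pct <= 100:
        problems.append("gross_margin_pct must be on a 0-100 scale")
    return problems

ok = '{"company_name": "Acme", "arr_usd": 480000, "gross_margin_pct": 72.5}'
print(validate(ok))  # []
```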

### Step 2 — AI-assisted review

AI agents generate:

- normalized company metrics
- an initial Company Score
- an initial Founder Quality Score
- a short diligence memo
- follow-up questions if needed

### Step 3 — Human investment review

The eLab team reviews:

- top-ranked submissions
- AI-generated memos
- red flags and confidence levels
- any follow-up interactions with founders

### Step 4 — Recommendation

The final outcome is an eLab investment recommendation list, not a pure black-box AI decision. AI helps structure and accelerate the process; human investors remain accountable for the investment decision.

---

## 10. What weak submissions usually look like

Applicants should avoid the following patterns:

- generic "AI for X" positioning without workflow depth
- little evidence of customer pull
- agent claims that cannot be quantified
- no explanation of human-in-the-loop boundaries
- market size claims with no methodology
- founder responses that sound polished but reveal weak judgment under constraint

---

## 11. What strong submissions usually look like

Strong submissions often show:

- a narrow, painful problem
- real customer activity
- measurable capital efficiency
- a clear AI-native operating model
- explicit tradeoffs and risks
- founders who act quickly and think clearly

---

## 12. Package contents

The official challenge package includes:

- `elab_clawcon_ai_competition_schema.json`
- this evaluation document

---

## 13. Contact / operational note

Applicants should assume that their materials may be reviewed by both eLab personnel and authorized AI systems operating on behalf of eLab. Submission of materials constitutes consent to that review process. For additional questions, email info@elabvc.com.
