πŸ’° The $100 Test

SpecKit + Copilot CLI β€” Hands-on Lab

AI Forward Session

πŸ“‹ Table of Contents

  1. Prerequisites
  2. Getting Started
  3. Lab 1: View Results Dashboard ⭐ Primary
  4. Lab 2: Share Session / Invite Link πŸ”₯ Stretch
  5. Lab 3: Close Session & Lock Voting πŸ”₯ Stretch
  6. Lab 4: Export Results as CSV πŸ”₯ Stretch
  7. Wrap-up: Why SpecKit?
  8. Additional Resources

βœ… Prerequisites

Make sure you have the following ready before the session starts:

πŸ“– Optional Pre-Read

New to SpecKit? These short reads will give you context before the workshop (5-10 min total):

☐ GitHub account with Copilot access
☐ GitHub CLI installed and authenticated (gh --version)
How to install GitHub CLI
  1. Install via winget:
    winget install --id GitHub.cli
  2. Close and reopen your terminal
  3. Authenticate: gh auth login (select GitHub.com, HTTPS, and log in via browser)
  4. Verify: gh --version

πŸ“– GitHub CLI Docs

☐ VS Code installed (code --version)
How to install VS Code
  1. Install via winget:
    winget install --id Microsoft.VisualStudioCode
  2. Close and reopen your terminal
  3. Verify: code --version

πŸ“– VS Code Setup Guide

☐ Node.js 20 LTS installed (node --version)
How to install Node.js
  1. Install via winget:
    winget install --id OpenJS.NodeJS.20
  2. Close and reopen your terminal
  3. Verify:
    • node --version (should be v20.x or higher)
    • npm --version (npm comes bundled with Node.js)

πŸ“– Node.js Installation Guide

☐ Copilot CLI installed and authenticated (copilot --version)
How to install Copilot CLI
  1. Install via npm (requires Node.js from the step above):
    npm install -g @github/copilot
  2. Authenticate: copilot login
  3. Verify: copilot --version

πŸ“– Copilot CLI Docs

☐ Git installed (git --version)
How to install Git
  1. Install via winget:
    winget install --id Git.Git
  2. Close and reopen your terminal
  3. Configure your identity (required for commits):
    git config --global user.name "Your Name"
    git config --global user.email "your.email@example.com"
  4. Verify: git --version

πŸ“– Git Installation Guide

πŸš€ Getting Started

Fork the starter repo

Go to github.com/bgervin/speckit-jam-starter and click Fork to create your own copy.

⚠️ Can't fork from bgervin? If you're using an Enterprise Managed (_microsoft) account, use the backup repo instead: github.com/agency-microsoft/speckit-jam-starter
πŸ’‘ Why fork? Forking gives you your own copy of the repo where you can freely experiment. Your changes won't affect anyone else's work.
Learn how to fork a repo
  1. Open the starter repo
    Go to github.com/bgervin/speckit-jam-starter
  2. Click the "Fork" button
    It's in the top-right corner of the page, next to "Star" and "Watch".
    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
    β”‚ bgervin / speckit-jam-starter πŸ‘ Watch   ⭐ Star   🍴 Fork
    β”‚
    β”‚ The $100 Test - starter repo for SpecKit HOL
    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
  3. Choose your account as the owner
    On the "Create a new fork" page, leave the defaults β€” your personal account should already be selected as the owner. You can keep the same repo name.
  4. Click "Create fork"
    GitHub will create a copy of the repo under your account (e.g., github.com/YOUR_USERNAME/speckit-jam-starter). You'll be redirected to your new fork automatically.
  5. Copy the clone URL
    On your fork's page, click the green <> Code button and copy the HTTPS URL. You'll use this in the next step.

πŸ“– GitHub Docs: Fork a repo

Open a terminal and clone your fork

Open a terminal window, navigate to a folder where you'd like to keep the project (e.g., your Documents or a projects folder), then clone your fork:

cd $env:USERPROFILE\Documents
git clone https://github.com/YOUR_USERNAME/speckit-jam-starter.git
cd speckit-jam-starter
How to open a terminal on Windows

You have a few options:

  • Windows Terminal (recommended): Press Win key, type Terminal, and press Enter. This works on Windows 11 and Windows 10 with Windows Terminal installed.
  • PowerShell: Press Win key, type PowerShell, and press Enter.
  • VS Code terminal: Open VS Code, then press Ctrl+` (backtick) to open the built-in terminal. This is what we'll use during the lab.
  • Command Prompt: Press Win+R, type cmd, and press Enter.
πŸ’‘ Tip: During the lab, we recommend using the VS Code integrated terminal (Ctrl+`) so you can see your code and terminal side by side.

Install dependencies and start the app

npm install
npm start

Open http://localhost:3000 β€” you should see The $100 Test landing page.

Try it out!

Create a session, then use the session code to cast a vote. Notice there's no way to see the results yet β€” that's what you'll build in Lab 1! 🎯

Run the existing tests

npm test

You should see 25 tests passing. These cover the pre-built Create Session and Cast Votes features.

Open the repo in VS Code and explore

If you haven't already, open the project in VS Code from your terminal:

code .

This opens VS Code with the project folder loaded. Use the Explorer panel on the left (or press Ctrl+Shift+E) to browse the files. Take a look at:

  • .specify/memory/constitution.md β€” the app's identity and standards
  • specs/001-create-session/spec.md β€” a fully written spec
  • specs/002-cast-votes/spec.md β€” another fully written spec
  • src/ β€” the implementation

This is what a spec-driven codebase looks like. Now you'll add to it!

πŸ’‘ Tip: Open the VS Code terminal with Ctrl+` β€” you'll use it for all the remaining steps. This way you can see your code and terminal side by side.
Lab 1: View Results Dashboard

Primary Exercise

Build a results dashboard that shows aggregated voting data with a visual bar chart. This is the "wow" feature β€” the missing piece that brings The $100 Test to life.

Start Copilot CLI with the recommended model

In the VS Code terminal (Ctrl+`), start a Copilot CLI session with the Claude Sonnet 4 model β€” it's the best balance of speed and quality for this workshop:

copilot --model claude-sonnet-4 --yolo

You should see the Copilot CLI prompt appear, ready for commands.

πŸ’‘ What does --yolo do? It enables all permissions (file access, shell commands, URLs) so Copilot can work without asking you to approve each action. This is safe here β€” you're working in a forked workshop repo, not production code.
Why Claude Sonnet 4?

For this workshop, we want fast responses during the interactive session. The $100 Test is a straightforward Node.js/Express app β€” it doesn't need the most powerful model.

  • Claude Sonnet 4 (recommended) β€” Fast, great for this complexity level. Best for a live workshop where you don't want to wait.
  • GPT-5.1 β€” Also fast and capable. Good alternative if you prefer OpenAI models.
  • Claude Opus 4.6 β€” Most capable but slower. Use if you want the highest quality output and don't mind waiting.

You can change models anytime by restarting Copilot CLI with a different --model flag.

Having trouble starting Copilot CLI?
  • Make sure it's installed: copilot --version
  • If not found, install it: npm install -g @github/copilot
  • If not authenticated, run copilot login

Specify the feature with SpecKit

In the Copilot CLI terminal, type /speckit.specify followed by the feature description below:

Create a "View Results" feature for the $100 Test app. Given a session code, display the aggregated voting results as a dashboard. Show a ranked list of items sorted by total dollars received (highest first). For each item, display: total dollars, average dollars per voter, and percentage of total budget. Show the total number of participants who voted. Include a simple bar chart visualization using HTML/CSS (no chart library needed). Only show results if at least one vote has been cast. Highlight the top-funded item. Include a REST API endpoint GET /api/sessions/:code/results and a web page at /results/:code.

Wait for SpecKit to finish generating the spec. You'll see it create a new file in the specs/ folder.
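
To make the aggregation requirements concrete, here's a rough sketch of the math the spec asks for. This is illustrative only — `aggregateResults` and the vote shape are assumptions, not code from the starter repo; the actual implementation will come from `/speckit.implement`.

```javascript
// Hypothetical sketch of the aggregation described in the spec.
// Assumes each vote looks like: { voter: "Ana", allocations: { "Item A": 60, "Item B": 40 } }
function aggregateResults(votes) {
  const totals = {};
  for (const vote of votes) {
    for (const [item, dollars] of Object.entries(vote.allocations)) {
      totals[item] = (totals[item] || 0) + dollars;
    }
  }
  const voterCount = votes.length;
  const budget = voterCount * 100; // every voter allocates exactly $100

  return Object.entries(totals)
    .map(([name, total]) => ({
      name,
      total,                              // total dollars received
      average: total / voterCount,        // average dollars per voter
      percent: (total / budget) * 100,    // share of the combined budget
    }))
    .sort((a, b) => b.total - a.total);   // ranked, highest-funded first
}
```

Notice how writing even this much forces questions the spec must answer: is the budget "dollars cast" or "voters × 100"? Do voters who skipped an item count toward its average?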

While you wait β€” Why specify instead of just prompting?

Vibe coding: "Build me a results page"

When you prompt an AI directly, your requirements live only in the chat window. Close the tab and they're gone. Next time someone asks "what was the spec for the results page?" β€” there's no answer.

Spec-driven: /speckit.specify

SpecKit captures your intent as a structured spec.md file in your repo. It's versioned, diffable, and reviewable. It becomes the single source of truth for what this feature does β€” not your chat history.

Think of it this way: you wouldn't deploy code that only existed in a chat window. Why would you do that with your requirements?

Review the generated spec in VS Code

In VS Code, open specs/003-view-results/spec.md from the Explorer panel (Ctrl+Shift+E).

To see a nicely formatted preview, press Ctrl+Shift+V to open the Markdown Preview, or click the preview icon (πŸ“–) in the top-right corner of the editor tab.

How to preview Markdown in VS Code
  • Full preview: With the .md file open, press Ctrl+Shift+V β€” this opens the rendered Markdown in a new tab.
  • Side-by-side preview: Press Ctrl+K V (press Ctrl+K, release, then press V) β€” this opens the preview next to the source so you can see both.
  • Preview button: Look for the πŸ“– icon (Open Preview to the Side) in the top-right corner of the editor when a .md file is open.

As you review, consider:

  • Does it capture all the requirements?
  • Are the API response shapes well-defined?
  • Are edge cases covered (no votes, single voter, etc.)?

Clarify and refine the spec

Back in the Copilot CLI terminal, run the clarify command to let SpecKit ask questions and tighten the spec:

/speckit.clarify

Answer SpecKit's questions to resolve any ambiguities. This is where the spec gets sharper.

While you wait β€” Why clarify matters

The #1 source of bugs? Ambiguity.

Most bugs don't come from bad code β€” they come from unclear requirements. "Show the results" sounds obvious, but: What if nobody voted? What order? Percentages of what? Rounded how?

/speckit.clarify forces these questions before a line of code is written. In traditional development, these ambiguities surface during code review or QA β€” weeks later, at much higher cost.

πŸ’‘ Fun fact: Studies show that fixing a requirements bug after coding costs 10-100x more than catching it during specification.

πŸ”€ Checkpoint: The PM–Engineering Handoff

In a real team workflow, this is where a PM would create a Pull Request containing the spec and hand it off to engineering. The PR becomes the collaboration surface:

  • Engineers review the spec and ask questions via PR comments
  • PMs refine the spec based on technical feedback
  • The team iterates until the spec is solid β€” before any code is written
  • The approved spec PR becomes the contract between PM and engineering

Today we'll keep going in one flow, but imagine this as the boundary: everything before this point is the PM's job, everything after is engineering's. The spec is the shared artifact that connects them.

Create the implementation plan

In the Copilot CLI terminal, generate the plan:

/speckit.plan
While you wait β€” Plan mode vs. SpecKit planning

Copilot has a "plan mode" β€” how is this different?

Plan mode in Copilot Chat creates a one-time plan in the conversation. It's useful, but ephemeral β€” it disappears when you close the chat.

/speckit.plan generates a persistent plan.md file in your spec folder. It includes:

  • Technical approach and architecture decisions
  • Research notes with rejected alternatives
  • Data models and API contracts
  • A constitution compliance check

This plan is reviewable by your team before any code is written. Imagine sending a PR with the plan before the implementation β€” that's what SpecKit enables.

Review the plan in VS Code

In VS Code, open the generated plan.md in the spec folder. Preview it with Ctrl+Shift+V. Review the approach β€” does the implementation strategy make sense?

Generate the tasks

In the Copilot CLI terminal, break the plan into actionable tasks:

/speckit.tasks
While you wait β€” Why tasks before implementation?

Would you start building a house without a task list?

When you "vibe code," the AI decides on the fly what to build and in what order. Sometimes it works. Often it misses things, builds in the wrong order, or creates code that contradicts itself.

/speckit.tasks creates a structured checklist derived from the plan. Each task has clear acceptance criteria. But here's the powerful part β€” it also determines:

  • Sequential order: Which tasks must happen first (e.g., data model before API, API before UI)
  • Parallel opportunities: Which tasks are independent and could be worked on concurrently
  • Dependencies: If task B depends on task A's output, that's captured explicitly

In a team setting, this task breakdown becomes your work assignment plan β€” you can split parallel tasks across developers. Even with AI doing the work, understanding the dependency graph means you know what's safe to change independently.

Review the tasks in VS Code

In VS Code, open tasks.md in the spec folder. You should see a structured task list like:

  • Create results aggregation logic
  • Add the API endpoint
  • Build the results HTML page
  • Write tests

Implement!

In the Copilot CLI terminal, let SpecKit implement each task. It will generate source code and tests based on the spec and plan.

/speckit.implement
While you wait β€” Watch the magic happen

This is the longest step β€” here's what to do while it runs:

  • Watch tasks.md: Open it in VS Code and watch tasks get checked off as the AI completes each one. This is your real-time progress tracker.
  • Watch the Explorer panel: New files will appear in src/ and tests/ as code is generated.
  • Watch the terminal: You'll see the AI reading specs, writing code, running tests β€” all guided by the plan you just reviewed.

What's different from "just asking the AI to build it"?

Right now the AI is implementing against a spec, plan, and task list that you reviewed. It's not guessing what you want β€” it's executing against documented requirements. If something goes wrong, you can point to the spec and say "this doesn't match." Try doing that with a chat prompt you typed 20 minutes ago.

πŸ’‘ Tip: Watch the terminal as SpecKit works, and keep VS Code open alongside β€” you'll see new files appear in the Explorer panel in real time. It's creating code that matches your spec β€” not just "vibing" without requirements.

Review task progress in VS Code

In VS Code, reopen tasks.md β€” you should see the tasks are now marked as completed. This is your implementation audit trail.

Test it!

In a separate terminal (open a new one with Ctrl+Shift+` in VS Code), run the tests to make sure everything works:

npm test

Then start the server and visit /results/YOUR_SESSION_CODE in your browser to see your dashboard!

npm start
πŸŽ‰ Checkpoint! You just built a complete feature using the SpecKit workflow: specify β†’ clarify β†’ plan β†’ tasks β†’ implement. The spec, plan, tasks, and code are all in your repo as source-controlled artifacts.
Learn More: What is spec-driven development?

Spec-driven development treats natural language requirements as source code. Instead of writing requirements in a wiki or ticket that gets lost, SpecKit keeps specs in your repo alongside the code they describe.

  • Specs are durable β€” they live in the repo, not in someone's head
  • Specs are versioned β€” you can see how requirements evolved over time
  • Specs are reviewable β€” teammates can PR-review your requirements
  • Specs drive implementation β€” the AI uses your spec as the source of truth
Lab 2: Share Session / Invite Link

Stretch Goal

Build a shareable voting page so participants can join and vote via a direct link β€” no need to manually enter a session code.

Specify the feature

In the Copilot CLI terminal, run /speckit.specify with the prompt below:

Create a "Share Session" feature for the $100 Test app. After a facilitator creates a session, generate a shareable URL that participants can use to join and vote directly. The URL should include the session code (e.g., /vote/:code). Create a landing page at that URL that shows the session title, the list of items, and a form where participants enter their name and allocate their $100. The page should show a running total so voters can see they've allocated exactly $100 before submitting. Include form validation that prevents submission unless the total equals $100.
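
The "total must equal $100" rule from the prompt boils down to a small validation check. This is a hypothetical sketch of that rule, not code from the repo — the generated implementation may structure it differently:

```javascript
// Hypothetical client-side check for the running-total rule:
// submission is allowed only when allocations are whole, non-negative
// dollar amounts that add up to exactly $100.
function canSubmit(allocations) {
  const values = Object.values(allocations);
  if (values.some((v) => !Number.isInteger(v) || v < 0)) return false;
  const total = values.reduce((sum, v) => sum + v, 0);
  return total === 100;
}
```

Wiring this same check into the form's submit button is what produces the "running total" feedback the spec describes.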

Review the spec in VS Code

In VS Code, open the generated spec from the Explorer panel. Use Ctrl+Shift+V to preview the Markdown.

Clarify

In the Copilot CLI terminal:

/speckit.clarify

Plan

In the Copilot CLI terminal:

/speckit.plan

Review the plan in VS Code

In VS Code, open the generated plan.md and review the implementation approach.

Generate tasks

In the Copilot CLI terminal:

/speckit.tasks

Review tasks in VS Code

In VS Code, open tasks.md and review the task breakdown.

Implement

In the Copilot CLI terminal:

/speckit.implement

Review task progress and test

In VS Code, reopen tasks.md to see completed tasks. Then in a separate terminal (Ctrl+Shift+`), run npm test and visit /vote/YOUR_CODE in the browser.

Learn More: Building shareable UIs

A shareable link is one of the most impactful UX improvements you can make. It turns a multi-step flow ("open app β†’ find session β†’ enter code") into a single click.

  • The URL itself becomes the invitation
  • Form validation with a running total provides real-time feedback
  • This pattern is common in survey tools, polls, and collaborative apps
Lab 3: Close Session & Lock Voting

Stretch Goal

Add the ability for a facilitator to close a session so no more votes can be cast, and optionally reopen it.

Specify the feature

In the Copilot CLI terminal, run /speckit.specify with the prompt below:

Create a "Close Session" feature for the $100 Test app. The facilitator should be able to close/lock a voting session so no more votes can be cast. Add a PATCH /api/sessions/:code/close endpoint that marks the session as closed. Once closed, the Cast Votes endpoint should reject new votes with a clear error message. The results page should show a "Voting Closed" badge. The facilitator should also be able to reopen a session if needed via PATCH /api/sessions/:code/reopen.

Review the spec in VS Code

In VS Code, open the generated spec from the Explorer panel. Use Ctrl+Shift+V to preview the Markdown.

Clarify

In the Copilot CLI terminal:

/speckit.clarify

Plan

In the Copilot CLI terminal:

/speckit.plan

Review the plan in VS Code

In VS Code, open the generated plan.md and review the implementation approach.

Generate tasks

In the Copilot CLI terminal:

/speckit.tasks

Review tasks in VS Code

In VS Code, open tasks.md and review the task breakdown.

Implement

In the Copilot CLI terminal:

/speckit.implement

Review task progress and test

In VS Code, reopen tasks.md to see completed tasks. Then in a separate terminal (Ctrl+Shift+`), test closing and reopening a session.

Learn More: State management in APIs

Adding a "closed" state to sessions introduces state management β€” a pattern you'll see everywhere in real-world apps (drafts, published, archived, etc.).

  • State transitions should be explicit (open β†’ closed β†’ reopened)
  • Other endpoints need to respect the current state
  • The UI should reflect the state clearly
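
One way to keep transitions explicit is a small transition table plus a guard that other endpoints call before acting. This is a sketch under assumed names (`TRANSITIONS`, `transition`, `assertVotingOpen`) — your generated implementation may look different:

```javascript
// Hypothetical sketch: explicit state transitions for a session.
const TRANSITIONS = {
  open: ["closed"],   // an open session can only be closed
  closed: ["open"],   // a closed session can only be reopened
};

function transition(session, nextState) {
  const allowed = TRANSITIONS[session.state] || [];
  if (!allowed.includes(nextState)) {
    throw new Error(`Cannot go from "${session.state}" to "${nextState}"`);
  }
  return { ...session, state: nextState };
}

// Guard a handler like Cast Votes would call before accepting a vote.
function assertVotingOpen(session) {
  if (session.state !== "open") {
    throw new Error("Voting is closed for this session");
  }
}
```

The benefit: there is exactly one place that defines which transitions are legal, so "closed → closed" or other nonsense states can't sneak in.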
Lab 4: Export Results as CSV

Stretch Goal

Add the ability to download voting results as a CSV file for further analysis in Excel or Google Sheets.

Specify the feature

In the Copilot CLI terminal, run /speckit.specify with the prompt below:

Create an "Export Results" feature for the $100 Test app. Add the ability to download the voting results as a CSV file. The CSV should include columns: Rank, Item Name, Total Dollars, Average Dollars, Percentage of Budget, Number of Votes. Include a GET /api/sessions/:code/results/export endpoint that returns the CSV with appropriate Content-Type and Content-Disposition headers. Add a "Download CSV" button to the results dashboard page.

Review the spec in VS Code

In VS Code, open the generated spec from the Explorer panel. Use Ctrl+Shift+V to preview the Markdown.

Clarify

In the Copilot CLI terminal:

/speckit.clarify

Plan

In the Copilot CLI terminal:

/speckit.plan

Review the plan in VS Code

In VS Code, open the generated plan.md and review the implementation approach.

Generate tasks

In the Copilot CLI terminal:

/speckit.tasks

Review tasks in VS Code

In VS Code, open tasks.md and review the task breakdown.

Implement

In the Copilot CLI terminal:

/speckit.implement

Review task progress and test

In VS Code, reopen tasks.md to see completed tasks. Then in a separate terminal (Ctrl+Shift+`), test the CSV download by visiting /api/sessions/YOUR_CODE/results/export in the browser.

Learn More: File generation in web apps

Generating downloadable files from web APIs is a common pattern. Key considerations:

  • Set Content-Type: text/csv for CSV files
  • Use Content-Disposition: attachment; filename="results.csv" to trigger download
  • Escape commas and quotes in CSV values
  • This pattern extends to PDFs, Excel files, and other export formats
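
The escaping rule above can be shown in a few lines. This is a hedged sketch — `csvField` and `buildCsv` are illustrative helper names, and the Express calls are shown as comments since the route itself comes from your generated implementation:

```javascript
// Quote fields containing commas, quotes, or newlines; double any embedded quotes.
function csvField(value) {
  const s = String(value);
  return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

// Join rows (arrays of cell values) into a CSV document.
function buildCsv(rows) {
  return rows.map((row) => row.map(csvField).join(",")).join("\n");
}

// In an Express route handler, the headers would look roughly like:
// res.setHeader("Content-Type", "text/csv");
// res.setHeader("Content-Disposition", 'attachment; filename="results.csv"');
// res.send(buildCsv(rows));
```

Without the quoting step, an item named "Docs, API" would silently split into two columns when opened in Excel.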

🎯 Wrap-up: Why SpecKit?

You just built features using the SpecKit workflow. Here's why this matters:

English is the new programming language. With AI coding assistants, describing what you want in plain English is how software gets built. But without SpecKit, those English requirements are ephemeral β€” they exist only in chat history.

SpecKit vs. "Vibe Coding"

Vibe Coding                    | Spec-Driven (SpecKit)
-------------------------------|--------------------------------
Requirements in chat history   | Requirements in specs/ folder
Not reproducible               | Fully reproducible
Can't be code-reviewed         | PR-reviewable specs
No audit trail                 | Full git history
Works for prototypes           | Works for production

What you built today: Features with specs, plans, tests, and implementations β€” all source-controlled in your repo. This is how PMs and engineers collaborate in the AI era.

πŸ“š Additional Resources