SpecKit + Copilot CLI – Hands-on Lab
AI Forward Session

Make sure you have the following ready before the session starts:
New to SpecKit? These short reads will give you context before the workshop (5-10 min total):
- GitHub CLI installed (verify with gh --version) and authenticated: run gh auth login (select GitHub.com, HTTPS, and log in via browser). 📖 GitHub CLI Docs
- VS Code installed (verify with code --version). 📖 VS Code Setup Guide
- Node.js installed: node --version should report v20.x or higher, and npm --version should work (npm comes bundled with Node.js).
- Copilot CLI installed and signed in: run copilot login, then verify with copilot --version. 📖 Copilot CLI Docs
- Git installed (verify with git --version).

Go to github.com/bgervin/speckit-jam-starter and click Fork to create your own copy. You'll be redirected automatically to your new fork at github.com/YOUR_USERNAME/speckit-jam-starter.
Open a terminal window, navigate to a folder where you'd like to keep the project (e.g., your Documents or a projects folder), then clone your fork:
```powershell
cd $env:USERPROFILE\Documents
git clone https://github.com/YOUR_USERNAME/speckit-jam-starter.git
cd speckit-jam-starter
```
You have a few options:
- Press the Win key, type Terminal, and press Enter. This works on Windows 11, and on Windows 10 with Windows Terminal installed.
- Press the Win key, type PowerShell, and press Enter.
- In VS Code, press Ctrl+` (backtick) to open the built-in terminal. This is what we'll use during the lab.
- Press Win+R, type cmd, and press Enter.

We recommend the VS Code built-in terminal (Ctrl+`) so you can see your code and terminal side by side.
```shell
npm install
npm start
```
Open http://localhost:3000 – you should see The $100 Test landing page.
Create a session, then use the session code to cast a vote. Notice there's no way to see the results yet – that's what you'll build in Lab 1! 🎯
npm test
You should see 25 tests passing. These cover the pre-built Create Session and Cast Votes features.
If you haven't already, open the project in VS Code from your terminal:
code .
This opens VS Code with the project folder loaded. Use the Explorer panel on the left (or press Ctrl+Shift+E) to browse the files. Take a look at:
- .specify/memory/constitution.md – the app's identity and standards
- specs/001-create-session/spec.md – a fully written spec
- specs/002-cast-votes/spec.md – another fully written spec
- src/ – the implementation

This is what a spec-driven codebase looks like. Now you'll add to it!
Keep the VS Code terminal open with Ctrl+` – you'll use it for all the remaining steps. This way you can see your code and terminal side by side.
Build a results dashboard that shows aggregated voting data with a visual bar chart. This is the "wow" feature – the missing piece that brings The $100 Test to life.
In the VS Code terminal (Ctrl+`), start a Copilot CLI session with the Claude Sonnet 4 model – it's the best balance of speed and quality for this workshop:
copilot --model claude-sonnet-4 --yolo
You should see the Copilot CLI prompt appear, ready for commands.
For this workshop, we want fast responses during the interactive session. The $100 Test is a straightforward Node.js/Express app – it doesn't need the most powerful model.
You can change models anytime by restarting Copilot CLI with a different --model flag.
If copilot --version doesn't work, install the CLI with npm install -g @github/copilot and sign in with copilot login.

In the Copilot CLI terminal, type /speckit.specify followed by the feature description below:
Create a "View Results" feature for the $100 Test app. Given a session code, display the aggregated voting results as a dashboard. Show a ranked list of items sorted by total dollars received (highest first). For each item, display: total dollars, average dollars per voter, and percentage of total budget. Show the total number of participants who voted. Include a simple bar chart visualization using HTML/CSS (no chart library needed). Only show results if at least one vote has been cast. Highlight the top-funded item. Include a REST API endpoint GET /api/sessions/:code/results and a web page at /results/:code.
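To ground what the dashboard will compute, here's a rough sketch of the aggregation logic in plain JavaScript. This is illustrative only – the function name and the vote shape are assumptions, and SpecKit will generate its own implementation from the spec:

```javascript
// Aggregate raw votes into ranked results for the dashboard.
// Assumed (hypothetical) input shape:
//   [{ voter: 'ana', allocations: { 'Item name': dollars, ... } }, ...]
function aggregateResults(votes) {
  // Sum dollars per item across all voters.
  const totals = {};
  for (const vote of votes) {
    for (const [item, dollars] of Object.entries(vote.allocations)) {
      totals[item] = (totals[item] || 0) + dollars;
    }
  }

  const grandTotal = Object.values(totals).reduce((a, b) => a + b, 0);
  const voterCount = votes.length;

  // One row per item: total, average per voter, share of the pool.
  return Object.entries(totals)
    .map(([item, total]) => ({
      item,
      total,
      average: voterCount ? total / voterCount : 0,
      percent: grandTotal ? Math.round((total / grandTotal) * 100) : 0,
    }))
    .sort((a, b) => b.total - a.total); // highest-funded item first
}
```

The first entry of the returned array is the top-funded item the spec asks you to highlight.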
Wait for SpecKit to finish generating the spec. You'll see it create a new file in the specs/ folder.
When you prompt an AI directly, your requirements live only in the chat window. Close the tab and they're gone. Next time someone asks "what was the spec for the results page?" – there's no answer.
SpecKit captures your intent as a structured spec.md file in your repo. It's versioned, diffable, and reviewable. It becomes the single source of truth for what this feature does – not your chat history.
Think of it this way: you wouldn't deploy code that only existed in a chat window. Why would you do that with your requirements?
In VS Code, open specs/003-view-results/spec.md from the Explorer panel (Ctrl+Shift+E).
To see a nicely formatted preview, press Ctrl+Shift+V to open the Markdown Preview, or click the preview icon in the top-right corner of the editor tab.
- With a .md file open, press Ctrl+Shift+V – this opens the rendered Markdown in a new tab.
- Or press Ctrl+K V (press Ctrl+K, release, then press V) – this opens the preview next to the source so you can see both.
- Either way, the shortcut only works while a .md file is open.

As you review, consider whether the spec captures everything you asked for, and whether any behavior is still ambiguous.
Back in the Copilot CLI terminal, run the clarify command to let SpecKit ask questions and tighten the spec:
/speckit.clarify
Answer SpecKit's questions to resolve any ambiguities. This is where the spec gets sharper.
Most bugs don't come from bad code – they come from unclear requirements. "Show the results" sounds obvious, but: What if nobody voted? What order? Percentages of what? Rounded how?
/speckit.clarify forces these questions before a line of code is written. In traditional development, these ambiguities surface during code review or QA – weeks later, at much higher cost.
💡 Fun fact: Studies show that fixing a requirements bug after coding costs 10-100x more than catching it during specification.
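"Rounded how?" is not a trivial question. A quick illustration in plain Node.js (the numbers are hypothetical, not from the starter app): rounding each item's percentage independently can produce a display that doesn't sum to 100.

```javascript
// Three items each received $1 of a $3 pool.
const totals = [1, 1, 1];
const pool = totals.reduce((a, b) => a + b, 0);                // 3
const rounded = totals.map(t => Math.round((t / pool) * 100)); // [33, 33, 33]
const sum = rounded.reduce((a, b) => a + b, 0);                // 99, not 100
```

This is exactly the kind of ambiguity /speckit.clarify surfaces: the spec should say how percentages are rounded and whether the displayed values must sum to 100.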
In a real team workflow, this is where a PM would create a Pull Request containing the spec and hand it off to engineering. The PR becomes the collaboration surface.
Today we'll keep going in one flow, but imagine this as the boundary: everything before this point is the PM's job, everything after is engineering's. The spec is the shared artifact that connects them.
In the Copilot CLI terminal, generate the plan:
/speckit.plan
Plan mode in Copilot Chat creates a one-time plan in the conversation. It's useful, but ephemeral – it disappears when you close the chat.
/speckit.plan instead generates a persistent plan.md file in your spec folder that captures the implementation approach.
This plan is reviewable by your team before any code is written. Imagine sending a PR with the plan before the implementation β that's what SpecKit enables.
In VS Code, open the generated plan.md in the spec folder. Preview it with Ctrl+Shift+V. Review the approach β does the implementation strategy make sense?
In the Copilot CLI terminal, break the plan into actionable tasks:
/speckit.tasks
When you "vibe code," the AI decides on the fly what to build and in what order. Sometimes it works. Often it misses things, builds in the wrong order, or creates code that contradicts itself.
/speckit.tasks creates a structured checklist derived from the plan. Each task has clear acceptance criteria. But here's the powerful part – it also determines the order tasks should run in, the dependencies between them, and which tasks can safely run in parallel.
In a team setting, this task breakdown becomes your work assignment plan β you can split parallel tasks across developers. Even with AI doing the work, understanding the dependency graph means you know what's safe to change independently.
In VS Code, open tasks.md in the spec folder and review the structured task list SpecKit generated.
In the Copilot CLI terminal, let SpecKit implement each task. It will generate source code and tests based on the spec and plan.
/speckit.implement
Watch files appear in src/ and tests/ as code is generated.

Right now the AI is implementing against a spec, plan, and task list that you reviewed. It's not guessing what you want – it's executing against documented requirements. If something goes wrong, you can point to the spec and say "this doesn't match." Try doing that with a chat prompt you typed 20 minutes ago.
In VS Code, reopen tasks.md β you should see the tasks are now marked as completed. This is your implementation audit trail.
In a separate terminal (open a new one with Ctrl+Shift+` in VS Code), run the tests to make sure everything works:
npm test
Then start the server and visit /results/YOUR_SESSION_CODE in your browser to see your dashboard!
npm start
Spec-driven development treats natural language requirements as source code. Instead of writing requirements in a wiki or ticket that gets lost, SpecKit keeps specs in your repo alongside the code they describe.
Build a shareable voting page so participants can join and vote via a direct link β no need to manually enter a session code.
In the Copilot CLI terminal, run /speckit.specify with the prompt below:
Create a "Share Session" feature for the $100 Test app. After a facilitator creates a session, generate a shareable URL that participants can use to join and vote directly. The URL should include the session code (e.g., /vote/:code). Create a landing page at that URL that shows the session title, the list of items, and a form where participants enter their name and allocate their $100. The page should show a running total so voters can see they've allocated exactly $100 before submitting. Include form validation that prevents submission unless the total equals $100.
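The heart of this feature is the validation rule "total must equal exactly $100." As a sketch of what that check boils down to (the function name and input shape are illustrative, not the code SpecKit will generate):

```javascript
// Decide whether the vote form may be submitted.
// `allocations` is an array of per-item dollar amounts, possibly
// strings straight from form inputs.
function canSubmit(allocations) {
  const total = allocations.reduce(
    (sum, n) => sum + (Number(n) || 0), // treat blank/invalid fields as 0
    0
  );
  return total === 100; // the running total shown to voters uses the same sum
}
```

The same sum drives the "running total" display, so voters can see how far from $100 they are before the button enables.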
In VS Code, open the generated spec from the Explorer panel. Use Ctrl+Shift+V to preview the Markdown.
In the Copilot CLI terminal:
/speckit.clarify
In the Copilot CLI terminal:
/speckit.plan
In VS Code, open the generated plan.md and review the implementation approach.
In the Copilot CLI terminal:
/speckit.tasks
In VS Code, open tasks.md and review the task breakdown.
In the Copilot CLI terminal:
/speckit.implement
In VS Code, reopen tasks.md to see completed tasks. Then in a separate terminal (Ctrl+Shift+`), run npm test and visit /vote/YOUR_CODE in the browser.
A shareable link is one of the most impactful UX improvements you can make. It turns a multi-step flow ("open app → find session → enter code") into a single click.
Add the ability for a facilitator to close a session so no more votes can be cast, and optionally reopen it.
In the Copilot CLI terminal, run /speckit.specify with the prompt below:
Create a "Close Session" feature for the $100 Test app. The facilitator should be able to close/lock a voting session so no more votes can be cast. Add a PATCH /api/sessions/:code/close endpoint that marks the session as closed. Once closed, the Cast Votes endpoint should reject new votes with a clear error message. The results page should show a "Voting Closed" badge. The facilitator should also be able to reopen a session if needed via PATCH /api/sessions/:code/reopen.
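The core of this feature is a state check before accepting a vote. A minimal sketch in plain JavaScript, with hypothetical names – the generated implementation will wire a check like this into the Cast Votes route:

```javascript
// Return an error descriptor if voting is not allowed, or null to proceed.
// `session` is the stored session record; `closed` is an assumed flag name.
function castVoteGuard(session) {
  if (!session) {
    return { status: 404, error: 'Session not found' };
  }
  if (session.closed) {
    return { status: 403, error: 'Voting is closed for this session' };
  }
  return null; // no error – accept the vote
}
```

The reopen endpoint simply flips the same flag back, which is why modeling it as a single boolean (rather than deleting data) keeps close/reopen cheap and reversible.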
In VS Code, open the generated spec from the Explorer panel. Use Ctrl+Shift+V to preview the Markdown.
In the Copilot CLI terminal:
/speckit.clarify
In the Copilot CLI terminal:
/speckit.plan
In VS Code, open the generated plan.md and review the implementation approach.
In the Copilot CLI terminal:
/speckit.tasks
In VS Code, open tasks.md and review the task breakdown.
In the Copilot CLI terminal:
/speckit.implement
In VS Code, reopen tasks.md to see completed tasks. Then in a separate terminal (Ctrl+Shift+`), test closing and reopening a session.
Adding a "closed" state to sessions introduces state management – a pattern you'll see everywhere in real-world apps (drafts, published, archived, etc.).
Add the ability to download voting results as a CSV file for further analysis in Excel or Google Sheets.
In the Copilot CLI terminal, run /speckit.specify with the prompt below:
Create an "Export Results" feature for the $100 Test app. Add the ability to download the voting results as a CSV file. The CSV should include columns: Rank, Item Name, Total Dollars, Average Dollars, Percentage of Budget, Number of Votes. Include a GET /api/sessions/:code/results/export endpoint that returns the CSV with appropriate Content-Type and Content-Disposition headers. Add a "Download CSV" button to the results dashboard page.
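To preview what the implementation needs to do, here's a sketch of the CSV generation in plain JavaScript. The function and field names are assumptions; the two response headers shown in the comment are the standard way to make a browser download the file:

```javascript
// Render ranked results as CSV text with the spec's column layout.
// In the Express route, the download would be triggered by:
//   res.setHeader('Content-Type', 'text/csv');
//   res.setHeader('Content-Disposition', 'attachment; filename="results.csv"');
function toCsv(results) {
  const header =
    'Rank,Item Name,Total Dollars,Average Dollars,Percentage of Budget,Number of Votes';

  // Quote only fields containing commas, quotes, or newlines.
  const escape = (v) => {
    const s = String(v);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };

  const rows = results.map((r, i) =>
    [i + 1, r.item, r.total, r.average, r.percent, r.votes]
      .map(escape)
      .join(',')
  );
  return [header, ...rows].join('\n');
}
```

Quoting matters because item names entered by facilitators can contain commas, which would otherwise shift columns in Excel or Google Sheets.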
In VS Code, open the generated spec from the Explorer panel. Use Ctrl+Shift+V to preview the Markdown.
In the Copilot CLI terminal:
/speckit.clarify
In the Copilot CLI terminal:
/speckit.plan
In VS Code, open the generated plan.md and review the implementation approach.
In the Copilot CLI terminal:
/speckit.tasks
In VS Code, open tasks.md and review the task breakdown.
In the Copilot CLI terminal:
/speckit.implement
In VS Code, reopen tasks.md to see completed tasks. Then in a separate terminal (Ctrl+Shift+`), test the CSV download by visiting /api/sessions/YOUR_CODE/results/export in the browser.
Generating downloadable files from web APIs is a common pattern. Key considerations:
- Content-Type: text/csv for CSV files
- Content-Disposition: attachment; filename="results.csv" to trigger the download

You just built features using the SpecKit workflow. Here's why this matters:
| Vibe Coding | Spec-Driven (SpecKit) |
|---|---|
| Requirements in chat history | Requirements in specs/ folder |
| Not reproducible | Fully reproducible |
| Can't be code-reviewed | PR-reviewable specs |
| No audit trail | Full git history |
| Works for prototypes | Works for production |