[Spec Kit] Initial commit — constitution, spec, plan, and tasks for Reaction Image Board v1

2026-05-02 15:56:39 +00:00
commit 691f7570fe
46 changed files with 6149 additions and 0 deletions

.specify/extensions.yml Normal file

@@ -0,0 +1,148 @@
installed: []
settings:
  auto_execute_hooks: true
hooks:
  before_constitution:
    - extension: git
      command: speckit.git.initialize
      enabled: true
      optional: false
      prompt: Execute speckit.git.initialize?
      description: Initialize Git repository before constitution setup
      condition: null
  before_specify:
    - extension: git
      command: speckit.git.feature
      enabled: true
      optional: false
      prompt: Execute speckit.git.feature?
      description: Create feature branch before specification
      condition: null
  before_clarify:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit outstanding changes before clarification?
      description: Auto-commit before spec clarification
      condition: null
  before_plan:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit outstanding changes before planning?
      description: Auto-commit before implementation planning
      condition: null
  before_tasks:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit outstanding changes before task generation?
      description: Auto-commit before task generation
      condition: null
  before_implement:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit outstanding changes before implementation?
      description: Auto-commit before implementation
      condition: null
  before_checklist:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit outstanding changes before checklist?
      description: Auto-commit before checklist generation
      condition: null
  before_analyze:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit outstanding changes before analysis?
      description: Auto-commit before analysis
      condition: null
  before_taskstoissues:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit outstanding changes before issue sync?
      description: Auto-commit before tasks-to-issues conversion
      condition: null
  after_constitution:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit constitution changes?
      description: Auto-commit after constitution update
      condition: null
  after_specify:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit specification changes?
      description: Auto-commit after specification
      condition: null
  after_clarify:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit clarification changes?
      description: Auto-commit after spec clarification
      condition: null
  after_plan:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit plan changes?
      description: Auto-commit after implementation planning
      condition: null
  after_tasks:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit task changes?
      description: Auto-commit after task generation
      condition: null
  after_implement:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit implementation changes?
      description: Auto-commit after implementation
      condition: null
  after_checklist:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit checklist changes?
      description: Auto-commit after checklist generation
      condition: null
  after_analyze:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit analysis results?
      description: Auto-commit after analysis
      condition: null
  after_taskstoissues:
    - extension: git
      command: speckit.git.commit
      enabled: true
      optional: true
      prompt: Commit after syncing issues?
      description: Auto-commit after tasks-to-issues conversion
      condition: null


@@ -0,0 +1,23 @@
{
  "schema_version": "1.0",
  "extensions": {
    "git": {
      "version": "1.0.0",
      "source": "local",
      "manifest_hash": "sha256:9731aa8143a72fbebfdb440f155038ab42642517c2b2bdbbf67c8fdbe076ed79",
      "enabled": true,
      "priority": 10,
      "registered_commands": {
        "claude": [
          "speckit.git.feature",
          "speckit.git.validate",
          "speckit.git.remote",
          "speckit.git.initialize",
          "speckit.git.commit"
        ]
      },
      "registered_skills": [],
      "installed_at": "2026-05-02T15:15:14.534434+00:00"
    }
  }
}


@@ -0,0 +1,100 @@
# Git Branching Workflow Extension
Git repository initialization, feature branch creation, numbering (sequential/timestamp), validation, remote detection, and auto-commit for Spec Kit.
## Overview
This extension provides Git operations as an optional, self-contained module. It manages:
- **Repository initialization** with configurable commit messages
- **Feature branch creation** with sequential (`001-feature-name`) or timestamp (`20260319-143022-feature-name`) numbering
- **Branch validation** to ensure branches follow naming conventions
- **Git remote detection** for GitHub integration (e.g., issue creation)
- **Auto-commit** after core commands (configurable per-command with custom messages)
## Commands
| Command | Description |
|---------|-------------|
| `speckit.git.initialize` | Initialize a Git repository with a configurable commit message |
| `speckit.git.feature` | Create a feature branch with sequential or timestamp numbering |
| `speckit.git.validate` | Validate current branch follows feature branch naming conventions |
| `speckit.git.remote` | Detect Git remote URL for GitHub integration |
| `speckit.git.commit` | Auto-commit changes (configurable per-command enable/disable and messages) |
## Hooks
| Event | Command | Optional | Description |
|-------|---------|----------|-------------|
| `before_constitution` | `speckit.git.initialize` | No | Init git repo before constitution |
| `before_specify` | `speckit.git.feature` | No | Create feature branch before specification |
| `before_clarify` | `speckit.git.commit` | Yes | Commit outstanding changes before clarification |
| `before_plan` | `speckit.git.commit` | Yes | Commit outstanding changes before planning |
| `before_tasks` | `speckit.git.commit` | Yes | Commit outstanding changes before task generation |
| `before_implement` | `speckit.git.commit` | Yes | Commit outstanding changes before implementation |
| `before_checklist` | `speckit.git.commit` | Yes | Commit outstanding changes before checklist |
| `before_analyze` | `speckit.git.commit` | Yes | Commit outstanding changes before analysis |
| `before_taskstoissues` | `speckit.git.commit` | Yes | Commit outstanding changes before issue sync |
| `after_constitution` | `speckit.git.commit` | Yes | Auto-commit after constitution update |
| `after_specify` | `speckit.git.commit` | Yes | Auto-commit after specification |
| `after_clarify` | `speckit.git.commit` | Yes | Auto-commit after clarification |
| `after_plan` | `speckit.git.commit` | Yes | Auto-commit after planning |
| `after_tasks` | `speckit.git.commit` | Yes | Auto-commit after task generation |
| `after_implement` | `speckit.git.commit` | Yes | Auto-commit after implementation |
| `after_checklist` | `speckit.git.commit` | Yes | Auto-commit after checklist |
| `after_analyze` | `speckit.git.commit` | Yes | Auto-commit after analysis |
| `after_taskstoissues` | `speckit.git.commit` | Yes | Auto-commit after issue sync |
## Configuration
Configuration is stored in `.specify/extensions/git/git-config.yml`:
```yaml
# Branch numbering strategy: "sequential" or "timestamp"
branch_numbering: sequential
# Custom commit message for git init
init_commit_message: "[Spec Kit] Initial commit"
# Auto-commit per command (all disabled by default)
# Example: enable auto-commit after specify
auto_commit:
  default: false
  after_specify:
    enabled: true
    message: "[Spec Kit] Add specification"
```
## Installation
```bash
# Install the bundled git extension (no network required)
specify extension add git
```
## Disabling
```bash
# Disable the git extension (spec creation continues without branching)
specify extension disable git
# Re-enable it
specify extension enable git
```
## Graceful Degradation
When Git is not installed or the directory is not a Git repository:
- Spec directories are still created under `specs/`
- Branch creation is skipped with a warning
- Branch validation is skipped with a warning
- Remote detection returns empty results
## Scripts
The extension bundles cross-platform scripts:
- `scripts/bash/create-new-feature.sh` — Bash implementation
- `scripts/bash/git-common.sh` — Shared Git utilities (Bash)
- `scripts/powershell/create-new-feature.ps1` — PowerShell implementation
- `scripts/powershell/git-common.ps1` — Shared Git utilities (PowerShell)


@@ -0,0 +1,48 @@
---
description: "Auto-commit changes after a Spec Kit command completes"
---
# Auto-Commit Changes
Automatically stage and commit all changes after a Spec Kit command completes.
## Behavior
This command is invoked as a hook after (or before) core commands. It:
1. Determines the event name from the hook context (e.g., if invoked as an `after_specify` hook, the event is `after_specify`; if `before_plan`, the event is `before_plan`)
2. Checks `.specify/extensions/git/git-config.yml` for the `auto_commit` section
3. Looks up the specific event key to see if auto-commit is enabled
4. Falls back to `auto_commit.default` if no event-specific key exists
5. Uses the per-command `message` if configured, otherwise a default message
6. If enabled and there are uncommitted changes, runs `git add .` + `git commit`
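The lookup order in steps 2-5 can be sketched as follows. This is a simplified, hypothetical parser (the bundled script walks the YAML line by line itself); `lookup_enabled` and the `grep -A2` approach are illustration only, not the shipped code.

```shell
#!/usr/bin/env bash
# Sketch: per-event "enabled" wins; otherwise fall back to auto_commit.default.
lookup_enabled() {
  local config="$1" event="$2" val
  # Grab the lines just after the event key and look for "enabled:"
  val=$(grep -A2 "^  ${event}:" "$config" | grep 'enabled:' | awk '{print $2}')
  if [ -n "$val" ]; then
    echo "$val"
  else
    grep '^  default:' "$config" | awk '{print $2}'
  fi
}

config=$(mktemp)
cat > "$config" <<'EOF'
auto_commit:
  default: false
  after_specify:
    enabled: true
    message: "[Spec Kit] Add specification"
EOF
lookup_enabled "$config" after_specify   # -> true  (event-specific key)
lookup_enabled "$config" after_plan      # -> false (falls back to default)
rm -f "$config"
```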
## Execution
Determine the event name from the hook that triggered this command, then run the script:
- **Bash**: `.specify/extensions/git/scripts/bash/auto-commit.sh <event_name>`
- **PowerShell**: `.specify/extensions/git/scripts/powershell/auto-commit.ps1 <event_name>`
Replace `<event_name>` with the actual hook event (e.g., `after_specify`, `before_plan`, `after_implement`).
## Configuration
In `.specify/extensions/git/git-config.yml`:
```yaml
auto_commit:
  default: false # Global toggle — set true to enable for all commands
  after_specify:
    enabled: true # Override per-command
    message: "[Spec Kit] Add specification"
  after_plan:
    enabled: false
    message: "[Spec Kit] Add implementation plan"
```
## Graceful Degradation
- If Git is not available or the current directory is not a repository: skips with a warning
- If no config file exists: skips (disabled by default)
- If no changes to commit: skips with a message


@@ -0,0 +1,67 @@
---
description: "Create a feature branch with sequential or timestamp numbering"
---
# Create Feature Branch
Create and switch to a new git feature branch for the given specification. This command handles **branch creation only** — the spec directory and files are created by the core `/speckit.specify` workflow.
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Environment Variable Override
If the user explicitly provided `GIT_BRANCH_NAME` (e.g., via environment variable, argument, or in their request), pass it through to the script by setting the `GIT_BRANCH_NAME` environment variable before invoking the script. When `GIT_BRANCH_NAME` is set:
- The script uses the exact value as the branch name, bypassing all prefix/suffix generation
- `--short-name`, `--number`, and `--timestamp` flags are ignored
- `FEATURE_NUM` is extracted from the name if it starts with a numeric prefix, otherwise set to the full branch name
## Prerequisites
- Verify Git is available by running `git rev-parse --is-inside-work-tree 2>/dev/null`
- If Git is not available, warn the user and skip branch creation
## Branch Numbering Mode
Determine the branch numbering strategy by checking configuration in this order:
1. Check `.specify/extensions/git/git-config.yml` for `branch_numbering` value
2. Check `.specify/init-options.json` for `branch_numbering` value (backward compatibility)
3. Default to `sequential` if neither exists
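The three-step resolution above can be sketched as a small helper. The file paths are the ones documented here; the parsing (plain `grep`/`sed` rather than a real YAML/JSON parser) is a simplification for illustration.

```shell
#!/usr/bin/env bash
# Resolve branch numbering: extension config, then init-options.json, then default.
resolve_numbering() {
  local root="$1" val=""
  if [ -f "$root/.specify/extensions/git/git-config.yml" ]; then
    val=$(grep -E '^branch_numbering:' "$root/.specify/extensions/git/git-config.yml" | awk '{print $2}')
  fi
  if [ -z "$val" ] && [ -f "$root/.specify/init-options.json" ]; then
    val=$(grep -o '"branch_numbering"[^,}]*' "$root/.specify/init-options.json" \
      | sed 's/.*:[[:space:]]*"\([^"]*\)".*/\1/')
  fi
  echo "${val:-sequential}"
}

root=$(mktemp -d)
resolve_numbering "$root"   # no config present -> sequential
mkdir -p "$root/.specify/extensions/git"
echo 'branch_numbering: timestamp' > "$root/.specify/extensions/git/git-config.yml"
resolve_numbering "$root"   # -> timestamp
rm -rf "$root"
```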
## Execution
Generate a concise short name (2-4 words) for the branch:
- Analyze the feature description and extract the most meaningful keywords
- Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
- Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
Run the appropriate script based on your platform:
- **Bash**: `.specify/extensions/git/scripts/bash/create-new-feature.sh --json --short-name "<short-name>" "<feature description>"`
- **Bash (timestamp)**: `.specify/extensions/git/scripts/bash/create-new-feature.sh --json --timestamp --short-name "<short-name>" "<feature description>"`
- **PowerShell**: `.specify/extensions/git/scripts/powershell/create-new-feature.ps1 -Json -ShortName "<short-name>" "<feature description>"`
- **PowerShell (timestamp)**: `.specify/extensions/git/scripts/powershell/create-new-feature.ps1 -Json -Timestamp -ShortName "<short-name>" "<feature description>"`
**IMPORTANT**:
- Do NOT pass `--number` — the script determines the correct next number automatically
- Always include the JSON flag (`--json` for Bash, `-Json` for PowerShell) so the output can be parsed reliably
- Run this script at most once per feature
- The JSON output will contain `BRANCH_NAME` and `FEATURE_NUM`
## Graceful Degradation
If Git is not installed or the current directory is not a Git repository:
- Branch creation is skipped with a warning: `[specify] Warning: Git repository not detected; skipped branch creation`
- The script still outputs `BRANCH_NAME` and `FEATURE_NUM` so the caller can reference them
## Output
The script outputs JSON with:
- `BRANCH_NAME`: The branch name (e.g., `003-user-auth` or `20260319-143022-user-auth`)
- `FEATURE_NUM`: The numeric or timestamp prefix used
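A caller can pull the two documented fields out of the JSON output without `jq`; the sample JSON below is a made-up value in the documented shape, and the `sed` extraction is one possible approach, not part of the extension.

```shell
#!/usr/bin/env bash
# Extract BRANCH_NAME and FEATURE_NUM from the script's --json output.
json='{"BRANCH_NAME":"003-user-auth","FEATURE_NUM":"003"}'
branch=$(echo "$json" | sed -n 's/.*"BRANCH_NAME"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
num=$(echo "$json" | sed -n 's/.*"FEATURE_NUM"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
echo "$branch"   # 003-user-auth
echo "$num"      # 003
```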


@@ -0,0 +1,49 @@
---
description: "Initialize a Git repository with an initial commit"
---
# Initialize Git Repository
Initialize a Git repository in the current project directory if one does not already exist.
## Execution
Run the appropriate script from the project root:
- **Bash**: `.specify/extensions/git/scripts/bash/initialize-repo.sh`
- **PowerShell**: `.specify/extensions/git/scripts/powershell/initialize-repo.ps1`
If the extension scripts are not found, fall back to:
- **Bash**: `git init && git add . && git commit -m "Initial commit from Specify template"`
- **PowerShell**: `git init; git add .; git commit -m "Initial commit from Specify template"`
The script handles all checks internally:
- Skips if Git is not available
- Skips if already inside a Git repository
- Runs `git init`, `git add .`, and `git commit` with an initial commit message
## Customization
Replace the script to add project-specific Git initialization steps:
- Custom `.gitignore` templates
- Default branch naming (`git config init.defaultBranch`)
- Git LFS setup
- Git hooks installation
- Commit signing configuration
- Git Flow initialization
## Output
On success:
- `✓ Git repository initialized`
## Graceful Degradation
If Git is not installed:
- Warn the user
- Skip repository initialization
- The project continues to function without Git (specs can still be created under `specs/`)
If Git is installed but `git init`, `git add .`, or `git commit` fails:
- Surface the error to the user
- Stop this command rather than continuing with a partially initialized repository


@@ -0,0 +1,45 @@
---
description: "Detect Git remote URL for GitHub integration"
---
# Detect Git Remote URL
Detect the Git remote URL for integration with GitHub services (e.g., issue creation).
## Prerequisites
- Check if Git is available by running `git rev-parse --is-inside-work-tree 2>/dev/null`
- If Git is not available, output a warning and return empty:
```
[specify] Warning: Git repository not detected; cannot determine remote URL
```
## Execution
Run the following command to get the remote URL:
```bash
git config --get remote.origin.url
```
## Output
Parse the remote URL and determine:
1. **Repository owner**: Extract from the URL (e.g., `github` from `https://github.com/github/spec-kit.git`)
2. **Repository name**: Extract from the URL (e.g., `spec-kit` from `https://github.com/github/spec-kit.git`)
3. **Is GitHub**: Whether the remote points to a GitHub repository
Supported URL formats:
- HTTPS: `https://github.com/<owner>/<repo>.git`
- SSH: `git@github.com:<owner>/<repo>.git`
> [!CAUTION]
> ONLY report a GitHub repository if the remote URL actually points to github.com.
> Do NOT assume the remote is GitHub if the URL format doesn't match.
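Parsing the two supported URL shapes into owner/repo can be sketched as below. `parse_remote` is a hypothetical helper for illustration; note how a non-github.com remote is reported as not GitHub, per the caution above.

```shell
#!/usr/bin/env bash
# Sketch: derive owner, repo, and is-GitHub from a remote URL.
parse_remote() {
  local url="$1" path
  case "$url" in
    https://github.com/*/*|git@github.com:*/*)
      path="${url#https://github.com/}"   # strip HTTPS prefix (no-op for SSH)
      path="${path#git@github.com:}"      # strip SSH prefix (no-op for HTTPS)
      path="${path%.git}"                 # drop optional .git suffix
      echo "owner=${path%%/*} repo=${path#*/} github=true"
      ;;
    *)
      echo "github=false"
      ;;
  esac
}

parse_remote "https://github.com/github/spec-kit.git"  # owner=github repo=spec-kit github=true
parse_remote "git@github.com:github/spec-kit.git"      # owner=github repo=spec-kit github=true
parse_remote "https://gitlab.com/acme/tool.git"        # github=false
```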
## Graceful Degradation
If Git is not installed, the directory is not a Git repository, or no remote is configured:
- Return an empty result
- Do NOT error — other workflows should continue without Git remote information


@@ -0,0 +1,49 @@
---
description: "Validate current branch follows feature branch naming conventions"
---
# Validate Feature Branch
Validate that the current Git branch follows the expected feature branch naming conventions.
## Prerequisites
- Check if Git is available by running `git rev-parse --is-inside-work-tree 2>/dev/null`
- If Git is not available, output a warning and skip validation:
```
[specify] Warning: Git repository not detected; skipped branch validation
```
## Validation Rules
Get the current branch name:
```bash
git rev-parse --abbrev-ref HEAD
```
The branch name must match one of these patterns:
1. **Sequential**: `^[0-9]{3,}-` (e.g., `001-feature-name`, `042-fix-bug`, `1000-big-feature`)
2. **Timestamp**: `^[0-9]{8}-[0-9]{6}-` (e.g., `20260319-143022-feature-name`)
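The two patterns above translate directly into `grep -E` checks; `is_feature_branch` is a minimal illustrative wrapper, not the shipped validator.

```shell
#!/usr/bin/env bash
# Sketch: accept a branch name matching either documented pattern.
is_feature_branch() {
  echo "$1" | grep -Eq '^[0-9]{3,}-' || echo "$1" | grep -Eq '^[0-9]{8}-[0-9]{6}-'
}

is_feature_branch "001-feature-name" && echo yes           # yes (sequential)
is_feature_branch "20260319-143022-user-auth" && echo yes  # yes (timestamp)
is_feature_branch "main" || echo no                        # no
```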
## Execution
If on a feature branch (matches either pattern):
- Output: `✓ On feature branch: <branch-name>`
- Check if the corresponding spec directory exists under `specs/`:
- For sequential branches, look for `specs/<prefix>-*` where prefix matches the numeric portion
- For timestamp branches, look for `specs/<prefix>-*` where prefix matches the `YYYYMMDD-HHMMSS` portion
- If spec directory exists: `✓ Spec directory found: <path>`
- If spec directory missing: `⚠ No spec directory found for prefix <prefix>`
If NOT on a feature branch:
- Output: `✗ Not on a feature branch. Current branch: <branch-name>`
- Output: `Feature branches should be named like: 001-feature-name or 20260319-143022-feature-name`
## Graceful Degradation
If Git is not installed or the directory is not a Git repository:
- Check the `SPECIFY_FEATURE` environment variable as a fallback
- If set, validate that value against the naming patterns
- If not set, skip validation with a warning


@@ -0,0 +1,62 @@
# Git Branching Workflow Extension Configuration
# Copied to .specify/extensions/git/git-config.yml on install
# Branch numbering strategy: "sequential" (001, 002, ...) or "timestamp" (YYYYMMDD-HHMMSS)
branch_numbering: sequential
# Commit message used by `git commit` during repository initialization
init_commit_message: "[Spec Kit] Initial commit"
# Auto-commit before/after core commands.
# Set "default" to enable for all commands, then override per-command.
# Each key can be true/false. Message is customizable per-command.
auto_commit:
  default: false
  before_clarify:
    enabled: false
    message: "[Spec Kit] Save progress before clarification"
  before_plan:
    enabled: false
    message: "[Spec Kit] Save progress before planning"
  before_tasks:
    enabled: false
    message: "[Spec Kit] Save progress before task generation"
  before_implement:
    enabled: false
    message: "[Spec Kit] Save progress before implementation"
  before_checklist:
    enabled: false
    message: "[Spec Kit] Save progress before checklist"
  before_analyze:
    enabled: false
    message: "[Spec Kit] Save progress before analysis"
  before_taskstoissues:
    enabled: false
    message: "[Spec Kit] Save progress before issue sync"
  after_constitution:
    enabled: false
    message: "[Spec Kit] Add project constitution"
  after_specify:
    enabled: false
    message: "[Spec Kit] Add specification"
  after_clarify:
    enabled: false
    message: "[Spec Kit] Clarify specification"
  after_plan:
    enabled: false
    message: "[Spec Kit] Add implementation plan"
  after_tasks:
    enabled: false
    message: "[Spec Kit] Add tasks"
  after_implement:
    enabled: false
    message: "[Spec Kit] Implementation progress"
  after_checklist:
    enabled: false
    message: "[Spec Kit] Add checklist"
  after_analyze:
    enabled: false
    message: "[Spec Kit] Add analysis report"
  after_taskstoissues:
    enabled: false
    message: "[Spec Kit] Sync tasks to issues"


@@ -0,0 +1,140 @@
schema_version: "1.0"
extension:
  id: git
  name: "Git Branching Workflow"
  version: "1.0.0"
  description: "Feature branch creation, numbering (sequential/timestamp), validation, and Git remote detection"
  author: spec-kit-core
  repository: https://github.com/github/spec-kit
  license: MIT
requires:
  speckit_version: ">=0.2.0"
  tools:
    - name: git
      required: false
provides:
  commands:
    - name: speckit.git.feature
      file: commands/speckit.git.feature.md
      description: "Create a feature branch with sequential or timestamp numbering"
    - name: speckit.git.validate
      file: commands/speckit.git.validate.md
      description: "Validate current branch follows feature branch naming conventions"
    - name: speckit.git.remote
      file: commands/speckit.git.remote.md
      description: "Detect Git remote URL for GitHub integration"
    - name: speckit.git.initialize
      file: commands/speckit.git.initialize.md
      description: "Initialize a Git repository with an initial commit"
    - name: speckit.git.commit
      file: commands/speckit.git.commit.md
      description: "Auto-commit changes after a Spec Kit command completes"
  config:
    - name: "git-config.yml"
      template: "config-template.yml"
      description: "Git branching configuration"
      required: false
hooks:
  before_constitution:
    command: speckit.git.initialize
    optional: false
    description: "Initialize Git repository before constitution setup"
  before_specify:
    command: speckit.git.feature
    optional: false
    description: "Create feature branch before specification"
  before_clarify:
    command: speckit.git.commit
    optional: true
    prompt: "Commit outstanding changes before clarification?"
    description: "Auto-commit before spec clarification"
  before_plan:
    command: speckit.git.commit
    optional: true
    prompt: "Commit outstanding changes before planning?"
    description: "Auto-commit before implementation planning"
  before_tasks:
    command: speckit.git.commit
    optional: true
    prompt: "Commit outstanding changes before task generation?"
    description: "Auto-commit before task generation"
  before_implement:
    command: speckit.git.commit
    optional: true
    prompt: "Commit outstanding changes before implementation?"
    description: "Auto-commit before implementation"
  before_checklist:
    command: speckit.git.commit
    optional: true
    prompt: "Commit outstanding changes before checklist?"
    description: "Auto-commit before checklist generation"
  before_analyze:
    command: speckit.git.commit
    optional: true
    prompt: "Commit outstanding changes before analysis?"
    description: "Auto-commit before analysis"
  before_taskstoissues:
    command: speckit.git.commit
    optional: true
    prompt: "Commit outstanding changes before issue sync?"
    description: "Auto-commit before tasks-to-issues conversion"
  after_constitution:
    command: speckit.git.commit
    optional: true
    prompt: "Commit constitution changes?"
    description: "Auto-commit after constitution update"
  after_specify:
    command: speckit.git.commit
    optional: true
    prompt: "Commit specification changes?"
    description: "Auto-commit after specification"
  after_clarify:
    command: speckit.git.commit
    optional: true
    prompt: "Commit clarification changes?"
    description: "Auto-commit after spec clarification"
  after_plan:
    command: speckit.git.commit
    optional: true
    prompt: "Commit plan changes?"
    description: "Auto-commit after implementation planning"
  after_tasks:
    command: speckit.git.commit
    optional: true
    prompt: "Commit task changes?"
    description: "Auto-commit after task generation"
  after_implement:
    command: speckit.git.commit
    optional: true
    prompt: "Commit implementation changes?"
    description: "Auto-commit after implementation"
  after_checklist:
    command: speckit.git.commit
    optional: true
    prompt: "Commit checklist changes?"
    description: "Auto-commit after checklist generation"
  after_analyze:
    command: speckit.git.commit
    optional: true
    prompt: "Commit analysis results?"
    description: "Auto-commit after analysis"
  after_taskstoissues:
    command: speckit.git.commit
    optional: true
    prompt: "Commit after syncing issues?"
    description: "Auto-commit after tasks-to-issues conversion"
tags:
  - "git"
  - "branching"
  - "workflow"
config:
  defaults:
    branch_numbering: sequential
    init_commit_message: "[Spec Kit] Initial commit"


@@ -0,0 +1,62 @@
# Git Branching Workflow Extension Configuration
# Copied to .specify/extensions/git/git-config.yml on install
# Branch numbering strategy: "sequential" (001, 002, ...) or "timestamp" (YYYYMMDD-HHMMSS)
branch_numbering: sequential
# Commit message used by `git commit` during repository initialization
init_commit_message: "[Spec Kit] Initial commit"
# Auto-commit before/after core commands.
# Set "default" to enable for all commands, then override per-command.
# Each key can be true/false. Message is customizable per-command.
auto_commit:
  default: false
  before_clarify:
    enabled: false
    message: "[Spec Kit] Save progress before clarification"
  before_plan:
    enabled: false
    message: "[Spec Kit] Save progress before planning"
  before_tasks:
    enabled: false
    message: "[Spec Kit] Save progress before task generation"
  before_implement:
    enabled: false
    message: "[Spec Kit] Save progress before implementation"
  before_checklist:
    enabled: false
    message: "[Spec Kit] Save progress before checklist"
  before_analyze:
    enabled: false
    message: "[Spec Kit] Save progress before analysis"
  before_taskstoissues:
    enabled: false
    message: "[Spec Kit] Save progress before issue sync"
  after_constitution:
    enabled: false
    message: "[Spec Kit] Add project constitution"
  after_specify:
    enabled: false
    message: "[Spec Kit] Add specification"
  after_clarify:
    enabled: false
    message: "[Spec Kit] Clarify specification"
  after_plan:
    enabled: false
    message: "[Spec Kit] Add implementation plan"
  after_tasks:
    enabled: false
    message: "[Spec Kit] Add tasks"
  after_implement:
    enabled: false
    message: "[Spec Kit] Implementation progress"
  after_checklist:
    enabled: false
    message: "[Spec Kit] Add checklist"
  after_analyze:
    enabled: false
    message: "[Spec Kit] Add analysis report"
  after_taskstoissues:
    enabled: false
    message: "[Spec Kit] Sync tasks to issues"


@@ -0,0 +1,140 @@
#!/usr/bin/env bash
# Git extension: auto-commit.sh
# Automatically commit changes after a Spec Kit command completes.
# Checks per-command config keys in git-config.yml before committing.
#
# Usage: auto-commit.sh <event_name>
# e.g.:  auto-commit.sh after_specify
set -e

EVENT_NAME="${1:-}"
if [ -z "$EVENT_NAME" ]; then
    echo "Usage: $0 <event_name>" >&2
    exit 1
fi

SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

_find_project_root() {
    local dir="$1"
    while [ "$dir" != "/" ]; do
        if [ -d "$dir/.specify" ] || [ -d "$dir/.git" ]; then
            echo "$dir"
            return 0
        fi
        dir="$(dirname "$dir")"
    done
    return 1
}

REPO_ROOT=$(_find_project_root "$SCRIPT_DIR") || REPO_ROOT="$(pwd)"
cd "$REPO_ROOT"

# Check if git is available
if ! command -v git >/dev/null 2>&1; then
    echo "[specify] Warning: Git not found; skipped auto-commit" >&2
    exit 0
fi
if ! git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    echo "[specify] Warning: Not a Git repository; skipped auto-commit" >&2
    exit 0
fi

# Read per-command config from git-config.yml
_config_file="$REPO_ROOT/.specify/extensions/git/git-config.yml"
_enabled=false
_commit_msg=""
if [ -f "$_config_file" ]; then
    # Parse the auto_commit section for this event.
    # Look for auto_commit.<event_name>.enabled and .message
    # Also check auto_commit.default as fallback.
    _in_auto_commit=false
    _in_event=false
    _default_enabled=false
    while IFS= read -r _line; do
        # Detect auto_commit: section
        if echo "$_line" | grep -q '^auto_commit:'; then
            _in_auto_commit=true
            _in_event=false
            continue
        fi
        # Exit auto_commit section on next top-level key
        if $_in_auto_commit && echo "$_line" | grep -Eq '^[a-z]'; then
            break
        fi
        if $_in_auto_commit; then
            # Check default key
            if echo "$_line" | grep -Eq "^[[:space:]]+default:[[:space:]]"; then
                _val=$(echo "$_line" | sed 's/^[^:]*:[[:space:]]*//' | tr -d '[:space:]' | tr '[:upper:]' '[:lower:]')
                [ "$_val" = "true" ] && _default_enabled=true
            fi
            # Detect our event subsection
            if echo "$_line" | grep -Eq "^[[:space:]]+${EVENT_NAME}:"; then
                _in_event=true
                continue
            fi
            # Inside our event subsection
            if $_in_event; then
                # Exit on next sibling key (same indent level as event name)
                if echo "$_line" | grep -Eq '^[[:space:]]{2}[a-z]' && ! echo "$_line" | grep -Eq '^[[:space:]]{4}'; then
                    _in_event=false
                    continue
                fi
                if echo "$_line" | grep -Eq '[[:space:]]+enabled:'; then
                    _val=$(echo "$_line" | sed 's/^[^:]*:[[:space:]]*//' | tr -d '[:space:]' | tr '[:upper:]' '[:lower:]')
                    [ "$_val" = "true" ] && _enabled=true
                    [ "$_val" = "false" ] && _enabled=false
                fi
                if echo "$_line" | grep -Eq '[[:space:]]+message:'; then
                    _commit_msg=$(echo "$_line" | sed 's/^[^:]*:[[:space:]]*//' | sed 's/^["'\'']//' | sed 's/["'\'']*$//')
                fi
            fi
        fi
    done < "$_config_file"
    # If event-specific key not found, use default
    if [ "$_enabled" = "false" ] && [ "$_default_enabled" = "true" ]; then
        # Only use default if the event wasn't explicitly set to false
        # Check if event section existed at all
        if ! grep -q "^[[:space:]]*${EVENT_NAME}:" "$_config_file" 2>/dev/null; then
            _enabled=true
        fi
    fi
else
    # No config file — auto-commit disabled by default
    exit 0
fi

if [ "$_enabled" != "true" ]; then
    exit 0
fi

# Check if there are changes to commit
if git diff --quiet HEAD 2>/dev/null && git diff --cached --quiet 2>/dev/null && [ -z "$(git ls-files --others --exclude-standard 2>/dev/null)" ]; then
    echo "[specify] No changes to commit after $EVENT_NAME" >&2
    exit 0
fi

# Derive a human-readable command name from the event
# e.g., after_specify -> specify, before_plan -> plan
_command_name=$(echo "$EVENT_NAME" | sed 's/^after_//' | sed 's/^before_//')
_phase=$(echo "$EVENT_NAME" | grep -q '^before_' && echo 'before' || echo 'after')

# Use custom message if configured, otherwise default
if [ -z "$_commit_msg" ]; then
    _commit_msg="[Spec Kit] Auto-commit ${_phase} ${_command_name}"
fi

# Stage and commit
_git_out=$(git add . 2>&1) || { echo "[specify] Error: git add failed: $_git_out" >&2; exit 1; }
_git_out=$(git commit -q -m "$_commit_msg" 2>&1) || { echo "[specify] Error: git commit failed: $_git_out" >&2; exit 1; }
echo "[OK] Changes committed ${_phase} ${_command_name}" >&2


@@ -0,0 +1,453 @@
#!/usr/bin/env bash
# Git extension: create-new-feature.sh
# Adapted from core scripts/bash/create-new-feature.sh for extension layout.
# Sources common.sh from the project's installed scripts, falling back to
# git-common.sh for minimal git helpers.
set -e

JSON_MODE=false
DRY_RUN=false
ALLOW_EXISTING=false
SHORT_NAME=""
BRANCH_NUMBER=""
USE_TIMESTAMP=false
ARGS=()
i=1
while [ $i -le $# ]; do
    arg="${!i}"
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --dry-run)
            DRY_RUN=true
            ;;
        --allow-existing-branch)
            ALLOW_EXISTING=true
            ;;
        --short-name)
            if [ $((i + 1)) -gt $# ]; then
                echo 'Error: --short-name requires a value' >&2
                exit 1
            fi
            i=$((i + 1))
            next_arg="${!i}"
            if [[ "$next_arg" == --* ]]; then
                echo 'Error: --short-name requires a value' >&2
                exit 1
            fi
            SHORT_NAME="$next_arg"
            ;;
        --number)
            if [ $((i + 1)) -gt $# ]; then
                echo 'Error: --number requires a value' >&2
                exit 1
            fi
            i=$((i + 1))
            next_arg="${!i}"
            if [[ "$next_arg" == --* ]]; then
                echo 'Error: --number requires a value' >&2
                exit 1
            fi
            BRANCH_NUMBER="$next_arg"
            if [[ ! "$BRANCH_NUMBER" =~ ^[0-9]+$ ]]; then
                echo 'Error: --number must be a non-negative integer' >&2
                exit 1
            fi
            ;;
        --timestamp)
            USE_TIMESTAMP=true
            ;;
        --help|-h)
            echo "Usage: $0 [--json] [--dry-run] [--allow-existing-branch] [--short-name <name>] [--number N] [--timestamp] <feature_description>"
            echo ""
            echo "Options:"
            echo "  --json                   Output in JSON format"
            echo "  --dry-run                Compute branch name without creating the branch"
            echo "  --allow-existing-branch  Switch to branch if it already exists instead of failing"
            echo "  --short-name <name>      Provide a custom short name (2-4 words) for the branch"
            echo "  --number N               Specify branch number manually (overrides auto-detection)"
            echo "  --timestamp              Use timestamp prefix (YYYYMMDD-HHMMSS) instead of sequential numbering"
            echo "  --help, -h               Show this help message"
            echo ""
            echo "Environment variables:"
            echo "  GIT_BRANCH_NAME          Use this exact branch name, bypassing all prefix/suffix generation"
            echo ""
            echo "Examples:"
            echo "  $0 'Add user authentication system' --short-name 'user-auth'"
            echo "  $0 'Implement OAuth2 integration for API' --number 5"
            echo "  $0 --timestamp --short-name 'user-auth' 'Add user authentication'"
            echo "  GIT_BRANCH_NAME=my-branch $0 'feature description'"
            exit 0
            ;;
        *)
            ARGS+=("$arg")
            ;;
    esac
    i=$((i + 1))
done

FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
    echo "Usage: $0 [--json] [--dry-run] [--allow-existing-branch] [--short-name <name>] [--number N] [--timestamp] <feature_description>" >&2
    exit 1
fi

# Trim whitespace and validate description is not empty
FEATURE_DESCRIPTION=$(echo "$FEATURE_DESCRIPTION" | sed -E 's/^[[:space:]]+|[[:space:]]+$//g')
if [ -z "$FEATURE_DESCRIPTION" ]; then
    echo "Error: Feature description cannot be empty or contain only whitespace" >&2
    exit 1
fi

# Function to get highest number from specs directory
get_highest_from_specs() {
    local specs_dir="$1"
    local highest=0
    if [ -d "$specs_dir" ]; then
        for dir in "$specs_dir"/*; do
            [ -d "$dir" ] || continue
            dirname=$(basename "$dir")
            # Match sequential prefixes (>=3 digits), but skip timestamp dirs.
            if echo "$dirname" | grep -Eq '^[0-9]{3,}-' && ! echo "$dirname" | grep -Eq '^[0-9]{8}-[0-9]{6}-'; then
                number=$(echo "$dirname" | grep -Eo '^[0-9]+')
                number=$((10#$number))
                if [ "$number" -gt "$highest" ]; then
                    highest=$number
                fi
            fi
        done
    fi
    echo "$highest"
}

# Function to get highest number from git branches
get_highest_from_branches() {
    git branch -a 2>/dev/null | sed 's/^[* ]*//; s|^remotes/[^/]*/||' | _extract_highest_number
}
# Extract the highest sequential feature number from a list of ref names (one per line).
_extract_highest_number() {
local highest=0
while IFS= read -r name; do
[ -z "$name" ] && continue
if echo "$name" | grep -Eq '^[0-9]{3,}-' && ! echo "$name" | grep -Eq '^[0-9]{8}-[0-9]{6}-'; then
number=$(echo "$name" | grep -Eo '^[0-9]+' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
fi
done
echo "$highest"
}
# Function to get highest number from remote branches without fetching (side-effect-free)
get_highest_from_remote_refs() {
local highest=0
for remote in $(git remote 2>/dev/null); do
local remote_highest
remote_highest=$(GIT_TERMINAL_PROMPT=0 git ls-remote --heads "$remote" 2>/dev/null | sed 's|.*refs/heads/||' | _extract_highest_number)
if [ "$remote_highest" -gt "$highest" ]; then
highest=$remote_highest
fi
done
echo "$highest"
}
# Function to check existing branches and return next available number.
check_existing_branches() {
local specs_dir="$1"
local skip_fetch="${2:-false}"
if [ "$skip_fetch" = true ]; then
local highest_remote=$(get_highest_from_remote_refs)
local highest_branch=$(get_highest_from_branches)
if [ "$highest_remote" -gt "$highest_branch" ]; then
highest_branch=$highest_remote
fi
else
git fetch --all --prune >/dev/null 2>&1 || true
local highest_branch=$(get_highest_from_branches)
fi
local highest_spec=$(get_highest_from_specs "$specs_dir")
local max_num=$highest_branch
if [ "$highest_spec" -gt "$max_num" ]; then
max_num=$highest_spec
fi
echo $((max_num + 1))
}
# Function to clean and format a branch name
clean_branch_name() {
local name="$1"
echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}
# ---------------------------------------------------------------------------
# Source common.sh for resolve_template, json_escape, get_repo_root, has_git.
#
# Search locations in priority order:
# 1. .specify/scripts/bash/common.sh under the project root (installed project)
# 2. scripts/bash/common.sh under the project root (source checkout fallback)
# 3. git-common.sh next to this script (minimal fallback — lacks resolve_template)
# ---------------------------------------------------------------------------
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Find project root by walking up from the script location
_find_project_root() {
local dir="$1"
while [ "$dir" != "/" ]; do
if [ -d "$dir/.specify" ] || [ -d "$dir/.git" ]; then
echo "$dir"
return 0
fi
dir="$(dirname "$dir")"
done
return 1
}
_common_loaded=false
_PROJECT_ROOT=$(_find_project_root "$SCRIPT_DIR") || true
if [ -n "$_PROJECT_ROOT" ] && [ -f "$_PROJECT_ROOT/.specify/scripts/bash/common.sh" ]; then
source "$_PROJECT_ROOT/.specify/scripts/bash/common.sh"
_common_loaded=true
elif [ -n "$_PROJECT_ROOT" ] && [ -f "$_PROJECT_ROOT/scripts/bash/common.sh" ]; then
source "$_PROJECT_ROOT/scripts/bash/common.sh"
_common_loaded=true
elif [ -f "$SCRIPT_DIR/git-common.sh" ]; then
source "$SCRIPT_DIR/git-common.sh"
_common_loaded=true
fi
if [ "$_common_loaded" != "true" ]; then
echo "Error: Could not locate common.sh or git-common.sh. Please ensure the Specify core scripts are installed." >&2
exit 1
fi
# Resolve repository root
if type get_repo_root >/dev/null 2>&1; then
REPO_ROOT=$(get_repo_root)
elif git rev-parse --show-toplevel >/dev/null 2>&1; then
REPO_ROOT=$(git rev-parse --show-toplevel)
elif [ -n "$_PROJECT_ROOT" ]; then
REPO_ROOT="$_PROJECT_ROOT"
else
echo "Error: Could not determine repository root." >&2
exit 1
fi
# Check if git is available at this repo root
if type has_git >/dev/null 2>&1; then
if has_git "$REPO_ROOT"; then
HAS_GIT=true
else
HAS_GIT=false
fi
elif git -C "$REPO_ROOT" rev-parse --is-inside-work-tree >/dev/null 2>&1; then
HAS_GIT=true
else
HAS_GIT=false
fi
cd "$REPO_ROOT"
SPECS_DIR="$REPO_ROOT/specs"
# Function to generate branch name with stop word filtering
generate_branch_name() {
local description="$1"
local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
local meaningful_words=()
for word in $clean_name; do
[ -z "$word" ] && continue
if ! echo "$word" | grep -qiE "$stop_words"; then
if [ ${#word} -ge 3 ]; then
meaningful_words+=("$word")
elif echo "$description" | grep -qw -- "${word^^}"; then
meaningful_words+=("$word")
fi
fi
done
if [ ${#meaningful_words[@]} -gt 0 ]; then
local max_words=3
if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
local result=""
local count=0
for word in "${meaningful_words[@]}"; do
if [ $count -ge $max_words ]; then break; fi
if [ -n "$result" ]; then result="$result-"; fi
result="$result$word"
count=$((count + 1))
done
echo "$result"
else
local cleaned=$(clean_branch_name "$description")
echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
fi
}
# Check for GIT_BRANCH_NAME env var override (exact branch name, no prefix/suffix)
if [ -n "${GIT_BRANCH_NAME:-}" ]; then
BRANCH_NAME="$GIT_BRANCH_NAME"
# Extract FEATURE_NUM from the branch name if it starts with a numeric prefix
# Check timestamp pattern first (YYYYMMDD-HHMMSS-) since it also matches the simpler ^[0-9]+ pattern
if echo "$BRANCH_NAME" | grep -Eq '^[0-9]{8}-[0-9]{6}-'; then
FEATURE_NUM=$(echo "$BRANCH_NAME" | grep -Eo '^[0-9]{8}-[0-9]{6}')
BRANCH_SUFFIX="${BRANCH_NAME#${FEATURE_NUM}-}"
elif echo "$BRANCH_NAME" | grep -Eq '^[0-9]+-'; then
FEATURE_NUM=$(echo "$BRANCH_NAME" | grep -Eo '^[0-9]+')
BRANCH_SUFFIX="${BRANCH_NAME#${FEATURE_NUM}-}"
else
FEATURE_NUM="$BRANCH_NAME"
BRANCH_SUFFIX="$BRANCH_NAME"
fi
else
# Generate branch name
if [ -n "$SHORT_NAME" ]; then
BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
else
BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi
# Warn if --number and --timestamp are both specified
if [ "$USE_TIMESTAMP" = true ] && [ -n "$BRANCH_NUMBER" ]; then
>&2 echo "[specify] Warning: --number is ignored when --timestamp is used"
BRANCH_NUMBER=""
fi
# Determine branch prefix
if [ "$USE_TIMESTAMP" = true ]; then
FEATURE_NUM=$(date +%Y%m%d-%H%M%S)
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
else
if [ -z "$BRANCH_NUMBER" ]; then
if [ "$DRY_RUN" = true ] && [ "$HAS_GIT" = true ]; then
BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR" true)
elif [ "$DRY_RUN" = true ]; then
HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
BRANCH_NUMBER=$((HIGHEST + 1))
elif [ "$HAS_GIT" = true ]; then
BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
else
HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
BRANCH_NUMBER=$((HIGHEST + 1))
fi
fi
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
fi
fi
# GitHub enforces a 244-byte limit on branch names
MAX_BRANCH_LENGTH=244
_byte_length() { printf '%s' "$1" | LC_ALL=C wc -c | tr -d ' '; }
BRANCH_BYTE_LEN=$(_byte_length "$BRANCH_NAME")
if [ -n "${GIT_BRANCH_NAME:-}" ] && [ "$BRANCH_BYTE_LEN" -gt $MAX_BRANCH_LENGTH ]; then
>&2 echo "Error: GIT_BRANCH_NAME must be 244 bytes or fewer in UTF-8. Provided value is ${BRANCH_BYTE_LEN} bytes."
exit 1
elif [ "$BRANCH_BYTE_LEN" -gt $MAX_BRANCH_LENGTH ]; then
PREFIX_LENGTH=$(( ${#FEATURE_NUM} + 1 ))
MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - PREFIX_LENGTH))
TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME ($(_byte_length "$ORIGINAL_BRANCH_NAME") bytes)"
>&2 echo "[specify] Truncated to: $BRANCH_NAME ($(_byte_length "$BRANCH_NAME") bytes)"
fi
if [ "$DRY_RUN" != true ]; then
if [ "$HAS_GIT" = true ]; then
branch_create_error=""
if ! branch_create_error=$(git checkout -q -b "$BRANCH_NAME" 2>&1); then
current_branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || true)"
if git branch --list "$BRANCH_NAME" | grep -q .; then
if [ "$ALLOW_EXISTING" = true ]; then
if [ "$current_branch" = "$BRANCH_NAME" ]; then
:
elif ! switch_branch_error=$(git checkout -q "$BRANCH_NAME" 2>&1); then
>&2 echo "Error: Failed to switch to existing branch '$BRANCH_NAME'. Please resolve any local changes or conflicts and try again."
if [ -n "$switch_branch_error" ]; then
>&2 printf '%s\n' "$switch_branch_error"
fi
exit 1
fi
elif [ "$USE_TIMESTAMP" = true ]; then
>&2 echo "Error: Branch '$BRANCH_NAME' already exists. Rerun to get a new timestamp or use a different --short-name."
exit 1
else
>&2 echo "Error: Branch '$BRANCH_NAME' already exists. Please use a different feature name or specify a different number with --number."
exit 1
fi
else
>&2 echo "Error: Failed to create git branch '$BRANCH_NAME'."
if [ -n "$branch_create_error" ]; then
>&2 printf '%s\n' "$branch_create_error"
else
>&2 echo "Please check your git configuration and try again."
fi
exit 1
fi
fi
else
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi
printf '# To persist: export SPECIFY_FEATURE=%q\n' "$BRANCH_NAME" >&2
fi
if $JSON_MODE; then
if command -v jq >/dev/null 2>&1; then
if [ "$DRY_RUN" = true ]; then
jq -cn \
--arg branch_name "$BRANCH_NAME" \
--arg feature_num "$FEATURE_NUM" \
'{BRANCH_NAME:$branch_name,FEATURE_NUM:$feature_num,DRY_RUN:true}'
else
jq -cn \
--arg branch_name "$BRANCH_NAME" \
--arg feature_num "$FEATURE_NUM" \
'{BRANCH_NAME:$branch_name,FEATURE_NUM:$feature_num}'
fi
else
if type json_escape >/dev/null 2>&1; then
_je_branch=$(json_escape "$BRANCH_NAME")
_je_num=$(json_escape "$FEATURE_NUM")
else
_je_branch="$BRANCH_NAME"
_je_num="$FEATURE_NUM"
fi
if [ "$DRY_RUN" = true ]; then
printf '{"BRANCH_NAME":"%s","FEATURE_NUM":"%s","DRY_RUN":true}\n' "$_je_branch" "$_je_num"
else
printf '{"BRANCH_NAME":"%s","FEATURE_NUM":"%s"}\n' "$_je_branch" "$_je_num"
fi
fi
else
echo "BRANCH_NAME: $BRANCH_NAME"
echo "FEATURE_NUM: $FEATURE_NUM"
if [ "$DRY_RUN" != true ]; then
printf '# To persist in your shell: export SPECIFY_FEATURE=%q\n' "$BRANCH_NAME"
fi
fi
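The branch-name construction above reduces to `clean_branch_name` plus a zero-padded numeric prefix. A minimal standalone sketch (using the portable `-\{1,\}` BRE quantifier where the script uses GNU sed's `-\+`):

```shell
# Slug a description and prepend a zero-padded feature number, as the script does
clean_branch_name() {
  echo "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' \
    | sed 's/-\{1,\}/-/g' | sed 's/^-//' | sed 's/-$//'
}

suffix=$(clean_branch_name "Add OAuth2  Support!")
BRANCH_NUMBER=008                               # as passed via --number
num=$(printf '%03d' "$((10#$BRANCH_NUMBER))")   # 10# prevents octal parsing of "008"
echo "${num}-${suffix}"                         # prints "008-add-oauth2-support"
```

The `10#` base prefix is the same trick the script uses: without it, `$((008))` would be rejected as an invalid octal literal.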


@@ -0,0 +1,54 @@
#!/usr/bin/env bash
# Git-specific common functions for the git extension.
# Extracted from scripts/bash/common.sh — contains only git-specific
# branch validation and detection logic.
# Check if we have git available at the repo root
has_git() {
local repo_root="${1:-$(pwd)}"
{ [ -d "$repo_root/.git" ] || [ -f "$repo_root/.git" ]; } && \
command -v git >/dev/null 2>&1 && \
git -C "$repo_root" rev-parse --is-inside-work-tree >/dev/null 2>&1
}
# Strip a single optional path segment (e.g. gitflow "feat/004-name" -> "004-name").
# Only when the full name is exactly two slash-free segments; otherwise returns the raw name.
spec_kit_effective_branch_name() {
local raw="$1"
if [[ "$raw" =~ ^([^/]+)/([^/]+)$ ]]; then
printf '%s\n' "${BASH_REMATCH[2]}"
else
printf '%s\n' "$raw"
fi
}
# Validate that a branch name matches the expected feature branch pattern.
# Accepts sequential (###-* with >=3 digits) or timestamp (YYYYMMDD-HHMMSS-*) formats.
# Logic aligned with scripts/bash/common.sh check_feature_branch after effective-name normalization.
check_feature_branch() {
local raw="$1"
local has_git_repo="$2"
# For non-git repos, we can't enforce branch naming but still provide output
if [[ "$has_git_repo" != "true" ]]; then
echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
return 0
fi
local branch
branch=$(spec_kit_effective_branch_name "$raw")
# Accept sequential prefixes (3+ digits) but exclude malformed timestamps:
# a 7-digit date with a 6-digit time (e.g. "2026031-143022-slug"), or a 7/8-digit date plus time with no trailing slug (e.g. "20260319-143022")
local is_sequential=false
if [[ "$branch" =~ ^[0-9]{3,}- ]] && [[ ! "$branch" =~ ^[0-9]{7}-[0-9]{6}- ]] && [[ ! "$branch" =~ ^[0-9]{7,8}-[0-9]{6}$ ]]; then
is_sequential=true
fi
if [[ "$is_sequential" != "true" ]] && [[ ! "$branch" =~ ^[0-9]{8}-[0-9]{6}- ]]; then
echo "ERROR: Not on a feature branch. Current branch: $raw" >&2
echo "Feature branches should be named like: 001-feature-name, 1234-feature-name, or 20260319-143022-feature-name" >&2
return 1
fi
return 0
}
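The gitflow normalization step can be exercised on its own. A POSIX-portable equivalent of the bash regex above, using `case` patterns instead of `[[ =~ ]]`:

```shell
# Strip a single path segment only when the name is exactly two slash-free
# segments, as spec_kit_effective_branch_name does
spec_kit_effective_branch_name() {
  raw="$1"
  case "$raw" in
    */*/*) printf '%s\n' "$raw" ;;        # three or more segments: unchanged
    */*)   printf '%s\n' "${raw#*/}" ;;   # exactly two: drop the leading segment
    *)     printf '%s\n' "$raw" ;;        # no slash: unchanged
  esac
}

spec_kit_effective_branch_name "feat/004-search-api"   # prints "004-search-api"
spec_kit_effective_branch_name "a/b/c"                 # prints "a/b/c"
```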


@@ -0,0 +1,54 @@
#!/usr/bin/env bash
# Git extension: initialize-repo.sh
# Initialize a Git repository with an initial commit.
# Customizable — replace this script to add .gitignore templates,
# default branch config, git-flow, LFS, signing, etc.
set -e
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Find project root
_find_project_root() {
local dir="$1"
while [ "$dir" != "/" ]; do
if [ -d "$dir/.specify" ] || [ -d "$dir/.git" ]; then
echo "$dir"
return 0
fi
dir="$(dirname "$dir")"
done
return 1
}
REPO_ROOT=$(_find_project_root "$SCRIPT_DIR") || REPO_ROOT="$(pwd)"
cd "$REPO_ROOT"
# Read commit message from extension config, fall back to default
COMMIT_MSG="[Spec Kit] Initial commit"
_config_file="$REPO_ROOT/.specify/extensions/git/git-config.yml"
if [ -f "$_config_file" ]; then
_msg=$(grep '^init_commit_message:' "$_config_file" 2>/dev/null | sed 's/^init_commit_message:[[:space:]]*//' | sed 's/^["'\'']//' | sed 's/["'\'']*$//')
if [ -n "$_msg" ]; then
COMMIT_MSG="$_msg"
fi
fi
# Check if git is available
if ! command -v git >/dev/null 2>&1; then
echo "[specify] Warning: Git not found; skipped repository initialization" >&2
exit 0
fi
# Check if already a git repo
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
echo "[specify] Git repository already initialized; skipping" >&2
exit 0
fi
# Initialize
_git_out=$(git init -q 2>&1) || { echo "[specify] Error: git init failed: $_git_out" >&2; exit 1; }
_git_out=$(git add . 2>&1) || { echo "[specify] Error: git add failed: $_git_out" >&2; exit 1; }
_git_out=$(git commit --allow-empty -q -m "$COMMIT_MSG" 2>&1) || { echo "[specify] Error: git commit failed: $_git_out" >&2; exit 1; }
echo "✓ Git repository initialized" >&2
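The `init_commit_message` lookup is a plain grep/sed pipeline. A standalone sketch of how a quoted YAML scalar is unwrapped (hypothetical config line):

```shell
# Unwrap a quoted YAML value the same way initialize-repo.sh does
line='init_commit_message: "[Spec Kit] Hello"'
msg=$(printf '%s\n' "$line" \
  | grep '^init_commit_message:' \
  | sed 's/^init_commit_message:[[:space:]]*//' \
  | sed 's/^["'\'']//' | sed 's/["'\'']*$//')
echo "$msg"   # prints "[Spec Kit] Hello"
```

Note the asymmetry inherited from the script: one leading quote is stripped but all trailing quotes are, which is harmless for well-formed single- or double-quoted values.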


@@ -0,0 +1,169 @@
#!/usr/bin/env pwsh
# Git extension: auto-commit.ps1
# Automatically commit changes after a Spec Kit command completes.
# Checks per-command config keys in git-config.yml before committing.
#
# Usage: auto-commit.ps1 <event_name>
# e.g.: auto-commit.ps1 after_specify
param(
[Parameter(Position = 0, Mandatory = $true)]
[string]$EventName
)
$ErrorActionPreference = 'Stop'
function Find-ProjectRoot {
param([string]$StartDir)
$current = Resolve-Path $StartDir
while ($true) {
foreach ($marker in @('.specify', '.git')) {
if (Test-Path (Join-Path $current $marker)) {
return $current
}
}
$parent = Split-Path $current -Parent
if ($parent -eq $current) { return $null }
$current = $parent
}
}
$repoRoot = Find-ProjectRoot -StartDir $PSScriptRoot
if (-not $repoRoot) { $repoRoot = Get-Location }
Set-Location $repoRoot
# Check if git is available
if (-not (Get-Command git -ErrorAction SilentlyContinue)) {
Write-Warning "[specify] Git not found; skipped auto-commit"
exit 0
}
# Temporarily relax ErrorActionPreference so git stderr warnings
# (e.g. CRLF notices on Windows) do not become terminating errors.
$savedEAP = $ErrorActionPreference
$ErrorActionPreference = 'Continue'
try {
git rev-parse --is-inside-work-tree 2>$null | Out-Null
$isRepo = $LASTEXITCODE -eq 0
} finally {
$ErrorActionPreference = $savedEAP
}
if (-not $isRepo) {
Write-Warning "[specify] Not a Git repository; skipped auto-commit"
exit 0
}
# Read per-command config from git-config.yml
$configFile = Join-Path $repoRoot ".specify/extensions/git/git-config.yml"
$enabled = $false
$commitMsg = ""
if (Test-Path $configFile) {
# Parse YAML to find auto_commit section
$inAutoCommit = $false
$inEvent = $false
$defaultEnabled = $false
foreach ($line in Get-Content $configFile) {
# Detect auto_commit: section
if ($line -match '^auto_commit:') {
$inAutoCommit = $true
$inEvent = $false
continue
}
# Exit auto_commit section on next top-level key
if ($inAutoCommit -and $line -match '^[a-z]') {
break
}
if ($inAutoCommit) {
# Check default key
if ($line -match '^\s+default:\s*(.+)$') {
$val = $matches[1].Trim().ToLower()
if ($val -eq 'true') { $defaultEnabled = $true }
}
# Detect our event subsection
if ($line -match "^\s+${EventName}:") {
$inEvent = $true
continue
}
# Inside our event subsection
if ($inEvent) {
# Exit on next sibling key (2-space indent, not 4+)
if ($line -match '^\s{2}[a-z]' -and $line -notmatch '^\s{4}') {
$inEvent = $false
continue
}
if ($line -match '\s+enabled:\s*(.+)$') {
$val = $matches[1].Trim().ToLower()
if ($val -eq 'true') { $enabled = $true }
if ($val -eq 'false') { $enabled = $false }
}
if ($line -match '\s+message:\s*(.+)$') {
$commitMsg = $matches[1].Trim() -replace '^["'']' -replace '["'']$'
}
}
}
}
# If event-specific key not found, use default
if (-not $enabled -and $defaultEnabled) {
$hasEventKey = Select-String -Path $configFile -Pattern "^\s*${EventName}:" -Quiet
if (-not $hasEventKey) {
$enabled = $true
}
}
} else {
# No config file — auto-commit disabled by default
exit 0
}
if (-not $enabled) {
exit 0
}
# Check if there are changes to commit
# Relax ErrorActionPreference so CRLF warnings on stderr do not terminate.
$savedEAP = $ErrorActionPreference
$ErrorActionPreference = 'Continue'
try {
git diff --quiet HEAD 2>$null; $d1 = $LASTEXITCODE
git diff --cached --quiet 2>$null; $d2 = $LASTEXITCODE
$untracked = git ls-files --others --exclude-standard 2>$null
} finally {
$ErrorActionPreference = $savedEAP
}
if ($d1 -eq 0 -and $d2 -eq 0 -and -not $untracked) {
Write-Host "[specify] No changes to commit after $EventName" -ForegroundColor DarkGray
exit 0
}
# Derive a human-readable command name from the event
$commandName = $EventName -replace '^after_', '' -replace '^before_', ''
$phase = if ($EventName -match '^before_') { 'before' } else { 'after' }
# Use custom message if configured, otherwise default
if (-not $commitMsg) {
$commitMsg = "[Spec Kit] Auto-commit $phase $commandName"
}
# Stage and commit
# Relax ErrorActionPreference so CRLF warnings on stderr do not terminate,
# while still allowing redirected error output to be captured for diagnostics.
$savedEAP = $ErrorActionPreference
$ErrorActionPreference = 'Continue'
try {
$out = git add . 2>&1 | Out-String
if ($LASTEXITCODE -ne 0) { throw "git add failed: $out" }
$out = git commit -q -m $commitMsg 2>&1 | Out-String
if ($LASTEXITCODE -ne 0) { throw "git commit failed: $out" }
} catch {
Write-Warning "[specify] Error: $_"
exit 1
} finally {
$ErrorActionPreference = $savedEAP
}
Write-Host "[OK] Changes committed $phase $commandName"
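The dirty-check above runs three git queries: unstaged tracked changes, staged changes, and untracked files. A shell sketch of the same check against a throwaway repository:

```shell
# Report whether a worktree has anything to commit, as auto-commit does
has_changes() {
  ! git diff --quiet HEAD 2>/dev/null && return 0          # unstaged tracked changes
  ! git diff --cached --quiet 2>/dev/null && return 0      # staged changes
  [ -n "$(git ls-files --others --exclude-standard 2>/dev/null)" ]  # untracked files
}

tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.email=ci@example.invalid -c user.name=ci commit --allow-empty -q -m init
has_changes && echo dirty || echo clean   # prints "clean"
touch newfile
has_changes && echo dirty || echo clean   # prints "dirty"
```

All three probes are needed: `git diff --quiet HEAD` misses staged-only changes on a clean index comparison, and neither diff sees untracked files.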


@@ -0,0 +1,403 @@
#!/usr/bin/env pwsh
# Git extension: create-new-feature.ps1
# Adapted from core scripts/powershell/create-new-feature.ps1 for extension layout.
# Sources common.ps1 from the project's installed scripts, falling back to
# git-common.ps1 for minimal git helpers.
[CmdletBinding()]
param(
[switch]$Json,
[switch]$AllowExistingBranch,
[switch]$DryRun,
[string]$ShortName,
[Parameter()]
[long]$Number = 0,
[switch]$Timestamp,
[switch]$Help,
[Parameter(Position = 0, ValueFromRemainingArguments = $true)]
[string[]]$FeatureDescription
)
$ErrorActionPreference = 'Stop'
if ($Help) {
Write-Host "Usage: ./create-new-feature.ps1 [-Json] [-DryRun] [-AllowExistingBranch] [-ShortName <name>] [-Number N] [-Timestamp] <feature description>"
Write-Host ""
Write-Host "Options:"
Write-Host " -Json Output in JSON format"
Write-Host " -DryRun Compute branch name without creating the branch"
Write-Host " -AllowExistingBranch Switch to branch if it already exists instead of failing"
Write-Host " -ShortName <name> Provide a custom short name (2-4 words) for the branch"
Write-Host " -Number N Specify branch number manually (overrides auto-detection)"
Write-Host " -Timestamp Use timestamp prefix (YYYYMMDD-HHMMSS) instead of sequential numbering"
Write-Host " -Help Show this help message"
Write-Host ""
Write-Host "Environment variables:"
Write-Host " GIT_BRANCH_NAME Use this exact branch name, bypassing all prefix/suffix generation"
Write-Host ""
exit 0
}
if (-not $FeatureDescription -or $FeatureDescription.Count -eq 0) {
Write-Error "Usage: ./create-new-feature.ps1 [-Json] [-DryRun] [-AllowExistingBranch] [-ShortName <name>] [-Number N] [-Timestamp] <feature description>"
exit 1
}
$featureDesc = ($FeatureDescription -join ' ').Trim()
if ([string]::IsNullOrWhiteSpace($featureDesc)) {
Write-Error "Error: Feature description cannot be empty or contain only whitespace"
exit 1
}
function Get-HighestNumberFromSpecs {
param([string]$SpecsDir)
[long]$highest = 0
if (Test-Path $SpecsDir) {
Get-ChildItem -Path $SpecsDir -Directory | ForEach-Object {
if ($_.Name -match '^(\d{3,})-' -and $_.Name -notmatch '^\d{8}-\d{6}-') {
[long]$num = 0
if ([long]::TryParse($matches[1], [ref]$num) -and $num -gt $highest) {
$highest = $num
}
}
}
}
return $highest
}
function Get-HighestNumberFromNames {
param([string[]]$Names)
[long]$highest = 0
foreach ($name in $Names) {
if ($name -match '^(\d{3,})-' -and $name -notmatch '^\d{8}-\d{6}-') {
[long]$num = 0
if ([long]::TryParse($matches[1], [ref]$num) -and $num -gt $highest) {
$highest = $num
}
}
}
return $highest
}
function Get-HighestNumberFromBranches {
param()
try {
$branches = git branch -a 2>$null
if ($LASTEXITCODE -eq 0 -and $branches) {
$cleanNames = $branches | ForEach-Object {
$_.Trim() -replace '^\*?\s+', '' -replace '^remotes/[^/]+/', ''
}
return Get-HighestNumberFromNames -Names $cleanNames
}
} catch {
Write-Verbose "Could not check Git branches: $_"
}
return 0
}
function Get-HighestNumberFromRemoteRefs {
[long]$highest = 0
try {
$remotes = git remote 2>$null
if ($remotes) {
foreach ($remote in $remotes) {
$env:GIT_TERMINAL_PROMPT = '0'
$refs = git ls-remote --heads $remote 2>$null
$env:GIT_TERMINAL_PROMPT = $null
if ($LASTEXITCODE -eq 0 -and $refs) {
$refNames = $refs | ForEach-Object {
if ($_ -match 'refs/heads/(.+)$') { $matches[1] }
} | Where-Object { $_ }
$remoteHighest = Get-HighestNumberFromNames -Names $refNames
if ($remoteHighest -gt $highest) { $highest = $remoteHighest }
}
}
}
} catch {
Write-Verbose "Could not query remote refs: $_"
}
return $highest
}
function Get-NextBranchNumber {
param(
[string]$SpecsDir,
[switch]$SkipFetch
)
if ($SkipFetch) {
$highestBranch = Get-HighestNumberFromBranches
$highestRemote = Get-HighestNumberFromRemoteRefs
$highestBranch = [Math]::Max($highestBranch, $highestRemote)
} else {
try {
git fetch --all --prune 2>$null | Out-Null
} catch { }
$highestBranch = Get-HighestNumberFromBranches
}
$highestSpec = Get-HighestNumberFromSpecs -SpecsDir $SpecsDir
$maxNum = [Math]::Max($highestBranch, $highestSpec)
return $maxNum + 1
}
function ConvertTo-CleanBranchName {
param([string]$Name)
return $Name.ToLower() -replace '[^a-z0-9]', '-' -replace '-{2,}', '-' -replace '^-', '' -replace '-$', ''
}
# ---------------------------------------------------------------------------
# Source common.ps1 from the project's installed scripts.
# Search locations in priority order:
# 1. .specify/scripts/powershell/common.ps1 under the project root
# 2. scripts/powershell/common.ps1 under the project root (source checkout)
# 3. git-common.ps1 next to this script (minimal fallback)
# ---------------------------------------------------------------------------
function Find-ProjectRoot {
param([string]$StartDir)
$current = Resolve-Path $StartDir
while ($true) {
foreach ($marker in @('.specify', '.git')) {
if (Test-Path (Join-Path $current $marker)) {
return $current
}
}
$parent = Split-Path $current -Parent
if ($parent -eq $current) { return $null }
$current = $parent
}
}
$projectRoot = Find-ProjectRoot -StartDir $PSScriptRoot
$commonLoaded = $false
if ($projectRoot) {
$candidates = @(
(Join-Path $projectRoot ".specify/scripts/powershell/common.ps1"),
(Join-Path $projectRoot "scripts/powershell/common.ps1")
)
foreach ($candidate in $candidates) {
if (Test-Path $candidate) {
. $candidate
$commonLoaded = $true
break
}
}
}
if (-not $commonLoaded -and (Test-Path "$PSScriptRoot/git-common.ps1")) {
. "$PSScriptRoot/git-common.ps1"
$commonLoaded = $true
}
if (-not $commonLoaded) {
throw "Unable to locate common script file. Please ensure the Specify core scripts are installed."
}
# Resolve repository root
if (Get-Command Get-RepoRoot -ErrorAction SilentlyContinue) {
$repoRoot = Get-RepoRoot
} elseif ($projectRoot) {
$repoRoot = $projectRoot
} else {
throw "Could not determine repository root."
}
# Check if git is available
if (Get-Command Test-HasGit -ErrorAction SilentlyContinue) {
# Call without parameters for compatibility with core common.ps1 (no -RepoRoot param)
# and git-common.ps1 (has -RepoRoot param with default).
$hasGit = Test-HasGit
} else {
try {
git -C $repoRoot rev-parse --is-inside-work-tree 2>$null | Out-Null
$hasGit = ($LASTEXITCODE -eq 0)
} catch {
$hasGit = $false
}
}
Set-Location $repoRoot
$specsDir = Join-Path $repoRoot 'specs'
function Get-BranchName {
param([string]$Description)
$stopWords = @(
'i', 'a', 'an', 'the', 'to', 'for', 'of', 'in', 'on', 'at', 'by', 'with', 'from',
'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had',
'do', 'does', 'did', 'will', 'would', 'should', 'could', 'can', 'may', 'might', 'must', 'shall',
'this', 'that', 'these', 'those', 'my', 'your', 'our', 'their',
'want', 'need', 'add', 'get', 'set'
)
$cleanName = $Description.ToLower() -replace '[^a-z0-9\s]', ' '
$words = $cleanName -split '\s+' | Where-Object { $_ }
$meaningfulWords = @()
foreach ($word in $words) {
if ($stopWords -contains $word) { continue }
if ($word.Length -ge 3) {
$meaningfulWords += $word
} elseif ($Description -match "\b$($word.ToUpper())\b") {
$meaningfulWords += $word
}
}
if ($meaningfulWords.Count -gt 0) {
$maxWords = if ($meaningfulWords.Count -eq 4) { 4 } else { 3 }
$result = ($meaningfulWords | Select-Object -First $maxWords) -join '-'
return $result
} else {
$result = ConvertTo-CleanBranchName -Name $Description
$fallbackWords = ($result -split '-') | Where-Object { $_ } | Select-Object -First 3
return [string]::Join('-', $fallbackWords)
}
}
# Check for GIT_BRANCH_NAME env var override (exact branch name, no prefix/suffix)
if ($env:GIT_BRANCH_NAME) {
$branchName = $env:GIT_BRANCH_NAME
# Check 244-byte limit (UTF-8) for override names
$branchNameUtf8ByteCount = [System.Text.Encoding]::UTF8.GetByteCount($branchName)
if ($branchNameUtf8ByteCount -gt 244) {
throw "GIT_BRANCH_NAME must be 244 bytes or fewer in UTF-8. Provided value is $branchNameUtf8ByteCount bytes; please supply a shorter override branch name."
}
# Extract FEATURE_NUM from the branch name if it starts with a numeric prefix
# Check timestamp pattern first (YYYYMMDD-HHMMSS-) since it also matches the simpler ^\d+ pattern
if ($branchName -match '^(\d{8}-\d{6})-') {
$featureNum = $matches[1]
} elseif ($branchName -match '^(\d+)-') {
$featureNum = $matches[1]
} else {
$featureNum = $branchName
}
} else {
if ($ShortName) {
$branchSuffix = ConvertTo-CleanBranchName -Name $ShortName
} else {
$branchSuffix = Get-BranchName -Description $featureDesc
}
if ($Timestamp -and $Number -ne 0) {
Write-Warning "[specify] -Number is ignored when -Timestamp is used"
$Number = 0
}
if ($Timestamp) {
$featureNum = Get-Date -Format 'yyyyMMdd-HHmmss'
$branchName = "$featureNum-$branchSuffix"
} else {
if ($Number -eq 0) {
if ($DryRun -and $hasGit) {
$Number = Get-NextBranchNumber -SpecsDir $specsDir -SkipFetch
} elseif ($DryRun) {
$Number = (Get-HighestNumberFromSpecs -SpecsDir $specsDir) + 1
} elseif ($hasGit) {
$Number = Get-NextBranchNumber -SpecsDir $specsDir
} else {
$Number = (Get-HighestNumberFromSpecs -SpecsDir $specsDir) + 1
}
}
$featureNum = ('{0:000}' -f $Number)
$branchName = "$featureNum-$branchSuffix"
}
}
$maxBranchLength = 244
$branchByteLen = [System.Text.Encoding]::UTF8.GetByteCount($branchName)
if ($branchByteLen -gt $maxBranchLength) {
$prefixLength = $featureNum.Length + 1
$maxSuffixLength = $maxBranchLength - $prefixLength
$truncatedSuffix = $branchSuffix.Substring(0, [Math]::Min($branchSuffix.Length, $maxSuffixLength))
$truncatedSuffix = $truncatedSuffix -replace '-$', ''
$originalBranchName = $branchName
$branchName = "$featureNum-$truncatedSuffix"
Write-Warning "[specify] Branch name exceeded GitHub's 244-byte limit"
Write-Warning "[specify] Original: $originalBranchName ($([System.Text.Encoding]::UTF8.GetByteCount($originalBranchName)) bytes)"
Write-Warning "[specify] Truncated to: $branchName ($([System.Text.Encoding]::UTF8.GetByteCount($branchName)) bytes)"
}
if (-not $DryRun) {
if ($hasGit) {
$branchCreated = $false
$branchCreateError = ''
try {
$branchCreateError = git checkout -q -b $branchName 2>&1 | Out-String
if ($LASTEXITCODE -eq 0) {
$branchCreated = $true
}
} catch {
$branchCreateError = $_.Exception.Message
}
if (-not $branchCreated) {
$currentBranch = ''
try { $currentBranch = (git rev-parse --abbrev-ref HEAD 2>$null).Trim() } catch {}
$existingBranch = git branch --list $branchName 2>$null
if ($existingBranch) {
if ($AllowExistingBranch) {
if ($currentBranch -eq $branchName) {
# Already on the target branch
} else {
$switchBranchError = git checkout -q $branchName 2>&1 | Out-String
if ($LASTEXITCODE -ne 0) {
if ($switchBranchError) {
Write-Error "Error: Branch '$branchName' exists but could not be checked out.`n$($switchBranchError.Trim())"
} else {
Write-Error "Error: Branch '$branchName' exists but could not be checked out. Resolve any uncommitted changes or conflicts and try again."
}
exit 1
}
}
} elseif ($Timestamp) {
Write-Error "Error: Branch '$branchName' already exists. Rerun to get a new timestamp or use a different -ShortName."
exit 1
} else {
Write-Error "Error: Branch '$branchName' already exists. Please use a different feature name or specify a different number with -Number."
exit 1
}
} else {
if ($branchCreateError) {
Write-Error "Error: Failed to create git branch '$branchName'.`n$($branchCreateError.Trim())"
} else {
Write-Error "Error: Failed to create git branch '$branchName'. Please check your git configuration and try again."
}
exit 1
}
}
} else {
if ($Json) {
[Console]::Error.WriteLine("[specify] Warning: Git repository not detected; skipped branch creation for $branchName")
} else {
Write-Warning "[specify] Git repository not detected; skipped branch creation for $branchName"
}
}
$env:SPECIFY_FEATURE = $branchName
}
if ($Json) {
$obj = [PSCustomObject]@{
BRANCH_NAME = $branchName
FEATURE_NUM = $featureNum
HAS_GIT = $hasGit
}
if ($DryRun) {
$obj | Add-Member -NotePropertyName 'DRY_RUN' -NotePropertyValue $true
}
$obj | ConvertTo-Json -Compress
} else {
Write-Output "BRANCH_NAME: $branchName"
Write-Output "FEATURE_NUM: $featureNum"
Write-Output "HAS_GIT: $hasGit"
if (-not $DryRun) {
Write-Output "SPECIFY_FEATURE environment variable set to: $branchName"
}
}

View File

@@ -0,0 +1,51 @@
#!/usr/bin/env pwsh
# Git-specific common functions for the git extension.
# Extracted from scripts/powershell/common.ps1 — contains only git-specific
# branch validation and detection logic.
function Test-HasGit {
param([string]$RepoRoot = (Get-Location))
try {
if (-not (Test-Path (Join-Path $RepoRoot '.git'))) { return $false }
if (-not (Get-Command git -ErrorAction SilentlyContinue)) { return $false }
git -C $RepoRoot rev-parse --is-inside-work-tree 2>$null | Out-Null
return ($LASTEXITCODE -eq 0)
} catch {
return $false
}
}
function Get-SpecKitEffectiveBranchName {
param([string]$Branch)
if ($Branch -match '^([^/]+)/([^/]+)$') {
return $Matches[2]
}
return $Branch
}
function Test-FeatureBranch {
param(
[string]$Branch,
[bool]$HasGit = $true
)
# For non-git repos, we can't enforce branch naming but still provide output
if (-not $HasGit) {
Write-Warning "[specify] Git repository not detected; skipped branch validation"
return $true
}
$raw = $Branch
$Branch = Get-SpecKitEffectiveBranchName $raw
# Accept sequential prefix (3+ digits) but exclude malformed timestamps
# Malformed: a 7-digit date with a slug (e.g. "2026031-143022-name"), or a 7-or-8-digit date + 6-digit time with no trailing slug (e.g. "2026031-143022" or "20260319-143022")
$hasMalformedTimestamp = ($Branch -match '^[0-9]{7}-[0-9]{6}-') -or ($Branch -match '^(?:\d{7}|\d{8})-\d{6}$')
$isSequential = ($Branch -match '^[0-9]{3,}-') -and (-not $hasMalformedTimestamp)
if (-not $isSequential -and $Branch -notmatch '^\d{8}-\d{6}-') {
[Console]::Error.WriteLine("ERROR: Not on a feature branch. Current branch: $raw")
[Console]::Error.WriteLine("Feature branches should be named like: 001-feature-name, 1234-feature-name, or 20260319-143022-feature-name")
return $false
}
return $true
}

View File

@@ -0,0 +1,69 @@
#!/usr/bin/env pwsh
# Git extension: initialize-repo.ps1
# Initialize a Git repository with an initial commit.
# Customizable — replace this script to add .gitignore templates,
# default branch config, git-flow, LFS, signing, etc.
$ErrorActionPreference = 'Stop'
# Find project root
function Find-ProjectRoot {
param([string]$StartDir)
$current = Resolve-Path $StartDir
while ($true) {
foreach ($marker in @('.specify', '.git')) {
if (Test-Path (Join-Path $current $marker)) {
return $current
}
}
$parent = Split-Path $current -Parent
# Stop at the filesystem root: Split-Path returns an empty string there
if ([string]::IsNullOrEmpty($parent) -or $parent -eq "$current") { return $null }
$current = $parent
}
}
$repoRoot = Find-ProjectRoot -StartDir $PSScriptRoot
if (-not $repoRoot) { $repoRoot = Get-Location }
Set-Location $repoRoot
# Read commit message from extension config, fall back to default
$commitMsg = "[Spec Kit] Initial commit"
$configFile = Join-Path $repoRoot ".specify/extensions/git/git-config.yml"
if (Test-Path $configFile) {
foreach ($line in Get-Content $configFile) {
if ($line -match '^init_commit_message:\s*(.+)$') {
$val = $matches[1].Trim() -replace '^["'']' -replace '["'']$'
if ($val) { $commitMsg = $val }
break
}
}
}
# Check if git is available
if (-not (Get-Command git -ErrorAction SilentlyContinue)) {
Write-Warning "[specify] Git not found; skipped repository initialization"
exit 0
}
# Check if already a git repo
try {
git rev-parse --is-inside-work-tree 2>$null | Out-Null
if ($LASTEXITCODE -eq 0) {
Write-Warning "[specify] Git repository already initialized; skipping"
exit 0
}
} catch { }
# Initialize
try {
$out = git init -q 2>&1 | Out-String
if ($LASTEXITCODE -ne 0) { throw "git init failed: $out" }
$out = git add . 2>&1 | Out-String
if ($LASTEXITCODE -ne 0) { throw "git add failed: $out" }
$out = git commit --allow-empty -q -m $commitMsg 2>&1 | Out-String
if ($LASTEXITCODE -ne 0) { throw "git commit failed: $out" }
} catch {
Write-Warning "[specify] Error: $_"
exit 1
}
Write-Host "✓ Git repository initialized"

3
.specify/feature.json Normal file
View File

@@ -0,0 +1,3 @@
{
"feature_directory": "specs/001-reaction-image-board"
}

View File

@@ -0,0 +1,10 @@
{
"ai": "claude",
"ai_skills": true,
"branch_numbering": "sequential",
"context_file": "CLAUDE.md",
"here": true,
"integration": "claude",
"script": "sh",
"speckit_version": "0.8.2.dev0"
}

View File

@@ -0,0 +1,4 @@
{
"integration": "claude",
"version": "0.8.2.dev0"
}

View File

@@ -0,0 +1,16 @@
{
"integration": "claude",
"version": "0.8.2.dev0",
"installed_at": "2026-05-02T15:15:14.461699+00:00",
"files": {
".claude/skills/speckit-analyze/SKILL.md": "2eef0fbff6cad15c9d4714d8986192387811c971a82a1135ab0404f3db0c5e90",
".claude/skills/speckit-checklist/SKILL.md": "26419fc118dcd9c4e1e977460696a04b7757b8fb0a2d1ff9c64732669deb7977",
".claude/skills/speckit-clarify/SKILL.md": "f2560f9f2007b4e995130f0c42633f08837a76a35d94e84091713a6f39bb1064",
".claude/skills/speckit-constitution/SKILL.md": "c1a044aba243ca6aff627fb5e4404feb6f1108d4f7dd174631bee3ae477d6c15",
".claude/skills/speckit-implement/SKILL.md": "da9b4d6f9894d300515c66c057cee74025b27f2238895e3c22b59c6266b5be74",
".claude/skills/speckit-plan/SKILL.md": "8141ebbce228ad0b422a84e3b995d2bd85de917b96eadd02b5fcb56fb23f2594",
".claude/skills/speckit-specify/SKILL.md": "8599f8e2e3463de7d4f47591565340be2f775fd61b7dd9d2175503bc3b713b77",
".claude/skills/speckit-tasks/SKILL.md": "792589edf0ebf89af797c6bdda4e9d2c9938c696181d6f1484bf7a7cd090efaa",
".claude/skills/speckit-taskstoissues/SKILL.md": "99bf5ffd90dcb57b63007c7f659a5160a18ce6feb82889895808e2d277abe83b"
}
}

View File

@@ -0,0 +1,16 @@
{
"integration": "speckit",
"version": "0.8.2.dev0",
"installed_at": "2026-05-02T15:15:14.478105+00:00",
"files": {
".specify/scripts/bash/create-new-feature.sh": "bcf4964ca0c6c78717bb42d9e66b8c7e5ee82779cd96afc5aa7b08b75abe5790",
".specify/scripts/bash/check-prerequisites.sh": "aff361639c504b95a2901493f5022788adc01a6792fd37f132de8f57782e4b80",
".specify/scripts/bash/setup-plan.sh": "0d1d7a66de157b0be1385bb91aa71e5bf95550217abf47a73270dab0dc52895a",
".specify/scripts/bash/common.sh": "dd638316259e699fd466542c77ef16af5eb198efe0447c081f86b890db414ba8",
".specify/templates/tasks-template.md": "fb7a30a6e8e7319b7134bd52a26dd52fb7dd9106ab8fa08b6fb551d704dac498",
".specify/templates/plan-template.md": "5ad267630e370c73fe957dafa61bf76d633f3aea9d2f0b5195087d729cdd1e41",
".specify/templates/constitution-template.md": "ce7549540fa45543cca797a150201d868e64495fdff39dc38246fb17bd4024b3",
".specify/templates/spec-template.md": "785dc50d856dd92d6515eca0761e16dce0c9ba0a3cd07154fd33eae77932422a",
".specify/templates/checklist-template.md": "c37695297e5d3153d64f82c21223509940b13932046c7961c42d1d669516130c"
}
}

View File

@@ -0,0 +1,288 @@
<!--
SYNC IMPACT REPORT
==================
Version change: [TEMPLATE — no prior version] → 1.1.0
Ratified: 2026-05-01 | Last amended: 2026-05-02
Principles introduced (first population from docs/CONSTITUTION.md):
- §2 Architecture Principles (6 sub-principles)
- §3 API Design (4 sub-principles)
- §4 Data Model Principles (4 items — fixed duplicate §4.3 → §4.4)
- §5 Testing Discipline (4 sub-principles)
Sections introduced:
- §1 Project Identity
- §6 Tech Stack Constraints
- §7 Developer Experience
- §8 Scope Boundaries (v1)
- §9 Governance (new — derived from preamble + standard governance rules)
- §10 Revision Log
Templates updated:
✅ .specify/memory/constitution.md — this file
✅ .specify/templates/tasks-template.md — removed "Tests are OPTIONAL" language
that contradicted §5.1 TDD mandate
⚠ .specify/templates/plan-template.md — Constitution Check section references
"[Gates determined based on constitution file]";
no structural change needed, gates are
resolved at plan-generation time
⚠ .specify/templates/spec-template.md — no conflicts found; no update needed
Deferred TODOs:
- None. All placeholders resolved.
-->
# Reaction Image Board — Project Constitution
> This document is the authoritative source of project-wide principles.
> Every spec, plan, and task produced for this project MUST comply with it.
> Contradictions between a spec/plan/task and this constitution are resolved
> in favour of the constitution. Changes to the constitution require an
> explicit revision with a noted rationale.
---
## 1. Project Identity
A personal, self-hosted web application for uploading, tagging, and searching
reaction images. The system is split into two independently deployable units:
- **API** — Python / FastAPI
- **UI** — Angular (standalone SPA, communicates with the API over HTTP)
There is no server-side rendering. The UI is a pure client that consumes the
API. These two units may live in the same repository but are treated as
separate deployable artifacts with separate dependency manifests.
---
## 2. Architecture Principles
### 2.1 Strict separation of concerns
The API knows nothing about the UI framework. The UI knows nothing about
storage or database implementation details. All cross-unit communication is
through versioned HTTP contracts (OpenAPI spec is the source of truth).
### 2.2 Dependency direction
```
UI → API → Storage (S3-compatible)
→ Database (PostgreSQL)
```
No layer MAY import or directly call a layer above it in this diagram.
### 2.3 Storage abstraction
All file I/O goes through a `StorageBackend` interface. The initial
implementation is S3-compatible (MinIO locally, real S3 in production).
No code outside the storage module MAY reference bucket names, S3 URLs,
or SDK-specific types directly — only the interface contract.
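A minimal sketch of what such an interface contract could look like (Python, using `typing.Protocol`; the method names and signatures here are illustrative assumptions, not mandated by this constitution):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class StorageBackend(Protocol):
    """Contract for all file I/O; callers never see bucket names or SDK types."""

    async def put(self, key: str, data: bytes, content_type: str) -> None: ...
    async def get(self, key: str) -> bytes: ...
    async def delete(self, key: str) -> None: ...
    async def exists(self, key: str) -> bool: ...
```

The S3/MinIO implementation and any future local-disk implementation would both satisfy this protocol; callers depend only on it.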
### 2.4 Auth abstraction (progressive)
Authentication is treated as a pluggable backend from day one, even though
Phase 1 ships with no auth. The API MUST route all request-identity resolution
through a single `AuthProvider` interface. The no-op provider (Phase 1) returns
a static anonymous identity. Adding username/password or OIDC in a later phase
MUST be a new provider implementation, not a rewrite of business logic.
**Phase 1 implements: no-auth (localhost only).**
**Planned phases: username/password, then OIDC.**
The constitution acknowledges all three; the spec governs which is built.
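A sketch of the seam together with the Phase 1 no-op provider (`Identity`, the `resolve` method name, and its signature are illustrative assumptions):

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Protocol


@dataclass(frozen=True)
class Identity:
    subject: str
    anonymous: bool


class AuthProvider(Protocol):
    """The single seam through which all request-identity resolution flows."""

    def resolve(self, authorization_header: str | None) -> Identity: ...


class NoOpAuthProvider:
    """Phase 1 provider: every request resolves to one static anonymous identity."""

    def resolve(self, authorization_header: str | None) -> Identity:
        return Identity(subject="anonymous", anonymous=True)
```

A later username/password or OIDC provider would implement the same `resolve` contract, leaving business logic untouched.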
### 2.5 Database abstraction
PostgreSQL is the Phase 1 database. All DB access MUST go through a repository
layer (one repository class per domain aggregate). Raw SQL or an ORM is
acceptable, but no query logic MAY live outside a repository. This makes the
planned PostgreSQL → SQLite refactor a repository-layer change only.
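One illustrative way to draw the repository boundary, shown with an in-memory stand-in rather than real SQLAlchemy wiring (class and field names are assumptions; only the pattern is the point):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class ImageRecord:
    sha256: str
    tags: frozenset


class ImageRepository:
    """All query logic for the Image aggregate lives here and nowhere else.

    A real implementation would hold an async SQLAlchemy session; this
    in-memory stand-in shows only the boundary, not the SQL.
    """

    def __init__(self) -> None:
        self._by_hash = {}

    def add(self, record: ImageRecord) -> None:
        self._by_hash[record.sha256] = record

    def get_by_hash(self, sha256: str) -> ImageRecord | None:
        return self._by_hash.get(sha256)
```

Because callers see only `add` and `get_by_hash`, swapping PostgreSQL for SQLite changes this class's internals and nothing else.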
### 2.6 No speculative abstraction
Interfaces are introduced only when a second implementation either exists or
is explicitly planned in the constitution (StorageBackend, AuthProvider,
repositories). Everything else is concrete until a real second implementation
is needed. "Just in case" generics, base classes, or plugin systems are
prohibited.
---
## 3. API Design
### 3.1 Versioning
All API routes MUST be prefixed `/api/v1/`. Breaking changes require a new
version prefix. Adding fields to responses is non-breaking; removing or
renaming is breaking.
### 3.2 OpenAPI as contract
The FastAPI-generated OpenAPI schema is the binding contract between API and
UI. The UI MUST NOT depend on behaviour not described in the schema.
### 3.3 Error shape
All error responses MUST use a consistent envelope:
```json
{ "detail": "<human-readable message>", "code": "<machine-readable code>" }
```
The `code` field is required. Naked HTTP status codes with empty bodies are
prohibited.
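One hedged way to keep the envelope uniform is a single exception type that route handlers raise and a framework-level handler translates into the JSON body (the class name and status handling are assumptions, not mandated here):

```python
class APIError(Exception):
    """Raised by route handlers; one exception type keeps the envelope uniform."""

    def __init__(self, status: int, code: str, detail: str) -> None:
        super().__init__(detail)
        self.status = status
        self.code = code
        self.detail = detail

    def body(self) -> dict:
        # Exactly the mandated envelope: both fields always present.
        return {"detail": self.detail, "code": self.code}
```

In FastAPI this would typically be wired up once via an application-wide exception handler, so no endpoint can emit a naked status code.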
### 3.4 Pagination
Any endpoint that returns a list MUST support cursor- or offset-based
pagination from the start. No endpoint MAY return an unbounded list.
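A minimal sketch of defensive offset/limit handling (the `max_limit` default of 100 is an illustrative assumption; the spec sets the real bound):

```python
def clamp_page(offset: int, limit: int, max_limit: int = 100) -> tuple:
    """Clamp client-supplied paging values so no request is unbounded."""
    offset = max(offset, 0)                # negative offsets make no sense
    limit = min(max(limit, 1), max_limit)  # at least 1, never above the cap
    return offset, limit
```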
---
## 4. Data Model Principles
### 4.1 Tags are lowercase, normalised strings
Tags are stored and matched case-insensitively. The API MUST normalise tag
input to lowercase and strip whitespace before persistence. The UI may display
tags in any case but MUST send them lowercase.
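The normalisation rule is small enough to state as code (a sketch; only lowercase plus whitespace stripping is mandated above):

```python
def normalize_tag(raw: str) -> str:
    """Apply the mandated normalisation: strip surrounding whitespace, lowercase."""
    return raw.strip().lower()
```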
### 4.2 Images are immutable after upload
Once an image is stored, its file content is never replaced. The stored object
MUST NOT be mutated.
### 4.3 Deduplication by content hash
Every image MUST be hashed (SHA-256) on upload before storage. If a hash
already exists in the database, the upload is rejected or the existing record
is returned — the spec will decide the UX — but no duplicate object is written
to storage. The hash is stored on the image record and may also serve as the
storage object key.
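A sketch of the hashing and duplicate-check step (standard-library `hashlib`; the helper names are illustrative):

```python
import hashlib


def content_hash(data: bytes) -> str:
    """SHA-256 hex digest of the uploaded bytes; may double as the object key."""
    return hashlib.sha256(data).hexdigest()


def is_duplicate(known_hashes: set, data: bytes) -> bool:
    """True when identical content is already stored (no second object written)."""
    return content_hash(data) in known_hashes
```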
### 4.4 Tag search is AND logic in v1
A search for tags `[cat, funny]` returns only images that have **both** tags.
OR/NOT logic is explicitly out of scope until the constitution is revised.
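AND semantics reduce to a subset check over an image's tag set (a sketch; the function name is illustrative, and in practice this would be expressed in the repository's query):

```python
def matches_all_tags(image_tags: set, query_tags: set) -> bool:
    """v1 semantics: an image matches only if it carries every queried tag."""
    return query_tags <= image_tags  # subset check == AND logic
```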
---
## 5. Testing Discipline
### 5.1 TDD is non-negotiable
No production code MAY be written before a failing test exists for it. This
applies to both API and UI. Tasks MUST include a "write failing test" step
before any implementation step.
### 5.2 Test pyramid
- **Unit tests** — pure logic, repository mocks, no I/O
- **Integration tests** — API routes tested against a real (test) database
and a real (test) S3-compatible bucket (e.g. MinIO in Docker)
- **E2E tests** — Angular + API, minimal set covering the core happy paths
Unit and integration tests are required. E2E tests are best-effort in v1.
### 5.3 Tests live next to the code they test
API tests in `api/tests/`, UI tests colocated with their components. No
separate top-level `tests/` directory that mirrors the source tree.
### 5.4 CI must pass before any task is considered done
"Done" means: all tests pass, linter passes, type checker passes. A task MUST
NOT be marked complete while CI is failing.
---
## 6. Tech Stack Constraints
| Concern | Choice | Rationale |
|---|---|---|
| API language | Python 3.12+ | Primary language, type hints required |
| API framework | FastAPI | Async, OpenAPI-native |
| ORM / query | SQLAlchemy 2.x (async) + asyncpg driver | Repository layer owns all queries |
| DB migrations | Alembic | Schema changes tracked in version control |
| Object storage | S3-compatible via `boto3` / `aiobotocore` | Swap MinIO ↔ S3 via env config |
| UI framework | Angular (latest stable) | Job-relevant, learning goal |
| UI language | TypeScript strict mode | No `any`, no implicit types |
| Containerisation | Docker + Docker Compose | Local dev must start with one command |
---
## 7. Developer Experience
### 7.1 One-command local start
`docker compose up` MUST start the full stack: PostgreSQL, MinIO, API, UI dev
server. No manual environment setup beyond copying `.env.example`.
### 7.2 Environment configuration
All environment-specific values (DB URL, S3 endpoint, bucket name, ports) MUST
come from environment variables. Hardcoded hostnames, credentials, or ports in
source code are prohibited. `.env.example` is committed; `.env` is gitignored.
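A sketch of fail-fast environment lookup at startup (the helper name and error type are assumptions):

```python
import os


def required_env(name: str) -> str:
    """Resolve a required setting from the environment, failing fast if absent."""
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value
```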
### 7.3 Linting and formatting are not optional
API: `ruff` (lint + format). UI: `eslint` + `prettier`. Both are enforced in
CI. PRs/commits with lint failures MUST NOT be considered complete tasks.
---
## 8. Scope Boundaries (v1)
The following are **explicitly out of scope** for v1 and MUST NOT be designed
for, abstracted toward, or mentioned in specs unless the constitution is
revised:
- Multi-user support
- Public sharing or embeds
- Collections or albums beyond tag-based grouping
- Image editing or transformation
- OR/NOT tag logic
- Mobile-native app
- Username/password auth (planned Phase 2)
- OIDC auth (planned Phase 3)
---
## 9. Governance
This constitution supersedes all other project practices, conventions, and
prior documentation. Where a conflict exists between a spec, plan, or task and
this constitution, the constitution prevails.
**Amendment procedure**: Any change to this document MUST be accompanied by:
1. An explicit rationale entry in the Revision Log (§10).
2. A version bump following semantic versioning (see below).
3. A review of all active specs and plans for compliance with the change.
**Versioning policy**:
- MAJOR — backward-incompatible governance change: a principle removed,
redefined in a way that invalidates prior work, or a hard scope boundary
moved inward.
- MINOR — new principle or section added, or materially expanded guidance that
requires new compliance work.
- PATCH — clarifications, wording improvements, typo fixes, non-semantic
refinements that do not change what is required.
**Compliance review**: Every new spec MUST include a "Constitution Check" gate
(verified by the plan command) before Phase 0 research begins, and again after
Phase 1 design is complete.
---
## 10. Revision Log
| Version | Date | Change |
|---|---|---|
| 1.0.0 | 2026-05-01 | Initial constitution |
| 1.1.0 | 2026-05-01 | asyncpg driver explicit; SHA-256 deduplication added to data model; deduplication removed from out-of-scope |
| 1.1.0 | 2026-05-02 | Adopted into Spec Kit memory; fixed duplicate §4.3 → §4.4; strengthened "should" language to MUST/MUST NOT; added §9 Governance |
---
**Version**: 1.1.0 | **Ratified**: 2026-05-01 | **Last Amended**: 2026-05-02

View File

@@ -0,0 +1,190 @@
#!/usr/bin/env bash
# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
# --json Output in JSON format
# --require-tasks Require tasks.md to exist (for implementation phase)
# --include-tasks Include tasks.md in AVAILABLE_DOCS list
# --paths-only Only output path variables (no validation)
# --help, -h Show help message
#
# OUTPUTS:
# JSON mode: {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
# Text mode: FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
# Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.
set -e
# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--require-tasks)
REQUIRE_TASKS=true
;;
--include-tasks)
INCLUDE_TASKS=true
;;
--paths-only)
PATHS_ONLY=true
;;
--help|-h)
cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]
Consolidated prerequisite checking for Spec-Driven Development workflow.
OPTIONS:
--json Output in JSON format
--require-tasks Require tasks.md to exist (for implementation phase)
--include-tasks Include tasks.md in AVAILABLE_DOCS list
--paths-only Only output path variables (no prerequisite validation)
--help, -h Show this help message
EXAMPLES:
# Check task prerequisites (plan.md required)
./check-prerequisites.sh --json
# Check implementation prerequisites (plan.md + tasks.md required)
./check-prerequisites.sh --json --require-tasks --include-tasks
# Get feature paths only (no validation)
./check-prerequisites.sh --paths-only
EOF
exit 0
;;
*)
echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
exit 1
;;
esac
done
# Source common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get feature paths and validate branch
_paths_output=$(get_feature_paths) || { echo "ERROR: Failed to resolve feature paths" >&2; exit 1; }
eval "$_paths_output"
unset _paths_output
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
# If paths-only mode, output paths and exit (support JSON + paths-only combined)
if $PATHS_ONLY; then
if $JSON_MODE; then
# Minimal JSON paths payload (no validation performed)
if has_jq; then
jq -cn \
--arg repo_root "$REPO_ROOT" \
--arg branch "$CURRENT_BRANCH" \
--arg feature_dir "$FEATURE_DIR" \
--arg feature_spec "$FEATURE_SPEC" \
--arg impl_plan "$IMPL_PLAN" \
--arg tasks "$TASKS" \
'{REPO_ROOT:$repo_root,BRANCH:$branch,FEATURE_DIR:$feature_dir,FEATURE_SPEC:$feature_spec,IMPL_PLAN:$impl_plan,TASKS:$tasks}'
else
printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
"$(json_escape "$REPO_ROOT")" "$(json_escape "$CURRENT_BRANCH")" "$(json_escape "$FEATURE_DIR")" "$(json_escape "$FEATURE_SPEC")" "$(json_escape "$IMPL_PLAN")" "$(json_escape "$TASKS")"
fi
else
echo "REPO_ROOT: $REPO_ROOT"
echo "BRANCH: $CURRENT_BRANCH"
echo "FEATURE_DIR: $FEATURE_DIR"
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "TASKS: $TASKS"
fi
exit 0
fi
# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
echo "Run /speckit.specify first to create the feature structure." >&2
exit 1
fi
if [[ ! -f "$IMPL_PLAN" ]]; then
echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.plan first to create the implementation plan." >&2
exit 1
fi
# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
echo "Run /speckit.tasks first to create the task list." >&2
exit 1
fi
# Build list of available documents
docs=()
# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")
# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
docs+=("contracts/")
fi
[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")
# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
docs+=("tasks.md")
fi
# Output results
if $JSON_MODE; then
# Build JSON array of documents
if has_jq; then
if [[ ${#docs[@]} -eq 0 ]]; then
json_docs="[]"
else
json_docs=$(printf '%s\n' "${docs[@]}" | jq -R . | jq -s .)
fi
jq -cn \
--arg feature_dir "$FEATURE_DIR" \
--argjson docs "$json_docs" \
'{FEATURE_DIR:$feature_dir,AVAILABLE_DOCS:$docs}'
else
if [[ ${#docs[@]} -eq 0 ]]; then
json_docs="[]"
else
json_docs=$(for d in "${docs[@]}"; do printf '"%s",' "$(json_escape "$d")"; done)
json_docs="[${json_docs%,}]"
fi
printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$(json_escape "$FEATURE_DIR")" "$json_docs"
fi
else
# Text output
echo "FEATURE_DIR:$FEATURE_DIR"
echo "AVAILABLE_DOCS:"
# Show status of each potential document
check_file "$RESEARCH" "research.md"
check_file "$DATA_MODEL" "data-model.md"
check_dir "$CONTRACTS_DIR" "contracts/"
check_file "$QUICKSTART" "quickstart.md"
if $INCLUDE_TASKS; then
check_file "$TASKS" "tasks.md"
fi
fi

645
.specify/scripts/bash/common.sh Executable file
View File

@@ -0,0 +1,645 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts
# Find repository root by searching upward for .specify directory
# This is the primary marker for spec-kit projects
find_specify_root() {
local dir="${1:-$(pwd)}"
# Normalize to absolute path to prevent infinite loop with relative paths
# Use -- to handle paths starting with - (e.g., -P, -L)
dir="$(cd -- "$dir" 2>/dev/null && pwd)" || return 1
local prev_dir=""
while true; do
if [ -d "$dir/.specify" ]; then
echo "$dir"
return 0
fi
# Stop if we've reached filesystem root or dirname stops changing
if [ "$dir" = "/" ] || [ "$dir" = "$prev_dir" ]; then
break
fi
prev_dir="$dir"
dir="$(dirname "$dir")"
done
return 1
}
# Get repository root, prioritizing .specify directory over git
# This prevents using a parent git repo when spec-kit is initialized in a subdirectory
get_repo_root() {
# First, look for .specify directory (spec-kit's own marker)
local specify_root
if specify_root=$(find_specify_root); then
echo "$specify_root"
return
fi
# Fallback to git if no .specify found
if git rev-parse --show-toplevel >/dev/null 2>&1; then
git rev-parse --show-toplevel
return
fi
# Final fallback to script location for non-git repos
local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
(cd "$script_dir/../../.." && pwd)
}
# Get current branch, with fallback for non-git repositories
get_current_branch() {
# First check if SPECIFY_FEATURE environment variable is set
if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
echo "$SPECIFY_FEATURE"
return
fi
# Then check git if available at the spec-kit root (not parent)
local repo_root=$(get_repo_root)
if has_git; then
git -C "$repo_root" rev-parse --abbrev-ref HEAD
return
fi
# For non-git repos, try to find the latest feature directory
local specs_dir="$repo_root/specs"
if [[ -d "$specs_dir" ]]; then
local latest_feature=""
local highest=0
local latest_timestamp=""
for dir in "$specs_dir"/*; do
if [[ -d "$dir" ]]; then
local dirname=$(basename "$dir")
if [[ "$dirname" =~ ^([0-9]{8}-[0-9]{6})- ]]; then
# Timestamp-based branch: compare lexicographically
local ts="${BASH_REMATCH[1]}"
if [[ "$ts" > "$latest_timestamp" ]]; then
latest_timestamp="$ts"
latest_feature=$dirname
fi
elif [[ "$dirname" =~ ^([0-9]{3,})- ]]; then
local number=${BASH_REMATCH[1]}
number=$((10#$number))
if [[ "$number" -gt "$highest" ]]; then
highest=$number
# Only update if no timestamp branch found yet
if [[ -z "$latest_timestamp" ]]; then
latest_feature=$dirname
fi
fi
fi
fi
done
if [[ -n "$latest_feature" ]]; then
echo "$latest_feature"
return
fi
fi
echo "main" # Final fallback
}
# Check if we have git available at the spec-kit root level
# Returns true only if git is installed and the repo root is inside a git work tree
# Handles both regular repos (.git directory) and worktrees/submodules (.git file)
has_git() {
# First check if git command is available (before calling get_repo_root which may use git)
command -v git >/dev/null 2>&1 || return 1
local repo_root=$(get_repo_root)
# Check if .git exists (directory or file for worktrees/submodules)
[ -e "$repo_root/.git" ] || return 1
# Verify it's actually a valid git work tree
git -C "$repo_root" rev-parse --is-inside-work-tree >/dev/null 2>&1
}
# Strip a single optional path segment (e.g. gitflow "feat/004-name" -> "004-name").
# Only when the full name is exactly two slash-free segments; otherwise returns the raw name.
spec_kit_effective_branch_name() {
local raw="$1"
if [[ "$raw" =~ ^([^/]+)/([^/]+)$ ]]; then
printf '%s\n' "${BASH_REMATCH[2]}"
else
printf '%s\n' "$raw"
fi
}
check_feature_branch() {
local raw="$1"
local has_git_repo="$2"
# For non-git repos, we can't enforce branch naming but still provide output
if [[ "$has_git_repo" != "true" ]]; then
echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
return 0
fi
local branch
branch=$(spec_kit_effective_branch_name "$raw")
# Accept sequential prefix (3+ digits) but exclude malformed timestamps
# Malformed: a 7-digit date with a slug (e.g. "2026031-143022-name"), or a 7-or-8-digit date + 6-digit time with no trailing slug (e.g. "2026031-143022" or "20260319-143022")
local is_sequential=false
if [[ "$branch" =~ ^[0-9]{3,}- ]] && [[ ! "$branch" =~ ^[0-9]{7}-[0-9]{6}- ]] && [[ ! "$branch" =~ ^[0-9]{7,8}-[0-9]{6}$ ]]; then
is_sequential=true
fi
if [[ "$is_sequential" != "true" ]] && [[ ! "$branch" =~ ^[0-9]{8}-[0-9]{6}- ]]; then
echo "ERROR: Not on a feature branch. Current branch: $raw" >&2
echo "Feature branches should be named like: 001-feature-name, 1234-feature-name, or 20260319-143022-feature-name" >&2
return 1
fi
return 0
}
# Safely read .specify/feature.json's "feature_directory" value.
# Prints the raw value (possibly relative) to stdout, or empty string if the file
# is missing, unparseable, or does not contain the key. Always returns 0 so callers
# under `set -e` cannot be aborted by parser failure.
# Parser order mirrors the historical get_feature_paths behavior: jq -> python3 -> grep/sed.
read_feature_json_feature_directory() {
local repo_root="$1"
local fj="$repo_root/.specify/feature.json"
[[ -f "$fj" ]] || { printf '%s' ''; return 0; }
local _fd=''
if command -v jq >/dev/null 2>&1; then
if ! _fd=$(jq -r '.feature_directory // empty' "$fj" 2>/dev/null); then
_fd=''
fi
elif command -v python3 >/dev/null 2>&1; then
# Use Python so pretty-printed/multi-line JSON still parses correctly.
if ! _fd=$(python3 -c "import json,sys; d=json.load(open(sys.argv[1])); v=d.get('feature_directory'); print(v if v else '')" "$fj" 2>/dev/null); then
_fd=''
fi
else
# Last-resort single-line grep/sed fallback. The `|| true` guards against
# grep returning 1 (no match) aborting under `set -e` / `pipefail`.
_fd=$( { grep -E '"feature_directory"[[:space:]]*:' "$fj" 2>/dev/null || true; } \
| head -n 1 \
| sed -E 's/^[^:]*:[[:space:]]*"([^"]*)".*$/\1/' )
fi
printf '%s' "$_fd"
return 0
}
# Returns 0 when .specify/feature.json lists feature_directory that exists as a directory
# and matches the resolved active FEATURE_DIR (so /speckit.plan can skip git branch pattern checks).
# Delegates parsing to read_feature_json_feature_directory, which is safe under `set -e`.
feature_json_matches_feature_dir() {
local repo_root="$1"
local active_feature_dir="$2"
local _fd
_fd=$(read_feature_json_feature_directory "$repo_root")
[[ -n "$_fd" ]] || return 1
[[ "$_fd" != /* ]] && _fd="$repo_root/$_fd"
[[ -d "$_fd" ]] || return 1
local norm_json norm_active
norm_json="$(cd -- "$_fd" 2>/dev/null && pwd -P)" || return 1
norm_active="$(cd -- "$active_feature_dir" 2>/dev/null && pwd -P)" || return 1
[[ "$norm_json" == "$norm_active" ]]
}
# Find feature directory by numeric prefix instead of exact branch match
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
find_feature_dir_by_prefix() {
local repo_root="$1"
local branch_name
branch_name=$(spec_kit_effective_branch_name "$2")
local specs_dir="$repo_root/specs"
# Extract prefix from branch (e.g., "004" from "004-whatever" or "20260319-143022" from timestamp branches)
local prefix=""
if [[ "$branch_name" =~ ^([0-9]{8}-[0-9]{6})- ]]; then
prefix="${BASH_REMATCH[1]}"
elif [[ "$branch_name" =~ ^([0-9]{3,})- ]]; then
prefix="${BASH_REMATCH[1]}"
else
# If branch doesn't have a recognized prefix, fall back to exact match
echo "$specs_dir/$branch_name"
return
fi
# Search for directories in specs/ that start with this prefix
local matches=()
if [[ -d "$specs_dir" ]]; then
for dir in "$specs_dir"/"$prefix"-*; do
if [[ -d "$dir" ]]; then
matches+=("$(basename "$dir")")
fi
done
fi
# Handle results
if [[ ${#matches[@]} -eq 0 ]]; then
# No match found - return the branch name path (will fail later with clear error)
echo "$specs_dir/$branch_name"
elif [[ ${#matches[@]} -eq 1 ]]; then
# Exactly one match - perfect!
echo "$specs_dir/${matches[0]}"
else
# Multiple matches - this shouldn't happen with proper naming convention
echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
echo "Please ensure only one spec directory exists per prefix." >&2
return 1
fi
}
get_feature_paths() {
local repo_root=$(get_repo_root)
local current_branch=$(get_current_branch)
local has_git_repo="false"
if has_git; then
has_git_repo="true"
fi
# Resolve feature directory. Priority:
# 1. SPECIFY_FEATURE_DIRECTORY env var (explicit override)
# 2. .specify/feature.json "feature_directory" key (persisted by /speckit.specify)
# 3. Branch-name-based prefix lookup (legacy fallback)
local feature_dir
if [[ -n "${SPECIFY_FEATURE_DIRECTORY:-}" ]]; then
feature_dir="$SPECIFY_FEATURE_DIRECTORY"
# Normalize relative paths to absolute under repo root
[[ "$feature_dir" != /* ]] && feature_dir="$repo_root/$feature_dir"
elif [[ -f "$repo_root/.specify/feature.json" ]]; then
# Shared, set -e-safe parser: jq -> python3 -> grep/sed. Returns empty on
# missing/unparseable/unset so we fall through to the branch-prefix lookup.
local _fd
_fd=$(read_feature_json_feature_directory "$repo_root")
if [[ -n "$_fd" ]]; then
feature_dir="$_fd"
# Normalize relative paths to absolute under repo root
[[ "$feature_dir" != /* ]] && feature_dir="$repo_root/$feature_dir"
elif ! feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch"); then
echo "ERROR: Failed to resolve feature directory" >&2
return 1
fi
elif ! feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch"); then
echo "ERROR: Failed to resolve feature directory" >&2
return 1
fi
# Use printf '%q' to safely quote values, preventing shell injection
# via crafted branch names or paths containing special characters
printf 'REPO_ROOT=%q\n' "$repo_root"
printf 'CURRENT_BRANCH=%q\n' "$current_branch"
printf 'HAS_GIT=%q\n' "$has_git_repo"
printf 'FEATURE_DIR=%q\n' "$feature_dir"
printf 'FEATURE_SPEC=%q\n' "$feature_dir/spec.md"
printf 'IMPL_PLAN=%q\n' "$feature_dir/plan.md"
printf 'TASKS=%q\n' "$feature_dir/tasks.md"
printf 'RESEARCH=%q\n' "$feature_dir/research.md"
printf 'DATA_MODEL=%q\n' "$feature_dir/data-model.md"
printf 'QUICKSTART=%q\n' "$feature_dir/quickstart.md"
printf 'CONTRACTS_DIR=%q\n' "$feature_dir/contracts"
}
# Check if jq is available for safe JSON construction
has_jq() {
command -v jq >/dev/null 2>&1
}
# Escape a string for safe embedding in a JSON value (fallback when jq is unavailable).
# Handles backslash, double-quote, and JSON-required control character escapes (RFC 8259).
json_escape() {
local s="$1"
s="${s//\\/\\\\}"
s="${s//\"/\\\"}"
s="${s//$'\n'/\\n}"
s="${s//$'\t'/\\t}"
s="${s//$'\r'/\\r}"
s="${s//$'\b'/\\b}"
s="${s//$'\f'/\\f}"
# Escape any remaining U+0001-U+001F control characters as \uXXXX.
# (U+0000/NUL cannot appear in bash strings and is excluded.)
# LC_ALL=C ensures ${#s} counts bytes and ${s:$i:1} yields single bytes,
# so multi-byte UTF-8 sequences (first byte >= 0xC0) pass through intact.
local LC_ALL=C
local i char code
for (( i=0; i<${#s}; i++ )); do
char="${s:$i:1}"
printf -v code '%d' "'$char" 2>/dev/null || code=256
if (( code >= 1 && code <= 31 )); then
printf '\\u%04x' "$code"
else
printf '%s' "$char"
fi
done
}
check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
# Resolve a template name to a file path using the priority stack:
# 1. .specify/templates/overrides/
# 2. .specify/presets/<preset-id>/templates/ (sorted by priority from .registry)
# 3. .specify/extensions/<ext-id>/templates/
# 4. .specify/templates/ (core)
resolve_template() {
local template_name="$1"
local repo_root="$2"
local base="$repo_root/.specify/templates"
# Priority 1: Project overrides
local override="$base/overrides/${template_name}.md"
[ -f "$override" ] && echo "$override" && return 0
# Priority 2: Installed presets (sorted by priority from .registry)
local presets_dir="$repo_root/.specify/presets"
if [ -d "$presets_dir" ]; then
local registry_file="$presets_dir/.registry"
if [ -f "$registry_file" ] && command -v python3 >/dev/null 2>&1; then
# Read preset IDs sorted by priority (lower number = higher precedence).
# The python3 call is wrapped in an if-condition so that set -e does not
# abort the function when python3 exits non-zero (e.g. invalid JSON).
local sorted_presets=""
if sorted_presets=$(SPECKIT_REGISTRY="$registry_file" python3 -c "
import json, sys, os
try:
with open(os.environ['SPECKIT_REGISTRY']) as f:
data = json.load(f)
presets = data.get('presets', {})
for pid, meta in sorted(presets.items(), key=lambda x: x[1].get('priority', 10) if isinstance(x[1], dict) else 10):
if isinstance(meta, dict) and meta.get('enabled', True) is not False:
print(pid)
except Exception:
sys.exit(1)
" 2>/dev/null); then
if [ -n "$sorted_presets" ]; then
# python3 succeeded and returned preset IDs — search in priority order
while IFS= read -r preset_id; do
local candidate="$presets_dir/$preset_id/templates/${template_name}.md"
[ -f "$candidate" ] && echo "$candidate" && return 0
done <<< "$sorted_presets"
fi
# python3 succeeded but registry has no presets — nothing to search
else
# python3 failed (missing, or registry parse error) — fall back to unordered directory scan
for preset in "$presets_dir"/*/; do
[ -d "$preset" ] || continue
local candidate="$preset/templates/${template_name}.md"
[ -f "$candidate" ] && echo "$candidate" && return 0
done
fi
else
# Fallback: alphabetical directory order (no python3 available)
for preset in "$presets_dir"/*/; do
[ -d "$preset" ] || continue
local candidate="$preset/templates/${template_name}.md"
[ -f "$candidate" ] && echo "$candidate" && return 0
done
fi
fi
# Priority 3: Extension-provided templates
local ext_dir="$repo_root/.specify/extensions"
if [ -d "$ext_dir" ]; then
for ext in "$ext_dir"/*/; do
[ -d "$ext" ] || continue
# Skip hidden directories (e.g. .backup, .cache)
case "$(basename "$ext")" in .*) continue;; esac
local candidate="$ext/templates/${template_name}.md"
[ -f "$candidate" ] && echo "$candidate" && return 0
done
fi
# Priority 4: Core templates
local core="$base/${template_name}.md"
[ -f "$core" ] && echo "$core" && return 0
# Template not found in any location.
# Return 1 so callers can distinguish "not found" from "found".
# Callers running under set -e should use: TEMPLATE=$(resolve_template ...) || true
return 1
}
# Resolve a template name to composed content using composition strategies.
# Reads strategy metadata from preset manifests and composes content
# from multiple layers using prepend, append, or wrap strategies.
#
# Usage: CONTENT=$(resolve_template_content "template-name" "$REPO_ROOT")
# Returns composed content string on stdout; exit code 1 if not found.
resolve_template_content() {
local template_name="$1"
local repo_root="$2"
local base="$repo_root/.specify/templates"
# Collect all layers (highest priority first)
local -a layer_paths=()
local -a layer_strategies=()
# Priority 1: Project overrides (always "replace")
local override="$base/overrides/${template_name}.md"
if [ -f "$override" ]; then
layer_paths+=("$override")
layer_strategies+=("replace")
fi
# Priority 2: Installed presets (sorted by priority from .registry)
local presets_dir="$repo_root/.specify/presets"
if [ -d "$presets_dir" ]; then
local registry_file="$presets_dir/.registry"
local sorted_presets=""
if [ -f "$registry_file" ] && command -v python3 >/dev/null 2>&1; then
if sorted_presets=$(SPECKIT_REGISTRY="$registry_file" python3 -c "
import json, sys, os
try:
with open(os.environ['SPECKIT_REGISTRY']) as f:
data = json.load(f)
presets = data.get('presets', {})
for pid, meta in sorted(presets.items(), key=lambda x: x[1].get('priority', 10) if isinstance(x[1], dict) else 10):
if isinstance(meta, dict) and meta.get('enabled', True) is not False:
print(pid)
except Exception:
sys.exit(1)
" 2>/dev/null); then
if [ -n "$sorted_presets" ]; then
local yaml_warned=false
while IFS= read -r preset_id; do
# Read strategy and file path from preset manifest
local strategy="replace"
local manifest_file=""
local manifest="$presets_dir/$preset_id/preset.yml"
if [ -f "$manifest" ] && command -v python3 >/dev/null 2>&1; then
# Requires PyYAML; falls back to replace/convention if unavailable
local result
local py_stderr
py_stderr=$(mktemp)
result=$(SPECKIT_MANIFEST="$manifest" SPECKIT_TMPL="$template_name" python3 -c "
import sys, os
try:
import yaml
except ImportError:
print('yaml_missing', file=sys.stderr)
print('replace\t')
sys.exit(0)
try:
with open(os.environ['SPECKIT_MANIFEST']) as f:
data = yaml.safe_load(f)
for t in data.get('provides', {}).get('templates', []):
if t.get('name') == os.environ['SPECKIT_TMPL'] and t.get('type', 'template') == 'template':
print(t.get('strategy', 'replace') + '\t' + t.get('file', ''))
sys.exit(0)
print('replace\t')
except Exception:
print('replace\t')
" 2>"$py_stderr")
local parse_status=$?
if [ $parse_status -eq 0 ] && [ -n "$result" ]; then
IFS=$'\t' read -r strategy manifest_file <<< "$result"
strategy=$(printf '%s' "$strategy" | tr '[:upper:]' '[:lower:]')
fi
if [ "$yaml_warned" = false ] && grep -q 'yaml_missing' "$py_stderr" 2>/dev/null; then
echo "Warning: PyYAML not available; composition strategies may be ignored" >&2
yaml_warned=true
fi
rm -f "$py_stderr"
fi
# Try manifest file path first, then convention path
local candidate=""
if [ -n "$manifest_file" ]; then
# Reject absolute paths and parent traversal
case "$manifest_file" in
/*|..|../*|*/..|*/../*) manifest_file="" ;;
esac
fi
if [ -n "$manifest_file" ]; then
local mf="$presets_dir/$preset_id/$manifest_file"
[ -f "$mf" ] && candidate="$mf"
fi
if [ -z "$candidate" ]; then
local cf="$presets_dir/$preset_id/templates/${template_name}.md"
[ -f "$cf" ] && candidate="$cf"
fi
if [ -n "$candidate" ]; then
layer_paths+=("$candidate")
layer_strategies+=("$strategy")
fi
done <<< "$sorted_presets"
fi
else
# python3 failed — fall back to unordered directory scan (replace only)
for preset in "$presets_dir"/*/; do
[ -d "$preset" ] || continue
local candidate="$preset/templates/${template_name}.md"
if [ -f "$candidate" ]; then
layer_paths+=("$candidate")
layer_strategies+=("replace")
fi
done
fi
else
# No python3 or registry — fall back to unordered directory scan (replace only)
for preset in "$presets_dir"/*/; do
[ -d "$preset" ] || continue
local candidate="$preset/templates/${template_name}.md"
if [ -f "$candidate" ]; then
layer_paths+=("$candidate")
layer_strategies+=("replace")
fi
done
fi
fi
# Priority 3: Extension-provided templates (always "replace")
local ext_dir="$repo_root/.specify/extensions"
if [ -d "$ext_dir" ]; then
for ext in "$ext_dir"/*/; do
[ -d "$ext" ] || continue
case "$(basename "$ext")" in .*) continue;; esac
local candidate="$ext/templates/${template_name}.md"
if [ -f "$candidate" ]; then
layer_paths+=("$candidate")
layer_strategies+=("replace")
fi
done
fi
# Priority 4: Core templates (always "replace")
local core="$base/${template_name}.md"
if [ -f "$core" ]; then
layer_paths+=("$core")
layer_strategies+=("replace")
fi
local count=${#layer_paths[@]}
[ "$count" -eq 0 ] && return 1
# Check if any layer uses a non-replace strategy
local has_composition=false
for s in "${layer_strategies[@]}"; do
[ "$s" != "replace" ] && has_composition=true && break
done
# If the top (highest-priority) layer is replace, it wins entirely —
# lower layers are irrelevant regardless of their strategies.
if [ "${layer_strategies[0]}" = "replace" ]; then
cat "${layer_paths[0]}"
return 0
fi
if [ "$has_composition" = false ]; then
cat "${layer_paths[0]}"
return 0
fi
# Find the effective base: scan from highest priority (index 0) downward
# to find the nearest replace layer. Only compose layers above that base.
local base_idx=-1
local i
for (( i=0; i<count; i++ )); do
if [ "${layer_strategies[$i]}" = "replace" ]; then
base_idx=$i
break
fi
done
if [ $base_idx -lt 0 ]; then
return 1 # no base layer found
fi
# Read the base content; compose layers above the base (higher priority)
local content
content=$(cat "${layer_paths[$base_idx]}"; printf x)
content="${content%x}"
for (( i=base_idx-1; i>=0; i-- )); do
local path="${layer_paths[$i]}"
local strat="${layer_strategies[$i]}"
local layer_content
# Preserve trailing newlines
layer_content=$(cat "$path"; printf x)
layer_content="${layer_content%x}"
case "$strat" in
replace) content="$layer_content" ;;
prepend) content="$(printf '%s\n\n%s' "$layer_content" "$content")" ;;
append) content="$(printf '%s\n\n%s' "$content" "$layer_content")" ;;
wrap)
case "$layer_content" in
*'{CORE_TEMPLATE}'*) ;;
*) echo "Error: wrap strategy missing {CORE_TEMPLATE} placeholder" >&2; return 1 ;;
esac
while [[ "$layer_content" == *'{CORE_TEMPLATE}'* ]]; do
local before="${layer_content%%\{CORE_TEMPLATE\}*}"
local after="${layer_content#*\{CORE_TEMPLATE\}}"
layer_content="${before}${content}${after}"
done
content="$layer_content"
;;
*) echo "Error: unknown strategy '$strat'" >&2; return 1 ;;
esac
done
printf '%s' "$content"
return 0
}
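The prepend/append/wrap composition above is easiest to see in isolation. Below is a hypothetical standalone sketch of the wrap substitution that resolve_template_content performs, using the same parameter-expansion idiom on in-memory strings instead of template files (compose_wrap is an illustrative name, not a function in this script):

```shell
# Sketch: substitute every {CORE_TEMPLATE} placeholder in a wrapper layer
# with the composed content, mirroring the loop in resolve_template_content.
compose_wrap() {
    local wrapper="$1" core="$2"
    while [[ "$wrapper" == *'{CORE_TEMPLATE}'* ]]; do
        local before="${wrapper%%\{CORE_TEMPLATE\}*}"  # text before first placeholder
        local after="${wrapper#*\{CORE_TEMPLATE\}}"    # text after first placeholder
        wrapper="${before}${core}${after}"
    done
    printf '%s\n' "$wrapper"
}
compose_wrap 'PREAMBLE {CORE_TEMPLATE} EPILOGUE' 'core body'
```

Note that, as in the script, a core string that itself contains the literal placeholder would keep the loop running, so wrap layers should introduce the placeholder but the composed content should not.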

View File

@@ -0,0 +1,413 @@
#!/usr/bin/env bash
set -e
JSON_MODE=false
DRY_RUN=false
ALLOW_EXISTING=false
SHORT_NAME=""
BRANCH_NUMBER=""
USE_TIMESTAMP=false
ARGS=()
i=1
while [ $i -le $# ]; do
arg="${!i}"
case "$arg" in
--json)
JSON_MODE=true
;;
--dry-run)
DRY_RUN=true
;;
--allow-existing-branch)
ALLOW_EXISTING=true
;;
--short-name)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
# Check if the next argument is another option (starts with --)
if [[ "$next_arg" == --* ]]; then
echo 'Error: --short-name requires a value' >&2
exit 1
fi
SHORT_NAME="$next_arg"
;;
--number)
if [ $((i + 1)) -gt $# ]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
i=$((i + 1))
next_arg="${!i}"
if [[ "$next_arg" == --* ]]; then
echo 'Error: --number requires a value' >&2
exit 1
fi
if ! [[ "$next_arg" =~ ^[0-9]+$ ]]; then
echo "Error: --number must be a non-negative integer (got '$next_arg')" >&2
exit 1
fi
BRANCH_NUMBER="$next_arg"
;;
--timestamp)
USE_TIMESTAMP=true
;;
--help|-h)
echo "Usage: $0 [--json] [--dry-run] [--allow-existing-branch] [--short-name <name>] [--number N] [--timestamp] <feature_description>"
echo ""
echo "Options:"
echo " --json Output in JSON format"
echo " --dry-run Compute branch name and paths without creating branches, directories, or files"
echo " --allow-existing-branch Switch to branch if it already exists instead of failing"
echo " --short-name <name> Provide a custom short name (2-4 words) for the branch"
echo " --number N Specify branch number manually (overrides auto-detection)"
echo " --timestamp Use timestamp prefix (YYYYMMDD-HHMMSS) instead of sequential numbering"
echo " --help, -h Show this help message"
echo ""
echo "Examples:"
echo " $0 'Add user authentication system' --short-name 'user-auth'"
echo " $0 'Implement OAuth2 integration for API' --number 5"
echo " $0 --timestamp --short-name 'user-auth' 'Add user authentication'"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
i=$((i + 1))
done
FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
echo "Usage: $0 [--json] [--dry-run] [--allow-existing-branch] [--short-name <name>] [--number N] [--timestamp] <feature_description>" >&2
exit 1
fi
# Trim whitespace and validate description is not empty (e.g., user passed only whitespace)
FEATURE_DESCRIPTION=$(echo "$FEATURE_DESCRIPTION" | sed -E 's/^[[:space:]]+|[[:space:]]+$//g')
if [ -z "$FEATURE_DESCRIPTION" ]; then
echo "Error: Feature description cannot be empty or contain only whitespace" >&2
exit 1
fi
# Function to get highest number from specs directory
get_highest_from_specs() {
local specs_dir="$1"
local highest=0
if [ -d "$specs_dir" ]; then
for dir in "$specs_dir"/*; do
[ -d "$dir" ] || continue
dirname=$(basename "$dir")
# Match sequential prefixes (>=3 digits), but skip timestamp dirs.
if echo "$dirname" | grep -Eq '^[0-9]{3,}-' && ! echo "$dirname" | grep -Eq '^[0-9]{8}-[0-9]{6}-'; then
number=$(echo "$dirname" | grep -Eo '^[0-9]+')
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
fi
done
fi
echo "$highest"
}
# Function to get highest number from git branches
get_highest_from_branches() {
git branch -a 2>/dev/null | sed 's/^[+* ]*//; s|^remotes/[^/]*/||' | _extract_highest_number
}
# Extract the highest sequential feature number from a list of ref names (one per line).
# Shared by get_highest_from_branches and get_highest_from_remote_refs.
_extract_highest_number() {
local highest=0
while IFS= read -r name; do
[ -z "$name" ] && continue
if echo "$name" | grep -Eq '^[0-9]{3,}-' && ! echo "$name" | grep -Eq '^[0-9]{8}-[0-9]{6}-'; then
number=$(echo "$name" | grep -Eo '^[0-9]+' || echo "0")
number=$((10#$number))
if [ "$number" -gt "$highest" ]; then
highest=$number
fi
fi
done
echo "$highest"
}
# Function to get highest number from remote branches without fetching (side-effect-free)
get_highest_from_remote_refs() {
local highest=0
for remote in $(git remote 2>/dev/null); do
local remote_highest
remote_highest=$(GIT_TERMINAL_PROMPT=0 git ls-remote --heads "$remote" 2>/dev/null | sed 's|.*refs/heads/||' | _extract_highest_number)
if [ "$remote_highest" -gt "$highest" ]; then
highest=$remote_highest
fi
done
echo "$highest"
}
# Function to check existing branches (local and remote) and return next available number.
# When skip_fetch is true, queries remotes via ls-remote (read-only) instead of fetching.
check_existing_branches() {
local specs_dir="$1"
local skip_fetch="${2:-false}"
if [ "$skip_fetch" = true ]; then
# Side-effect-free: query remotes via ls-remote
local highest_remote=$(get_highest_from_remote_refs)
local highest_branch=$(get_highest_from_branches)
if [ "$highest_remote" -gt "$highest_branch" ]; then
highest_branch=$highest_remote
fi
else
# Fetch all remotes to get latest branch info (suppress errors if no remotes)
git fetch --all --prune >/dev/null 2>&1 || true
local highest_branch=$(get_highest_from_branches)
fi
# Get highest number from ALL specs (not just matching short name)
local highest_spec=$(get_highest_from_specs "$specs_dir")
# Take the maximum of both
local max_num=$highest_branch
if [ "$highest_spec" -gt "$max_num" ]; then
max_num=$highest_spec
fi
# Return next number
echo $((max_num + 1))
}
# Function to clean and format a branch name
clean_branch_name() {
local name="$1"
echo "$name" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}
# Resolve repository root using common.sh functions which prioritize .specify over git
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
REPO_ROOT=$(get_repo_root)
# Check if git is available at this repo root (not a parent)
if has_git; then
HAS_GIT=true
else
HAS_GIT=false
fi
cd "$REPO_ROOT"
SPECS_DIR="$REPO_ROOT/specs"
if [ "$DRY_RUN" != true ]; then
mkdir -p "$SPECS_DIR"
fi
# Function to generate branch name with stop word filtering and length filtering
generate_branch_name() {
local description="$1"
# Common stop words to filter out
local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"
# Convert to lowercase and split into words
local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')
# Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
local meaningful_words=()
for word in $clean_name; do
# Skip empty words
[ -z "$word" ] && continue
# Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
if ! echo "$word" | grep -qiE "$stop_words"; then
if [ ${#word} -ge 3 ]; then
meaningful_words+=("$word")
elif echo "$description" | grep -q "\b${word^^}\b"; then
# Keep short words if they appear as uppercase in original (likely acronyms)
meaningful_words+=("$word")
fi
fi
done
# If we have meaningful words, use first 3-4 of them
if [ ${#meaningful_words[@]} -gt 0 ]; then
local max_words=3
if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi
local result=""
local count=0
for word in "${meaningful_words[@]}"; do
if [ $count -ge $max_words ]; then break; fi
if [ -n "$result" ]; then result="$result-"; fi
result="$result$word"
count=$((count + 1))
done
echo "$result"
else
# Fallback to original logic if no meaningful words found
local cleaned=$(clean_branch_name "$description")
echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
fi
}
# Generate branch name
if [ -n "$SHORT_NAME" ]; then
# Use provided short name, just clean it up
BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
else
# Generate from description with smart filtering
BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi
# Warn if --number and --timestamp are both specified
if [ "$USE_TIMESTAMP" = true ] && [ -n "$BRANCH_NUMBER" ]; then
>&2 echo "[specify] Warning: --number is ignored when --timestamp is used"
BRANCH_NUMBER=""
fi
# Determine branch prefix
if [ "$USE_TIMESTAMP" = true ]; then
FEATURE_NUM=$(date +%Y%m%d-%H%M%S)
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
else
# Determine branch number
if [ -z "$BRANCH_NUMBER" ]; then
if [ "$DRY_RUN" = true ] && [ "$HAS_GIT" = true ]; then
# Dry-run: query remotes via ls-remote (side-effect-free, no fetch)
BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR" true)
elif [ "$DRY_RUN" = true ]; then
# Dry-run without git: local spec dirs only
HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
BRANCH_NUMBER=$((HIGHEST + 1))
elif [ "$HAS_GIT" = true ]; then
# Check existing branches on remotes
BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
else
# Fall back to local directory check
HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
BRANCH_NUMBER=$((HIGHEST + 1))
fi
fi
# Force base-10 interpretation so a leading zero isn't parsed as octal (e.g., "010" must yield 10, not 8)
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"
fi
# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
# Calculate how much we need to trim from suffix
# Account for prefix length: timestamp (15) + hyphen (1) = 16, or sequential (3) + hyphen (1) = 4
PREFIX_LENGTH=$(( ${#FEATURE_NUM} + 1 ))
MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - PREFIX_LENGTH))
# Truncate suffix to fit (hard character cut)
TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-"$MAX_SUFFIX_LENGTH")
# Remove trailing hyphen if truncation created one
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
>&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
fi
FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ "$DRY_RUN" != true ]; then
if [ "$HAS_GIT" = true ]; then
branch_create_error=""
if ! branch_create_error=$(git checkout -q -b "$BRANCH_NAME" 2>&1); then
current_branch="$(git rev-parse --abbrev-ref HEAD 2>/dev/null || true)"
# Check if branch already exists
if git branch --list "$BRANCH_NAME" | grep -q .; then
if [ "$ALLOW_EXISTING" = true ]; then
# If we're already on the branch, continue without another checkout.
if [ "$current_branch" = "$BRANCH_NAME" ]; then
:
# Otherwise switch to the existing branch instead of failing.
elif ! switch_branch_error=$(git checkout -q "$BRANCH_NAME" 2>&1); then
>&2 echo "Error: Failed to switch to existing branch '$BRANCH_NAME'. Please resolve any local changes or conflicts and try again."
if [ -n "$switch_branch_error" ]; then
>&2 printf '%s\n' "$switch_branch_error"
fi
exit 1
fi
elif [ "$USE_TIMESTAMP" = true ]; then
>&2 echo "Error: Branch '$BRANCH_NAME' already exists. Rerun to get a new timestamp or use a different --short-name."
exit 1
else
>&2 echo "Error: Branch '$BRANCH_NAME' already exists. Please use a different feature name or specify a different number with --number."
exit 1
fi
else
>&2 echo "Error: Failed to create git branch '$BRANCH_NAME'."
if [ -n "$branch_create_error" ]; then
>&2 printf '%s\n' "$branch_create_error"
else
>&2 echo "Please check your git configuration and try again."
fi
exit 1
fi
fi
else
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi
mkdir -p "$FEATURE_DIR"
if [ ! -f "$SPEC_FILE" ]; then
TEMPLATE=$(resolve_template "spec-template" "$REPO_ROOT") || true
if [ -n "$TEMPLATE" ] && [ -f "$TEMPLATE" ]; then
cp "$TEMPLATE" "$SPEC_FILE"
else
echo "Warning: Spec template not found; created empty spec file" >&2
touch "$SPEC_FILE"
fi
fi
# Inform the user how to persist the feature variable in their own shell
printf '# To persist: export SPECIFY_FEATURE=%q\n' "$BRANCH_NAME" >&2
fi
if $JSON_MODE; then
if command -v jq >/dev/null 2>&1; then
if [ "$DRY_RUN" = true ]; then
jq -cn \
--arg branch_name "$BRANCH_NAME" \
--arg spec_file "$SPEC_FILE" \
--arg feature_num "$FEATURE_NUM" \
'{BRANCH_NAME:$branch_name,SPEC_FILE:$spec_file,FEATURE_NUM:$feature_num,DRY_RUN:true}'
else
jq -cn \
--arg branch_name "$BRANCH_NAME" \
--arg spec_file "$SPEC_FILE" \
--arg feature_num "$FEATURE_NUM" \
'{BRANCH_NAME:$branch_name,SPEC_FILE:$spec_file,FEATURE_NUM:$feature_num}'
fi
else
if [ "$DRY_RUN" = true ]; then
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s","DRY_RUN":true}\n' "$(json_escape "$BRANCH_NAME")" "$(json_escape "$SPEC_FILE")" "$(json_escape "$FEATURE_NUM")"
else
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$(json_escape "$BRANCH_NAME")" "$(json_escape "$SPEC_FILE")" "$(json_escape "$FEATURE_NUM")"
fi
fi
else
echo "BRANCH_NAME: $BRANCH_NAME"
echo "SPEC_FILE: $SPEC_FILE"
echo "FEATURE_NUM: $FEATURE_NUM"
if [ "$DRY_RUN" != true ]; then
printf '# To persist in your shell: export SPECIFY_FEATURE=%q\n' "$BRANCH_NAME"
fi
fi
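As a quick reference, the branch-slug behavior implemented by clean_branch_name above can be sketched as one standalone function (the name slugify_sketch is illustrative only, not part of the script):

```shell
# Sketch: lowercase, collapse runs of non-alphanumerics to single hyphens,
# and trim leading/trailing hyphens — the shape a branch suffix takes above.
slugify_sketch() {
    printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]' | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}
slugify_sketch 'Add OAuth2 Integration!'
```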

View File

@@ -0,0 +1,75 @@
#!/usr/bin/env bash
set -e
# Parse command line arguments
JSON_MODE=false
ARGS=()
for arg in "$@"; do
case "$arg" in
--json)
JSON_MODE=true
;;
--help|-h)
echo "Usage: $0 [--json]"
echo " --json Output results in JSON format"
echo " --help Show this help message"
exit 0
;;
*)
ARGS+=("$arg")
;;
esac
done
# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"
# Get all paths and variables from common functions
_paths_output=$(get_feature_paths) || { echo "ERROR: Failed to resolve feature paths" >&2; exit 1; }
eval "$_paths_output"
unset _paths_output
# If feature.json pins an existing feature directory, branch naming is not required.
if ! feature_json_matches_feature_dir "$REPO_ROOT" "$FEATURE_DIR"; then
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1
fi
# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"
# Copy plan template if it exists
TEMPLATE=$(resolve_template "plan-template" "$REPO_ROOT") || true
if [[ -n "$TEMPLATE" ]] && [[ -f "$TEMPLATE" ]]; then
cp "$TEMPLATE" "$IMPL_PLAN"
echo "Copied plan template to $IMPL_PLAN"
else
echo "Warning: Plan template not found"
# Create a basic plan file if template doesn't exist
touch "$IMPL_PLAN"
fi
# Output results
if $JSON_MODE; then
if has_jq; then
jq -cn \
--arg feature_spec "$FEATURE_SPEC" \
--arg impl_plan "$IMPL_PLAN" \
--arg specs_dir "$FEATURE_DIR" \
--arg branch "$CURRENT_BRANCH" \
--arg has_git "$HAS_GIT" \
'{FEATURE_SPEC:$feature_spec,IMPL_PLAN:$impl_plan,SPECS_DIR:$specs_dir,BRANCH:$branch,HAS_GIT:$has_git}'
else
printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
"$(json_escape "$FEATURE_SPEC")" "$(json_escape "$IMPL_PLAN")" "$(json_escape "$FEATURE_DIR")" "$(json_escape "$CURRENT_BRANCH")" "$(json_escape "$HAS_GIT")"
fi
else
echo "FEATURE_SPEC: $FEATURE_SPEC"
echo "IMPL_PLAN: $IMPL_PLAN"
echo "SPECS_DIR: $FEATURE_DIR"
echo "BRANCH: $CURRENT_BRANCH"
echo "HAS_GIT: $HAS_GIT"
fi
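Both scripts above fall back to printf plus json_escape when jq is absent. Here is a minimal sketch of the common-case escapes that fallback relies on (the real json_escape in common.sh additionally handles \r, \b, \f and the remaining control characters):

```shell
# Sketch: escape backslash, double quote, newline and tab so a shell string
# can be embedded inside a hand-built JSON value.
json_escape_sketch() {
    local s="$1"
    s="${s//\\/\\\\}"      # backslash first, so later escapes aren't doubled
    s="${s//\"/\\\"}"
    s="${s//$'\n'/\\n}"
    s="${s//$'\t'/\\t}"
    printf '%s' "$s"
}
printf '{"BRANCH_NAME":"%s"}\n' "$(json_escape_sketch 'feat/"quoted"')"
```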

View File

@@ -0,0 +1,40 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]
**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]
**Note**: This checklist is generated by the `/speckit-checklist` command based on feature context and requirements.
<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.
The /speckit-checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md
DO NOT keep these sample items in the generated checklist file.
============================================================================
-->
## [Category 1]
- [ ] CHK001 First checklist item with clear action
- [ ] CHK002 Second checklist item
- [ ] CHK003 Third checklist item
## [Category 2]
- [ ] CHK004 Another category item
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category
## Notes
- Check items off as completed: `[x]`
- Add comments or findings inline
- Link to relevant resources or documentation
- Items are numbered sequentially for easy reference

View File

@@ -0,0 +1,50 @@
# [PROJECT_NAME] Constitution
<!-- Example: Spec Constitution, TaskFlow Constitution, etc. -->
## Core Principles
### [PRINCIPLE_1_NAME]
<!-- Example: I. Library-First -->
[PRINCIPLE_1_DESCRIPTION]
<!-- Example: Every feature starts as a standalone library; Libraries must be self-contained, independently testable, documented; Clear purpose required - no organizational-only libraries -->
### [PRINCIPLE_2_NAME]
<!-- Example: II. CLI Interface -->
[PRINCIPLE_2_DESCRIPTION]
<!-- Example: Every library exposes functionality via CLI; Text in/out protocol: stdin/args → stdout, errors → stderr; Support JSON + human-readable formats -->
### [PRINCIPLE_3_NAME]
<!-- Example: III. Test-First (NON-NEGOTIABLE) -->
[PRINCIPLE_3_DESCRIPTION]
<!-- Example: TDD mandatory: Tests written → User approved → Tests fail → Then implement; Red-Green-Refactor cycle strictly enforced -->
### [PRINCIPLE_4_NAME]
<!-- Example: IV. Integration Testing -->
[PRINCIPLE_4_DESCRIPTION]
<!-- Example: Focus areas requiring integration tests: New library contract tests, Contract changes, Inter-service communication, Shared schemas -->
### [PRINCIPLE_5_NAME]
<!-- Example: V. Observability, VI. Versioning & Breaking Changes, VII. Simplicity -->
[PRINCIPLE_5_DESCRIPTION]
<!-- Example: Text I/O ensures debuggability; Structured logging required; Or: MAJOR.MINOR.BUILD format; Or: Start simple, YAGNI principles -->
## [SECTION_2_NAME]
<!-- Example: Additional Constraints, Security Requirements, Performance Standards, etc. -->
[SECTION_2_CONTENT]
<!-- Example: Technology stack requirements, compliance standards, deployment policies, etc. -->
## [SECTION_3_NAME]
<!-- Example: Development Workflow, Review Process, Quality Gates, etc. -->
[SECTION_3_CONTENT]
<!-- Example: Code review requirements, testing gates, deployment approval process, etc. -->
## Governance
<!-- Example: Constitution supersedes all other practices; Amendments require documentation, approval, migration plan -->
[GOVERNANCE_RULES]
<!-- Example: All PRs/reviews must verify compliance; Complexity must be justified; Use [GUIDANCE_FILE] for runtime development guidance -->
**Version**: [CONSTITUTION_VERSION] | **Ratified**: [RATIFICATION_DATE] | **Last Amended**: [LAST_AMENDED_DATE]
<!-- Example: Version: 2.1.1 | Ratified: 2025-06-13 | Last Amended: 2025-07-16 -->

View File

@@ -0,0 +1,104 @@
# Implementation Plan: [FEATURE]
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
**Note**: This template is filled in by the `/speckit-plan` command. See `.specify/templates/plan-template.md` for the execution workflow.
## Summary
[Extract from feature spec: primary requirement + technical approach from research]
## Technical Context
<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure below is advisory, intended to guide iteration;
adapt it to fit the feature.
-->
**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [e.g., library/cli/web-service/mobile-app/compiler/desktop-app or NEEDS CLARIFICATION]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*
[Gates determined based on constitution file]
## Project Structure
### Documentation (this feature)
```text
specs/[###-feature]/
├── plan.md # This file (/speckit-plan command output)
├── research.md # Phase 0 output (/speckit-plan command)
├── data-model.md # Phase 1 output (/speckit-plan command)
├── quickstart.md # Phase 1 output (/speckit-plan command)
├── contracts/ # Phase 1 output (/speckit-plan command)
└── tasks.md # Phase 2 output (/speckit-tasks command - NOT created by /speckit-plan)
```
### Source Code (repository root)
<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
real paths (e.g., apps/admin, packages/something). The delivered plan must
not include Option labels.
-->
```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/
tests/
├── contract/
├── integration/
└── unit/
# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│ ├── models/
│ ├── services/
│ └── api/
└── tests/
frontend/
├── src/
│ ├── components/
│ ├── pages/
│ └── services/
└── tests/
# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]
ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```
**Structure Decision**: [Document the selected structure and reference the real
directories captured above]
## Complexity Tracking
> **Fill ONLY if Constitution Check has violations that must be justified**
| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |


@@ -0,0 +1,128 @@
# Feature Specification: [FEATURE NAME]
**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"
## User Scenarios & Testing *(mandatory)*
<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
you should still have a viable MVP (Minimum Viable Product) that delivers value.
Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
Think of each story as a standalone slice of functionality that can be:
- Developed independently
- Tested independently
- Deployed independently
- Demonstrated to users independently
-->
### User Story 1 - [Brief Title] (Priority: P1)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 2 - [Brief Title] (Priority: P2)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
### User Story 3 - [Brief Title] (Priority: P3)
[Describe this user journey in plain language]
**Why this priority**: [Explain the value and why it has this priority level]
**Independent Test**: [Describe how this can be tested independently]
**Acceptance Scenarios**:
1. **Given** [initial state], **When** [action], **Then** [expected outcome]
---
[Add more user stories as needed, each with an assigned priority]
### Edge Cases
<!--
ACTION REQUIRED: The items below are placeholders.
Replace them with the real edge cases for this feature.
-->
- What happens when [boundary condition]?
- How does system handle [error scenario]?
## Requirements *(mandatory)*
<!--
ACTION REQUIRED: The items below are placeholders.
Replace them with the real functional requirements for this feature.
-->
### Functional Requirements
- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]
*Example of marking unclear requirements:*
- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]
### Key Entities *(include if feature involves data)*
- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]
## Success Criteria *(mandatory)*
<!--
ACTION REQUIRED: Define measurable success criteria.
These must be technology-agnostic and measurable.
-->
### Measurable Outcomes
- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
## Assumptions
<!--
ACTION REQUIRED: The items below are placeholders.
Replace them with the actual assumptions: the reasonable defaults
chosen where the feature description did not specify certain details.
-->
- [Assumption about target users, e.g., "Users have stable internet connectivity"]
- [Assumption about scope boundaries, e.g., "Mobile support is out of scope for v1"]
- [Assumption about data/environment, e.g., "Existing authentication system will be reused"]
- [Dependency on existing system/service, e.g., "Requires access to the existing user profile API"]


@@ -0,0 +1,251 @@
---
description: "Task list template for feature implementation"
---
# Tasks: [FEATURE NAME]
**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
**Tests**: The examples below include test tasks. Per §5.1 of the constitution, TDD is non-negotiable — test tasks MUST appear before every implementation task. The test task labels below marked "OPTIONAL" refer to the *type* of test (E2E is best-effort per §5.2), not whether tests are written at all.
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
## Format: `[ID] [P?] [Story] Description`
- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions
## Path Conventions
- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume a single project; adjust based on the structure in plan.md
<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.
The /speckit-tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/
Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment
DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->
## Phase 1: Setup (Shared Infrastructure)
**Purpose**: Project initialization and basic structure
- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools
---
## Phase 2: Foundational (Blocking Prerequisites)
**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented
**⚠️ CRITICAL**: No user story work can begin until this phase is complete
Examples of foundational tasks (adjust based on your project):
- [ ] T004 Setup database schema and migrations framework
- [ ] T005 [P] Implement authentication/authorization framework
- [ ] T006 [P] Setup API routing and middleware structure
- [ ] T007 Create base models/entities that all stories depend on
- [ ] T008 Configure error handling and logging infrastructure
- [ ] T009 Setup environment configuration management
**Checkpoint**: Foundation ready - user story implementation can now begin in parallel
---
## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 1 (REQUIRED per §5.1 — TDD) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 1
- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations
**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently
---
## Phase 4: User Story 2 - [Title] (Priority: P2)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 2 (REQUIRED per §5.1 — TDD) ⚠️
- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 2
- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)
**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently
---
## Phase 5: User Story 3 - [Title] (Priority: P3)
**Goal**: [Brief description of what this story delivers]
**Independent Test**: [How to verify this story works on its own]
### Tests for User Story 3 (REQUIRED per §5.1 — TDD) ⚠️
- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py
### Implementation for User Story 3
- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py
**Checkpoint**: All user stories should now be independently functional
---
[Add more user story phases as needed, following the same pattern]
---
## Phase N: Polish & Cross-Cutting Concerns
**Purpose**: Improvements that affect multiple user stories
- [ ] TXXX [P] Documentation updates in docs/
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization across all stories
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- [ ] TXXX Security hardening
- [ ] TXXX Run quickstart.md validation
---
## Dependencies & Execution Order
### Phase Dependencies
- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
- User stories can then proceed in parallel (if staffed)
- Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
### User Story Dependencies
- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable
### Within Each User Story
- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority
### Parallel Opportunities
- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members
---
## Parallel Example: User Story 1
```bash
# Launch all tests for User Story 1 together (TDD — write before implementation):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"
# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```
---
## Implementation Strategy
### MVP First (User Story 1 Only)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready
### Incremental Delivery
1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories
### Parallel Team Strategy
With multiple developers:
1. Team completes Setup + Foundational together
2. Once Foundational is done:
- Developer A: User Story 1
- Developer B: User Story 2
- Developer C: User Story 3
3. Stories complete and integrate independently
---
## Notes
- [P] tasks = different files, no dependencies
- [Story] label maps task to specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence


@@ -0,0 +1,63 @@
schema_version: "1.0"
workflow:
id: "speckit"
name: "Full SDD Cycle"
version: "1.0.0"
author: "GitHub"
description: "Runs specify → plan → tasks → implement with review gates"
requires:
speckit_version: ">=0.7.2"
integrations:
any: ["copilot", "claude", "gemini"]
inputs:
spec:
type: string
required: true
prompt: "Describe what you want to build"
integration:
type: string
default: "copilot"
prompt: "Integration to use (e.g. claude, copilot, gemini)"
scope:
type: string
default: "full"
enum: ["full", "backend-only", "frontend-only"]
steps:
- id: specify
command: speckit.specify
integration: "{{ inputs.integration }}"
input:
args: "{{ inputs.spec }}"
- id: review-spec
type: gate
message: "Review the generated spec before planning."
options: [approve, reject]
on_reject: abort
- id: plan
command: speckit.plan
integration: "{{ inputs.integration }}"
input:
args: "{{ inputs.spec }}"
- id: review-plan
type: gate
message: "Review the plan before generating tasks."
options: [approve, reject]
on_reject: abort
- id: tasks
command: speckit.tasks
integration: "{{ inputs.integration }}"
input:
args: "{{ inputs.spec }}"
- id: implement
command: speckit.implement
integration: "{{ inputs.integration }}"
input:
args: "{{ inputs.spec }}"


@@ -0,0 +1,13 @@
{
"schema_version": "1.0",
"workflows": {
"speckit": {
"name": "Full SDD Cycle",
"version": "1.0.0",
"description": "Runs specify \u2192 plan \u2192 tasks \u2192 implement with review gates",
"source": "bundled",
"installed_at": "2026-05-02T15:15:14.550127+00:00",
"updated_at": "2026-05-02T15:15:14.550131+00:00"
}
}
}

CLAUDE.md Normal file

@@ -0,0 +1,5 @@
<!-- SPECKIT START -->
For additional context about technologies to be used, project structure,
shell commands, and other important information, read the current plan at
`specs/001-reaction-image-board/plan.md`.
<!-- SPECKIT END -->

README.md Normal file

@@ -0,0 +1,2 @@
# reactbin
Organize your reaction images.


@@ -0,0 +1,37 @@
# Specification Quality Checklist: Reaction Image Board v1
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: 2026-05-02
**Feature**: [../spec.md](../spec.md)
## Content Quality
- [x] No implementation details (languages, frameworks, APIs)
- [x] Focused on user value and business needs
- [x] Written for non-technical stakeholders
- [x] All mandatory sections completed
## Requirement Completeness
- [x] No [NEEDS CLARIFICATION] markers remain
- [x] Requirements are testable and unambiguous
- [x] Success criteria are measurable
- [x] Success criteria are technology-agnostic (no implementation details)
- [x] All acceptance scenarios are defined
- [x] Edge cases are identified
- [x] Scope is clearly bounded
- [x] Dependencies and assumptions identified
## Feature Readiness
- [x] All functional requirements have clear acceptance criteria
- [x] User scenarios cover primary flows
- [x] Feature meets measurable outcomes defined in Success Criteria
- [x] No implementation details leak into specification
## Notes
- All items pass. Spec is ready for `/speckit-plan`.
- The source technical spec (docs/SPEC.md) covers API contracts, storage
behaviour, and UI screen details — those details are available for
the planning phase but were intentionally kept out of this user-facing spec.


@@ -0,0 +1,216 @@
# API Contract: Reaction Image Board v1
**Base URL**: `/api/v1`
**Format**: JSON (application/json) for all request/response bodies except
file uploads (multipart/form-data)
**Auth**: None in v1 (NoOpAuthProvider)
**Error envelope** (all 4xx/5xx responses):
```json
{ "detail": "<human-readable message>", "code": "<machine-readable code>" }
```
---
## Image Resource
### Image object shape
```json
{
"id": "uuid",
"hash": "sha256-hex-string-64-chars",
"filename": "original.jpg",
"mime_type": "image/jpeg",
"size_bytes": 102400,
"width": 800,
"height": 600,
"storage_key": "sha256-hex-string-64-chars",
"created_at": "2026-05-01T12:00:00Z",
"tags": ["cat", "funny"],
"duplicate": false
}
```
`duplicate` is only present on POST responses; it is omitted on GET.
---
### POST /api/v1/images — Upload Image
**Request**: `Content-Type: multipart/form-data`
| Field | Required | Type | Notes |
|---|---|---|---|
| `file` | yes | file | The image binary |
| `tags` | no | string | Comma-separated, e.g. `"cat,funny,reaction"` |
**Processing**:
1. Validate MIME type → 422 `invalid_mime_type` if not accepted
2. Validate file size ≤ MAX_UPLOAD_BYTES → 422 `file_too_large` if exceeded
3. Compute SHA-256 hash of raw bytes
4. Query DB for existing record with same hash
5. If duplicate: return existing record, `duplicate: true`, HTTP 200
6. If new: write to storage, insert record, upsert tags, HTTP 201
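As a sanity check, the validation-and-dedup flow above can be sketched in plain Python. A dict stands in for the database and storage write, and `ingest` and its record shape are illustrative, not the real handler:

```python
import hashlib

# Values from the Constraints Summary below
ACCEPTED_MIME = {"image/jpeg", "image/png", "image/gif", "image/webp"}
MAX_UPLOAD_BYTES = 52_428_800

def ingest(raw: bytes, mime: str, db: dict) -> tuple[dict, int]:
    """Hypothetical upload handler; `db` maps sha256 hex digest -> stored record."""
    if mime not in ACCEPTED_MIME:
        return {"detail": "unsupported media type", "code": "invalid_mime_type"}, 422
    if len(raw) > MAX_UPLOAD_BYTES:
        return {"detail": "file exceeds limit", "code": "file_too_large"}, 422
    digest = hashlib.sha256(raw).hexdigest()
    if digest in db:
        # Duplicate: return the existing record, flagged, with HTTP 200
        return {**db[digest], "duplicate": True}, 200
    record = {"hash": digest, "size_bytes": len(raw)}
    db[digest] = record  # stands in for the storage write + DB insert
    return {**record, "duplicate": False}, 201
```

Note that the hash is computed only after the cheap MIME and size checks, so oversized or invalid uploads are rejected without touching storage.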
**Responses**:
| Status | Condition | Body |
|---|---|---|
| 201 | New image stored | Image object with `duplicate: false` |
| 200 | Duplicate detected | Image object with `duplicate: true` |
| 422 | Invalid MIME type | `{"detail":"...", "code":"invalid_mime_type"}` |
| 422 | File too large | `{"detail":"...", "code":"file_too_large"}` |
| 422 | Invalid tag | `{"detail":"...", "code":"invalid_tag"}` |
---
### GET /api/v1/images — List / Search Images
**Query parameters**:
| Param | Type | Default | Notes |
|---|---|---|---|
| `tags` | string | — | Comma-separated. AND logic — all tags must be present. |
| `limit` | integer | 50 | Max 100 |
| `offset` | integer | 0 | Pagination offset |
**Response** 200:
```json
{
"items": [ { ...image object... } ],
"total": 142,
"limit": 50,
"offset": 0
}
```
Each item includes the `tags` array. `duplicate` field omitted.
Results ordered by `created_at` descending.
---
### GET /api/v1/images/{id} — Get Single Image
**Path**: `{id}` is a UUID.
**Response** 200: Single image object (tags included, `duplicate` omitted).
| Status | Code | Condition |
|---|---|---|
| 404 | `image_not_found` | No image with that UUID |
---
### GET /api/v1/images/{id}/file — Serve Image File
**Path**: `{id}` is a UUID.
**Response** 302: Redirect to pre-signed S3 URL (1-hour expiry).
`Location` header contains the URL.
| Status | Code | Condition |
|---|---|---|
| 404 | `image_not_found` | No image with that UUID |
---
### PATCH /api/v1/images/{id}/tags — Update Tags
**Request**: `Content-Type: application/json`
```json
{ "tags": ["cat", "funny", "new-tag"] }
```
Replaces the full tag set. Empty array removes all tags from the image.
Tag records themselves are never deleted.
**Response** 200: Full image object with updated tags.
| Status | Code | Condition |
|---|---|---|
| 404 | `image_not_found` | No image with that UUID |
| 422 | `invalid_tag` | A tag fails validation |
---
### DELETE /api/v1/images/{id} — Delete Image
**Path**: `{id}` is a UUID.
**Processing**:
1. Look up image record
2. Delete all `image_tags` rows for this image
3. Delete the image record
4. Delete the S3 object at `storage_key`
**Response** 204: No body.
| Status | Code | Condition |
|---|---|---|
| 404 | `image_not_found` | No image with that UUID |
---
## Tag Resource
### Tag object shape
```json
{
"id": "uuid",
"name": "cat",
"image_count": 42
}
```
---
### GET /api/v1/tags — List Tags
**Query parameters**:
| Param | Type | Default | Notes |
|---|---|---|---|
| `q` | string | — | Prefix filter on tag name |
| `limit` | integer | 100 | Max 200 |
| `offset` | integer | 0 | Pagination offset |
**Response** 200:
```json
{
"items": [ { "id": "uuid", "name": "cat", "image_count": 42 } ],
"total": 7,
"limit": 100,
"offset": 0
}
```
Results ordered alphabetically by `name`.
---
## Health
### GET /api/v1/health
**Response** 200:
```json
{ "status": "ok" }
```
No auth or error cases.
---
## Constraints Summary
| Constraint | Value |
|---|---|
| Accepted MIME types | `image/jpeg`, `image/png`, `image/gif`, `image/webp` |
| Max upload size | 52 428 800 bytes (50 MiB); configurable via `MAX_UPLOAD_BYTES` env var |
| Tag name pattern | `^[a-z0-9_-]{1,64}$` (after normalisation) |
| Pre-signed URL expiry | 3 600 seconds (1 hour) |
| Max `limit` for images list | 100 |
| Max `limit` for tags list | 200 |
| Default `limit` for images list | 50 |
| Default `limit` for tags list | 100 |


@@ -0,0 +1,151 @@
# Data Model: Reaction Image Board v1
**Date**: 2026-05-02
---
## Entities
### Image
Represents a single uploaded image file.
| Field | Type | Constraints | Notes |
|---|---|---|---|
| `id` | UUID | PK, not null | Generated on insert |
| `hash` | VARCHAR(64) | UNIQUE, not null | SHA-256 hex digest of file bytes |
| `filename` | VARCHAR | not null | Original filename; display only |
| `mime_type` | VARCHAR(20) | not null | `image/jpeg`, `image/png`, `image/gif`, `image/webp` |
| `size_bytes` | BIGINT | not null, > 0 | File size |
| `width` | INTEGER | not null, > 0 | Pixel width |
| `height` | INTEGER | not null, > 0 | Pixel height |
| `storage_key` | VARCHAR(64) | not null | S3 object key; equals `hash` in v1 |
| `created_at` | TIMESTAMPTZ | not null, default now() | Set on insert; never updated |
**Indexes**:
- `images_hash_idx` UNIQUE on `hash` — supports fast duplicate detection
**Validation rules**:
- `mime_type` MUST be one of: `image/jpeg`, `image/png`, `image/gif`, `image/webp`
- `size_bytes` MUST be > 0 and ≤ `MAX_UPLOAD_BYTES` (default 52 428 800)
- `hash` MUST be a 64-character lowercase hex string (SHA-256)
---
### Tag
Represents a single normalised tag string.
| Field | Type | Constraints | Notes |
|---|---|---|---|
| `id` | UUID | PK, not null | Generated on insert |
| `name` | VARCHAR(64) | UNIQUE, not null | Normalised: lowercase, trimmed |
| `created_at` | TIMESTAMPTZ | not null, default now() | Set on insert |
**Indexes**:
- `tags_name_idx` UNIQUE on `name` — supports upsert by name
- `tags_name_prefix_idx` on `name` with `varchar_pattern_ops` — supports
prefix search (`LIKE 'prefix%'`)
**Validation rules**:
- `name` MUST match `^[a-z0-9_-]{1,64}$` (after normalisation)
- Normalisation applied before validation: lowercase + whitespace trim
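A minimal sketch of the normalise-then-validate rule (function and constant names are illustrative):

```python
import re

# Pattern from the validation rules above, applied after normalisation
TAG_NAME_RE = re.compile(r"^[a-z0-9_-]{1,64}$")

def normalise_tag(raw: str) -> str:
    """Lowercase and trim, then validate against the tag name pattern."""
    name = raw.strip().lower()
    if not TAG_NAME_RE.fullmatch(name):
        raise ValueError(f"invalid tag name: {raw!r}")
    return name
```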
**Lifecycle**:
- Tags are created implicitly on first use; no explicit creation endpoint
- Tag records are never deleted even when all image associations are removed
---
### ImageTag (join)
Many-to-many association between Image and Tag.
| Field | Type | Constraints | Notes |
|---|---|---|---|
| `image_id` | UUID | FK → images.id ON DELETE CASCADE | |
| `tag_id` | UUID | FK → tags.id ON DELETE RESTRICT | |
**Primary key**: composite `(image_id, tag_id)`
**Notes**:
- Deleting an image cascades to all its ImageTag rows
- Deleting a tag is RESTRICT (not permitted while image associations exist)
- In practice, tags are never deleted in v1 so RESTRICT is never triggered
---
## Relationships
```
Image ──< ImageTag >── Tag
(1) (M:M) (1)
```
- One Image has zero or more Tags (through ImageTag)
- One Tag is applied to zero or more Images (through ImageTag)
---
## State Transitions
### Image lifecycle
```
[upload received]
Validate MIME + size
Compute SHA-256
Existing hash? ──yes──► return existing record (duplicate: true)
no
Write to S3
Insert images row
Upsert tags + insert image_tag rows
Return new record (duplicate: false)
[user deletes image]
Delete image_tag rows (cascade)
Delete images row
Delete S3 object
[gone]
```
### Tag set update (PATCH)
```
[PATCH /api/v1/images/{id}/tags with new_tags=[...]]
Validate each tag name (post-normalisation)
Fetch current tag set for image
Compute removed = current \ new_tags
Compute added = new_tags \ current
Delete ImageTag rows for removed tags
Upsert Tag records for added tags
Insert ImageTag rows for added tags
Return full updated image record
```
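The added/removed computation above is plain set difference; a sketch:

```python
def tag_set_diff(current: set[str], new_tags: set[str]) -> tuple[set[str], set[str]]:
    """Associations to add and to remove; Tag records themselves are never deleted."""
    added = new_tags - current
    removed = current - new_tags
    return added, removed
```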
---
## Database Migration Strategy
- Alembic manages all schema changes
- Migration files are committed to `api/alembic/versions/`
- Schema is applied on API startup (`alembic upgrade head`)
- M0: initial empty migration (no tables)
- M1: `images` table
- M2: `tags` table + `image_tags` table + indexes


@@ -0,0 +1,354 @@
# Implementation Plan: Reaction Image Board v1
**Branch**: `001-reaction-image-board` | **Date**: 2026-05-02 | **Spec**: [spec.md](spec.md)
**Input**: Feature specification from `specs/001-reaction-image-board/spec.md`
## Summary
Build a self-hosted personal reaction image library: a FastAPI backend that
stores images in S3-compatible object storage (MinIO locally), persists
metadata and tags in PostgreSQL, and an Angular SPA that lets the user upload,
browse, filter, view, re-tag, and delete images. The project is split into
seven milestones (M0–M6), API-first, each leaving the system in a fully
working and tested state.
## Technical Context
**Language/Version**: Python 3.12+ (API); TypeScript strict mode (UI)
**Primary Dependencies**: FastAPI, SQLAlchemy 2.x async, asyncpg, Alembic,
boto3/aiobotocore (API); Angular latest stable (UI)
**Storage**: PostgreSQL (relational), S3-compatible object storage via MinIO
locally / AWS S3 in production
**Testing**: pytest + pytest-asyncio (API unit + integration); Angular Karma/Jest
+ TestBed (UI unit); E2E best-effort (constitution §5.2)
**Target Platform**: Linux server (containerised); modern evergreen desktop browsers
**Project Type**: Web application — separate API service + SPA
**Performance Goals**: First page of library < 2 s for 1 000 images; upload
visible in library < 10 s on local network
**Constraints**: Single user, localhost-only in Phase 1; 50 MiB upload cap;
`docker compose up` is the only required start command
**Scale/Scope**: Personal use; v1 scope bounded per constitution §8
## Constitution Check
*GATE: Must pass before Phase 0 research. Re-checked after Phase 1 design below.*
| Principle | Check | Status |
|---|---|---|
| §2.1 Separation of concerns | API service knows nothing about Angular; UI knows nothing about DB/S3 | ✅ |
| §2.2 Dependency direction | UI → API → Storage/DB; no upward imports | ✅ |
| §2.3 Storage abstraction | `StorageBackend` interface created before first S3 call (M1); no bucket names outside storage module | ✅ |
| §2.4 Auth abstraction | `AuthProvider` + `NoOpAuthProvider` wired in M1; all request-identity resolution goes through it | ✅ |
| §2.5 DB abstraction | All DB access through `ImageRepository` and `TagRepository`; no query logic outside repositories | ✅ |
| §2.6 No speculative abstraction | Only constitutionally-sanctioned interfaces (StorageBackend, AuthProvider, repositories) created | ✅ |
| §3.1 API versioning | All routes prefixed `/api/v1/` | ✅ |
| §3.3 Error shape | `{"detail": "...", "code": "..."}` enforced; integration tests verify envelope | ✅ |
| §3.4 Pagination | Every list endpoint has `limit`/`offset` from day one (M2) | ✅ |
| §4.1 Tag normalisation | Lowercase + trim applied before persistence (M2) | ✅ |
| §4.3 Dedup by hash | SHA-256 computed before storage write; existing hash returns existing record (M1) | ✅ |
| §4.4 Tag AND logic | List endpoint filters with AND semantics (M2) | ✅ |
| §5.1 TDD non-negotiable | **Tests are written FIRST and must fail before implementation in every milestone** | ✅ |
| §5.2 Test pyramid | Unit + integration tests required; E2E best-effort | ✅ |
| §5.3 Test colocation | API tests in `api/tests/`; Angular tests colocated with components | ✅ |
| §5.4 CI gate | All tests + linters must pass before a milestone is "done" | ✅ |
| §7.1 One-command start | M0 done-criterion is `docker compose up` starts all services | ✅ |
| §7.2 Env configuration | All config via env vars; `.env.example` committed in M0 | ✅ |
| §7.3 Linting | `ruff` (API) + `eslint`/`prettier` (UI) configured in M0, enforced | ✅ |
| §8 Scope boundaries | Bulk upload, OR/NOT tags, auth, image editing, multi-user — all deferred | ✅ |
**Post-design re-check**: See bottom of this file — updated after Phase 1 artifacts.
## Project Structure
### Documentation (this feature)
```text
specs/001-reaction-image-board/
├── plan.md # This file
├── research.md # Phase 0 decisions
├── data-model.md # Entity definitions and DB schema
├── quickstart.md # Local dev setup walkthrough
├── contracts/ # OpenAPI-aligned endpoint contracts
│ └── api.md
└── tasks.md # Phase 2 output (/speckit-tasks — NOT created here)
```
### Source Code (repository root)
```text
api/
├── app/
│ ├── main.py # FastAPI app factory, lifespan, middleware
│ ├── config.py # Settings from env vars (pydantic-settings)
│ ├── dependencies.py # FastAPI dependency injection (db session, auth)
│ ├── repositories/
│ │ ├── image_repo.py # ImageRepository
│ │ └── tag_repo.py # TagRepository
│ ├── storage/
│ │ ├── backend.py # StorageBackend interface
│ │ └── s3_backend.py # S3StorageBackend implementation
│ ├── auth/
│ │ ├── provider.py # AuthProvider interface
│ │ └── noop.py # NoOpAuthProvider
│ ├── routers/
│ │ ├── images.py # /api/v1/images routes
│ │ └── tags.py # /api/v1/tags route
│ └── models.py # SQLAlchemy ORM models
├── alembic/
│ ├── alembic.ini
│ └── versions/ # Migration files
├── tests/
│ ├── unit/
│ └── integration/
├── pyproject.toml # deps, ruff config
└── Dockerfile
ui/
├── src/
│ ├── app/
│ │ ├── app.component.*
│ │ ├── app.routes.ts
│ │ ├── services/
│ │ │ ├── image.service.ts (+ .spec.ts)
│ │ │ └── tag.service.ts (+ .spec.ts)
│ │ ├── library/
│ │ │ └── library.component.* (+ .spec.ts)
│ │ ├── upload/
│ │ │ └── upload.component.* (+ .spec.ts)
│ │ ├── detail/
│ │ │ └── detail.component.* (+ .spec.ts)
│ │ └── not-found/
│ │ └── not-found.component.*
│ └── environments/
├── proxy.conf.json # /api/* → API in dev
├── angular.json
├── package.json
└── Dockerfile
docker-compose.yml
.env.example
```
**Structure Decision**: Web application (Option 2 variant). `api/` contains
the FastAPI project; `ui/` contains the Angular project. Both have independent
dependency manifests and Dockerfiles per constitution §1.
## Milestones
> **TDD ORDER IS MANDATORY** (constitution §5.1): For every task below, write
> the failing test(s) first, confirm they fail, then implement until they pass.
### M0 — Project Skeleton
**Goal**: A running system with no features. Both services start, connect to
their dependencies, and respond to a health check.
**Deliverables** (implement only after failing tests exist):
- Monorepo layout per Project Structure above
- `docker-compose.yml`: PostgreSQL, MinIO, API, UI dev server
- `.env.example` with all variables from spec §5
- API: FastAPI app starts, connects to PostgreSQL (SQLAlchemy async + asyncpg)
and MinIO (aiobotocore)
- API: `GET /api/v1/health` returns `{"status": "ok"}`
- API: Alembic configured, initial empty migration applied on startup
- API: `ruff` configured and passing in CI
- UI: Angular app scaffolded, routing in place, `HttpClient` with `API_BASE_URL`
- UI: proxy config routes `/api/*` to API in local dev
- UI: `eslint` + `prettier` configured and passing
**Tests (write first)**:
- API unit: settings loaded from env vars without error
- API integration: `GET /api/v1/health` → 200 `{"status": "ok"}`
- UI: Angular default smoke test passes
**Done when**: `docker compose up` starts all four services; health endpoint
returns 200; both linters and all tests pass.
---
### M1 — Image Upload (API only)
**Goal**: API accepts an image, validates it, deduplicates by hash, stores in
MinIO, and persists the record in PostgreSQL. Tags field accepted but ignored.
**Deliverables** (implement only after failing tests exist):
- Alembic migration: `images` table
- `StorageBackend` interface (`app/storage/backend.py`)
- `S3StorageBackend` implementation (`app/storage/s3_backend.py`)
- `AuthProvider` interface + `NoOpAuthProvider` (`app/auth/`)
- `ImageRepository`: `create`, `get_by_id`, `get_by_hash`
- `POST /api/v1/images`: MIME validation, size validation, SHA-256 hash,
duplicate check, storage write, record insert, correct response shapes
(201 new / 200 duplicate with `duplicate` field)
**Tests (write first)**:
- Unit: SHA-256 hash computation on known bytes
- Unit: MIME type validator rejects PDF, MP4, etc.
- Unit: file size validator rejects files over MAX_UPLOAD_BYTES
- Integration: valid JPEG upload → 201, record in DB, object in MinIO
- Integration: same image uploaded twice → 200, `duplicate: true`, no second
MinIO object written
- Integration: invalid MIME type → 422 `invalid_mime_type` (error envelope
must include `code` field)
- Integration: oversized file → 422 `file_too_large`
**Done when**: All tests pass; linter passes; duplicate detection works
end-to-end via curl.
---
### M2 — Tags (API only)
**Goal**: Tags are fully functional on the API. Upload persists tags; search
filters by tags; tags are editable; images can be deleted.
**Deliverables** (implement only after failing tests exist):
- Alembic migration: `tags` table + `image_tags` join table
- `TagRepository`: `upsert_by_name`, `get_by_image_id`, `replace_tags_on_image`
- Tag normalisation + validation (pattern `^[a-z0-9_-]{1,64}$` after lowercase+trim)
- `POST /api/v1/images`: `tags` field now processed and persisted
- `GET /api/v1/images` with `tags`, `limit`, `offset` (AND-filter)
- `GET /api/v1/images/{id}` (returns image + tags)
- `PATCH /api/v1/images/{id}/tags` (full tag replacement)
- `GET /api/v1/tags` with `q`, `limit`, `offset` (prefix search + image count)
- `DELETE /api/v1/images/{id}` (record + ImageTag rows + storage object)
**Tests (write first)**:
- Unit: normalisation: uppercase → lowercase, whitespace stripped
- Unit: validation: rejects names > 64 chars, rejects invalid chars
- Unit: AND-filter query produces correct WHERE clause
- Integration: upload with tags → tags persisted and returned
- Integration: duplicate upload → existing record returned, tags unchanged
- Integration: `GET /api/v1/images?tags=cat,funny` → only images with both tags
- Integration: same query excludes images with only one of the two tags
- Integration: PATCH replaces tags; old tags unlinked; new tags upserted
- Integration: `GET /api/v1/tags?q=ca` → tags prefixed "ca" with correct counts
- Integration: DELETE → 204; subsequent GET returns 404 `image_not_found`
- Integration: DELETE verifies storage object removed from MinIO
**Done when**: All prior tests still pass; full API functional via curl.
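The AND semantics come down to a join-and-count query: join images to tags,
keep only rows whose tag name is in the filter set, and require the
distinct-tag count to equal the number of requested tags. A sketch of the
SQL shape only (the real code builds this with SQLAlchemy and bound
parameters; the inline literals here are purely for readability):

```python
def and_filter_sql(tags: list[str]) -> str:
    """Illustrative SQL for AND tag filtering. Table/column names follow
    the data model (images, tags, image_tags); do not interpolate user
    input like this in production code."""
    tag_list = ", ".join(f"'{t}'" for t in tags)
    return (
        "SELECT i.id FROM images i "
        "JOIN image_tags it ON it.image_id = i.id "
        "JOIN tags t ON t.id = it.tag_id "
        f"WHERE t.name IN ({tag_list}) "
        "GROUP BY i.id "
        f"HAVING COUNT(DISTINCT t.id) = {len(tags)}"
    )
```

The `HAVING COUNT(DISTINCT ...)` clause is what excludes images matching
only some of the requested tags, which is exactly the behaviour the two
filter integration tests above pin down.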
---
### M3 — Image Serving (API only)
**Goal**: Images can be viewed in a browser via pre-signed S3 URLs.
**Deliverables** (implement only after failing tests exist):
- `GET /api/v1/images/{id}/file` → generates 1-hour pre-signed URL, returns
302 redirect with `Location` header
**Tests (write first)**:
- Integration: `GET /api/v1/images/{id}/file` → 302 with `Location` header
pointing to MinIO URL
- Integration: unknown ID → 404 `image_not_found`
**Done when**: Pasting `http://localhost:8000/api/v1/images/{id}/file` into a
browser loads the image directly from MinIO.
---
### M4 — UI: Library View
**Goal**: Angular SPA displays uploaded images in a responsive grid with live
tag filtering. Upload button navigates but does not submit yet.
**Deliverables** (implement only after failing tests exist):
- `ImageService` wrapping `GET /api/v1/images` and `GET /api/v1/images/{id}/file`
- `TagService` wrapping `GET /api/v1/tags`
- `LibraryComponent` (route `/`): responsive grid, thumbnails via `/file`
redirect, tag chips per image, debounced tag filter bar, "Load more"
pagination (offset/limit), upload button → `/upload`, click → `/images/:id`
**Tests (write first)**:
- Unit: `ImageService` constructs correct query params from filter state
- Unit: `TagService` calls correct endpoint with `q` param
- Unit: `LibraryComponent` renders image grid from mocked service
- Unit: `LibraryComponent` filter change triggers new API call with updated
`tags` param
**Done when**: Library view displays real images uploaded via curl in M2; tag
filtering and pagination work.
---
### M5 — UI: Upload View
**Goal**: Users can upload images and add tags through the browser.
**Deliverables** (implement only after failing tests exist):
- `UploadComponent` (route `/upload`): drag-and-drop zone, click-to-browse,
tag chip input (comma/space separated), POST to API, duplicate toast →
navigate to detail, success toast → navigate to detail, error → inline
message, no navigation
**Tests (write first)**:
- Unit: tag chip input lowercases and splits on comma/space
- Unit: on `duplicate: true` response → toast shown, navigation triggered
- Unit: on `duplicate: false` response → success toast, navigation triggered
- Unit: on error response → error displayed, no navigation
**Done when**: Full upload flow works in browser including duplicate feedback.
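The chip-input behaviour under test is a pure parsing rule: split on commas
and/or whitespace, lowercase, drop empties, and de-duplicate while keeping
order. A Python sketch of that rule (the actual implementation lives in the
Angular `UploadComponent` in TypeScript):

```python
import re


def parse_tag_input(raw: str) -> list[str]:
    """Split raw chip input into normalised, order-preserving unique tags."""
    parts = re.split(r"[,\s]+", raw.strip().lower())
    seen: dict[str, None] = {}
    for part in parts:
        if part:
            seen.setdefault(part)
    return list(seen)
```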
---
### M6 — UI: Detail View
**Goal**: Users can view full-size images, edit tags, and delete images.
Completes the v1 feature set.
**Deliverables** (implement only after failing tests exist):
- `DetailComponent` (route `/images/:id`): full-size image via `/file`
redirect, editable tag chips (× to remove, input to add), save via PATCH
on blur/Enter, delete button with confirmation dialog → DELETE → navigate
to Library, back button → Library (preserving tag filter state)
- `NotFoundComponent`: shown for all unrecognised routes
**Tests (write first)**:
- Unit: removing tag chip calls PATCH with updated list (removed tag absent)
- Unit: adding tag + Enter calls PATCH with new tag included
- Unit: delete confirmation → DELETE called → navigation to Library
- Unit: cancel on confirmation → no DELETE call, stays on detail page
- Unit: back button navigates to Library
**Done when**: Full CRUD loop works in browser: upload → view → re-tag →
delete. All tests across all milestones pass. Both linters pass. `docker
compose up` starts a fully working application.
---
## Dependency Graph
```
M0 (skeleton)
└── M1 (upload API)
└── M2 (tags API)
└── M3 (serving API)
└── M4 (library UI)
└── M5 (upload UI)
└── M6 (detail UI)
```
Milestones are strictly serial: this is a solo project, and the API must be
stable before UI work begins.
## What This Plan Defers
Per constitution §8 and spec §6 — not forgotten, explicitly deferred:
- **Auth phases**: `AuthProvider` interface wired in M1 (no-op). Phase 2
(username/password) and Phase 3 (OIDC) each get their own plan.
- **SQLite refactor**: Repository layer is the only thing that changes in a
future plan.
- **Bulk upload**: Out of scope per spec §6.
- **Tag rename/merge on re-upload**: Spec §2.1 explicitly defers; future spec
revision adds behaviour to M2.
## Post-Design Constitution Re-check
*Performed after Phase 1 artifacts (data-model.md, contracts/api.md,
quickstart.md) were generated.*
No new violations found. All constitutionally required abstractions appear in
the data model and contracts. The `StorageBackend` interface is referenced
correctly in the API contract without exposing bucket names. Test colocation
confirmed in project structure above.
# Quickstart: Reaction Image Board v1
**Goal**: Get a fully functional local development environment running in
under 5 minutes from a clean checkout.
---
## Prerequisites
- Docker and Docker Compose v2 installed
- Git
No other tools (Python, Node, etc.) are required on the host — everything
runs inside containers.
---
## Steps
### 1. Clone and configure
```bash
git clone <repo-url> reactbin
cd reactbin
cp .env.example .env
```
The `.env.example` file contains safe defaults for local development.
You do not need to edit `.env` to get started.
### 2. Start all services
```bash
docker compose up
```
This starts four services:
- **postgres** — PostgreSQL on port 5432
- **minio** — S3-compatible object storage on port 9000 (console on 9001)
- **api** — FastAPI application on port 8000
- **ui** — Angular dev server on port 4200
On first run, Docker builds the API and UI images (a few minutes). Subsequent
starts are fast.
### 3. Verify the API
```bash
curl http://localhost:8000/api/v1/health
# → {"status":"ok"}
```
### 4. Open the UI
Navigate to [http://localhost:4200](http://localhost:4200) in your browser.
The empty library is displayed.
---
## Upload a test image (API)
```bash
curl -X POST http://localhost:8000/api/v1/images \
-F "file=@/path/to/image.jpg" \
-F "tags=test,sample"
```
Expected response: HTTP 201 with the image JSON including its UUID.
---
## Upload a test image (UI)
1. Click the **Upload** button in the library view
2. Drag and drop an image (JPEG, PNG, GIF, or WebP, max 50 MB)
3. Type some tags separated by commas
4. Click **Upload**
5. You are redirected to the image's detail page
---
## MinIO Console
The MinIO management console is accessible at
[http://localhost:9001](http://localhost:9001).
Default credentials (from `.env.example`):
- User: `minioadmin`
- Password: `minioadmin`
You can inspect uploaded objects in the bucket here.
---
## Running tests
**API tests** (inside the container):
```bash
docker compose run --rm api pytest
```
**UI tests**:
```bash
docker compose run --rm ui ng test --watch=false
```
**Linters**:
```bash
docker compose run --rm api ruff check .
docker compose run --rm ui npm run lint
```
---
## Stopping and resetting
```bash
# Stop all services (preserves data)
docker compose down
# Stop and remove all data (PostgreSQL + MinIO volumes)
docker compose down -v
```
---
## Environment variables
All configuration comes from `.env`. The table below shows every variable
and its default:
| Variable | Default | Notes |
|---|---|---|
| `DATABASE_URL` | `postgresql+asyncpg://reactbin:reactbin@postgres:5432/reactbin` | Async DSN for SQLAlchemy |
| `S3_ENDPOINT_URL` | `http://minio:9000` | MinIO endpoint inside Docker network |
| `S3_BUCKET_NAME` | `reactbin` | Created automatically on first API start |
| `S3_ACCESS_KEY_ID` | `minioadmin` | MinIO root user |
| `S3_SECRET_ACCESS_KEY` | `minioadmin` | MinIO root password |
| `S3_REGION` | `us-east-1` | Required even for MinIO |
| `API_BASE_URL` | `http://localhost:8000` | Injected into Angular at build time |
| `MAX_UPLOAD_BYTES` | `52428800` | 50 MiB |
For production, replace the MinIO credentials and `DATABASE_URL` with real
values. Never commit a `.env` file containing real credentials.
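The table above maps one-to-one onto a settings object read at API startup. A
minimal stdlib sketch of that mapping (the real `app/config.py` uses
pydantic-settings, which adds type coercion and validation on top of this):

```python
import os
from dataclasses import dataclass, field


def _env(name: str, default: str) -> str:
    return os.environ.get(name, default)


@dataclass
class Settings:
    # Defaults mirror .env.example for local development.
    database_url: str = field(default_factory=lambda: _env(
        "DATABASE_URL",
        "postgresql+asyncpg://reactbin:reactbin@postgres:5432/reactbin"))
    s3_endpoint_url: str = field(default_factory=lambda: _env(
        "S3_ENDPOINT_URL", "http://minio:9000"))
    s3_bucket_name: str = field(default_factory=lambda: _env(
        "S3_BUCKET_NAME", "reactbin"))
    max_upload_bytes: int = field(default_factory=lambda: int(
        _env("MAX_UPLOAD_BYTES", "52428800")))
```

Because every field has a default, a clean checkout works with no edits to
`.env`, which is exactly the quickstart promise above.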
# Research: Reaction Image Board v1
**Phase**: 0 — Pre-design research
**Date**: 2026-05-02
The technology stack is fully specified in the constitution's tech stack table
(§6), so this document records rationale and key decisions rather than
exploratory research.
---
## Decision 1: Project layout
**Decision**: Two top-level directories (`api/`, `ui/`) in a single
repository, each with its own dependency manifest and Dockerfile.
**Rationale**: Constitution §1 mandates separate deployable artifacts with
separate dependency manifests. A monorepo with two independently buildable
services satisfies this without requiring a separate repo per service.
Docker Compose ties them together for local development (§7.1).
**Alternatives considered**:
- Separate repositories — rejected because it adds checkout/sync overhead
for a solo project with no reason to deploy them independently yet.
- Single `src/` with both projects — rejected because it would entangle
dependency manifests, violating §2.1.
---
## Decision 2: SQLAlchemy async + asyncpg driver
**Decision**: SQLAlchemy 2.x in async mode with the asyncpg driver.
**Rationale**: Constitution §6 specifies this explicitly. FastAPI's async
request handlers benefit directly; asyncpg is the fastest PostgreSQL driver
for Python async code. The async session is injected per-request via
FastAPI's dependency system.
**Alternatives considered**: Synchronous SQLAlchemy + psycopg2 — rejected
per constitution mandate.
---
## Decision 3: Storage object key
**Decision**: SHA-256 hex digest of file bytes, no prefix, no extension
(e.g. `a3f1...`).
**Rationale**: Spec §3 specifies this explicitly. Content-addressed keys
are stable and human-inspectable in the bucket. The same key is stored in
the `storage_key` column on the image record, so reconstruction without DB
state is possible.
**Alternatives considered**: UUID key — rejected; UUID keys lose the
content-addressing property and the deduplication shortcut.
---
## Decision 4: Duplicate detection strategy
**Decision**: Hash the file bytes in the API process; query PostgreSQL for an
existing record with that hash before any storage write.
**Rationale**: Constitution §4.3 mandates deduplication by SHA-256. Performing
the hash and DB check before writing to S3 avoids wasting storage bandwidth
and keeps the duplicate-response path (HTTP 200 + `duplicate: true`) cheap.
**Alternatives considered**:
- Hash at upload, always write to S3 then check — rejected because it wastes
S3 bandwidth for duplicate uploads.
- Client-side hash — rejected because the constitution places no trust in the
client and the UI knows nothing about storage implementation (§2.3).
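The decision reduces to: hash first, look up, and only write on a miss. A
compact sketch of that control flow, with an in-memory set standing in for
the `get_by_hash` query (names here are illustrative, not the actual API):

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Content fingerprint used both as dedup key and storage key."""
    return hashlib.sha256(data).hexdigest()


def upload_decision(data: bytes, known_hashes: set[str]) -> tuple[str, bool]:
    """Return (digest, is_duplicate). The real API replaces the set lookup
    with ImageRepository.get_by_hash and only writes to S3 on a miss."""
    digest = sha256_hex(data)
    return digest, digest in known_hashes
```

On a duplicate the handler short-circuits to the HTTP 200 + `duplicate: true`
response before any storage traffic occurs.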
---
## Decision 5: Tag validation pattern
**Decision**: `^[a-z0-9_-]{1,64}$` applied after normalisation (lowercase + trim).
**Rationale**: Spec §2.8 specifies this pattern exactly. Normalisation happens
before validation so that user input like `" Cat "` becomes `"cat"` and passes.
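Normalise-then-validate can be sketched directly from the pattern in the
spec:

```python
import re

# Pattern from spec §2.8, applied after normalisation.
TAG_PATTERN = re.compile(r"^[a-z0-9_-]{1,64}$")


def normalise_tag(raw: str) -> str:
    """Lowercase + trim, per constitution §4.1."""
    return raw.strip().lower()


def is_valid_tag(raw: str) -> bool:
    """True iff the normalised tag matches the allowed pattern."""
    return TAG_PATTERN.fullmatch(normalise_tag(raw)) is not None
```

This ordering is what makes `" Cat "` acceptable: it normalises to `"cat"`
before the pattern is checked.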
---
## Decision 6: Pre-signed URL strategy
**Decision**: Generate a 1-hour pre-signed URL on each request to
`GET /api/v1/images/{id}/file` and return a 302 redirect.
**Rationale**: Spec §2.4 specifies this approach. The client (browser or
Angular app) loads the image directly from S3/MinIO, avoiding API proxying
of potentially large files. The 1-hour expiry limits link lifetime without
inconveniencing anyone at personal-use scale, since each view requests a
fresh URL.
**Alternatives considered**:
- Proxy image bytes through the API — rejected because it wastes API memory
and bandwidth for potentially large files.
- Permanent public S3 URLs — rejected because it exposes the bucket structure
and requires the bucket to be public.
---
## Decision 7: Test database and storage in integration tests
**Decision**: Integration tests run against a real PostgreSQL database and a
real MinIO instance started by Docker Compose (or a dedicated test compose
file). No mocking of the database or storage layer.
**Rationale**: Constitution §5.2 mandates "API routes tested against a real
(test) database and a real (test) S3-compatible bucket (e.g. MinIO in Docker)".
Mocking at this layer has historically caused test/prod divergence.
**Alternatives considered**:
- SQLite in-memory for integration tests — rejected; constitution mandates
PostgreSQL specifically for the repository layer.
- Mocked S3 (moto) — rejected per constitution §5.2.
---
## Decision 8: Angular tag filter debounce
**Decision**: Debounce tag filter bar API calls in `LibraryComponent` using
RxJS `debounceTime` (e.g. 300 ms).
**Rationale**: Spec §4.1 says "updates in real time (debounced API call)".
300 ms is a standard UX debounce that prevents excessive API calls while
still feeling responsive.
---
## All NEEDS CLARIFICATION resolved
No unresolved clarifications remain. The constitution and spec together fully
specify the technology choices and behaviour for v1.
# Feature Specification: Reaction Image Board v1
**Feature Branch**: `001-reaction-image-board`
**Created**: 2026-05-02
**Status**: Draft
**Input**: User description: "Read docs/SPEC.md, from which we will create the official spec"
## User Scenarios & Testing *(mandatory)*
### User Story 1 — Upload an Image (Priority: P1)
A user drags and drops (or browses to select) a single image file from their
device, optionally adds tags, and submits the upload. The image appears in
the library immediately. If the same image was already uploaded before, the
system recognises the duplicate and shows the existing entry without creating
a second copy.
**Why this priority**: This is the core data-entry action. Without it no
library content exists to browse, search, or manage.
**Independent Test**: Upload a JPEG, verify it appears in the library grid,
then re-upload the same file and verify only one copy exists with an
"Already in your library" notification.
**Acceptance Scenarios**:
1. **Given** a supported image file (JPEG, PNG, GIF, or WebP) under 50 MB,
**When** the user submits the upload form,
**Then** the image is stored, appears in the library, and the user is
taken to the image's detail page.
2. **Given** an image already in the library,
**When** the user uploads the same file again,
**Then** no duplicate is stored, the user sees an "Already in your library"
notification, and is navigated to the existing image's detail page.
3. **Given** an unsupported file type (e.g. PDF, MP4),
**When** the user attempts to upload it,
**Then** an inline error is shown and the user remains on the upload page.
4. **Given** a file larger than 50 MB,
**When** the user attempts to upload it,
**Then** an inline error is shown before any storage is attempted.
5. **Given** a tag name longer than 64 characters or containing characters
outside lowercase letters, digits, hyphens, and underscores,
**When** the user submits the upload,
**Then** an inline validation error identifies the problematic tag and the
upload does not proceed.
---
### User Story 2 — Browse and Filter the Library (Priority: P1)
A user opens the application and sees a grid of all their uploaded images as
thumbnails. They can filter the grid by selecting one or more tags; the grid
updates to show only images that carry **all** selected tags. Filters can be
added and removed interactively without reloading the page.
**Why this priority**: The library view is the default landing page and the
primary way to find and re-use reaction images.
**Independent Test**: Seed the library with tagged images, apply a single tag
filter and verify only matching images are shown, then add a second filter and
verify both tags are required on every visible result.
**Acceptance Scenarios**:
1. **Given** the library contains images,
**When** the user opens the application,
**Then** all images are shown in reverse chronological order as thumbnails
with their tags displayed beneath each one.
2. **Given** a non-empty library,
**When** the user selects one or more tags in the filter bar,
**Then** only images that have every selected tag are shown.
3. **Given** active tag filters,
**When** the user removes a filter chip,
**Then** the grid expands to reflect the remaining filters (or shows all
images if no filters remain).
4. **Given** a large library (more images than fit on screen),
**When** the user scrolls to the bottom or clicks "Load more",
**Then** additional images load without replacing the already-visible ones.
---
### User Story 3 — View Image Detail and Edit Tags (Priority: P2)
A user clicks an image in the library to open a detail page showing the
full-size image and its current tags. They can add new tags or remove
existing ones. Changes are saved on blur or pressing Enter, not on every
keystroke.
**Why this priority**: Tag management is the primary organisation mechanism;
editing must be accessible from the image itself.
**Independent Test**: Open any image detail page, add a new tag, navigate
back to the library, filter by that tag, and confirm the image appears.
**Acceptance Scenarios**:
1. **Given** the user navigates to an image detail page,
**When** the page loads,
**Then** the full-size image is displayed alongside all its current tags.
2. **Given** the user types a new tag into the tag input and presses Enter
(or moves focus away),
**Then** the tag is added to the image and the display updates immediately.
3. **Given** the user clicks the remove (×) button on an existing tag chip,
**Then** the tag is removed from the image.
4. **Given** a tag value that exceeds 64 characters or contains invalid
characters,
**When** the user tries to save it,
**Then** an inline error is shown and the invalid tag is not persisted.
---
### User Story 4 — Delete an Image (Priority: P2)
A user chooses to permanently remove an image from their library. A
confirmation step prevents accidental deletion. After deletion, the image
is gone from the library view and from storage.
**Why this priority**: Users must be able to remove unwanted content from a
personal collection.
**Independent Test**: Delete a known image, confirm it no longer appears in
the library, and confirm that navigating to its former detail URL shows a
"not found" screen.
**Acceptance Scenarios**:
1. **Given** the user is on an image detail page,
**When** they click the delete button and confirm,
**Then** the image and its stored file are permanently removed and the user
is returned to the library.
2. **Given** the user clicks the delete button,
**When** they dismiss the confirmation dialog (cancel),
**Then** no deletion occurs and the user remains on the detail page.
---
### User Story 5 — Browse and Search Tags (Priority: P3)
A user can view a list of all tags currently in use, along with how many
images each tag is applied to. They can type a prefix to narrow the list.
**Why this priority**: Useful for discovering existing tags and maintaining a
consistent vocabulary, but the library filter bar already enables tag selection
so this is supplementary.
**Independent Test**: Open the tag browser and verify every tag present in the
library appears with a correct image count, then type a prefix and verify only
matching tags remain visible.
**Acceptance Scenarios**:
1. **Given** the user opens the tag browser,
**When** the page loads,
**Then** all tags are listed alphabetically with their image counts.
2. **Given** the user types a prefix into the search input,
**Then** only tags whose names begin with that prefix are shown.
---
### Edge Cases
- What happens when the library is empty? → An empty-state prompt is shown
encouraging the user to upload their first image.
- What happens when a tag filter matches zero images? → The grid shows an
empty-results message (not an error).
- What happens when the user navigates to a non-existent image ID? → A "Not
found" screen is shown with a link back to the library.
- What happens when the user navigates to an unknown route? → A "Not found"
screen is shown.
- What happens when the upload form is submitted with no tags? → The image is
stored with no tags; no validation error is raised.
## Requirements *(mandatory)*
### Functional Requirements
- **FR-001**: System MUST accept uploads of JPEG, PNG, GIF, and WebP images
up to 50 MB per file; all other types MUST be rejected.
- **FR-002**: System MUST detect duplicate image content at upload time and
return the existing record without writing a duplicate to storage.
- **FR-003**: Users MUST be able to attach zero or more tags to an image at
upload time via a comma-or-space-separated text input.
- **FR-004**: Tag names MUST be normalised (lowercased, whitespace-trimmed)
before storage and MUST conform to: lowercase letters, digits, hyphens, and
underscores only, 1–64 characters.
- **FR-005**: Users MUST be able to filter the image library by one or more
tags; the filter logic MUST be AND (every specified tag must be present on
the result).
- **FR-006**: All list views MUST support pagination; no view may load the
entire library at once.
- **FR-007**: Users MUST be able to replace the complete tag set on an
existing image (add new tags, remove existing tags) from the detail view.
- **FR-008**: Users MUST be able to permanently delete an image including its
stored file and all tag associations, after a confirmation step.
- **FR-009**: Images MUST be viewable in the browser (thumbnail in library,
full-size on detail page) without exposing permanent internal storage
credentials or addresses.
- **FR-010**: Users MUST be able to list all tags sorted alphabetically with
associated image counts, with optional prefix filtering.
- **FR-011**: Tags MUST be created implicitly on first use; no explicit
tag-creation step is required.
- **FR-012**: Removing a tag from an image MUST NOT delete the shared tag
record or affect other images that use the same tag.
### Key Entities *(include if feature involves data)*
- **Image**: A single uploaded file. Key attributes: unique content
fingerprint, original filename, file type, pixel dimensions, file size,
upload timestamp, associated tags.
- **Tag**: A normalised text label that can be applied to many images.
Key attributes: name (unique, always lowercase), creation timestamp, count
of images currently using it.
- **ImageTag**: The many-to-many association between an image and a tag.
## Success Criteria *(mandatory)*
### Measurable Outcomes
- **SC-001**: A user can upload an image with tags and see it appear in the
library in under 10 seconds on a local network connection.
- **SC-002**: Re-uploading an identical image produces no duplicate library
entry; duplicate detection is invisible to the user except for the
informational notification.
- **SC-003**: The library's first page of results loads in under 2 seconds
for a collection of 1,000 images.
- **SC-004**: A tag-filtered search with 1–3 active tags returns results in
under 2 seconds across a library of 1,000 images.
- **SC-005**: A user can add or remove a tag on an existing image within
5 seconds of interaction on a local network connection.
- **SC-006**: The complete application starts from a clean checkout with a
single command and requires no manual setup beyond copying the example
environment file.
## Assumptions
- The application serves a single user (the owner) on a local network.
No authentication, access control, or multi-user isolation is required in v1.
- Only one image file can be submitted per upload action; bulk upload is out
of scope for v1.
- Images are immutable after upload: file content is never replaced; only the
tag associations may change.
- The deployment environment provides S3-compatible object storage (locally
via MinIO for development).
- Target clients are modern evergreen desktop browsers; mobile-native
experience is explicitly out of scope for v1.
- OR/NOT tag logic, collections/albums, image editing, alternative sort
orders, and multi-user features are all explicitly out of scope for v1.
---
description: "Task list for Reaction Image Board v1"
---
# Tasks: Reaction Image Board v1
**Input**: Design documents from `specs/001-reaction-image-board/`
**Prerequisites**: plan.md ✅, spec.md ✅, research.md ✅, data-model.md ✅, contracts/api.md ✅
**Tests**: Per §5.1 of the constitution, TDD is non-negotiable. Test tasks
MUST appear before every implementation task. Write the test, confirm it
fails, then implement until it passes.
**Organization**: Tasks follow the milestone order from plan.md (API-first,
serial). Each task is tagged with the user story it serves.
## Format: `[ID] [P?] [Story?] Description`
- **[P]**: Can run in parallel (different files, no dependencies on incomplete tasks)
- **[Story]**: Which user story this task serves (US1–US5 per spec.md)
- Include exact file paths in descriptions
## Path Conventions
```
api/app/ API source
api/tests/unit/ API unit tests
api/tests/integration/ API integration tests
api/alembic/versions/ Database migrations
ui/src/app/ Angular source
ui/src/app/services/ Angular services (+ .spec.ts colocated)
ui/src/app/<feature>/ Angular component dirs (+ .spec.ts colocated)
```
---
## Phase 1: Setup (Project Skeleton — M0)
**Purpose**: Establish the monorepo layout, Docker Compose stack, and linting
baseline. No feature logic. All subsequent milestones build on this.
- [ ] T001 Create top-level monorepo layout: `api/`, `ui/`, `docker-compose.yml`, `.env.example`
- [ ] T002 Write `.env.example` with all variables from spec §5 (DATABASE_URL, S3_*, API_BASE_URL, MAX_UPLOAD_BYTES)
- [ ] T003 [P] Write `api/Dockerfile` (Python 3.12 slim, installs pyproject.toml deps, runs uvicorn)
- [ ] T004 [P] Scaffold Angular project with CLI into `ui/` (strict mode, standalone components, routing)
- [ ] T005 [P] Write `ui/Dockerfile` (Node LTS, `ng serve --host 0.0.0.0`)
- [ ] T006 Write `docker-compose.yml` defining: postgres, minio, api (depends_on postgres+minio), ui (depends_on api)
- [ ] T007 [P] Configure `api/pyproject.toml` with FastAPI, SQLAlchemy 2.x async, asyncpg, Alembic, aiobotocore, pydantic-settings, pytest, pytest-asyncio, ruff
- [ ] T008 [P] Configure `ui/package.json` / `angular.json` with eslint + prettier; add `ui/proxy.conf.json` routing `/api/*` to `http://localhost:8000`
- [ ] T009 Write API unit test: settings load from env vars without error in `api/tests/unit/test_config.py`
- [ ] T010 Write API integration test: `GET /api/v1/health` returns 200 `{"status":"ok"}` in `api/tests/integration/test_health.py`
- [ ] T011 Implement `api/app/config.py` (pydantic-settings reading all env vars)
- [ ] T012 Implement `api/app/main.py` (FastAPI factory, lifespan connecting to Postgres + MinIO, health route)
- [ ] T013 Configure Alembic in `api/alembic/` with async engine; apply `alembic upgrade head` on startup
- [ ] T014 [P] Add Angular default smoke test in `ui/src/app/app.component.spec.ts`
**Checkpoint**: `docker compose up` starts all four services. Health endpoint
returns 200. Both linters pass. All tests pass.
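T011 will use pydantic-settings, but the intended behaviour (read every variable from the environment, fail fast if a required one is missing) can be sketched with the stdlib alone. Variable names beyond `DATABASE_URL` and `MAX_UPLOAD_BYTES` are assumptions here; spec §5 only names the `S3_*` family.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    database_url: str
    s3_endpoint: str      # placeholder name; the real S3_* set comes from spec §5
    max_upload_bytes: int

def load_settings() -> Settings:
    # KeyError on a missing required variable = fail fast at startup,
    # which is what the T009 unit test should exercise.
    return Settings(
        database_url=os.environ["DATABASE_URL"],
        s3_endpoint=os.environ["S3_ENDPOINT"],
        max_upload_bytes=int(os.environ.get("MAX_UPLOAD_BYTES", "10485760")),
    )
```

The real `api/app/config.py` replaces this with a `BaseSettings` subclass so validation and type coercion come for free.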
---
## Phase 2: Foundational (Upload API Infrastructure — M1 core)
**Purpose**: Core interfaces and repositories that MUST exist before any user
story endpoint can be implemented. Establishes the `StorageBackend`,
`AuthProvider`, and `ImageRepository` abstractions.
**⚠️ CRITICAL**: No user story work can begin until this phase is complete.
- [ ] T015 Write unit test: SHA-256 hash of known bytes returns expected hex digest in `api/tests/unit/test_hashing.py`
- [ ] T016 Write unit test: MIME validator accepts jpeg/png/gif/webp and rejects pdf/mp4 in `api/tests/unit/test_validation.py`
- [ ] T017 Write unit test: file size validator rejects bytes exceeding MAX_UPLOAD_BYTES in `api/tests/unit/test_validation.py`
- [ ] T018 [P] Implement `StorageBackend` interface (put, get_presigned_url, delete) in `api/app/storage/backend.py`
- [ ] T019 [P] Implement `S3StorageBackend` using aiobotocore in `api/app/storage/s3_backend.py`
- [ ] T020 [P] Implement `AuthProvider` interface + `NoOpAuthProvider` in `api/app/auth/provider.py` and `api/app/auth/noop.py`
- [ ] T021 Implement MIME type + file size validation helpers in `api/app/routers/images.py` (or `api/app/validation.py`)
- [ ] T022 Write Alembic migration for `images` table in `api/alembic/versions/`
- [ ] T023 Implement `Image` SQLAlchemy model in `api/app/models.py`
- [ ] T024 Implement `ImageRepository` (create, get_by_id, get_by_hash) in `api/app/repositories/image_repo.py`
- [ ] T025 Wire `AuthProvider`, `StorageBackend`, and DB session into FastAPI dependency injection in `api/app/dependencies.py`
**Checkpoint**: All unit tests pass; foundation ready for user story endpoints.
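A possible shape for the `StorageBackend` interface from T018, with an in-memory double of the kind the unit tests might use. The method signatures are assumptions inferred from the task list (`put`, `get_presigned_url`, `delete`); the real `S3StorageBackend` (T019) implements the same contract with aiobotocore.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Storage abstraction so routers never touch S3 directly (T018 sketch)."""

    @abstractmethod
    async def put(self, key: str, data: bytes, content_type: str) -> None: ...

    @abstractmethod
    async def get_presigned_url(self, key: str, expires_seconds: int = 3600) -> str: ...

    @abstractmethod
    async def delete(self, key: str) -> None: ...

class InMemoryStorageBackend(StorageBackend):
    """Test double; holds objects in a dict instead of MinIO."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    async def put(self, key, data, content_type) -> None:
        self._objects[key] = data

    async def get_presigned_url(self, key, expires_seconds=3600) -> str:
        return f"memory://{key}?expires={expires_seconds}"

    async def delete(self, key) -> None:
        self._objects.pop(key, None)
```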
---
## Phase 3: User Story 1 — Upload an Image (M1 endpoint + M5 UI) 🎯 MVP
**Goal**: A user can upload an image (with tags, though tag persistence is
deferred to Phase 4). Duplicate detection works. Errors shown inline.
**Independent Test**: Upload a JPEG via the UI, verify the success toast and
that a record exists in the DB and an object in MinIO. Re-upload the same
file, verify the "Already in your library" toast and no duplicate record or
object. (The library grid arrives in Phase 4, so verification here is via the
API/DB, not the grid.)
### Tests for User Story 1 (REQUIRED per §5.1 — TDD) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T026 [P] [US1] Integration test: valid JPEG upload → 201, record in DB, object in MinIO in `api/tests/integration/test_upload.py`
- [ ] T027 [P] [US1] Integration test: same image uploaded twice → 200, `duplicate: true`, no second MinIO object in `api/tests/integration/test_upload.py`
- [ ] T028 [P] [US1] Integration test: invalid MIME type → 422 with `{"detail":"...","code":"invalid_mime_type"}` in `api/tests/integration/test_upload.py`
- [ ] T029 [P] [US1] Integration test: file > MAX_UPLOAD_BYTES → 422 `file_too_large` in `api/tests/integration/test_upload.py`
- [ ] T030 [P] [US1] Angular unit test: tag chip input lowercases and splits on comma/space in `ui/src/app/upload/upload.component.spec.ts`
- [ ] T031 [P] [US1] Angular unit test: `duplicate: true` response → toast shown, navigate to detail in `ui/src/app/upload/upload.component.spec.ts`
- [ ] T032 [P] [US1] Angular unit test: `duplicate: false` response → success toast, navigate to detail in `ui/src/app/upload/upload.component.spec.ts`
- [ ] T033 [P] [US1] Angular unit test: error response → inline error shown, no navigation in `ui/src/app/upload/upload.component.spec.ts`
### Implementation for User Story 1
- [ ] T034 [US1] Implement `POST /api/v1/images` endpoint (MIME check, size check, SHA-256, duplicate query, storage write, record insert; tags field accepted but ignored) in `api/app/routers/images.py`
- [ ] T035 [US1] Implement `ImageService` wrapping `POST /api/v1/images` (upload) in `ui/src/app/services/image.service.ts`
- [ ] T036 [US1] Implement `UploadComponent` (route `/upload`) with drag-and-drop zone, click-to-browse, tag chip input, POST submit, duplicate/success/error handling in `ui/src/app/upload/upload.component.ts`
**Checkpoint**: Full upload flow works in browser. Duplicate detection gives
correct feedback. API tests and Angular unit tests all pass.
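The duplicate-detection core of T034 is small enough to sketch: hash the upload bytes (T015) and look the digest up before writing anything. `is_duplicate` here stands in for `ImageRepository.get_by_hash`; the real check is a Postgres query, not a set lookup.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # Content hash used as the duplicate-detection key (T015 / T034).
    return hashlib.sha256(data).hexdigest()

def is_duplicate(data: bytes, known_hashes: set[str]) -> bool:
    # Stand-in for ImageRepository.get_by_hash; checked BEFORE the
    # storage write so a duplicate never creates a second MinIO object.
    return sha256_hex(data) in known_hashes
```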
---
## Phase 4: User Story 2 — Browse and Filter the Library (M2 list/search + M4 UI)
**Goal**: User can view all images in a responsive grid and filter by one or
more tags (AND logic). Pagination works. Tags are persisted during upload.
**Independent Test**: Seed the library via the upload flow. Apply a single tag
filter and verify only matching images are shown. Add a second filter, verify
both tags must be present. Remove a filter, verify the grid expands.
### Tests for User Story 2 (REQUIRED per §5.1 — TDD) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T037 [P] [US2] Unit test: tag normalisation — uppercase → lowercase, whitespace stripped in `api/tests/unit/test_tags.py`
- [ ] T038 [P] [US2] Unit test: tag validation — rejects names > 64 chars, invalid chars in `api/tests/unit/test_tags.py`
- [ ] T039 [P] [US2] Integration test: upload with tags → tags persisted, returned in response in `api/tests/integration/test_tags.py`
- [ ] T040 [P] [US2] Integration test: duplicate upload → existing record returned, tags unchanged in `api/tests/integration/test_tags.py`
- [ ] T041 [P] [US2] Integration test: `GET /api/v1/images?tags=cat,funny` → only images with both tags in `api/tests/integration/test_search.py`
- [ ] T042 [P] [US2] Integration test: same query excludes images with only one matching tag in `api/tests/integration/test_search.py`
- [ ] T043 [P] [US2] Angular unit test: `ImageService` constructs correct query params from filter state in `ui/src/app/services/image.service.spec.ts`
- [ ] T044 [P] [US2] Angular unit test: `LibraryComponent` renders image grid from mocked service in `ui/src/app/library/library.component.spec.ts`
- [ ] T045 [P] [US2] Angular unit test: filter change triggers new API call with updated `tags` param in `ui/src/app/library/library.component.spec.ts`
### Implementation for User Story 2
- [ ] T046 [US2] Write Alembic migration for `tags` and `image_tags` tables in `api/alembic/versions/`
- [ ] T047 [US2] Implement `Tag` and `ImageTag` SQLAlchemy models in `api/app/models.py`
- [ ] T048 [US2] Implement tag normalisation + validation helpers in `api/app/repositories/tag_repo.py`
- [ ] T049 [US2] Implement `TagRepository` (upsert_by_name, get_by_image_id) in `api/app/repositories/tag_repo.py`
- [ ] T050 [US2] Update `POST /api/v1/images` to process and persist the `tags` field in `api/app/routers/images.py`
- [ ] T051 [US2] Implement `GET /api/v1/images` with `tags` (AND-filter), `limit`, `offset` in `api/app/routers/images.py`
- [ ] T052 [US2] Implement `GET /api/v1/images/{id}` returning image + tags in `api/app/routers/images.py`
- [ ] T053 [US2] Update `ImageService` to support `tags` filter query param in `ui/src/app/services/image.service.ts`
- [ ] T054 [US2] Implement `LibraryComponent` (route `/`) with image grid, tag chips, debounced filter bar, "Load more" pagination in `ui/src/app/library/library.component.ts`
**Checkpoint**: Library view shows real images with tags. Tag filtering (AND
logic) and pagination work end-to-end.
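A sketch of the normalisation/validation helper from T048, matching the T037/T038 test expectations (lowercase, stripped, ≤ 64 chars). The allowed character set is an assumption; the spec may permit more.

```python
import re

TAG_RE = re.compile(r"^[a-z0-9_-]+$")  # allowed charset is an assumption
MAX_TAG_LEN = 64  # per T038

def normalise_tag(raw: str) -> str:
    """Lowercase and strip, then validate (T048 sketch)."""
    tag = raw.strip().lower()
    if not tag or len(tag) > MAX_TAG_LEN or not TAG_RE.match(tag):
        raise ValueError("invalid_tag")  # surfaces as 422 invalid_tag
    return tag
```

For the AND filter in T051, the usual SQL pattern is: join `image_tags`, filter on the requested tag names, then `GROUP BY image_id HAVING COUNT(DISTINCT tag_id) = n`, where n is the number of requested tags.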
---
## Phase 5: User Story 3 — View Image Detail and Edit Tags (M3 serving + M6 detail UI)
**Goal**: User can view a full-size image and edit its tags inline. Changes
saved on blur/Enter.
**Independent Test**: Open an image detail page, add a new tag, navigate back,
filter by that tag, and confirm the image appears.
### Tests for User Story 3 (REQUIRED per §5.1 — TDD) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T055 [P] [US3] Integration test: `GET /api/v1/images/{id}/file` → 302 with `Location` header pointing to MinIO URL in `api/tests/integration/test_serving.py`
- [ ] T056 [P] [US3] Integration test: `/file` for unknown ID → 404 `image_not_found` in `api/tests/integration/test_serving.py`
- [ ] T057 [P] [US3] Integration test: `PATCH /api/v1/images/{id}/tags` replaces tags, old tags unlinked, new tags upserted in `api/tests/integration/test_tags.py`
- [ ] T058 [P] [US3] Integration test: PATCH with invalid tag → 422 `invalid_tag` in `api/tests/integration/test_tags.py`
- [ ] T059 [P] [US3] Angular unit test: removing tag chip calls PATCH with updated list (removed tag absent) in `ui/src/app/detail/detail.component.spec.ts`
- [ ] T060 [P] [US3] Angular unit test: adding tag + Enter calls PATCH with new tag included in `ui/src/app/detail/detail.component.spec.ts`
### Implementation for User Story 3
- [ ] T061 [US3] Implement `GET /api/v1/images/{id}/file` (generate 1-hour pre-signed URL, return 302) in `api/app/routers/images.py`
- [ ] T062 [US3] Implement `TagRepository.replace_tags_on_image` in `api/app/repositories/tag_repo.py`
- [ ] T063 [US3] Implement `PATCH /api/v1/images/{id}/tags` in `api/app/routers/images.py`
- [ ] T064 [US3] Implement `DetailComponent` (route `/images/:id`) with full-size image, editable tag chips (add/remove), save on blur/Enter via PATCH, back button in `ui/src/app/detail/detail.component.ts`
**Checkpoint**: Full-size image loads in browser via redirect. Tag editing
works from detail page. Changes persist across page navigation.
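`replace_tags_on_image` (T062) reduces to two set differences; the repository then deletes `image_tags` rows for one set and upserts tags for the other inside a single transaction. A pure-Python sketch of that split:

```python
def diff_tags(current: set[str], desired: set[str]) -> tuple[set[str], set[str]]:
    """Compute the unlink/link sets for replace_tags_on_image (T062 sketch).

    The real repository deletes image_tags rows for `to_unlink` and
    upserts tags (plus link rows) for `to_link` in one transaction.
    """
    to_unlink = current - desired
    to_link = desired - current
    return to_unlink, to_link
```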
---
## Phase 6: User Story 4 — Delete an Image (M6 detail UI + M2 delete endpoint)
**Goal**: User can permanently delete an image with confirmation. Returns to
library afterwards.
**Independent Test**: Delete a known image, confirm it no longer appears in the
library and that navigating to its former URL shows a not-found screen.
### Tests for User Story 4 (REQUIRED per §5.1 — TDD) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T065 [P] [US4] Integration test: `DELETE /api/v1/images/{id}` → 204; subsequent `GET /{id}` returns 404 in `api/tests/integration/test_delete.py`
- [ ] T066 [P] [US4] Integration test: DELETE verifies MinIO object is removed in `api/tests/integration/test_delete.py`
- [ ] T067 [P] [US4] Integration test: DELETE of unknown ID → 404 `image_not_found` in `api/tests/integration/test_delete.py`
- [ ] T068 [P] [US4] Angular unit test: delete confirmation → DELETE called → navigation to Library in `ui/src/app/detail/detail.component.spec.ts`
- [ ] T069 [P] [US4] Angular unit test: cancel confirmation dialog → no DELETE call, stays on detail page in `ui/src/app/detail/detail.component.spec.ts`
### Implementation for User Story 4
- [ ] T070 [US4] Implement `DELETE /api/v1/images/{id}` (delete image_tags rows, image record, S3 object) in `api/app/routers/images.py`
- [ ] T071 [US4] Add delete button with confirmation dialog + back-to-Library navigation to `DetailComponent` in `ui/src/app/detail/detail.component.ts`
- [ ] T072 [US4] Implement `NotFoundComponent` shown for all unrecognised routes in `ui/src/app/not-found/not-found.component.ts`
**Checkpoint**: Full CRUD loop works: upload → view → re-tag → delete.
Deleted images gone from library and storage.
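T070 has an ordering choice that the task list leaves open; one defensible order (an assumption, not mandated here) is DB rows first, storage object second, so a failed storage delete leaves only an orphaned object rather than a dangling record. Sketch with dicts standing in for the repository and `StorageBackend`:

```python
def delete_image(image_id: str, db: dict, storage: dict) -> None:
    """Deletion-order sketch for T070 (dicts as stand-ins).

    image_tags rows go with the record (FK cascade or explicit delete);
    the storage object is removed last, so the worst failure mode is an
    orphaned object, never a record pointing at missing bytes.
    """
    record = db.pop(image_id, None)
    if record is None:
        raise LookupError("image_not_found")  # maps to 404 in the router
    storage.pop(record["object_key"], None)
```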
---
## Phase 7: User Story 5 — Browse and Search Tags (M2 tags endpoint + UI)
**Goal**: User can list all tags alphabetically with image counts and narrow
by prefix.
**Independent Test**: Open the tag browser, verify every tag present in the
library appears with a correct image count. Type a prefix and verify only
matching tags remain.
### Tests for User Story 5 (REQUIRED per §5.1 — TDD) ⚠️
> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**
- [ ] T073 [P] [US5] Integration test: `GET /api/v1/tags` returns all tags alphabetically with correct image_count in `api/tests/integration/test_tags.py`
- [ ] T074 [P] [US5] Integration test: `GET /api/v1/tags?q=ca` returns only tags prefixed "ca" in `api/tests/integration/test_tags.py`
- [ ] T075 [P] [US5] Angular unit test: `TagService` calls `GET /api/v1/tags` with `q` param in `ui/src/app/services/tag.service.spec.ts`
### Implementation for User Story 5
- [ ] T076 [US5] Implement `GET /api/v1/tags` with `q` prefix search, `limit`, `offset`, image_count in `api/app/routers/tags.py`
- [ ] T077 [US5] Implement `TagService` wrapping `GET /api/v1/tags` in `ui/src/app/services/tag.service.ts`
- [ ] T078 [US5] Wire `TagService` into `LibraryComponent` tag filter bar for tag autocomplete/selection in `ui/src/app/library/library.component.ts`
**Checkpoint**: All user stories independently functional and tested.
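The T076 endpoint is, in effect, a count-group-sort-filter; a pure-Python model of that contract (the real query does the same with `GROUP BY`/`COUNT` and an index-backed prefix match):

```python
from collections import Counter

def search_tags(image_tags: dict[str, list[str]], q: str = "") -> list[dict]:
    """Pure-Python model of GET /api/v1/tags (T076 sketch).

    image_tags maps image id -> tag list. Returns tags alphabetically,
    each with its image_count, optionally narrowed by prefix q.
    """
    counts = Counter(tag for tags in image_tags.values() for tag in tags)
    return [
        {"name": name, "image_count": count}
        for name, count in sorted(counts.items())
        if name.startswith(q)
    ]
```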
---
## Phase 8: Polish & Cross-Cutting Concerns
**Purpose**: Improvements affecting multiple user stories and final validation.
- [ ] T079 [P] Add `GET /api/v1/images/{id}` 404 test to verify error envelope shape `{"detail":"...","code":"image_not_found"}` in `api/tests/integration/test_upload.py`
- [ ] T080 [P] Verify all API error responses include both `detail` and `code` fields (constitution §3.3) — check tests for T028, T029, T056, T058, T067
- [ ] T081 [P] Add empty-state UI for library with zero images in `ui/src/app/library/library.component.ts`
- [ ] T082 [P] Add empty-state UI for tag filter returning zero results in `ui/src/app/library/library.component.ts`
- [ ] T083 Configure Angular routing to show `NotFoundComponent` for all unrecognised routes in `ui/src/app/app.routes.ts`
- [ ] T084 [P] Run quickstart.md validation: `docker compose up`, upload an image, filter by tag, edit tag, delete image — full happy path
- [ ] T085 [P] Run `ruff check .` in `api/` — confirm zero lint errors
- [ ] T086 [P] Run `npm run lint` in `ui/` — confirm zero lint errors
- [ ] T087 Run all API tests: `docker compose run --rm api pytest` — confirm all pass
- [ ] T088 Run all UI tests: `docker compose run --rm ui ng test --watch=false` — confirm all pass
---
## Dependencies & Execution Order
### Phase Dependencies
- **Phase 1 (Setup)**: No dependencies — start immediately
- **Phase 2 (Foundational)**: Depends on Phase 1 — BLOCKS all user stories
- **Phase 3 (US1 Upload)**: Depends on Phase 2 — API endpoint + Angular upload UI
- **Phase 4 (US2 Browse)**: Depends on Phase 3 (tags API needs upload first)
- **Phase 5 (US3 Detail/Edit)**: Depends on Phase 4 (detail view needs list view navigation)
- **Phase 6 (US4 Delete)**: Depends on Phase 5 (delete is in the detail component)
- **Phase 7 (US5 Tags)**: Can start after Phase 4 (tag endpoint is independent)
- **Phase 8 (Polish)**: Depends on all prior phases
### User Story Dependencies
- **US1 (Upload)**: Foundational complete → API endpoint first, then Angular component
- **US2 (Browse)**: US1 must exist to seed data; tag schema (Phase 4) needed
- **US3 (View/Edit)**: US2 complete (library navigation leads to detail)
- **US4 (Delete)**: US3 complete (delete is on the detail view)
- **US5 (Tags)**: Independent after Phase 4 tag endpoint; can start in parallel with US3/US4
### Within Each Phase
- Test tasks MUST be written and confirmed FAILING before implementation
- Models before repositories before endpoints
- API endpoint before Angular service
- Angular service before Angular component
- Story complete before moving to next phase
### Parallel Opportunities
- All Phase 1 tasks marked [P] can run in parallel after T001/T002
- T018–T020 (interfaces) can run in parallel
- All integration tests within a phase marked [P] can be written in parallel
- Angular unit tests within a phase marked [P] can be written in parallel
- T085/T086/T087/T088 (lint + test runs) can run in parallel
---
## Parallel Example: Phase 3 (User Story 1)
```bash
# Write all tests for US1 in parallel (independent cases, spread across two files):
T026: api/tests/integration/test_upload.py (new upload)
T027: api/tests/integration/test_upload.py (duplicate)
T028: api/tests/integration/test_upload.py (invalid MIME)
T029: api/tests/integration/test_upload.py (oversized)
T030: ui/src/app/upload/upload.component.spec.ts (tag chips)
T031: ui/src/app/upload/upload.component.spec.ts (duplicate response)
# Then implement sequentially:
T034: POST /api/v1/images endpoint
T035: Angular ImageService
T036: Angular UploadComponent
```
---
## Implementation Strategy
### MVP First (User Stories 1 + 2 only)
1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational — BLOCKS all stories
3. Complete Phase 3: US1 Upload
4. Complete Phase 4: US2 Browse/Filter
5. **STOP and VALIDATE**: Upload via UI, filter by tag, verify end-to-end
6. Can demo at this point: full read+write loop without detail view
### Incremental Delivery
1. Phase 1 + 2 → Foundation ready
2. Phase 3 → US1 Upload works (MVP start)
3. Phase 4 → US2 Browse + tag filtering works (MVP complete)
4. Phase 5 → US3 Detail + tag editing
5. Phase 6 → US4 Delete (completes full CRUD)
6. Phase 7 → US5 Tag browser (supplementary)
7. Phase 8 → Polish + final validation
---
## Notes
- [P] = touches different files and depends on no incomplete task — safe to parallelise
- [USN] label maps task to user story for traceability
- TDD is non-negotiable (§5.1): test → fail → implement → pass
- API tests run against real Postgres + MinIO (no mocks per §5.2)
- Milestone done-criterion: all tests pass + linter passes
- `docker compose up` must keep working after every milestone