Changes from all commits (32 commits)
703b50e
chore: update environment variables configuration
hgkim-openerd Mar 30, 2026
ec35d58
refactor: migrate integration tests from MockMvc to MockMvcTester (8 files)
Whale0928 Mar 30, 2026
44f000c
refactor: migrate integration tests from MockMvc to MockMvcTester (5 files, Batch 5-7)
Whale0928 Mar 30, 2026
a5218fa
docs: mark MockMvcTester migration plan document as complete
Whale0928 Mar 30, 2026
79cd6a7
chore: set default value for `business_support_type` to 'GENERAL' and…
hgkim-openerd Mar 31, 2026
4a77787
chore: replace initialization scripts with Liquibase schema managemen…
hgkim-openerd Mar 31, 2026
1825785
chore: integrate Liquibase for test schema management and update buil…
hgkim-openerd Mar 31, 2026
2db51f1
chore: update deploy_v2_development workflow to conditionally run pre…
hgkim-openerd Mar 31, 2026
2c27148
Merge remote-tracking branch 'origin/main'
Whale0928 Apr 4, 2026
cf31964
refactor: MySQL scripts and update documentation
Whale0928 Apr 4, 2026
157c08a
refactor: implement hierarchy-based region filtering across repositories
Whale0928 Apr 4, 2026
41ee8f3
docs: introduce parent-child hierarchy for regions
Whale0928 Apr 4, 2026
f41d4a0
test: add nullable field to AlcoholsHelper region test data
Whale0928 Apr 4, 2026
c70955c
test: update RegionServiceTest and RestReferenceControllerTest with p…
Whale0928 Apr 4, 2026
5b27339
test: update RegionServiceTest and RestReferenceControllerTest with p…
Whale0928 Apr 4, 2026
92b0e06
test: add integration tests for region hierarchy-based alcohol filtering
Whale0928 Apr 4, 2026
1d881a2
feat: add region hierarchy support for filtering and API enhancements
Whale0928 Apr 4, 2026
c867228
chore: add PostToolUse hook for spotless and Kotlin spotless config
Whale0928 Apr 6, 2026
e284a7d
style: apply ktlint formatting to admin-api Kotlin files
Whale0928 Apr 6, 2026
4df0660
refactor: document Claude skills and structure improvement proposal
Whale0928 Apr 6, 2026
6647ffd
refactor: convert CLAUDE.md to English and simplify its content
Whale0928 Apr 6, 2026
673acbf
feat: add documentation for the CI verification skill (SKILL.md)
Whale0928 Apr 6, 2026
cb6973d
docs: add Mono and test-related pattern documentation
Whale0928 Apr 8, 2026
ed7f04b
refactor: split and improve the implement-product-api/implement-test skills
hgkim-openerd Apr 8, 2026
1db651a
feat: add agentic skill system with 6 new skills
Whale0928 Apr 8, 2026
a2945e5
chore: remove deprecated implement-product-api and implement-test skills
Whale0928 Apr 8, 2026
d9be84b
deps: version sync
Whale0928 Apr 8, 2026
2b80b7a
chore: update plan document to reflect implementation completion
Whale0928 Apr 8, 2026
05f7da8
feat: implement admin user list retrieval API
Whale0928 Apr 8, 2026
82dd328
chore: skill full-cycle verification feedback and plan completion
Whale0928 Apr 8, 2026
f46c17f
docs: add RestDocs documentation for the admin user list API
Whale0928 Apr 8, 2026
6c260fb
docs: add /docs skill design proposal (API documentation workflow)
Whale0928 Apr 8, 2026
12 changes: 12 additions & 0 deletions .claude/settings.json
@@ -9,6 +9,18 @@
}
]
}
],
"PostToolUse": [
{
"matcher": "Edit|Write",
"hooks": [
{
"type": "command",
"command": "FP=$(cat | jq -r '.tool_input.file_path // empty'); [[ \"$FP\" == *.java || \"$FP\" == *.kt ]] && cd $CLAUDE_PROJECT_DIR && ./gradlew spotlessApply -q 2>/dev/null || true",
"timeout": 30
}
]
}
]
}
}
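The hook command above reads the PostToolUse JSON payload from stdin, extracts `file_path` with `jq`, and only runs `spotlessApply` for Java/Kotlin files. That extraction-and-gate logic can be dry-run outside Claude Code; the payload below is a hypothetical example of what the hook receives:

```shell
# Dry-run of the hook's extraction + gating logic.
# The payload is a hypothetical example of a PostToolUse input.
payload='{"tool_input":{"file_path":"src/main/java/App.java"}}'

# Same jq expression as the hook: pull file_path, default to empty string
FP=$(echo "$payload" | jq -r '.tool_input.file_path // empty')

# Gate on extension: only Java/Kotlin files should trigger spotlessApply
if [[ "$FP" == *.java || "$FP" == *.kt ]]; then
  echo "would run: ./gradlew spotlessApply -q"
else
  echo "skip: $FP"
fi
```

One caveat worth noting: the hook quotes `"$FP"` in the extension test but leaves `cd $CLAUDE_PROJECT_DIR` unquoted, which would break if the project path ever contains spaces.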
201 changes: 201 additions & 0 deletions .claude/skills/debug/SKILL.md
@@ -0,0 +1,201 @@
---
name: debug
description: |
Systematic root-cause debugging for build failures, test failures, and runtime errors.
Trigger: "/debug", or when the user says "에러 났어", "테스트 실패", "빌드 안 돼", "왜 안 되지", "debug this".
Follows a structured 6-step process: STOP, REPRODUCE, LOCALIZE, FIX, GUARD, VERIFY.
Use when anything unexpected happens — do not guess at fixes.
argument-hint: "[error description or test name]"
---

# Debugging and Error Recovery

## Overview

When something breaks, stop adding features, preserve evidence, and follow a structured process to find and fix the root cause. Guessing wastes time. This skill works for build errors, test failures, runtime bugs, and unexpected behavior in the bottle-note-api-server project.

## When to Use

- Build fails (`compileJava`, `compileKotlin`, `spotlessApply`)
- Tests fail (`unit_test`, `integration_test`, `check_rule_test`, `admin_integration_test`)
- Runtime behavior does not match expectations
- An error appears in logs or console
- Something worked before and stopped working

## When NOT to Use

- Implementing new features (use `/implement`)
- Writing new tests (use `/test`)
- Code cleanup or refactoring (use `/self-review` for review, `/implement` for changes)

## Process

### Step 1: STOP

Stop all other changes immediately.

- Do NOT push past a failing test to work on the next feature
- Preserve the error output — copy the full message before doing anything
- If you have uncommitted work in progress, stash it: `git stash`
- Read the COMPLETE error message before forming any hypothesis

### Step 2: REPRODUCE

Make the failure happen reliably. If you cannot reproduce it, you cannot fix it with confidence.

```
Can you reproduce the failure?
├── YES -> Proceed to Step 3
└── NO
├── Check environment differences (Docker running? submodule initialized?)
├── Run in isolation (single test, clean build)
└── If truly non-reproducible, document conditions and monitor
```

**Common reproduction commands:**
```bash
# Specific test
./gradlew :bottlenote-product-api:test --tests "app.bottlenote.{domain}.{TestClass}.{testMethod}"

# Test by tag
./gradlew unit_test
./gradlew integration_test
./gradlew admin_integration_test
./gradlew check_rule_test

# Full clean build
./gradlew clean build -x test -x asciidoctor

# Compile only
./gradlew compileJava compileTestJava
./gradlew :bottlenote-admin-api:compileKotlin :bottlenote-admin-api:compileTestKotlin
```

### Step 3: LOCALIZE

Narrow down WHERE the failure happens. Use the project-specific triage tree:

```
Build failure:
├── Java compile error
│ ├── In bottlenote-mono -> check domain/service/repository code
│ ├── In bottlenote-product-api -> check controller code
│ └── In test source -> check test fixtures, InMemory implementations
├── Kotlin compile error
│ └── In bottlenote-admin-api -> check Kotlin controller/test code
├── Spotless format error
│ └── Run: ./gradlew spotlessApply (auto-fixes formatting)
└── Dependency resolution error
└── Check gradle/libs.versions.toml for version conflicts

Test failure:
├── @Tag("unit")
│ ├── Fake/InMemory implementation out of sync with domain repo interface?
│ ├── Service logic changed but test not updated?
│ └── New dependency not wired in @BeforeEach setup?
├── @Tag("integration")
│ ├── Docker running? (TestContainers requires Docker)
│ ├── Database schema changed? (check Liquibase changelogs)
│ ├── Test data setup missing? (check TestFactory)
│ └── Auth token issue? (check TestAuthenticationSupport)
├── @Tag("rule")
│ ├── Package dependency violation? (check ArchUnit rules)
│ ├── New class in wrong package?
│ └── Circular dependency introduced?
└── @Tag("admin_integration")
├── Admin auth setup correct?
├── context-path /admin/api/v1 accounted for in test?
└── Kotlin-Java interop issue?
```

**For stack traces:** read bottom-up, find the first line referencing `app.bottlenote.*`.
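That rule can be applied mechanically with `grep`; the captured trace below is a hypothetical example:

```shell
# Hypothetical captured stack trace saved to a file
cat > /tmp/stacktrace.txt <<'EOF'
java.lang.NullPointerException: Cannot invoke "getName()"
    at java.base/java.util.Optional.orElseThrow(Optional.java:403)
    at app.bottlenote.rating.service.RatingService.fetch(RatingService.java:42)
    at app.bottlenote.rating.controller.RatingController.get(RatingController.java:18)
EOF

# First project frame = most likely fault location
grep -n 'app\.bottlenote\.' /tmp/stacktrace.txt | head -1
```

Here the first matching frame is the `RatingService` line, which is where to start reading, not the JDK frames above it.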

### Step 4: FIX

Fix the ROOT CAUSE, not the symptom.

```
Symptom: "Test expects 3 items but gets 2"

Symptom fix (bad):
-> Change assertion to expect 2

Root cause fix (good):
-> The query has a WHERE clause that filters out soft-deleted items
-> Fix the test data setup to not include soft-deleted items
```

Rules:
- One change at a time — compile after each change
- If the fix requires changes to more than 5 files, reconsider whether the diagnosis is correct
- Do NOT suppress errors (`@Disabled`, empty catch blocks, `@SuppressWarnings`)
- Do NOT delete or skip failing tests

### Step 5: GUARD

Write a regression test that would have caught this bug.

- Use `@DisplayName` in Korean describing the bug scenario
- The test should FAIL without the fix and PASS with it
- If the fix changed a domain Repository interface, update the corresponding `InMemory{Domain}Repository`
- If the fix changed a Facade interface, update the corresponding `Fake{Domain}Facade`

### Step 6: VERIFY

Run verification to confirm the fix and check for regressions.

| Original failure | Minimum verification |
|-----------------|---------------------|
| Compile error | `./gradlew compileJava compileTestJava` |
| Unit test | `./gradlew unit_test` |
| Integration test | `./gradlew integration_test` (requires Docker) |
| Architecture rule | `./gradlew check_rule_test` |
| Admin test | `./gradlew admin_integration_test` |
| Unknown/broad | `/verify standard` or `/verify full` |

## Quick Reference: Diagnostic Commands

| Situation | Command |
|-----------|---------|
| What changed recently | `git log --oneline -10` |
| What files are modified | `git status` |
| Diff of uncommitted changes | `git diff` |
| Find which commit broke it | `git bisect start && git bisect bad HEAD && git bisect good <sha>` |
| Check Java compile | `./gradlew compileJava compileTestJava` |
| Check Kotlin compile | `./gradlew :bottlenote-admin-api:compileKotlin` |
| Auto-fix formatting | `./gradlew spotlessApply` |
| Run single test class | `./gradlew test --tests "app.bottlenote.{domain}.{TestClass}"` |
| Check dependency versions | `cat gradle/libs.versions.toml` |
| Check Docker status | `docker info` |
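The `git bisect` entry in the table is easiest to see on a toy history. A self-contained sketch (the repo path, commit contents, and "bug" marker are all hypothetical):

```shell
# Toy repo for `git bisect run`: commits 1-3 are good, 4-5 carry the "bug"
rm -rf /tmp/bisect-demo && mkdir -p /tmp/bisect-demo && cd /tmp/bisect-demo
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3 4 5; do
  if [ "$i" -ge 4 ]; then echo broken > app.txt; else echo ok > app.txt; fi
  git add app.txt
  git commit -qm "commit $i"
done

# Mark HEAD bad and the root commit good, then let grep act as the test:
# exit 0 means "good commit", non-zero means "bad commit"
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)" >/dev/null
git bisect run grep -q ok app.txt >/dev/null 2>&1

# refs/bisect/bad now points at the first bad commit
git log -1 --format='%s' refs/bisect/bad > result.txt
cat result.txt
git bisect reset >/dev/null 2>&1
```

With a real test, replace the `grep` with the reproduction command from Step 2 (for example a single `./gradlew ... --tests` invocation) so bisect halves the suspect range automatically.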

## Common Rationalizations

| Rationalization | Reality |
|-----------------|---------|
| "I know what the bug is, I'll just fix it" | You might be right 70% of the time. The other 30% costs hours. Reproduce first. |
| "The failing test is probably wrong" | Verify that assumption. If the test is wrong, fix the test. Do not skip it. |
| "Let me just revert and redo everything" | Reverting destroys diagnostic information. Understand WHAT broke before reverting. |
| "This is a flaky test, ignore it" | Flaky tests mask real bugs. Fix the flakiness or understand why it is intermittent. |
| "I'll fix it in the next commit" | Fix it now. The next commit will introduce new issues on top of this one. |

## Red Flags

- Changing more than 5 files to fix a "simple" bug (diagnosis is likely wrong)
- Fixing without reproducing first
- Multiple stacked fixes without verifying between each one
- Suppressing errors instead of fixing root cause (`@Disabled`, empty catch, lint-disable)
- Changing test assertions to match wrong behavior
- "It works now" without understanding what changed
- No regression test added after a bug fix

## Verification

After fixing a bug:

- [ ] Root cause identified and understood (not just symptom)
- [ ] Fix addresses the root cause specifically
- [ ] Regression test exists that fails without the fix
- [ ] All existing tests pass
- [ ] Build succeeds
- [ ] InMemory/Fake implementations updated if interfaces changed
- [ ] Original failure scenario verified end-to-end
154 changes: 154 additions & 0 deletions .claude/skills/define/SKILL.md
@@ -0,0 +1,154 @@
---
name: define
description: |
Clarifies requirements before any code is written. Creates a plan document with assumptions, success criteria, and impact scope.
Trigger: "/define", or when the user says "이거 구현해줘", "기능 추가", "요구사항 정리", "define requirements".
Use when starting a new feature, when requirements are vague, or when the scope of a change is unclear.
Do NOT write code during this skill — the output is a plan document, not implementation.
argument-hint: "[feature description]"
---

# Define Requirements

## Overview

Write a structured specification before writing any code. The plan document is the shared source of truth — it defines what we are building, why, and how we will know it is done. Code without a spec is guessing.

This skill creates `plan/{feature-name}.md` with an Overview section. The `/plan` skill later adds Tasks to the same document.

## When to Use

- Starting a new feature or significant change
- Requirements are ambiguous or incomplete
- The change touches 3+ files or multiple modules
- The user gives a vague request ("이거 구현해줘", "추가해줘")

## When NOT to Use

- Bug fixes with clear reproduction (use `/debug`)
- Single-file changes with obvious scope
- Requirements are already documented in a plan file
- Test-only work (use `/test`)

## Process

### Step 1: Parse Request

Identify what the user wants. Do NOT assume scope.

- What domain is involved? (alcohols, rating, review, support, etc.)
- Which module? (product-api, admin-api, or both?)
- What is the expected user-facing behavior?
- Are there related features already implemented?

If anything is unclear, ask before proceeding. Do NOT fill in ambiguous requirements silently.

### Step 2: Surface Assumptions

List every assumption explicitly. Each assumption is something that could be wrong.

```
ASSUMPTIONS:
1. This feature is for product-api (not admin-api)
2. Authentication is required (not a public endpoint)
3. The alcohol entity already exists and does not need schema changes
4. Pagination uses cursor-based approach (project default for product)
-> Confirm or correct these before I proceed.
```

Do NOT proceed without user confirmation on assumptions.

### Step 3: Define Success Criteria

Each criterion must be specific and testable. Translate vague requirements into concrete conditions.

```
REQUIREMENT: "평점 통계 기능 추가"

SUCCESS CRITERIA:
- GET /api/v1/ratings/statistics/{alcoholId} returns average rating, count, and distribution
- Response includes rating distribution as a map (e.g., {FIVE: 12, FOUR: 8, ...})
- Unauthenticated users can access (read-only endpoint)
- Response time < 500ms for alcohols with 1000+ ratings
-> Are these the right targets?
```

### Step 4: Analyze Impact Scope

Check which modules and components are affected:

- **Modules**: Which of mono, product-api, admin-api are involved?
- **Domains**: Does this touch multiple domains? (If yes, Facade needed)
- **Entities**: Any schema changes? (Liquibase migration needed)
- **Events**: New domain events? Existing event listeners affected?
- **Cache**: Does this data need caching? Existing cache invalidation affected?
- **Tests**: Which test types will be needed? (unit, integration, RestDocs)

### Step 5: Create Plan Document

Create `plan/{feature-name}.md` in Korean with the following structure:

```markdown
# Plan: [기능명]

## Overview
[무엇을 왜 만드는지]

### Assumptions
- [가정 1]
- [가정 2]

### Success Criteria
- [성공 기준 1 - 구체적, 테스트 가능]
- [성공 기준 2]

### Impact Scope
- [영향받는 모듈/파일 목록]
```

This document will be extended by `/plan` (Tasks section) and `/implement` (Progress Log).
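The skeleton above can be scaffolded from the shell as a starting point; the project path and feature name below are hypothetical, and the Korean content reuses the rating-statistics example from Step 3:

```shell
# Scaffold plan/{feature-name}.md with the Overview skeleton
FEATURE="rating-statistics"   # hypothetical feature name
mkdir -p /tmp/demo-project/plan
cat > "/tmp/demo-project/plan/${FEATURE}.md" <<'EOF'
# Plan: 평점 통계 기능

## Overview
술(alcohol)별 평점 통계 API를 제공한다.

### Assumptions
- product-api 대상 (admin-api 아님)

### Success Criteria
- GET /api/v1/ratings/statistics/{alcoholId} 가 평균, 건수, 분포를 반환

### Impact Scope
- bottlenote-mono rating 도메인, bottlenote-product-api 컨트롤러
EOF
echo "created plan/${FEATURE}.md"
```

The Assumptions and Success Criteria sections start deliberately thin; Steps 2 and 3 fill them in with the user before `/plan` extends the document.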

### Step 6: User Approval Gate

Present the complete Overview to the user. Do NOT proceed to `/plan` or `/implement` without explicit approval.

```
Plan document created: plan/{feature-name}.md

Summary:
- Assumptions: [count] items listed
- Success criteria: [count] conditions defined
- Impact: [modules affected]

Approve to proceed to /plan for task breakdown?
```

## Common Rationalizations

| Rationalization | Reality |
|-----------------|---------|
| "This is simple, I don't need a spec" | Simple tasks still need acceptance criteria. A 2-line spec is fine. |
| "I'll figure it out while coding" | That is how you end up with rework. 15 minutes of spec saves 3 hours of wrong implementation. |
| "Requirements will change anyway" | That is why the spec is a living document. Having one that changes is better than having none. |
| "The user knows what they want" | Even clear requests have implicit assumptions. The spec surfaces those. |
| "I can just start with /implement" | Without defined success criteria, how will you know when you are done? |

## Red Flags

- Jumping to code without user approval on assumptions
- Assumptions not listed explicitly
- Success criteria that are not testable ("make it better", "improve performance")
- Missing impact analysis (especially cross-domain Facade needs)
- Proceeding to `/plan` without user approval on the Overview
- Creating multiple plan documents for a single feature

## Verification

Before proceeding to `/plan`:

- [ ] Plan document exists at `plan/{feature-name}.md`
- [ ] Assumptions are listed and confirmed by user
- [ ] Success criteria are specific and testable
- [ ] Impact scope identifies affected modules, domains, and test types
- [ ] User has explicitly approved the Overview
- [ ] Document is written in Korean (plan documents use Korean)