[volume-7] Event-driven architecture and Kafka pipeline implementation #280
pable91 wants to merge 28 commits into Loopers-dev-lab:pable91 from
Conversation
📝 Walkthrough

Adds a Kafka module dependency and introduces the Outbox pattern: domain events are recorded in the Outbox, a scheduler relays them to Kafka, and the streamer consumes from Kafka to aggregate product metrics and handle coupon issuance.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant Controller
    participant LikeFacade
    participant AppEventPublisher
    participant OutboxRepo
    participant OutboxScheduler
    participant Kafka
    participant Streamer
    Client->>Controller: POST /likes/{productId}
    Controller->>LikeFacade: toggleLike(productId, userId)
    LikeFacade->>OutboxRepo: save(OutboxEvent for LIKED/UNLIKED)
    LikeFacade->>AppEventPublisher: publish(LikedEvent/UnlikedEvent)
    LikeFacade-->>Controller: LikeAction
    Note over OutboxScheduler,Kafka: periodic (5s) relay
    OutboxScheduler->>OutboxRepo: findUnpublished()
    OutboxRepo-->>OutboxScheduler: events
    loop for each event
        OutboxScheduler->>Kafka: send(topic=eventType, key=partitionKey, payload)
        Kafka-->>OutboxScheduler: sendResult
        OutboxScheduler->>OutboxRepo: markPublished(id)
    end
    Kafka->>Streamer: deliver to topic
    Streamer->>Streamer: process(event) with EventHandled idempotency
```
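The `EventHandled` idempotency step above can be sketched without Spring or Kafka. This is a minimal illustration only (the class and method names are assumptions, not the PR's actual types): the real PR persists `EventHandled` rows, while here an in-memory set stands in for that table.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of idempotent consumption: record each eventId before
// processing, and skip any event that has already been handled.
public class IdempotentConsumerSketch {
    private final Set<String> handled = ConcurrentHashMap.newKeySet();
    private final AtomicInteger processedCount = new AtomicInteger();

    /** Returns true if the event was processed, false if skipped as a duplicate. */
    public boolean process(String eventId) {
        // add() is atomic: only the first caller for a given eventId wins
        if (!handled.add(eventId)) {
            return false; // already handled, skip
        }
        processedCount.incrementAndGet(); // stand-in for metric aggregation
        return true;
    }

    public int processedCount() {
        return processedCount.get();
    }

    public static void main(String[] args) {
        IdempotentConsumerSketch c = new IdempotentConsumerSketch();
        // Kafka redelivery: the same topic-partition-offset id arrives twice
        System.out.println(c.process("ORDER_CONFIRMED-0-42")); // true
        System.out.println(c.process("ORDER_CONFIRMED-0-42")); // false (duplicate)
        System.out.println(c.processedCount()); // 1
    }
}
```

In the streamer the event id is derived from topic, partition, and offset, so at-least-once delivery collapses to effectively-once processing.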
```mermaid
sequenceDiagram
    autonumber
    participant PaymentGateway
    participant PaymentFacade
    participant OrderRepo
    participant AppEventPublisher
    participant OutboxRepo
    participant OutboxScheduler
    participant Kafka
    participant CouponIssueConsumer
    participant CouponProcessor
    PaymentGateway->>PaymentFacade: payment callback
    PaymentFacade->>OrderRepo: save/confirm order
    PaymentFacade->>AppEventPublisher: publish(OrderConfirmedEvent)
    PaymentFacade->>OutboxRepo: save(OutboxEvent ORDER_CONFIRMED)
    Note over OutboxScheduler,Kafka: Outbox to Kafka relay
    OutboxScheduler->>Kafka: send(ORDER_CONFIRMED)
    Kafka->>CouponIssueConsumer: deliver ORDER_CONFIRMED
    CouponIssueConsumer->>CouponProcessor: process per productId
    CouponProcessor->>CouponTemplateRepo: tryIncrementIssuedCount (atomic)
    CouponProcessor->>CouponRepo: save CouponEntity / update request status
```
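The `tryIncrementIssuedCount (atomic)` step relies on a conditional increment that never exceeds the template's issue limit. The PR presumably does this with an atomic conditional `UPDATE` in the database; as a self-contained sketch of the same compare-and-set semantics (class and method names here are illustrative, not the PR's):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a capped, race-safe issued-count increment. An AtomicInteger
// CAS loop models in memory what a conditional UPDATE ... WHERE
// issued_count < limit guarantees in the database.
public class CouponCounterSketch {
    private final AtomicInteger issuedCount = new AtomicInteger();
    private final int limit;

    public CouponCounterSketch(int limit) {
        this.limit = limit;
    }

    /** Returns true if a slot was claimed, false once the limit is reached. */
    public boolean tryIncrementIssuedCount() {
        while (true) {
            int current = issuedCount.get();
            if (current >= limit) {
                return false; // sold out: do not issue
            }
            // CAS retries if another thread incremented concurrently
            if (issuedCount.compareAndSet(current, current + 1)) {
                return true;
            }
        }
    }

    public int issuedCount() {
        return issuedCount.get();
    }
}
```

Whatever the storage, the key property is that the check and the increment are one atomic step, so concurrent consumers can never issue more than `limit` coupons.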
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 16
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
modules/kafka/src/main/resources/kafka.yml (1)
24-24: ⚠️ Potential issue | 🔴 Critical

The consumer `value-serializer` setting is wrong.

From an operational standpoint, this setting prevents the consumer from deserializing messages. `value-serializer` is a producer-only property; the consumer must use `value-deserializer`. With the current configuration, the setting is either ignored at consumer startup or triggers a runtime error.

Proposed fix:

```diff
-      value-serializer: org.apache.kafka.common.serialization.ByteArrayDeserializer
+      value-deserializer: org.apache.kafka.common.serialization.ByteArrayDeserializer
```

Additional testing: run an integration test under the local or test profile to confirm the consumer receives messages correctly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/kafka/src/main/resources/kafka.yml` at line 24, The Kafka consumer YAML uses the producer-only key `value-serializer` which prevents the consumer from deserializing messages; change the setting key to `value-deserializer` (keeping the class `org.apache.kafka.common.serialization.ByteArrayDeserializer`) under the consumer config so consumers use the correct deserializer, and then run an integration test under the local/test profile to verify consumers receive and decode messages correctly.
🟡 Minor comments (5)
apps/commerce-api/src/main/java/com/loopers/domain/order/OrderConfirmedEvent.java (1)

8-11: ⚠️ Potential issue | 🟡 Minor

Core identifiers (`orderId`, `userId`) must be guarded against null at construction time.

Operationally, if the identifiers on an order-confirmed event are null, downstream coupon issuance and outbox processing fail late, which makes recovery expensive. Proposed fix: null-check both fields in the constructor. Additional tests: verify that an exception is thrown for a null `orderId` and for a null `userId` respectively.

As per coding guidelines `**/domain/**/*.java`: null handling and immutability guarantees should be checked first.

Example fix:

```diff
+import java.util.Objects;
@@
 public OrderConfirmedEvent(Long orderId, Long userId) {
-    this.orderId = orderId;
-    this.userId = userId;
+    this.orderId = Objects.requireNonNull(orderId, "orderId must not be null");
+    this.userId = Objects.requireNonNull(userId, "userId must not be null");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/order/OrderConfirmedEvent.java` around lines 8 - 11, Constructor OrderConfirmedEvent currently allows null orderId and userId; update the OrderConfirmedEvent(Long orderId, Long userId) constructor to validate both parameters and throw a clear unchecked exception (e.g., IllegalArgumentException or NullPointerException) when either orderId or userId is null, assigning only after validation to preserve immutability of the fields; then add unit tests that assert an exception is thrown for a null orderId and for a null userId respectively, referencing the OrderConfirmedEvent constructor and the orderId/userId fields in the tests.

apps/commerce-api/src/main/java/com/loopers/domain/order/OrderRequestedEvent.java (1)
7-9: ⚠️ Potential issue | 🟡 Minor

`userId` needs a null guard at event creation.

Operationally, if an event is created with a null `userId`, failures surface late in listeners, logging, or serialization, making root-cause analysis difficult. Proposed fix: reject null immediately in the constructor. Additional test: a unit test asserting that `new OrderRequestedEvent(null)` throws.

As per coding guidelines `**/domain/**/*.java`: check null handling and immutability, and block invalid state at the domain boundary.

Example fix:

```diff
+import java.util.Objects;
@@
 public OrderRequestedEvent(Long userId) {
-    this.userId = userId;
+    this.userId = Objects.requireNonNull(userId, "userId must not be null");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/order/OrderRequestedEvent.java` around lines 7 - 9, The constructor OrderRequestedEvent(Long userId) must guard against null to enforce immutability and domain validity: update the OrderRequestedEvent constructor to validate userId and throw an immediate unchecked exception (e.g., NullPointerException or IllegalArgumentException) when null is passed, and add a unit test that asserts new OrderRequestedEvent(null) throws the expected exception; reference the OrderRequestedEvent constructor to locate and change the validation and add the corresponding unit test for the null case.

apps/commerce-api/src/main/java/com/loopers/domain/product/ProductViewedEvent.java (1)
7-9: ⚠️ Potential issue | 🟡 Minor

`productId` needs a null guard at event creation.

Operationally, a null payload causes exceptions on the consumer side and inflates retry and duplicate-handling costs. Proposed fix: reject null immediately in the constructor. Additional test: include the `new ProductViewedEvent(null)` exception case.

As per coding guidelines `**/domain/**/*.java`: guarantee valid domain-model state through null handling and encapsulation.

Example fix:

```diff
+import java.util.Objects;
@@
 public ProductViewedEvent(Long productId) {
-    this.productId = productId;
+    this.productId = Objects.requireNonNull(productId, "productId must not be null");
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductViewedEvent.java` around lines 7 - 9, The ProductViewedEvent constructor must defensively reject a null productId to maintain domain invariants: update the ProductViewedEvent(Long productId) constructor to validate productId (throw a clear runtime exception such as NullPointerException or IllegalArgumentException with a descriptive message) and keep productId private/final to preserve encapsulation; also add a unit test that constructs new ProductViewedEvent(null) and asserts the expected exception is thrown to prevent regressions.

apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java (1)
53-59: ⚠️ Potential issue | 🟡 Minor

The transaction boundary and atomicity of `recordView()` must be defined explicitly.

In ProductV1Controller, `getProduct()` (readOnly transaction) → `publishEvent()` → `recordView()` (read-write transaction) run sequentially, so the product read and the view-event save are not handled in a single transaction. If a failure occurs during `publishEvent()` after the read, or while `recordView()` is persisting, the view event can be lost.

If eventual consistency via the Outbox pattern is the intended design, document it. If strong consistency is required, handle the read and the event save within a single transaction, or introduce a compensating transaction.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java` around lines 53 - 59, The product view handling currently spans ProductV1Controller.getProduct() (readOnly) → publishEvent() → ProductFacade.recordView(), causing the read and outbox save to occur outside a single transaction and risking lost view events; either (A) enforce a single transactional boundary by moving the outbox save into the same transaction as the read (e.g., adjust getProduct()/publishEvent() to call ProductFacade.recordView() within a @Transactional(readOnly = false) scope or change propagation so recordView runs in the caller transaction), or (B) explicitly document that the outbox approach is eventual-consistency by adding Javadoc and design notes for ProductFacade.recordView() and ProductV1Controller.getProduct(), or (C) introduce compensation/retry (e.g., durable retry queue) around OutboxEventRepository.save in ProductFacade.recordView() to guarantee persistence; choose one approach and update ProductFacade.recordView(), ProductV1Controller.getProduct(), and publishEvent() accordingly.

apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java (1)
40-55: ⚠️ Potential issue | 🟡 Minor

Type casting and null handling in `processRecord()` are not defensive.

- `(byte[]) record.value()` throws ClassCastException on a misconfiguration
- `payload.get("productIds")` / `payload.get("productId")` cause an NPE when null
- the `(List<Integer>)` cast has no type safety

In production, a single malformed message can halt processing of the entire batch.

Suggestion: add defensive handling

```diff
 private void processRecord(ConsumerRecord<Object, Object> record) throws Exception {
     String topic = record.topic();
     String baseEventId = topic + "-" + record.partition() + "-" + record.offset();
-    byte[] bytes = (byte[]) record.value();
+    if (!(record.value() instanceof byte[] bytes)) {
+        throw new IllegalArgumentException("Unexpected value type: " + record.value().getClass());
+    }
     Map<String, Object> payload = objectMapper.readValue(bytes, new TypeReference<>() {});
     if ("ORDER_CONFIRMED".equals(topic)) {
-        List<Integer> productIds = (List<Integer>) payload.get("productIds");
+        Object rawProductIds = payload.get("productIds");
+        if (!(rawProductIds instanceof List<?> productIdList)) {
+            throw new IllegalArgumentException("productIds is missing or invalid");
+        }
-        for (Integer productId : productIds) {
+        for (Object item : productIdList) {
+            Long productId = ((Number) item).longValue();
             String eventId = baseEventId + "-" + productId;
-            processor.process(eventId, topic, productId.longValue());
+            processor.process(eventId, topic, productId);
         }
     } else {
-        Long productId = ((Number) payload.get("productId")).longValue();
+        Object rawProductId = payload.get("productId");
+        if (!(rawProductId instanceof Number)) {
+            throw new IllegalArgumentException("productId is missing or invalid");
+        }
+        Long productId = ((Number) rawProductId).longValue();
         processor.process(baseEventId, topic, productId);
     }
 }
```

Additional testing: unit tests covering exception handling for malformed payload formats.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java` around lines 40 - 55, processRecord currently assumes record.value() is a byte[] and payload keys exist and have exact types; make it defensive: verify record.value() is instance of byte[] before casting and handle otherwise (log and skip), wrap objectMapper.readValue in try/catch for JsonProcessingException, null-check payload and payload.get("productId")/("productIds") and return/skip with a log when missing, and for productIds treat it as List<?> then for each element assert it is Number (or convert via Number) before using longValue to call processor.process; update processRecord to validate types and handle malformed messages gracefully rather than allowing ClassCastException/NPE to propagate.
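The defensive extraction suggested in the diff above can also be isolated into a small helper. A sketch using plain `Map` access (the `PayloadGuards` class and method names are assumptions for illustration, not the PR's code):

```java
import java.util.List;
import java.util.Map;

// Sketch of defensive payload access: fail fast with a clear message
// instead of letting ClassCastException/NPE surface mid-batch.
public class PayloadGuards {
    /** Extracts a numeric field as long, rejecting missing or non-numeric values. */
    static long requireNumber(Map<String, Object> payload, String key) {
        Object raw = payload.get(key);
        if (!(raw instanceof Number n)) {
            throw new IllegalArgumentException(key + " is missing or invalid");
        }
        return n.longValue();
    }

    /** Extracts a list of numeric ids as List<Long>, rejecting missing or invalid values. */
    static List<Long> requireNumberList(Map<String, Object> payload, String key) {
        Object raw = payload.get(key);
        if (!(raw instanceof List<?> items)) {
            throw new IllegalArgumentException(key + " is missing or invalid");
        }
        return items.stream()
                .map(item -> ((Number) item).longValue())
                .toList();
    }
}
```

Centralizing the checks keeps `processRecord()` short, and a caller can catch the IllegalArgumentException per record to skip only the malformed message rather than the whole batch.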
🧹 Nitpick comments (14)
apps/commerce-api/src/main/java/com/loopers/domain/like/UnlikedEvent.java (1)
3-14: `productId` null validation is missing.

Operationally, if an event is created with a null `productId`, the event listener ends up calling `productService.decreaseLikeCount(null)`, which can cause unexpected behavior. Defensive design means a domain event guarantees its own validity at creation time.

Proposed fix:

```java
public UnlikedEvent(Long productId) {
    if (productId == null) {
        throw new IllegalArgumentException("productId must not be null");
    }
    this.productId = productId;
}
```

Additional test: a unit test verifying that creating the event with a null productId throws.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/like/UnlikedEvent.java` around lines 3 - 14, Add null-checking to the UnlikedEvent constructor so the domain invariant is enforced: in the UnlikedEvent(Long productId) constructor validate productId != null and throw an IllegalArgumentException with a clear message if it is null; update any callers/tests as needed and add a unit test that constructing new UnlikedEvent(null) results in the expected IllegalArgumentException to prevent downstream productService.decreaseLikeCount(null) calls.

apps/commerce-streamer/src/main/java/com/loopers/domain/event/EventHandled.java (1)
15-17: The `create()` factory method needs null validation.

Operationally, if `EventHandled` is created with a null `eventId` or `eventType`, persisting `EventHandledEntity` fails with a DB `NOT NULL` constraint violation. The exception then surfaces at the DB level, which makes debugging hard; validating early in the domain layer yields a clear error message.

Proposed fix:

```java
public static EventHandled create(String eventId, String eventType) {
    if (eventId == null || eventId.isBlank()) {
        throw new IllegalArgumentException("eventId must not be null or blank");
    }
    if (eventType == null || eventType.isBlank()) {
        throw new IllegalArgumentException("eventType must not be null or blank");
    }
    return new EventHandled(null, eventId, eventType);
}
```

Additional tests: unit tests verifying that creation with a null or blank eventId or eventType throws.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-streamer/src/main/java/com/loopers/domain/event/EventHandled.java` around lines 15 - 17, The EventHandled.create factory currently allows null/blank eventId or eventType which can cause DB NOT NULL violations; update EventHandled.create to validate that eventId and eventType are non-null and non-blank and throw IllegalArgumentException with clear messages (e.g., "eventId must not be null or blank" / "eventType must not be null or blank") before constructing the new EventHandled(null, eventId, eventType); also add unit tests that assert creation with null or blank eventId/eventType on EventHandled.create throws the expected IllegalArgumentException to prevent DB-level errors when persisting EventHandledEntity.

apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikedEvent.java (1)
3-20: Null validation is missing for both `productId` and `userId`.

Operationally, when `UserActionLogListener` processes this event, null fields are written to the log as `null`, which makes tracing difficult. For consistency, add null validation to both this class and `ProductUnlikedEvent` (see the context snippet), which share the same structure.

Proposed fix:

```java
public ProductLikedEvent(Long productId, Long userId) {
    if (productId == null || userId == null) {
        throw new IllegalArgumentException("productId and userId must not be null");
    }
    this.productId = productId;
    this.userId = userId;
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikedEvent.java` around lines 3 - 20, The ProductLikedEvent constructor lacks null checks for productId and userId; update the ProductLikedEvent(Long productId, Long userId) constructor to validate that neither productId nor userId is null and throw an IllegalArgumentException with a clear message if they are; mirror the null-check pattern used in ProductUnlikedEvent to keep both event classes consistent and prevent null values from reaching UserActionLogListener.

apps/commerce-api/src/main/java/com/loopers/domain/like/LikedEvent.java (1)
3-14: As with `UnlikedEvent`, adding `productId` null validation is recommended.

`LikedEvent` has the same structure as `UnlikedEvent`, so the null-validation logic should be applied consistently. It can be omitted if validation is guaranteed before the event is published, but constructor validation is recommended from a defensive-design standpoint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/like/LikedEvent.java` around lines 3 - 14, The LikedEvent constructor lacks a null check for productId (same as UnlikedEvent); add defensive validation in the LikedEvent(Long productId) constructor to throw a clear exception when productId is null (e.g., use Objects.requireNonNull(productId, "productId must not be null") or throw IllegalArgumentException/NullPointerException) so the class mirrors UnlikedEvent's behavior and prevents creating events with a null productId.

apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledEntity.java (1)
34-37: The constructor has no null guard.

Operationally, if the `EventHandled` domain object is null, or its fields are null, this constructor throws an NPE. This is acceptable if the domain layer guarantees validation, but explicit checks make debugging easier from a defensive-design standpoint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledEntity.java` around lines 34 - 37, EventHandledEntity's constructor lacks null checks and can NPE when eventHandled or its fields are null; update the constructor (EventHandledEntity(EventHandled)) to defensively validate the input by checking that the EventHandled parameter is not null and that eventHandled.getEventId() and eventHandled.getEventType() are not null (use Objects.requireNonNull or explicit checks) and throw a clear IllegalArgumentException/NullPointerException with descriptive messages if any are null before assigning to this.eventId and this.eventType.

apps/commerce-api/src/main/java/com/loopers/interfaces/api/order/OrderV1Controller.java (1)
31-31: Responsibility for publishing the order event is safer in the application layer than in the Controller.

Operationally, managing event publication timing in the Controller (as at line 31) lets missing-event and duplicate-event policies diverge per endpoint and makes failures hard to trace. Proposed fix: move event publishing into `OrderFacade` (or a service) so it happens consistently only on the success path, and if needed split it out as an after-commit listener. Additional tests: verify the event is published exactly once on order success, not published on order failure, and that the API response format is preserved when an exception occurs.

Minimal-controller example:

```diff
-    eventPublisher.publishEvent(new OrderRequestedEvent(user.getId()));
     OrderCommand command = new OrderCommand(
         user.getId(),
         request.toProductQuantities(),
         request.couponId()
     );
```

As per coding guidelines `**/*Controller*.java`: the Controller should focus on request validation and response assembly; business logic belongs in the Service.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/interfaces/api/order/OrderV1Controller.java` at line 31, Move the event publishing responsibility out of OrderV1Controller and into the application/service layer (e.g., OrderFacade or OrderService): remove the direct call to eventPublisher.publishEvent(new OrderRequestedEvent(user.getId())) from OrderV1Controller and add publishing logic inside OrderFacade (or the service method that executes the order) so the event is emitted only on the successful completion of the business operation (after persistence/transaction commit if applicable); keep controller focused on validation/response assembly, and add tests that assert OrderRequestedEvent is published exactly once on success and not published on failure or exception while preserving API error responses.

apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventJpaRepository.java (1)
14-16: Adding `@Modifying(clearAutomatically = true)` to markPublished is recommended.

If the persistence context is not cleared automatically after a `@Modifying` query, reading the same entity later in the same transaction can return stale data.

♻️ Add clearAutomatically

```diff
-    @Modifying
+    @Modifying(clearAutomatically = true)
     @Query("UPDATE OutboxEventEntity e SET e.publishedAt = :publishedAt WHERE e.id = :id")
     void markPublished(@Param("id") Long id, @Param("publishedAt") ZonedDateTime publishedAt);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventJpaRepository.java` around lines 14 - 16, The `@Modifying` query in OutboxEventJpaRepository.markPublished should clear the persistence context automatically to avoid stale reads; update the `@Modifying` annotation on the markPublished method to include clearAutomatically = true (i.e., `@Modifying(clearAutomatically = true)`) so that after executing the UPDATE the EntityManager clears managed entities and subsequent reads in the same transaction return fresh state.

apps/commerce-api/src/main/java/com/loopers/domain/like/ProductUnlikedEvent.java (1)
3-19: Consider using a Java record.

The event class is a plain data holder, so converting it to a Java record generates equals/hashCode/toString automatically and keeps the code concise.

♻️ Suggested record conversion

```diff
-public class ProductUnlikedEvent {
-
-    private final Long productId;
-    private final Long userId;
-
-    public ProductUnlikedEvent(Long productId, Long userId) {
-        this.productId = productId;
-        this.userId = userId;
-    }
-
-    public Long getProductId() {
-        return productId;
-    }
-
-    public Long getUserId() {
-        return userId;
-    }
-}
+public record ProductUnlikedEvent(Long productId, Long userId) {
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/like/ProductUnlikedEvent.java` around lines 3 - 19, The ProductUnlikedEvent class is a simple data holder and should be converted to a Java record to eliminate boilerplate; replace the class declaration and fields/getters with a record declaration like ProductUnlikedEvent(Long productId, Long userId) (ensure the project Java version supports records and update any serialization frameworks or usages if they require explicit constructors or annotations).

apps/commerce-api/src/main/java/com/loopers/application/OutboxEventHelper.java (1)
14-16: Use CoreException instead of RuntimeException.

Per the project's unified error-handling pattern, errors should be routed through `CoreException` to `ApiControllerAdvice`. The current `RuntimeException` can break the consistent error response format.

A serialization failure is a bug that should be caught during development, so `IllegalStateException` or the project's `CoreException` is the more appropriate choice.

♻️ Suggested exception-type change

```diff
 public static String toJson(ObjectMapper objectMapper, Object value) {
     try {
         return objectMapper.writeValueAsString(value);
     } catch (JsonProcessingException e) {
-        throw new RuntimeException("Outbox payload 직렬화 실패", e);
+        throw new IllegalStateException("Outbox payload 직렬화 실패", e);
     }
 }
```

Based on learnings: "enforce unified error handling by routing errors through CoreException to ApiControllerAdvice".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/OutboxEventHelper.java` around lines 14 - 16, The catch block in OutboxEventHelper that currently throws new RuntimeException("Outbox payload 직렬화 실패", e) should be changed to throw the project's CoreException (preserving the original cause and message) so the error is routed through ApiControllerAdvice; locate the catch(JsonProcessingException e) in OutboxEventHelper and replace the RuntimeException usage with constructing and throwing CoreException (or wrap the JsonProcessingException as the cause) to follow the unified error-handling pattern.

apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsEntity.java (1)
38-43: Dropping the domain `id` in the mapping narrows what save can mean.

Operationally, this constructor does not preserve the `id` of `ProductMetrics`, so the moment the same mapping is reused for save/update it is treated as a fresh insert, producing `uk_product_metrics_product_id` conflicts or reprocessing failures. It is safer to add a factory or constructor that carries the `id`, or to separate creation-only and update-only mappings. Add a repository test confirming that saving `restore(existingId, ...)` updates the existing row.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsEntity.java` around lines 38 - 43, The ProductMetricsEntity(ProductMetrics metrics) constructor drops the domain id which causes new inserts instead of updates; add an alternative constructor or static factory (e.g., ProductMetricsEntity.restore or ProductMetricsEntity(ProductMetrics metrics, Long id)) that copies metrics.getId() into the entity id field so saves with an existing id perform updates; update or add tests to cover saving/restoring an entity with an existing id to ensure the repository performs an update (verify no uk_product_metrics_product_id conflict and the row is updated).

apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeV1Controller.java (1)
30-35: The event-publishing branch belongs in the Facade rather than the Controller.

Operationally, in the current structure only the HTTP Controller publishes `ProductLikedEvent` and `ProductUnlikedEvent`, so any other entry point calling `LikeFacade.toggleLike()` misses the same logging and follow-up processing, splitting observability. Move this branch inside `LikeFacade` and, where possible, publish exactly once as an after-commit event. Verify with a Facade integration test that exactly one event is published for each of liked/unliked.

As per coding guidelines: the Controller focuses on request validation (Bean Validation) and response assembly; business logic moves to the Service.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeV1Controller.java` around lines 30 - 35, Move the event-publishing responsibility out of LikeV1Controller and into LikeFacade so all callers of likeFacade.toggleLike(...) get the same post-change events; update LikeFacade (or its toggleLike method) to publish exactly one ProductLikedEvent or ProductUnlikedEvent based on the returned LikeAction and do so as a transaction-after-commit event (use TransactionalEventListener/after-commit publishing pattern or framework equivalent) instead of letting Controller call eventPublisher; remove the eventPublisher calls from LikeV1Controller; and add a Facade integration test asserting that when LikeFacade.toggleLike(productId, userId) is invoked it emits exactly one corresponding event (liked vs unliked).

apps/commerce-api/src/main/java/com/loopers/application/logging/UserActionLogListener.java (1)
15-32: Logging high-frequency user actions per occurrence at `info` inflates logging cost sharply.

Operationally, writing synchronous `info` logs for high-traffic events like `ProductViewedEvent` increases request latency, log storage cost, and search noise all at once. If the purpose is analytics, send the data to Kafka/metrics and lower application logging to `debug`, or use a sampling/async appender. Load-test to confirm that response latency and log volume stay within acceptable bounds relative to event throughput.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/logging/UserActionLogListener.java` around lines 15 - 32, High-frequency user actions (especially ProductViewedEvent handled by handleProductViewed) are currently logged at info which will inflate log volume; change handleProductViewed to use debug (log.debug) or route the event to your metrics/Kafka pipeline instead of synchronous app logs, and for other frequent handlers like handleProductLiked and handleProductUnliked consider either lowering to debug, applying sampling, or using an async appender/metrics emitter; ensure the event handlers (handleProductViewed, handleProductLiked, handleProductUnliked, handleOrderRequested) are updated consistently and add a comment noting where high-volume analytics are emitted (metrics/Kafka) so future reviewers know why app logs are reduced.

apps/commerce-api/src/main/java/com/loopers/domain/outbox/OutboxEvent.java (1)
23-25: Required-field validation is missing in the `create()` method.

`eventType`, `payload`, and `partitionKey` have NOT NULL constraints in the DB, but there is no domain-level validation. If null or an empty string is passed, the exception surfaces in the infrastructure layer, which makes debugging harder.

Suggestion: add validation to the factory method

```diff
 public static OutboxEvent create(String eventType, String payload, String partitionKey) {
+    if (eventType == null || eventType.isBlank()) {
+        throw new IllegalArgumentException("eventType must not be null or blank");
+    }
+    if (payload == null) {
+        throw new IllegalArgumentException("payload must not be null");
+    }
+    if (partitionKey == null || partitionKey.isBlank()) {
+        throw new IllegalArgumentException("partitionKey must not be null or blank");
+    }
     return new OutboxEvent(null, eventType, payload, partitionKey, null, null);
 }
```

Additional testing: unit tests verifying that invalid arguments throw.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/outbox/OutboxEvent.java` around lines 23 - 25, The OutboxEvent.create factory lacks domain-level validation for required fields (eventType, payload, partitionKey); add checks in OutboxEvent.create to throw an IllegalArgumentException (or a domain-specific unchecked exception) when any of these parameters is null or blank (empty/whitespace), and ensure the method returns the new OutboxEvent only after validation; update or add unit tests to assert that create(...) throws for null/blank eventType, payload, and partitionKey and succeeds for valid values.

apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventEntity.java (1)
33-39: Null validation is missing in `toEntity()`.

If the `outboxEvent` parameter or its fields (`eventType`, `payload`, `partitionKey`) are null, a DB constraint violation or NPE occurs, and diagnosing event-save failures in production becomes difficult.

Suggestion: add null validation

```diff
 public static OutboxEventEntity toEntity(OutboxEvent outboxEvent) {
+    if (outboxEvent == null) {
+        throw new IllegalArgumentException("outboxEvent must not be null");
+    }
     OutboxEventEntity entity = new OutboxEventEntity();
     entity.eventType = outboxEvent.getEventType();
     entity.payload = outboxEvent.getPayload();
     entity.partitionKey = outboxEvent.getPartitionKey();
     return entity;
 }
```

Additional testing: a unit test verifying that null input throws.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventEntity.java` around lines 33 - 39, toEntity currently lacks null validation: ensure the method validates its input and required fields (parameter OutboxEvent and its eventType, payload, partitionKey) before constructing OutboxEventEntity; use Objects.requireNonNull or explicit checks in toEntity(OutboxEvent outboxEvent) and throw IllegalArgumentException (or NullPointerException with clear message) naming the missing field(s) so DB constraint violations are avoided and errors are actionable, and add unit tests for null outboxEvent and null individual fields to assert the expected exception is thrown.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeEventListener.java`:
- Around line 21-33: The AFTER_COMMIT listeners handleLiked and handleUnliked
call productService.increaseLikeCount/decreaseLikeCount with no recovery, so
failures after the main transaction cause silent loss; wrap each handler in
resilient error-handling: catch exceptions around productService calls, log them
with context, and on failure either enqueue a retry task/message (persist
minimal retry record or push to a retry queue) or annotate the handler with a
retry mechanism (e.g., `@Retryable`) so transient DB errors are retried;
alternatively, remove these listeners and consolidate count updates into the
existing ProductMetricsProcessor Kafka consumer to ensure a single, reliable
update path—apply one of these fixes to handleLiked and handleUnliked and add
monitoring/logging for retries.
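The retry option suggested above can be sketched as a small wrapper, independent of Spring or `@Retryable` (the `retryingRun` name and the attempt-count parameter are illustrative assumptions, not the PR's code):

```java
// Sketch of bounded retry for an after-commit side effect. A real listener
// would log each failure with context and, when all attempts are exhausted,
// persist a durable retry record instead of silently dropping the update.
public class RetrySketch {
    /**
     * Runs the task, retrying up to maxAttempts times on RuntimeException.
     * Returns true on success, false if every attempt failed.
     */
    public static boolean retryingRun(Runnable task, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                task.run();
                return true;
            } catch (RuntimeException e) {
                // log with context here; fall through and retry
                if (attempt == maxAttempts) {
                    return false; // give up: enqueue for later recovery
                }
            }
        }
        return false; // unreachable
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // stand-in for a transient DB error: fails twice, succeeds on attempt 3
        boolean ok = retryingRun(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("transient DB error");
        }, 3);
        System.out.println(ok + " after " + calls[0] + " calls"); // true after 3 calls
    }
}
```

The alternative in the comment, consolidating count updates into the Kafka consumer path, avoids this wrapper entirely because the consumer's offset/idempotency machinery already provides retries.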
In
`@apps/commerce-api/src/main/java/com/loopers/application/logging/UserActionLogListener.java`:
- Around line 21-32: Change the logging in handleProductLiked, handleProductUnliked, and handleOrderRequested so the raw userId never appears in logs: instead of printing the raw userId, generate a masked or hashed value (e.g., maskedUserId or hashedUserId) and log that, or route the sensitive data to a dedicated audit sink and drop it from the info logs; also add a log-capture unit test verifying that the raw userId is never emitted.
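The masking suggested above can be sketched with a plain SHA-256 hash; the class and method names here (LogMasking, hashedUserId) are illustrative, not from the PR:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

final class LogMasking {
    // Hashes a user id for logging so the raw value never reaches the log
    // pipeline; SHA-256 truncated to 12 hex chars keeps log lines short while
    // staying stable enough for correlating events from the same user.
    static String hashedUserId(Long userId) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(String.valueOf(userId).getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest).substring(0, 12);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }
}
```

A listener would then log `hashedUserId(event.getUserId())` instead of the raw id. Truncation trades collision resistance for brevity; a keyed hash (HMAC) would be stronger if correlation across log stores matters.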
In
`@apps/commerce-api/src/main/java/com/loopers/application/outbox/OutboxRelayScheduler.java`:
- Around line 24-26: KafkaTemplate.send(...) is asynchronous so
markPublished(event.getId()) is called before the send completes; change
OutboxRelayScheduler to wait for the send result and only call
outboxEventRepository.markPublished(...) on successful send, and handle failures
by logging and leaving the event unmarked (or scheduling retry). Concretely,
replace the direct kafkaTemplate.send(event.getEventType(),
event.getPartitionKey(), event.getPayload()) + immediate
outboxEventRepository.markPublished(...) with waiting on the send Future (e.g.,
use the returned Future/ListenableFuture from kafkaTemplate.send(...).get() or
addCallback/onSuccess/onFailure) and call
outboxEventRepository.markPublished(event.getId()) inside the success handler
and handle exceptions in the failure handler (log and do not mark as published).
- Around line 20-31: Add a transactional boundary to the scheduler by annotating
relay() with `@Transactional` so markPublished(...) executes inside a transaction;
change outboxEventRepository.findUnpublished() to use a SELECT ... FOR UPDATE
SKIP LOCKED (with a jakarta.persistence.lock.timeout=3000 hint) and limit the
result set (e.g., LIMIT 100) to avoid memory spikes and prevent multiple
scheduler instances from selecting the same rows; keep relay() iterating over
the locked rows and calling kafkaTemplate.send(...) then
outboxEventRepository.markPublished(...); finally add an integration test that
runs multiple scheduler instances concurrently and asserts no duplicate Kafka
messages are produced for the same outbox id.
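The send-then-mark ordering can be sketched with the Spring pieces replaced by functional stand-ins: `send` stands in for the CompletableFuture that kafkaTemplate.send(...) returns in Spring Kafka 3.x, `markPublished` for outboxEventRepository.markPublished(id), and `relayOne` is a hypothetical helper name:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.LongConsumer;

final class OutboxRelaySketch {
    // Marks the outbox row published only after the broker acknowledges the
    // send; on failure the row stays unmarked so the next scheduler pass
    // retries it.
    static void relayOne(long eventId, CompletableFuture<?> send, LongConsumer markPublished) {
        send.whenComplete((result, ex) -> {
            if (ex == null) {
                markPublished.accept(eventId);   // success: safe to mark published
            } else {
                System.err.println("outbox publish failed, id=" + eventId + ": " + ex);
                // leave unmarked; the scheduler will pick it up again
            }
        });
    }
}
```

Callback-based completion keeps the relay loop non-blocking; a blocking `send.get()` per event is simpler but serializes throughput on broker latency.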
In
`@apps/commerce-api/src/main/java/com/loopers/application/payment/PaymentFacade.java`:
- Around line 100-106: The outbox payload currently only includes productIds
which causes sales to be undercounted for multi-quantity OrderItem; update the
OutboxEvent creation in PaymentFacade (the OutboxEvent.create /
OutboxEventHelper.toJson call) to serialize items as a list of objects with
productId (use OrderItem::refProductId) and quantity (OrderItem::getQuantity)
e.g. "items":[{"productId":..., "quantity":...}], then update
ProductMetricsConsumer to parse that payload and call
incrementSalesCount(productId, quantity) (or loop quantity times less preferred)
instead of incrementing once per id, and add/adjust an integration test to
assert per-product sales equal the sum of OrderItem quantities after checkout.
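The suggested payload shape can be sketched as below; OrderItem here is a stand-in record (the real class exposes refProductId()/getQuantity()), and the resulting list would be serialized through the existing ObjectMapper via OutboxEventHelper.toJson:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Stand-in for the domain OrderItem referenced in the comment.
record OrderItem(long refProductId, int quantity) {}

final class OutboxPayloadSketch {
    // One entry per order item, carrying both productId and quantity, so a
    // consumer can call incrementSalesCount(productId, quantity) and
    // multi-quantity items are counted in full.
    static List<Map<String, Object>> items(List<OrderItem> orderItems) {
        List<Map<String, Object>> items = new ArrayList<>();
        for (OrderItem item : orderItems) {
            items.add(Map.of("productId", item.refProductId(), "quantity", item.quantity()));
        }
        return items; // ends up in the payload as {"items":[...]}
    }
}
```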
In
`@apps/commerce-api/src/main/java/com/loopers/domain/outbox/OutboxEventRepository.java`:
- Line 9: The findUnpublished() method returns all unprocessed OutboxEvent rows
which can cause OOM/long runs; change the repository API to limit batch size
(e.g., replace List<OutboxEvent> findUnpublished() with List<OutboxEvent>
findUnpublished(int limit) or use Pageable) and update callers (the `@Scheduled`
job) to pass a sane batch size; additionally prevent duplicate processing during
concurrent scheduler runs by selecting rows with a DB lock/skip-locked or by
atomically marking events as "processing" (e.g., add/update a
markProcessing(List<Id>) or use SELECT ... FOR UPDATE SKIP LOCKED) in the
repository implementation before returning results so that OutboxEvent
processing is limited and mutually exclusive.
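Under those assumptions, a bounded, lock-skipping fetch might look like the following Spring Data sketch. The method name, ordering, and lock-timeout value are illustrative: the comment above suggests a timeout hint, while Hibernate's convention for SKIP LOCKED semantics is the special value `-2` for `jakarta.persistence.lock.timeout`.

```java
// Illustrative Spring Data JPA query: pessimistic lock with SKIP LOCKED
// semantics and a Pageable to bound the batch, invoked from a @Transactional
// scheduler method, e.g. findUnpublished(PageRequest.of(0, 100)).
@Lock(LockModeType.PESSIMISTIC_WRITE)
@QueryHints(@QueryHint(name = "jakarta.persistence.lock.timeout", value = "-2"))
@Query("select o from OutboxEventEntity o where o.publishedAt is null order by o.id")
List<OutboxEventEntity> findUnpublished(Pageable pageable);
```

The caller must hold a transaction for the duration of the send-and-mark loop, otherwise the lock is released before the rows are marked.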
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventJpaRepository.java`:
- Line 12: findByPublishedAtIsNull() in OutboxEventJpaRepository reads
uncommitted work without pessimistic locking, allowing multiple scheduler
instances to fetch and publish the same events; change the repository to use a
pessimistic lock (e.g., annotate the query or method with
`@Lock`(LockModeType.PESSIMISTIC_WRITE) or use a FOR UPDATE native query) and add
a result limit (e.g., switch to a pageable/top-N method such as
findTopByPublishedAtIsNull(Pageable) or findTopNByPublishedAtIsNull) so you only
fetch a bounded batch, then ensure OutboxRelayScheduler’s processing method is
annotated with `@Transactional` so the lock is acquired and held within a
transaction boundary while marking events published.
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductV1Controller.java`:
- Around line 45-48: Move the event publishing and outbox recording into the
facade so the controller only handles request/response: replace the three-step
sequence in ProductV1Controller (productFacade.getProduct(...),
eventPublisher.publishEvent(...), productFacade.recordView(...)) by a single
facade call; add a new transactional facade method (e.g.
ProductFacade.getProductWithViewEvent(Long productId)) that calls existing
getProduct(productId), then publishes the ProductViewedEvent and calls
recordView(productId) inside the same transaction, and update
ProductV1Controller to return the result from that new facade method.
In `@apps/commerce-api/src/main/resources/application.yml`:
- Line 26: application.yml currently force-imports kafka.yml for every profile, so a missing kafka.yml risks failing the whole application at boot; either change the import entry to "optional:kafka.yml" or restrict the kafka import to the profiles that actually use Kafka (e.g., add the kafka.yml import only to application-dev.yml/application-qa.yml/application-prd.yml), verify that the placeholder defaults in application*.yml behave safely, and add Spring context-load tests per profile (local/test/dev/qa/prd) asserting that only the intended profiles fail or succeed when kafka.yml is empty or missing (see: the kafka.yml import line, application.yml, and the profile-specific application-*.yml settings).
In
`@apps/commerce-streamer/src/main/java/com/loopers/application/metrics/ProductMetricsProcessor.java`:
- Around line 40-43: The ensureMetricsExists method can race: between
productMetricsRepository.findByProductId(productId) and save(...) another
transaction may insert the same productId causing a unique constraint error; fix
by making the insert idempotent (preferably add a unique constraint on
product_metrics.product_id if missing) and change ensureMetricsExists to perform
an upsert or defensive retry: use a repository/DAO method that executes INSERT
... ON CONFLICT DO NOTHING / MERGE for ProductMetrics (or a native query) or
wrap productMetricsRepository.save(ProductMetrics.create(productId)) with a
catch for DataIntegrityViolationException/ConstraintViolationException that then
re-queries productMetricsRepository.findByProductId(productId) and returns the
existing row; ensure adjustments reference ensureMetricsExists,
productMetricsRepository, and ProductMetrics.create.
- Around line 29-35: The switch in ProductMetricsProcessor currently logs
unknown eventType but still proceeds to persist EventHandled, causing
unsupported events to be marked as processed; modify the flow so that after the
switch (or inside its default branch) you perform an early return for unknown
eventType instead of continuing to the code that creates/saves EventHandled
(ensure the default branch of the switch returns or throws so methods like
incrementViewCount/incrementLikeCount/incrementSalesCount are only followed by
saving EventHandled when a known event was handled).
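The early-return dispatch can be sketched independently of Spring. The event-type names LIKED/UNLIKED/ORDER_CONFIRMED appear elsewhere in this PR; PRODUCT_VIEWED is an assumed name:

```java
final class MetricsDispatchSketch {
    // Returns true only when the event was actually applied; the caller saves
    // EventHandled only for supported events, so unknown types are retried or
    // surfaced rather than silently marked as processed.
    static boolean apply(String eventType, Runnable onView, Runnable onLike,
                         Runnable onUnlike, Runnable onSale) {
        switch (eventType) {
            case "PRODUCT_VIEWED" -> onView.run();
            case "LIKED" -> onLike.run();
            case "UNLIKED" -> onUnlike.run();
            case "ORDER_CONFIRMED" -> onSale.run();
            default -> {
                return false; // unknown type: caller must NOT save EventHandled
            }
        }
        return true;
    }
}
```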
In
`@apps/commerce-streamer/src/main/java/com/loopers/domain/metrics/ProductMetrics.java`:
- Around line 5-16: Change the three counters in ProductMetrics from int to long
(fields likeCount, viewCount, salesCount and the private constructor
ProductMetrics(...)) and update any usages/constructors/builders that create
ProductMetrics so they pass long values; also ensure the corresponding
domain/entity field types, JPQL update statements, and DB columns are changed to
long/BIGINT to keep types consistent. After changing types, add a unit test that
starts from Integer.MAX_VALUE - 1, increments the counter twice and asserts the
value is correct (no overflow) to validate the boundary behavior. Ensure all
method signatures, equals/hashCode/serialization code that reference these
fields are updated to use long.
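The boundary test the comment describes reduces to simple arithmetic (a sketch only, detached from the entity): two increments past Integer.MAX_VALUE - 1 overflow an int counter but stay correct in a long.

```java
final class CounterWidthSketch {
    // Same two increments, two counter widths: the int wraps negative,
    // the long keeps counting.
    static int incrementTwiceAsInt(int start) { return start + 2; }
    static long incrementTwiceAsLong(long start) { return start + 2; }
}
```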
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledEntity.java`:
- Around line 14-22: Remove the redundant single-column index
idx_event_handled_event_id from the `@Table` indexes on EventHandledEntity (the
unique constraint uk_event_handled_event_id_type already covers event_id as a
prefix) and add a defensive null check at the start of the EventHandledEntity
constructor by calling Objects.requireNonNull(eventHandled, "eventHandled must
not be null") to avoid NPEs when a null EventHandled DTO is passed; also add a
unit test that constructs EventHandledEntity with null to validate the guard.
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsJpaRepository.java`:
- Around line 13-27: Add clearAutomatically = true to each `@Modifying` annotation
in ProductMetricsJpaRepository so the persistence context is cleared after
executing the UPDATEs; specifically update the `@Modifying` on incrementLikeCount,
decrementLikeCount, incrementViewCount, and incrementSalesCount to
`@Modifying`(clearAutomatically = true) to ensure subsequent reads in the same
transaction see the DB-updated values loaded by
findByProductId()/ensureMetricsExists.
In
`@apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java`:
- Around line 28-38: The consume method currently always calls
acknowledgment.acknowledge(), causing messages to be lost when all records fail;
update consume(List<ConsumerRecord<Object,Object>> records, Acknowledgment
acknowledgment) to track per-record success (e.g., increment a successCount when
processRecord(record) returns/finishes without exception) and only call
acknowledgment.acknowledge() if successCount > 0, otherwise skip ack so Kafka
can redeliver; alternatively (or additionally) implement a
sendToDlq(ConsumerRecord<Object,Object>, Exception) helper and invoke it from
the catch block to publish failed records to a Dead Letter Queue and still
decide ack behavior based on whether any record succeeded. Ensure references to
processRecord(...) and acknowledgment are used to locate where to change logic.
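The per-record success tracking can be sketched generically; sendToDlq is the hypothetical DLQ hook named in the comment, and the boolean result drives whether the real consumer calls acknowledgment.acknowledge():

```java
import java.util.List;
import java.util.function.Consumer;

final class BatchAckSketch {
    // Processes each record, counting successes; failed records go to the DLQ
    // hook. Returns true iff at least one record succeeded, so a fully failed
    // batch is left unacknowledged and Kafka redelivers it.
    static <R> boolean processBatch(List<R> records, Consumer<R> process, Consumer<R> sendToDlq) {
        int successCount = 0;
        for (R record : records) {
            try {
                process.accept(record);
                successCount++;
            } catch (RuntimeException e) {
                sendToDlq.accept(record); // or leave for redelivery, per policy
            }
        }
        return successCount > 0; // caller: if true -> acknowledgment.acknowledge()
    }
}
```

Note that acking a partially failed batch commits the offsets of the failed records too, which is exactly why the DLQ path matters in that case.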
---
Outside diff comments:
In `@modules/kafka/src/main/resources/kafka.yml`:
- Line 24: The Kafka consumer YAML uses the producer-only key `value-serializer`
which prevents the consumer from deserializing messages; change the setting key
to `value-deserializer` (keeping the class
`org.apache.kafka.common.serialization.ByteArrayDeserializer`) under the
consumer config so consumers use the correct deserializer, and then run an
integration test under the local/test profile to verify consumers receive and
decode messages correctly.
---
Minor comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`:
- Around line 53-59: The product view handling currently spans
ProductV1Controller.getProduct() (readOnly) → publishEvent() →
ProductFacade.recordView(), causing the read and outbox save to occur outside a
single transaction and risking lost view events; either (A) enforce a single
transactional boundary by moving the outbox save into the same transaction as
the read (e.g., adjust getProduct()/publishEvent() to call
ProductFacade.recordView() within a `@Transactional`(readOnly = false) scope or
change propagation so recordView runs in the caller transaction), or (B)
explicitly document that the outbox approach is eventual-consistency by adding
Javadoc and design notes for ProductFacade.recordView() and
ProductV1Controller.getProduct(), or (C) introduce compensation/retry (e.g.,
durable retry queue) around OutboxEventRepository.save in
ProductFacade.recordView() to guarantee persistence; choose one approach and
update ProductFacade.recordView(), ProductV1Controller.getProduct(), and
publishEvent() accordingly.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/order/OrderConfirmedEvent.java`:
- Around line 8-11: Constructor OrderConfirmedEvent currently allows null
orderId and userId; update the OrderConfirmedEvent(Long orderId, Long userId)
constructor to validate both parameters and throw a clear unchecked exception
(e.g., IllegalArgumentException or NullPointerException) when either orderId or
userId is null, assigning only after validation to preserve immutability of the
fields; then add unit tests that assert an exception is thrown for a null
orderId and for a null userId respectively, referencing the OrderConfirmedEvent
constructor and the orderId/userId fields in the tests.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/order/OrderRequestedEvent.java`:
- Around line 7-9: The constructor OrderRequestedEvent(Long userId) must guard
against null to enforce immutability and domain validity: update the
OrderRequestedEvent constructor to validate userId and throw an immediate
unchecked exception (e.g., NullPointerException or IllegalArgumentException)
when null is passed, and add a unit test that asserts new
OrderRequestedEvent(null) throws the expected exception; reference the
OrderRequestedEvent constructor to locate and change the validation and add the
corresponding unit test for the null case.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/product/ProductViewedEvent.java`:
- Around line 7-9: The ProductViewedEvent constructor must defensively reject a
null productId to maintain domain invariants: update the ProductViewedEvent(Long
productId) constructor to validate productId (throw a clear runtime exception
such as NullPointerException or IllegalArgumentException with a descriptive
message) and keep productId private/final to preserve encapsulation; also add a
unit test that constructs new ProductViewedEvent(null) and asserts the expected
exception is thrown to prevent regressions.
In
`@apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java`:
- Around line 40-55: processRecord currently assumes record.value() is a byte[]
and payload keys exist and have exact types; make it defensive: verify
record.value() is instance of byte[] before casting and handle otherwise (log
and skip), wrap objectMapper.readValue in try/catch for JsonProcessingException,
null-check payload and payload.get("productId")/("productIds") and return/skip
with a log when missing, and for productIds treat it as List<?> then for each
element assert it is Number (or convert via Number) before using longValue to
call processor.process; update processRecord to validate types and handle
malformed messages gracefully rather than allowing ClassCastException/NPE to
propagate.
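The defensive parsing can be sketched on the already-decoded payload map; the key name mirrors the comment, and everything else is illustrative. Missing keys or wrong types yield an empty list (which the real consumer would log and skip) instead of a ClassCastException/NPE:

```java
import java.util.List;
import java.util.Map;

final class PayloadGuardsSketch {
    // Extracts productIds defensively: null payload, missing key, non-list
    // value, and non-numeric elements are all tolerated.
    static List<Long> productIds(Map<String, Object> payload) {
        if (payload == null) return List.of();
        Object raw = payload.get("productIds");
        if (!(raw instanceof List<?> list)) return List.of();
        return list.stream()
                .filter(Number.class::isInstance)
                .map(n -> ((Number) n).longValue())
                .toList();
    }
}
```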
---
Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/logging/UserActionLogListener.java`:
- Around line 15-32: handle high-frequency user actions (especially
ProductViewedEvent handled by handleProductViewed) are currently logged at info
which will inflate log volume; change handleProductViewed to use debug
(log.debug) or route the event to your metrics/Kafka pipeline instead of
synchronous app logs, and for other frequent handlers like handleProductLiked
and handleProductUnliked consider either lowering to debug, applying sampling,
or using an async appender/metrics emitter; ensure the event handlers
(handleProductViewed, handleProductLiked, handleProductUnliked,
handleOrderRequested) are updated consistently and add a comment noting where
high-volume analytics are emitted (metrics/Kafka) so future reviewers know why
app logs are reduced.
In
`@apps/commerce-api/src/main/java/com/loopers/application/OutboxEventHelper.java`:
- Around line 14-16: The catch block in OutboxEventHelper that currently throws
new RuntimeException("Outbox payload 직렬화 실패", e) should be changed to throw the
project's CoreException (preserving the original cause and message) so the error
is routed through ApiControllerAdvice; locate the catch(JsonProcessingException
e) in OutboxEventHelper and replace the RuntimeException usage with constructing
and throwing CoreException (or wrap the JsonProcessingException as the cause) to
follow the unified error-handling pattern.
In `@apps/commerce-api/src/main/java/com/loopers/domain/like/LikedEvent.java`:
- Around line 3-14: The LikedEvent constructor lacks a null check for productId
(same as UnlikedEvent); add defensive validation in the LikedEvent(Long
productId) constructor to throw a clear exception when productId is null (e.g.,
use Objects.requireNonNull(productId, "productId must not be null") or throw
IllegalArgumentException/NullPointerException) so the class mirrors
UnlikedEvent's behavior and prevents creating events with a null productId.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikedEvent.java`:
- Around line 3-20: The ProductLikedEvent constructor lacks null checks for
productId and userId; update the ProductLikedEvent(Long productId, Long userId)
constructor to validate that neither productId nor userId is null and throw an
IllegalArgumentException with a clear message if they are; mirror the null-check
pattern used in ProductUnlikedEvent to keep both event classes consistent and
prevent null values from reaching UserActionLogListener.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/like/ProductUnlikedEvent.java`:
- Around line 3-19: The ProductUnlikedEvent class is a simple data holder and
should be converted to a Java record to eliminate boilerplate; replace the class
declaration and fields/getters with a record declaration like
ProductUnlikedEvent(Long productId, Long userId) (ensure the project Java
version supports records and update any serialization frameworks or usages if
they require explicit constructors or annotations).
In `@apps/commerce-api/src/main/java/com/loopers/domain/like/UnlikedEvent.java`:
- Around line 3-14: Add null-checking to the UnlikedEvent constructor so the
domain invariant is enforced: in the UnlikedEvent(Long productId) constructor
validate productId != null and throw an IllegalArgumentException with a clear
message if it is null; update any callers/tests as needed and add a unit test
that constructing new UnlikedEvent(null) results in the expected
IllegalArgumentException to prevent downstream
productService.decreaseLikeCount(null) calls.
In `@apps/commerce-api/src/main/java/com/loopers/domain/outbox/OutboxEvent.java`:
- Around line 23-25: The OutboxEvent.create factory lacks domain-level
validation for required fields (eventType, payload, partitionKey); add checks in
OutboxEvent.create to throw an IllegalArgumentException (or a domain-specific
unchecked exception) when any of these parameters is null or blank
(empty/whitespace), and ensure the method returns the new OutboxEvent only after
validation; update or add unit tests to assert that create(...) throws for
null/blank eventType, payload, and partitionKey and succeeds for valid values.
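The guard being requested reduces to a small reusable check that OutboxEvent.create could apply to eventType, payload, and partitionKey (class and method names here are illustrative):

```java
final class OutboxValidationSketch {
    // Throws IllegalArgumentException naming the offending field; returns the
    // value so the factory can validate and assign in one expression.
    static String requireNonBlank(String value, String field) {
        if (value == null || value.isBlank()) {
            throw new IllegalArgumentException(field + " must not be null or blank");
        }
        return value;
    }
}
```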
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventEntity.java`:
- Around line 33-39: toEntity currently lacks null validation: ensure the method
validates its input and required fields (parameter OutboxEvent and its
eventType, payload, partitionKey) before constructing OutboxEventEntity; use
Objects.requireNonNull or explicit checks in toEntity(OutboxEvent outboxEvent)
and throw IllegalArgumentException (or NullPointerException with clear message)
naming the missing field(s) so DB constraint violations are avoided and errors
are actionable, and add unit tests for null outboxEvent and null individual
fields to assert the expected exception is thrown.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventJpaRepository.java`:
- Around line 14-16: The `@Modifying` query in
OutboxEventJpaRepository.markPublished should clear the persistence context
automatically to avoid stale reads; update the `@Modifying` annotation on the
markPublished method to include clearAutomatically = true (i.e.,
`@Modifying`(clearAutomatically = true)) so that after executing the UPDATE the
EntityManager clears managed entities and subsequent reads in the same
transaction return fresh state.
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeV1Controller.java`:
- Around line 30-35: Move the event-publishing responsibility out of
LikeV1Controller and into LikeFacade so all callers of
likeFacade.toggleLike(...) get the same post-change events; update LikeFacade
(or its toggleLike method) to publish exactly one ProductLikedEvent or
ProductUnlikedEvent based on the returned LikeAction and do so as a
transaction-after-commit event (use TransactionalEventListener/after-commit
publishing pattern or framework equivalent) instead of letting Controller call
eventPublisher; remove the eventPublisher calls from LikeV1Controller; and add a
Facade integration test asserting that when LikeFacade.toggleLike(productId,
userId) is invoked it emits exactly one corresponding event (liked vs unliked).
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/order/OrderV1Controller.java`:
- Line 31: Move the event publishing responsibility out of OrderV1Controller and
into the application/service layer (e.g., OrderFacade or OrderService): remove
the direct call to eventPublisher.publishEvent(new
OrderRequestedEvent(user.getId())) from OrderV1Controller and add publishing
logic inside OrderFacade (or the service method that executes the order) so the
event is emitted only on the successful completion of the business operation
(after persistence/transaction commit if applicable); keep controller focused on
validation/response assembly, and add tests that assert OrderRequestedEvent is
published exactly once on success and not published on failure or exception
while preserving API error responses.
In
`@apps/commerce-streamer/src/main/java/com/loopers/domain/event/EventHandled.java`:
- Around line 15-17: The EventHandled.create factory currently allows null/blank
eventId or eventType which can cause DB NOT NULL violations; update
EventHandled.create to validate that eventId and eventType are non-null and
non-blank and throw IllegalArgumentException with clear messages (e.g., "eventId
must not be null or blank" / "eventType must not be null or blank") before
constructing the new EventHandled(null, eventId, eventType); also add unit tests
that assert creation with null or blank eventId/eventType on EventHandled.create
throws the expected IllegalArgumentException to prevent DB-level errors when
persisting EventHandledEntity.
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledEntity.java`:
- Around line 34-37: EventHandledEntity's constructor lacks null checks and can
NPE when eventHandled or its fields are null; update the constructor
(EventHandledEntity(EventHandled)) to defensively validate the input by checking
that the EventHandled parameter is not null and that eventHandled.getEventId()
and eventHandled.getEventType() are not null (use Objects.requireNonNull or
explicit checks) and throw a clear IllegalArgumentException/NullPointerException
with descriptive messages if any are null before assigning to this.eventId and
this.eventType.
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsEntity.java`:
- Around line 38-43: The ProductMetricsEntity(ProductMetrics metrics)
constructor drops the domain id which causes new inserts instead of updates; add
an alternative constructor or static factory (e.g., ProductMetricsEntity.restore
or ProductMetricsEntity(ProductMetrics metrics, Long id)) that copies
metrics.getId() into the entity id field so saves with an existing id perform
updates; update or add tests to cover saving/restoring an entity with an
existing id to ensure the repository performs an update (verify no
uk_product_metrics_product_id conflict and the row is updated).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fc6b4628-0c60-43b3-968e-40ef8201695c
⛔ Files ignored due to path filters (1)
docs/07-event-driven.md is excluded by !**/*.md and included by **
📒 Files selected for processing (39)
- apps/commerce-api/build.gradle.kts
- apps/commerce-api/src/main/java/com/loopers/application/OutboxEventHelper.java
- apps/commerce-api/src/main/java/com/loopers/application/like/LikeEventListener.java
- apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/logging/UserActionLogListener.java
- apps/commerce-api/src/main/java/com/loopers/application/order/OrderConfirmedEventListener.java
- apps/commerce-api/src/main/java/com/loopers/application/outbox/OutboxRelayScheduler.java
- apps/commerce-api/src/main/java/com/loopers/application/payment/PaymentFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
- apps/commerce-api/src/main/java/com/loopers/domain/like/LikeAction.java
- apps/commerce-api/src/main/java/com/loopers/domain/like/LikedEvent.java
- apps/commerce-api/src/main/java/com/loopers/domain/like/ProductLikedEvent.java
- apps/commerce-api/src/main/java/com/loopers/domain/like/ProductUnlikedEvent.java
- apps/commerce-api/src/main/java/com/loopers/domain/like/UnlikedEvent.java
- apps/commerce-api/src/main/java/com/loopers/domain/order/OrderConfirmedEvent.java
- apps/commerce-api/src/main/java/com/loopers/domain/order/OrderRequestedEvent.java
- apps/commerce-api/src/main/java/com/loopers/domain/outbox/OutboxEvent.java
- apps/commerce-api/src/main/java/com/loopers/domain/outbox/OutboxEventRepository.java
- apps/commerce-api/src/main/java/com/loopers/domain/product/ProductViewedEvent.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventEntity.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventJpaRepository.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/outbox/OutboxEventRepositoryImpl.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/like/LikeV1Controller.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/order/OrderV1Controller.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/product/ProductV1Controller.java
- apps/commerce-api/src/main/resources/application.yml
- apps/commerce-streamer/src/main/java/com/loopers/application/metrics/ProductMetricsProcessor.java
- apps/commerce-streamer/src/main/java/com/loopers/domain/event/EventHandled.java
- apps/commerce-streamer/src/main/java/com/loopers/domain/event/EventHandledRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/domain/metrics/ProductMetrics.java
- apps/commerce-streamer/src/main/java/com/loopers/domain/metrics/ProductMetricsRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledEntity.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledJpaRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledRepositoryImpl.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsEntity.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsJpaRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsRepositoryImpl.java
- apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java
- modules/kafka/src/main/resources/kafka.yml
```java
@Transactional(propagation = Propagation.REQUIRES_NEW)
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void handleLiked(LikedEvent event) {
    log.info("좋아요 이벤트 수신. productId={}", event.getProductId());
    productService.increaseLikeCount(event.getProductId());
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
@TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
public void handleUnliked(UnlikedEvent event) {
    log.info("좋아요 취소 이벤트 수신. productId={}", event.getProductId());
    productService.decreaseLikeCount(event.getProductId());
}
```
There is no recovery mechanism when the listener fails.

Because it runs in the AFTER_COMMIT phase, the main transaction (the like-table change) is already committed even when this listener fails. A failed count update is lost with no retry.

Operational considerations:
- If ProductService.increaseLikeCount() throws, only a log entry remains and the count drifts out of sync
- With no retry logic, a transient DB outage breaks data consistency
- Silent failures occur with no monitoring or alerting

Possible fixes:
- Option 1: wrap in try-catch and store failures in a separate retry queue
- Option 2: remove this listener and update counts only in the Kafka consumer (ProductMetricsProcessor), keeping a single update path
- Option 3: apply `@Retryable` to retry transient failures
🛡️ Example of added error handling

```diff
+import lombok.extern.slf4j.Slf4j;
+
 @Slf4j
 @Component
 @RequiredArgsConstructor
 public class LikeEventListener {

     private final ProductService productService;

     @Transactional(propagation = Propagation.REQUIRES_NEW)
     @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
     public void handleLiked(LikedEvent event) {
-        log.info("좋아요 이벤트 수신. productId={}", event.getProductId());
-        productService.increaseLikeCount(event.getProductId());
+        try {
+            log.info("좋아요 이벤트 수신. productId={}", event.getProductId());
+            productService.increaseLikeCount(event.getProductId());
+        } catch (Exception e) {
+            log.error("좋아요 카운트 증가 실패. productId={}", event.getProductId(), e);
+            // TODO: persist to a retry queue or send an alert
+        }
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeEventListener.java`
around lines 21 - 33, The AFTER_COMMIT listeners handleLiked and handleUnliked
call productService.increaseLikeCount/decreaseLikeCount with no recovery, so
failures after the main transaction cause silent loss; wrap each handler in
resilient error-handling: catch exceptions around productService calls, log them
with context, and on failure either enqueue a retry task/message (persist
minimal retry record or push to a retry queue) or annotate the handler with a
retry mechanism (e.g., `@Retryable`) so transient DB errors are retried;
alternatively, remove these listeners and consolidate count updates into the
existing ProductMetricsProcessor Kafka consumer to ensure a single, reliable
update path—apply one of these fixes to handleLiked and handleUnliked and add
monitoring/logging for retries.
```java
public LikeAction toggleLike(Long productId, Long userId) {
    if (likeService.isLiked(productId, userId)) {
        likeService.unlike(productId, userId);
        productService.decreaseLikeCount(productId);
        eventPublisher.publishEvent(new UnlikedEvent(productId));
        outboxEventRepository.save(OutboxEvent.create(
            "UNLIKED",
            OutboxEventHelper.toJson(objectMapper, Map.of("productId", productId)), String.valueOf(productId)
        ));
        return LikeAction.UNLIKED;
    } else {
        likeService.like(productId, userId);
        productService.increaseLikeCount(productId);
        eventPublisher.publishEvent(new LikedEvent(productId));
        outboxEventRepository.save(OutboxEvent.create(
            "LIKED",
            OutboxEventHelper.toJson(objectMapper, Map.of("productId", productId)), String.valueOf(productId)
        ));
        return LikeAction.LIKED;
    }
}
```
The dual update paths for the like count can cause data inconsistency.

In the current structure, the like count is updated through two paths:

- LikeEventListener receives LikedEvent/UnlikedEvent and calls ProductService.increaseLikeCount()/decreaseLikeCount() (in-process, AFTER_COMMIT)
- Outbox → Kafka → ProductMetricsProcessor updates the metrics

Because LikeEventListener runs with AFTER_COMMIT + REQUIRES_NEW, if the listener fails after the main transaction (like table + outbox) has committed:

- like table: changed
- outbox: saved (pending publication to Kafka)
- product.likeCount: update failed

When the Kafka consumer later processes the same event, the count should change exactly once, yet a listener retry can process it a second time. Either use only one of the two paths (in-process listener or Kafka consumer), or add idempotency guarantees.
```java
public void handleProductLiked(ProductLikedEvent event) {
    log.info("좋아요. productId={}, userId={}", event.getProductId(), event.getUserId());
}

@EventListener
public void handleProductUnliked(ProductUnlikedEvent event) {
    log.info("좋아요 취소. productId={}, userId={}", event.getProductId(), event.getUserId());
}

@EventListener
public void handleOrderRequested(OrderRequestedEvent event) {
    log.info("주문 요청. userId={}", event.getUserId());
```
Avoid logging the raw userId at info level.

From an operations standpoint, when user identifiers land verbatim in the general logging pipeline, the scope of access control, retention, and deletion obligations grows unnecessarily. Mask or hash the userId, or route it to a dedicated audit-log sink. Add a log-capture test to verify that the raw userId is never emitted.

As per coding guidelines: check for potential exposure of sensitive information when logging.
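One way to mask is a truncated hash, sketched below; `LogMask` and the 8-character prefix length are assumptions for illustration, not project code:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;

// Hashes the userId with SHA-256 and keeps a short hex prefix, so log
// lines stay correlatable (same input -> same token) without exposing
// the raw identifier.
public final class LogMask {
    private LogMask() {}

    public static String maskUserId(Long userId) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(String.valueOf(userId).getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest).substring(0, 8);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        // Deterministic token per user, safe to emit at info level.
        System.out.println(LogMask.maskUserId(42L));
    }
}
```

The listener would then log `LogMask.maskUserId(event.getUserId())` instead of the raw value. Note the short prefix is a correlation aid, not a reversible identifier; pick the length to balance collision risk against log readability.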
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/logging/UserActionLogListener.java`
around lines 21 - 32, change the logging in handleProductLiked,
handleProductUnliked, and handleOrderRequested so the raw userId never reaches the log: do not print the raw userId directly; generate a masked or hashed value (e.g., maskedUserId or hashedUserId) and log that instead, or route the identifier to a dedicated sensitive-data audit sink and remove it from the info log; also add a log-capture unit test verifying that the raw userId is not emitted.
```java
@Scheduled(fixedDelay = 5000)
public void relay() {
    List<OutboxEvent> unpublished = outboxEventRepository.findUnpublished();
    for (OutboxEvent event : unpublished) {
        try {
            kafkaTemplate.send(event.getEventType(), event.getPartitionKey(), event.getPayload());
            outboxEventRepository.markPublished(event.getId());
        } catch (Exception e) {
            log.error("Failed to publish outbox event. id={}, eventType={}", event.getId(), event.getEventType(), e);
        }
    }
}
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check OutboxEventJpaRepository for @Transactional and Lock annotations
echo "=== Checking OutboxEventJpaRepository for transaction and lock annotations ==="
ast-grep --pattern '@Modifying
@Query($_)
void markPublished($$$)'
echo ""
echo "=== Checking findByPublishedAtIsNull for locking ==="
rg -n -A3 "findByPublishedAtIsNull|findUnpublished" --type=java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2529

🏁 Script executed:

```bash
# Check OutboxEventJpaRepository for all annotations and method signatures
echo "=== Full OutboxEventJpaRepository.java ==="
find . -name "OutboxEventJpaRepository.java" -type f -exec cat -n {} \;
echo ""
echo "=== Full OutboxEventRepositoryImpl.java ==="
find . -name "OutboxEventRepositoryImpl.java" -type f -exec cat -n {} \;
echo ""
echo "=== Full OutboxRelayScheduler.java with class-level annotations ==="
find . -name "OutboxRelayScheduler.java" -type f -exec head -50 {} \;
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 3454
Missing transaction boundary, plus duplicate message publication in multi-instance environments.

`relay()` has no `@Transactional`, so calling `markPublished()` throws `TransactionRequiredException`: a query annotated with `@Modifying` requires an explicit transaction boundary. Also, `findUnpublished()` takes no pessimistic lock, so multiple scheduler instances can fetch the same events concurrently and publish duplicate messages to Kafka. This breaks the reliability guarantee of the Outbox pattern.

Suggested fixes:

- Add `@Transactional` to the `relay()` method
- Apply `SELECT ... FOR UPDATE SKIP LOCKED` in `findUnpublished()`, with a lock timeout hint (`jakarta.persistence.lock.timeout=3000`)
- Cap the query result (e.g., `LIMIT 100`) to avoid memory pressure under a large backlog

Add an integration test verifying that no duplicate messages are published when multiple schedulers run concurrently.
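A sketch of the locked fetch, assuming Spring Data JPA with Hibernate on a database that supports `SKIP LOCKED` (MySQL 8+, PostgreSQL); the method and entity names mirror the review's suggestion and are hypothetical, not the project's actual repository:

```java
// Sketch only. PESSIMISTIC_WRITE renders SELECT ... FOR UPDATE; with
// Hibernate, the special lock-timeout value -2 maps to SKIP LOCKED, so
// concurrent schedulers claim disjoint rows instead of blocking.
public interface OutboxEventJpaRepository extends JpaRepository<OutboxEventEntity, Long> {

    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @QueryHints(@QueryHint(name = "jakarta.persistence.lock.timeout", value = "-2"))
    @Query("SELECT e FROM OutboxEventEntity e WHERE e.publishedAt IS NULL ORDER BY e.id ASC")
    List<OutboxEventEntity> findUnpublished(Pageable pageable); // call with PageRequest.of(0, 100) to cap the batch
}
```

`relay()` would then carry `@Transactional` so the lock is held for the duration of the send-and-mark loop, and the `Pageable` argument bounds the batch size.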
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/outbox/OutboxRelayScheduler.java`
around lines 20 - 31, Add a transactional boundary to the scheduler by
annotating relay() with `@Transactional` so markPublished(...) executes inside a
transaction; change outboxEventRepository.findUnpublished() to use a SELECT ...
FOR UPDATE SKIP LOCKED (with a jakarta.persistence.lock.timeout=3000 hint) and
limit the result set (e.g., LIMIT 100) to avoid memory spikes and prevent
multiple scheduler instances from selecting the same rows; keep relay()
iterating over the locked rows and calling kafkaTemplate.send(...) then
outboxEventRepository.markPublished(...); finally add an integration test that
runs multiple scheduler instances concurrently and asserts no duplicate Kafka
messages are produced for the same outbox id.
```java
try {
    kafkaTemplate.send(event.getEventType(), event.getPartitionKey(), event.getPayload());
    outboxEventRepository.markPublished(event.getId());
```
`KafkaTemplate.send()` is asynchronous, so `markPublished` is called before the send completes.

`send()` returns a `CompletableFuture` and runs asynchronously. The current code calls `markPublished()` without waiting for the send to complete, so even when the Kafka send fails the event is marked as published and the message is lost.

Suggestion: wait for the send to complete before calling `markPublished`

```diff
-kafkaTemplate.send(event.getEventType(), event.getPartitionKey(), event.getPayload());
-outboxEventRepository.markPublished(event.getId());
+kafkaTemplate.send(event.getEventType(), event.getPartitionKey(), event.getPayload())
+    .get(10, TimeUnit.SECONDS); // wait synchronously
+outboxEventRepository.markPublished(event.getId());
```

Additional test: verify that events remain eligible for republication when the Kafka broker is down.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/outbox/OutboxRelayScheduler.java`
around lines 24 - 26, KafkaTemplate.send(...) is asynchronous so
markPublished(event.getId()) is called before the send completes; change
OutboxRelayScheduler to wait for the send result and only call
outboxEventRepository.markPublished(...) on successful send, and handle failures
by logging and leaving the event unmarked (or scheduling retry). Concretely,
replace the direct kafkaTemplate.send(event.getEventType(),
event.getPartitionKey(), event.getPayload()) + immediate
outboxEventRepository.markPublished(...) with waiting on the send Future (e.g.,
use the returned Future/ListenableFuture from kafkaTemplate.send(...).get() or
addCallback/onSuccess/onFailure) and call
outboxEventRepository.markPublished(event.getId()) inside the success handler
and handle exceptions in the failure handler (log and do not mark as published).
```java
private void ensureMetricsExists(Long productId) {
    productMetricsRepository.findByProductId(productId)
        .orElseGet(() -> productMetricsRepository.save(ProductMetrics.create(productId)));
}
```
`ensureMetricsExists()` has a race condition.

If another transaction inserts the same productId between `findByProductId()` and `save()`, a unique constraint violation occurs. This matters when events for the same productId are processed concurrently from multiple Kafka partitions.

Suggested fixes:

- Add a unique constraint on `productId` to the `product_metrics` table (if not already present)
- Use `INSERT ... ON CONFLICT DO NOTHING` or `MERGE`
- Or re-query when the exception is thrown

Suggestion: defensive handling via exception catch

```diff
 private void ensureMetricsExists(Long productId) {
-    productMetricsRepository.findByProductId(productId)
-        .orElseGet(() -> productMetricsRepository.save(ProductMetrics.create(productId)));
+    if (productMetricsRepository.findByProductId(productId).isEmpty()) {
+        try {
+            productMetricsRepository.save(ProductMetrics.create(productId));
+        } catch (DataIntegrityViolationException e) {
+            // duplicate insert from a concurrent transaction - ignore and continue
+            log.debug("ProductMetrics already created by another transaction. productId={}", productId);
+        }
+    }
 }
```

Additional test: verify that concurrent event processing for the same productId completes without exceptions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/application/metrics/ProductMetricsProcessor.java`
around lines 40 - 43, The ensureMetricsExists method can race: between
productMetricsRepository.findByProductId(productId) and save(...) another
transaction may insert the same productId causing a unique constraint error; fix
by making the insert idempotent (preferably add a unique constraint on
product_metrics.product_id if missing) and change ensureMetricsExists to perform
an upsert or defensive retry: use a repository/DAO method that executes INSERT
... ON CONFLICT DO NOTHING / MERGE for ProductMetrics (or a native query) or
wrap productMetricsRepository.save(ProductMetrics.create(productId)) with a
catch for DataIntegrityViolationException/ConstraintViolationException that then
re-queries productMetricsRepository.findByProductId(productId) and returns the
existing row; ensure adjustments reference ensureMetricsExists,
productMetricsRepository, and ProductMetrics.create.
```java
private final Long id;
private final Long productId;
private final int likeCount;
private final int viewCount;
private final int salesCount;

private ProductMetrics(Long id, Long productId, int likeCount, int viewCount, int salesCount) {
    this.id = id;
    this.productId = productId;
    this.likeCount = likeCount;
    this.viewCount = viewCount;
    this.salesCount = salesCount;
```
`long` is safer than `int` for aggregate counters.

Operationally, view and sales counts are cumulative, so a popular product running long enough will eventually overflow past `Integer.MAX_VALUE` into a negative value, breaking dashboards and downstream decisions. It is safer to align the domain class, entity, JPQL update statements, and DB columns on `long`/`BIGINT`. Add a boundary test that increments twice from `Integer.MAX_VALUE - 1` and asserts the value does not go negative.
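The overflow behavior the boundary test should catch can be shown in a standalone sketch (not the project's entity code):

```java
// Incrementing an int counter past Integer.MAX_VALUE wraps to a negative
// value; a long counter does not. This mirrors the boundary test the
// review asks for: start at MAX_VALUE - 1 and increment twice.
public class CounterOverflowDemo {
    public static void main(String[] args) {
        int intCount = Integer.MAX_VALUE - 1;
        intCount++; // now Integer.MAX_VALUE
        intCount++; // wraps to Integer.MIN_VALUE
        System.out.println(intCount < 0); // true

        long longCount = (long) Integer.MAX_VALUE - 1;
        longCount++;
        longCount++;
        System.out.println(longCount); // 2147483648
    }
}
```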
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/domain/metrics/ProductMetrics.java`
around lines 5 - 16, Change the three counters in ProductMetrics from int to
long (fields likeCount, viewCount, salesCount and the private constructor
ProductMetrics(...)) and update any usages/constructors/builders that create
ProductMetrics so they pass long values; also ensure the corresponding
domain/entity field types, JPQL update statements, and DB columns are changed to
long/BIGINT to keep types consistent. After changing types, add a unit test that
starts from Integer.MAX_VALUE - 1, increments the counter twice and asserts the
value is correct (no overflow) to validate the boundary behavior. Ensure all
method signatures, equals/hashCode/serialization code that reference these
fields are updated to use long.
```java
@Table(
    name = "event_handled",
    uniqueConstraints = {
        @UniqueConstraint(name = "uk_event_handled_event_id_type", columnNames = {"event_id", "event_type"})
    },
    indexes = {
        @Index(name = "idx_event_handled_event_id", columnList = "event_id")
    }
)
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Description: check whether any query looks up event_id on its own
# Search EventHandled-related repositories for queries that use only eventId
rg -n "findBy.*EventId[^A]|existsBy.*EventId[^A]" --type=java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 67

🏁 Script executed:

```bash
fd -t f "EventHandled" --type=java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 256

🏁 Script executed:

```bash
fd -t f "EventHandledRepository" --type=java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 256

🏁 Script executed:

```bash
cat -n "apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledEntity.java"
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1779

🏁 Script executed:

```bash
rg -n "eventId" --type=java -B 2 -A 2
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 9152

🏁 Script executed:

```bash
rg -n "EventHandledRepository|EventHandledJpaRepository" --type-add 'java:*.java' -g '*.java'
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1241

🏁 Script executed:

```bash
rg -n "from event_handled|findBy.*event_id" --type-add 'java:*.java' -g '*.java'
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 67
Remove the standalone event_id index and add null validation in the constructor.

The index created for the `(event_id, event_type)` composite unique constraint already includes `event_id` as its prefix, so the single-column index `idx_event_handled_event_id` is redundant. No query in the codebase looks up by `event_id` alone (all use `existsByEventIdAndEventType`), and the composite index prefix provides the same performance. Remove the single-column index to avoid wasted storage and slower INSERT/UPDATEs.

Also, the constructor (lines 34-37) does not null-check the `eventHandled` parameter, so passing null causes an NPE. For defensive programming, add `Objects.requireNonNull(eventHandled, "eventHandled must not be null")` as the first line of the constructor, and add a test covering the null-input case.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledEntity.java`
around lines 14 - 22, Remove the redundant single-column index
idx_event_handled_event_id from the `@Table` indexes on EventHandledEntity (the
unique constraint uk_event_handled_event_id_type already covers event_id as a
prefix) and add a defensive null check at the start of the EventHandledEntity
constructor by calling Objects.requireNonNull(eventHandled, "eventHandled must
not be null") to avoid NPEs when a null EventHandled DTO is passed; also add a
unit test that constructs EventHandledEntity with null to validate the guard.
```java
@Modifying
@Query("UPDATE ProductMetricsEntity m SET m.likeCount = m.likeCount + 1 WHERE m.productId = :productId")
void incrementLikeCount(@Param("productId") Long productId);

@Modifying
@Query("UPDATE ProductMetricsEntity m SET m.likeCount = m.likeCount - 1 WHERE m.productId = :productId AND m.likeCount > 0")
void decrementLikeCount(@Param("productId") Long productId);

@Modifying
@Query("UPDATE ProductMetricsEntity m SET m.viewCount = m.viewCount + 1 WHERE m.productId = :productId")
void incrementViewCount(@Param("productId") Long productId);

@Modifying
@Query("UPDATE ProductMetricsEntity m SET m.salesCount = m.salesCount + 1 WHERE m.productId = :productId")
void incrementSalesCount(@Param("productId") Long productId);
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Check ProductMetricsEntity for an index annotation on productId
echo "=== Checking ProductMetricsEntity for index on productId ==="
rg -n -B5 -A10 "class ProductMetricsEntity" --type=java
echo ""
echo "=== Checking for @Index annotation ==="
rg -n "@Index|@Table.*index" --type=java -g "*Metrics*"
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2259

🏁 Script executed:

```bash
# Check ensureMetricsExists and the counter-update pattern in ProductMetricsProcessor
echo "=== Checking ProductMetricsProcessor flow ==="
rg -n -A15 "ensureMetricsExists|incrementLikeCount|decrementLikeCount|incrementViewCount|incrementSalesCount" apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ --type=java
echo ""
echo "=== Checking if ProductMetricsProcessor has @Transactional ==="
rg -n -B5 "class ProductMetricsProcessor" apps/commerce-streamer --type=java -A10
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 7128

🏁 Script executed:

```bash
# Check if ensureMetricsExists method exists anywhere
echo "=== Searching for ensureMetricsExists ==="
rg -n "ensureMetricsExists" apps/commerce-streamer --type=java
echo ""
echo "=== Full ProductMetricsProcessor.process() method ==="
rg -n -A20 "public void process" apps/commerce-streamer/src/main/java/com/loopers/application/metrics/ProductMetricsProcessor.java --type=java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1530

🏁 Script executed:

```bash
# Get the full ensureMetricsExists method implementation
echo "=== Full ensureMetricsExists method ==="
rg -n -A10 "private void ensureMetricsExists" apps/commerce-streamer/src/main/java/com/loopers/application/metrics/ProductMetricsProcessor.java --type=java
echo ""
echo "=== Check findByProductId method signature ==="
rg -n -B2 -A5 "findByProductId" apps/commerce-streamer --type=java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 4684
The `@Modifying` queries need `clearAutomatically = true`.

Execution flow of `ProductMetricsProcessor.process()`:

1. `ensureMetricsExists(productId)` → `findByProductId()` loads the entity into the persistence context
2. `incrementViewCount()` etc. run `@Modifying` UPDATE queries

A `@Modifying` query executes the SQL UPDATE directly, so the cached entity in the persistence context is not refreshed. Reading the metrics within the same transaction then returns stale values even though the DB row was updated. Setting `clearAutomatically = true` clears the first-level cache after the UPDATE and resolves this.

Suggestion: add `clearAutomatically`

```diff
-@Modifying
+@Modifying(clearAutomatically = true)
 @Query("UPDATE ProductMetricsEntity m SET m.likeCount = m.likeCount + 1 WHERE m.productId = :productId")
 void incrementLikeCount(@Param("productId") Long productId);
```

Apply the same change to `decrementLikeCount()`, `incrementViewCount()`, and `incrementSalesCount()`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsJpaRepository.java`
around lines 13 - 27, Add clearAutomatically = true to each `@Modifying`
annotation in ProductMetricsJpaRepository so the persistence context is cleared
after executing the UPDATEs; specifically update the `@Modifying` on
incrementLikeCount, decrementLikeCount, incrementViewCount, and
incrementSalesCount to `@Modifying`(clearAutomatically = true) to ensure
subsequent reads in the same transaction see the DB-updated values loaded by
findByProductId()/ensureMetricsExists.
```java
public void consume(List<ConsumerRecord<Object, Object>> records, Acknowledgment acknowledgment) {
    for (ConsumerRecord<Object, Object> record : records) {
        try {
            processRecord(record);
        } catch (Exception e) {
            log.error("이벤트 처리 실패. topic={}, partition={}, offset={}",
                record.topic(), record.partition(), record.offset(), e);
        }
    }
    acknowledgment.acknowledge();
}
```
The offset is committed even when every record in the batch fails, so messages are lost.

`consume()` always calls `acknowledge()` regardless of per-record failures. Even if every record in the batch fails, the offset is committed and those messages are never reprocessed.

Suggested fixes:

- Track success/failure counts and skip the acknowledge when the entire batch fails
- Or send failed records to a Dead Letter Queue (DLQ)

Suggestion: skip acknowledge on total failure

```diff
 public void consume(List<ConsumerRecord<Object, Object>> records, Acknowledgment acknowledgment) {
+    int successCount = 0;
     for (ConsumerRecord<Object, Object> record : records) {
         try {
             processRecord(record);
+            successCount++;
         } catch (Exception e) {
             log.error("이벤트 처리 실패. topic={}, partition={}, offset={}",
                 record.topic(), record.partition(), record.offset(), e);
         }
     }
-    acknowledgment.acknowledge();
+    if (successCount > 0) {
+        acknowledgment.acknowledge();
+    } else {
+        log.error("배치 내 모든 레코드 처리 실패. records={}", records.size());
+        // skip acknowledge - trigger redelivery
+    }
 }
```

Additional test: an integration test verifying that message reprocessing works correctly under Kafka consumer failure scenarios.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java`
around lines 28 - 38, The consume method currently always calls
acknowledgment.acknowledge(), causing messages to be lost when all records fail;
update consume(List<ConsumerRecord<Object,Object>> records, Acknowledgment
acknowledgment) to track per-record success (e.g., increment a successCount when
processRecord(record) returns/finishes without exception) and only call
acknowledgment.acknowledge() if successCount > 0, otherwise skip ack so Kafka
can redeliver; alternatively (or additionally) implement a
sendToDlq(ConsumerRecord<Object,Object>, Exception) helper and invoke it from
the catch block to publish failed records to a Dead Letter Queue and still
decide ack behavior based on whether any record succeeded. Ensure references to
processRecord(...) and acknowledgment are used to locate where to change logic.
Actionable comments posted: 10
🧹 Nitpick comments (7)
apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateJpaRepository.java (1)
**15-22**: Recommend adding `@Modifying(clearAutomatically = true)`.

In the current call flow there is no problem, because the template is fetched with `findById()` after `tryIncrementIssuedCount()`. However, if a future refactor changes the call order, or another path in the same transaction loads the template before this method runs, `issuedCount` can be left stale. Adding `clearAutomatically = true` is the safer, defensive design.

♻️ Suggested change

```diff
-@Modifying
+@Modifying(clearAutomatically = true)
 @Query(value = """
     UPDATE coupon_templates
     SET issued_count = issued_count + 1
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateJpaRepository.java` around lines 15 - 22, The update method tryIncrementIssuedCount in CouponTemplateJpaRepository should be annotated with `@Modifying`(clearAutomatically = true) to ensure the persistence context is cleared after the native update; modify the `@Modifying` annotation on tryIncrementIssuedCount to add clearAutomatically = true so cached/stale CouponTemplate entities aren't returned in the same transaction after the native update.

apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateEntity.java (1)
**43-55**: The `issuedCount` field is initialized twice.

Line 45 initializes the field declaration with `= 0`, and line 54 in the constructor reassigns `this.issuedCount = 0`; keep only one of the two. Also, there is no getter for `issuedCount`, so if a read API needs to expose issuance status, one should be added.

♻️ Remove the duplicate initialization

```diff
 @Comment("현재 발급된 수량")
 @Column(name = "issued_count", nullable = false)
-private Integer issuedCount = 0;
+private Integer issuedCount;

 public CouponTemplateEntity(String name, DiscountType type, Integer discountValue, Integer minOrderAmount, ZonedDateTime expiredAt, Integer maxIssuanceCount) {
     this.name = name;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateEntity.java` around lines 43 - 55, The issuedCount field is being initialized twice (field declaration issuedCount = 0 and constructor assignment this.issuedCount = 0); remove one of these to avoid redundancy (either drop the "= 0" on the field or remove the assignment in the constructor inside CouponTemplateEntity constructor). Also add a public getter getIssuedCount() in CouponTemplateEntity if the issued count must be exposed by the API.

apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponEntity.java (2)
**14-15**: Consider adding a `(ref_user_id, ref_template_id)` unique constraint as part of the concurrency hardening.

The current select-then-act pattern (check `existsByRefUserIdAndRefTemplateId`, then `save`) allows duplicate issuance under concurrent requests. Per the learnings, concurrency hardening is scheduled for the BR-C03 phase; at that point, add a DB-level unique constraint as the final line of defense:

```java
@Table(name = "coupons", uniqueConstraints = {
    @UniqueConstraint(columnNames = {"ref_user_id", "ref_template_id"})
})
```

Based on learnings: "duplicate issuance prevention (BR-C03 concurrency hardening) is deferred to a later phase."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponEntity.java` around lines 14 - 15, Add a DB-level unique constraint to the CouponEntity mapping to prevent duplicate issuance under concurrent requests: update the `@Table` annotation on class CouponEntity to include a UniqueConstraint for the "ref_user_id" and "ref_template_id" columns (i.e., `@Table`(name="coupons", uniqueConstraints = {`@UniqueConstraint`(columnNames = {"ref_user_id", "ref_template_id"})})). Ensure column names match the entity field mappings for refUserId and refTemplateId so the JPA schema and future migration reflect the uniqueness guarantee.
**57-67**: The constructor is missing null validation for required fields.

If required fields such as `refUserId`, `refTemplateId`, or `name` are passed as `null`, the entity is still created and only fails at DB save time against the `nullable = false` constraints. Validating at construction time, following the fail-fast principle, makes debugging easier.

♻️ Add null validation

```diff
 public CouponEntity(Long refUserId, Long refTemplateId, String name, DiscountType type, Integer discountValue, Integer minOrderAmount, ZonedDateTime expiredAt) {
+    if (refUserId == null || refTemplateId == null || name == null || type == null
+        || discountValue == null || minOrderAmount == null || expiredAt == null) {
+        throw new IllegalArgumentException("Required fields cannot be null");
+    }
     this.refUserId = refUserId;
     this.refTemplateId = refTemplateId;
     this.name = name;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponEntity.java` around lines 57 - 67, The constructor CouponEntity(Long refUserId, Long refTemplateId, String name, DiscountType type, Integer discountValue, Integer minOrderAmount, ZonedDateTime expiredAt) must perform fail-fast null checks for required fields (at least refUserId, refTemplateId, name and any other non-nullable fields) before assigning to the instance; update that constructor to validate using Objects.requireNonNull(...) or explicit checks and throw IllegalArgumentException/NullPointerException with clear messages indicating which parameter is missing (referencing the constructor and fields refUserId, refTemplateId, name, type, etc.), so the entity cannot be created with null mandatory values.

apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponIssueRequest.java (1)
**8-9**: Either make `status` and `failReason` `final`, or add state-transition methods.

Both fields are declared mutable but have no setters, so in practice they cannot change. If the design intends no state transitions at the domain-object level and the infrastructure layer updates the DB directly, mark them `final` to make that intent explicit. If domain logic does need state transitions, add domain methods such as `markSuccess()` and `markFailed(String reason)` to strengthen encapsulation.

♻️ Option A: declare final (keep current design)

```diff
 public class CouponIssueRequest {
     private final Long id;
     private final Long userId;
     private final Long templateId;
-    private CouponIssueRequestStatus status;
-    private String failReason;
+    private final CouponIssueRequestStatus status;
+    private final String failReason;
```

♻️ Option B: add domain state-transition methods

```diff
     public String getFailReason() {
         return failReason;
     }
+
+    public void markSuccess() {
+        this.status = CouponIssueRequestStatus.SUCCESS;
+        this.failReason = null;
+    }
+
+    public void markFailed(String reason) {
+        this.status = CouponIssueRequestStatus.FAILED;
+        this.failReason = reason;
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponIssueRequest.java` around lines 8 - 9, CouponIssueRequest currently has mutable fields status and failReason but no setters; either make the intent explicit by marking both fields final and initializing them via the constructor (update the CouponIssueRequest constructor to accept initial status and failReason as needed) OR add domain state-transition methods on CouponIssueRequest (e.g., markSuccess() which sets status to SUCCESS and clears failReason, and markFailed(String reason) which sets status to FAILED and sets failReason) to encapsulate state changes; locate the class CouponIssueRequest and modify the fields/status-handling accordingly (use CouponIssueRequest.status and CouponIssueRequest.failReason and add methods markSuccess and markFailed if choosing the domain-method route).

apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponIssueRequestInfo.java (1)
**12-19**: `from()` has no null guard.

If `CouponIssueRequest request` is passed as null, an NPE occurs. Defensive validation prevents outages caused by unexpected null inputs in production.

🛡️ Suggested change

```diff
 public static CouponIssueRequestInfo from(CouponIssueRequest request) {
+    if (request == null) {
+        throw new IllegalArgumentException("CouponIssueRequest must not be null");
+    }
     return new CouponIssueRequestInfo(
         request.getId(),
         request.getTemplateId(),
         request.getStatus(),
         request.getFailReason()
     );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponIssueRequestInfo.java` around lines 12 - 19, from(CouponIssueRequest request) currently assumes request is non-null and will NPE if called with null; add a defensive null check at the start of CouponIssueRequestInfo.from to validate the request parameter (e.g., if (request == null) throw new IllegalArgumentException("request must not be null") or a NullPointerException with a clear message) before accessing request.getId()/getTemplateId()/getStatus()/getFailReason(), so the method fails fast with a clear error instead of causing an implicit NPE.

apps/commerce-api/src/main/java/com/loopers/interfaces/api/coupon/UserCouponV1Dto.java (1)
**52-59**: `from()` has no null-input guard.

If `CouponIssueRequestInfo info` is null, an NPE occurs. The structure mirrors the existing `CouponResponse.from()` pattern, but for operational stability a null guard is recommended.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/interfaces/api/coupon/UserCouponV1Dto.java` around lines 52 - 59, The from(CouponIssueRequestInfo info) method lacks null-input protection; add a guard at the start of CouponIssueRequestResponse.from so it mirrors the existing CouponResponse.from() pattern (i.e., if info is null return null) before constructing the new CouponIssueRequestResponse using info.requestId(), info.templateId(), info.status(), info.failReason(). Ensure you reference the from(...) static method and CouponIssueRequestInfo/CouponIssueRequestResponse types when applying the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponFacade.java`:
- Around line 81-95: requestCouponIssue currently sends to kafkaTemplate inside
the DB transaction which can cause request/message divergence; instead persist
an OutboxEvent entity (use OutboxEventHelper.toJson payload) via an
OutboxEventRepository in the same transactional method and remove the direct
kafkaTemplate.send call from requestCouponIssue; implement or use an OutboxRelay
(or register a TransactionSynchronizationManager.afterCommit hook) to publish
pending OutboxEvent rows to COUPON_ISSUE_TOPIC after commit and mark them
SENT/RETRY on success/failure with retry/backoff; ensure send-future failures
update OutboxEvent status and are retried, and add integration tests exercising
rollback, broker unavailable, and send failures so CouponIssueRequest (and its
PENDING state) stay consistent with outbox lifecycle.
In
`@apps/commerce-api/src/main/java/com/loopers/application/coupon/CreateCouponTemplateCommand.java`:
- Around line 11-13: The CouponTemplate.create() flow is missing validation for
maxIssuanceCount; add a validateMaxIssuanceCount(Integer maxIssuanceCount)
method and call it from CouponTemplate.create() alongside validateDiscountType()
and validateExpiredAt(); the validator should allow null (meaning unlimited) but
throw an IllegalArgumentException (or the project’s domain validation exception)
if maxIssuanceCount is non-null and < 1, and include a clear message;
update/create unit tests to assert that null is accepted and that 0 or negative
values cause the expected exception.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponIssueRequest.java`:
- Around line 19-21: The create factory method (CouponIssueRequest.create)
currently constructs a domain object with possible null userId or templateId;
add explicit null checks for userId and templateId at the start of create and
throw a clear runtime exception (e.g., IllegalArgumentException or
NullPointerException) with descriptive messages if either is null so invalid
objects are not created (keep the rest of construction using
CouponIssueRequestStatus.PENDING unchanged).
In
`@apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponTemplate.java`:
- Around line 35-50: Ensure CouponTemplate enforces issuance invariants in the
domain: in the static create(...) and restore(...) factories validate that
maxIssuanceCount is either null (meaning unlimited) or >= 1, issuedCount >= 0,
and when maxIssuanceCount != null then issuedCount <= maxIssuanceCount; if any
rule fails throw CoreException. Update the create(...) factory to reject
zero/negative maxIssuanceCount and the restore(...) factory to validate
issuedCount bounds and max/issued relation. Add unit tests for create and
restore to reject 0/negative/overflow counts and to allow null (unlimited)
maxIssuanceCount.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestRepositoryImpl.java`:
- Around line 15-19: The save logic in CouponIssueRequestRepositoryImpl.save
currently always calls CouponIssueRequestEntity.create(request) which inserts a
new row even when the domain has an existing ID; change it to detect whether the
incoming CouponIssueRequest has an ID, and if so load the existing entity via
jpaRepository.findById(id), update its fields (use a new
CouponIssueRequestEntity.updateStatus(status, failReason) method as noted) and
then call jpaRepository.save(entity); if no ID is present, keep creating a new
entity via CouponIssueRequestEntity.create(request) and save it. Ensure you
reference CouponIssueRequestRepositoryImpl.save,
CouponIssueRequestEntity.create, CouponIssueRequestEntity.updateStatus,
jpaRepository.findById(...) and jpaRepository.save(...) when implementing the
fix.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateEntity.java`:
- Around line 48-50: Deploying the issuedCount field declared on CouponTemplateEntity as NOT NULL right away will cause DDL failures or NULL-value runtime errors on an existing DB; instead, add a Flyway/Liquibase migration that: 1) adds the issued_count column as nullable first, 2) backfills issued_count to 0 for every existing row, 3) then locks the column down with a NOT NULL constraint, and 4) write an integration test that applies this migration to a snapshot of existing data so the issuedCount field (CouponTemplateEntity.issuedCount) change can be deployed safely.
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/admin/coupon/AdminCouponV1Dto.java`:
- Around line 31-32: Remove the Bean Validation annotation from the DTO by
deleting `@Min` on AdminCouponV1Dto.maxIssuanceCount and instead enforce the rule
inside the domain creation flow (e.g., in CouponTemplate.create(...)); throw the
project-standard CoreException when maxIssuanceCount is invalid so errors flow
through ApiControllerAdvice rather than producing
MethodArgumentNotValidException from `@Valid` at the HTTP layer; update or add
tests to assert that both HTTP requests and direct service calls with
maxIssuanceCount 0 or negative values produce the same CoreException-backed API
error contract.
In
`@apps/commerce-streamer/src/main/java/com/loopers/application/coupon/CouponIssueProcessor.java`:
- Around line 24-57: At process() start, load and validate the
CouponIssueRequest entity (use couponIssueRequestJpaRepository.findById or
equivalent) and assert it exists, has status PENDING, and that its
userId/templateId match the incoming payload; if status is SUCCESS or FAILED,
return no-op (idempotent) and do not change counts or save coupons. Only after
this validation, perform the duplicate coupon check
(existsByRefUserIdAndRefTemplateId) and then call tryIncrementIssuedCount; when
updating the request status (couponIssueRequestJpaRepository.updateStatus) make
the update conditional (e.g. WHERE id = ? AND status = PENDING) so
concurrent/duplicate deliveries cannot flip SUCCESS->FAILED, and apply the same
conditional guard for the markFailed() path. Add tests covering: re-delivery of
a SUCCESS message (no changes), delivery with non-existent requestId (no
coupon/count changes), and payload mismatch requestId (no coupon/count changes).
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestJpaRepository.java`:
- Around line 11-13: The `@Modifying` update in CouponIssueRequestJpaRepository
(method updateStatus) can leave stale entities in the persistence context
because it bypasses the EntityManager; modify the `@Modifying` annotation to
include clearAutomatically = true so the persistence context is cleared after
the bulk update, ensuring subsequent loads of CouponIssueRequestEntity within
the same transaction reflect the DB-updated status and failReason.
In
`@apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/CouponIssueConsumer.java`:
- Around line 29-37: The consumer currently swallows any Exception from
processRecord and then always calls acknowledgment.acknowledge(), which can
permanently lose messages when the record's status wasn't finalized (e.g.,
deserialization error, missing requestId, or markFailed() DB failure); update
consume(List<ConsumerRecord<Object,Object>> records, Acknowledgment
acknowledgment) to: for each record call processRecord(record) and catch
exceptions only when you can deterministically finalize state (e.g., you
successfully called markFailed(requestId)); if you cannot determine/record a
terminal state (missing requestId, deserialization, or markFailed failure),
rethrow the exception so the container error handler/DLT can handle retries;
only call acknowledgment.acknowledge() after all records have been processed and
confirmed terminal, and add tests covering malformed payload, missing requestId,
and markFailed() DB failure to assert they are retried or sent to DLT rather
than being acked.
---
Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponIssueRequestInfo.java`:
- Around line 12-19: from(CouponIssueRequest request) currently assumes request
is non-null and will NPE if called with null; add a defensive null check at the
start of CouponIssueRequestInfo.from to validate the request parameter (e.g., if
(request == null) throw new IllegalArgumentException("request must not be null")
or a NullPointerException with a clear message) before accessing
request.getId()/getTemplateId()/getStatus()/getFailReason(), so the method fails
fast with a clear error instead of causing an implicit NPE.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponIssueRequest.java`:
- Around line 8-9: CouponIssueRequest currently has mutable fields status and
failReason but no setters; either make the intent explicit by marking both
fields final and initializing them via the constructor (update the
CouponIssueRequest constructor to accept initial status and failReason as
needed) OR add domain state-transition methods on CouponIssueRequest (e.g.,
markSuccess() which sets status to SUCCESS and clears failReason, and
markFailed(String reason) which sets status to FAILED and sets failReason) to
encapsulate state changes; locate the class CouponIssueRequest and modify the
fields/status-handling accordingly (use CouponIssueRequest.status and
CouponIssueRequest.failReason and add methods markSuccess and markFailed if
choosing the domain-method route).
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/coupon/UserCouponV1Dto.java`:
- Around line 52-59: The from(CouponIssueRequestInfo info) method lacks
null-input protection; add a guard at the start of
CouponIssueRequestResponse.from so it mirrors the existing CouponResponse.from()
pattern (i.e., if info is null return null) before constructing the new
CouponIssueRequestResponse using info.requestId(), info.templateId(),
info.status(), info.failReason(). Ensure you reference the from(...) static
method and CouponIssueRequestInfo/CouponIssueRequestResponse types when applying
the change.
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponEntity.java`:
- Around line 14-15: Add a DB-level unique constraint to the CouponEntity
mapping to prevent duplicate issuance under concurrent requests: update the
`@Table` annotation on class CouponEntity to include a UniqueConstraint for the
"ref_user_id" and "ref_template_id" columns (i.e., `@Table`(name="coupons",
uniqueConstraints = {`@UniqueConstraint`(columnNames = {"ref_user_id",
"ref_template_id"})})). Ensure column names match the entity field mappings for
refUserId and refTemplateId so the JPA schema and future migration reflect the
uniqueness guarantee.
- Around line 57-67: The constructor CouponEntity(Long refUserId, Long
refTemplateId, String name, DiscountType type, Integer discountValue, Integer
minOrderAmount, ZonedDateTime expiredAt) must perform fail-fast null checks for
required fields (at least refUserId, refTemplateId, name and any other
non-nullable fields) before assigning to the instance; update that constructor
to validate using Objects.requireNonNull(...) or explicit checks and throw
IllegalArgumentException/NullPointerException with clear messages indicating
which parameter is missing (referencing the constructor and fields refUserId,
refTemplateId, name, type, etc.), so the entity cannot be created with null
mandatory values.
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateEntity.java`:
- Around line 43-55: The issuedCount field is being initialized twice (field
declaration issuedCount = 0 and constructor assignment this.issuedCount = 0);
remove one of these to avoid redundancy (either drop the "= 0" on the field or
remove the assignment in the constructor inside CouponTemplateEntity
constructor). Also add a public getter getIssuedCount() in CouponTemplateEntity
if the issued count must be exposed by the API.
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateJpaRepository.java`:
- Around line 15-22: The update method tryIncrementIssuedCount in
CouponTemplateJpaRepository should be annotated with
`@Modifying`(clearAutomatically = true) to ensure the persistence context is
cleared after the native update; modify the `@Modifying` annotation on
tryIncrementIssuedCount to add clearAutomatically = true so cached/stale
CouponTemplate entities aren't returned in the same transaction after the native
update.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 2b944eaf-85e8-4612-ad2b-26cd9b5684d7
📒 Files selected for processing (30)
- apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponIssueRequestInfo.java
- apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponTemplateInfo.java
- apps/commerce-api/src/main/java/com/loopers/application/coupon/CreateCouponTemplateCommand.java
- apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponIssueRequest.java
- apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponIssueRequestRepository.java
- apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponIssueRequestStatus.java
- apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponTemplate.java
- apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponTemplateService.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestEntity.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestJpaRepository.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestRepositoryImpl.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateEntity.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/admin/coupon/AdminCouponV1Controller.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/admin/coupon/AdminCouponV1Dto.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/coupon/UserCouponV1Controller.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/coupon/UserCouponV1Dto.java
- apps/commerce-api/src/main/java/com/loopers/support/error/ErrorMessage.java
- apps/commerce-streamer/src/main/java/com/loopers/application/coupon/CouponIssueException.java
- apps/commerce-streamer/src/main/java/com/loopers/application/coupon/CouponIssueProcessor.java
- apps/commerce-streamer/src/main/java/com/loopers/domain/coupon/CouponIssueRequestStatus.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponEntity.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestEntity.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestJpaRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponJpaRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateEntity.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateJpaRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponUsageType.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/DiscountType.java
- apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/CouponIssueConsumer.java
✅ Files skipped from review due to trivial changes (6)
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponUsageType.java
- apps/commerce-streamer/src/main/java/com/loopers/domain/coupon/CouponIssueRequestStatus.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/DiscountType.java
- apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponIssueRequestStatus.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestJpaRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/application/coupon/CouponIssueException.java
apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponFacade.java
ZonedDateTime expiredAt,
Integer maxIssuanceCount
) {
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: check the maxIssuanceCount validation logic in the CouponTemplate domain
ast-grep --pattern $'public static CouponTemplate create($$$) {
$$$
}'
rg -n "maxIssuanceCount" --type=java -g '*CouponTemplate.java' -C5

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 7614
Domain validation for the maxIssuanceCount field is missing.
In the domain class, CouponTemplate.create() validates discountType and expiredAt via validateDiscountType() and validateExpiredAt(), but maxIssuanceCount is passed straight through with no validation. Even if null (meaning unlimited) is allowed, zero and negative values must be rejected.
Add a validateMaxIssuanceCount() method that verifies:
- if non-null, only values of 1 or more are allowed
- 0 or negative input raises an appropriate exception
Call this validation from create(), and add tests confirming that invalid input throws the expected exception.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/coupon/CreateCouponTemplateCommand.java`
around lines 11 - 13, The CouponTemplate.create() flow is missing validation for
maxIssuanceCount; add a validateMaxIssuanceCount(Integer maxIssuanceCount)
method and call it from CouponTemplate.create() alongside validateDiscountType()
and validateExpiredAt(); the validator should allow null (meaning unlimited) but
throw an IllegalArgumentException (or the project’s domain validation exception)
if maxIssuanceCount is non-null and < 1, and include a clear message;
update/create unit tests to assert that null is accepted and that 0 or negative
values cause the expected exception.
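A minimal Java sketch of the validator this prompt describes (the class name, exception type, and message are illustrative; the project may prefer its own CoreException):

```java
public final class CouponTemplateValidation {

    private CouponTemplateValidation() {}

    // null means "unlimited"; any non-null value must be >= 1
    public static void validateMaxIssuanceCount(Integer maxIssuanceCount) {
        if (maxIssuanceCount != null && maxIssuanceCount < 1) {
            throw new IllegalArgumentException(
                "maxIssuanceCount must be null (unlimited) or >= 1, but was: " + maxIssuanceCount);
        }
    }
}
```

CouponTemplate.create() would call this alongside validateDiscountType() and validateExpiredAt().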
public static CouponIssueRequest create(Long userId, Long templateId) {
    return new CouponIssueRequest(null, userId, templateId, CouponIssueRequestStatus.PENDING, null);
}
The create() method does not null-check its required parameters.
If userId or templateId is passed as null, the domain object is created in an invalid state, and the error surfaces only at persistence time, which makes debugging harder.
🛡️ Add null validation
public static CouponIssueRequest create(Long userId, Long templateId) {
+ if (userId == null || templateId == null) {
+ throw new IllegalArgumentException("userId and templateId cannot be null");
+ }
return new CouponIssueRequest(null, userId, templateId, CouponIssueRequestStatus.PENDING, null);
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponIssueRequest.java`
around lines 19 - 21, The create factory method (CouponIssueRequest.create)
currently constructs a domain object with possible null userId or templateId;
add explicit null checks for userId and templateId at the start of create and
throw a clear runtime exception (e.g., IllegalArgumentException or
NullPointerException) with descriptive messages if either is null so invalid
objects are not created (keep the rest of construction using
CouponIssueRequestStatus.PENDING unchanged).
 public static CouponTemplate create(String name, DiscountType discountType, Integer discountValue, Integer minOrderAmount, ZonedDateTime expiredAt, Integer maxIssuanceCount) {
     validateDiscountType(discountType);
     validateExpiredAt(expiredAt);

-    return new CouponTemplate(null, new Name(name), discountType, discountValue, new Money(minOrderAmount), expiredAt);
+    return new CouponTemplate(null, new Name(name), discountType, discountValue, new Money(minOrderAmount), expiredAt, maxIssuanceCount, 0);
 }

 public CouponTemplate updateTemplate(String name, DiscountType discountType, Integer discountValue, Integer minOrderAmount, ZonedDateTime expiredAt) {
     validateDiscountType(discountType);
     validateExpiredAt(expiredAt);

-    return new CouponTemplate(this.id, new Name(name), discountType, discountValue, new Money(minOrderAmount), expiredAt);
+    return new CouponTemplate(this.id, new Name(name), discountType, discountValue, new Money(minOrderAmount), expiredAt, this.maxIssuanceCount, this.issuedCount);
 }

-public static CouponTemplate restore(Long id, String name, DiscountType type, Integer discountValue, Integer minOrderAmount, ZonedDateTime expiredAt) {
-    return new CouponTemplate(id, new Name(name), type, discountValue, new Money(minOrderAmount), expiredAt);
+public static CouponTemplate restore(Long id, String name, DiscountType type, Integer discountValue, Integer minOrderAmount, ZonedDateTime expiredAt, Integer maxIssuanceCount, Integer issuedCount) {
+    return new CouponTemplate(id, new Name(name), type, discountValue, new Money(minOrderAmount), expiredAt, maxIssuanceCount, issuedCount);
Issuance-count invariants must be guaranteed in the domain
create() and restore() currently do not reject maxIssuanceCount=0/-1, a negative issuedCount, or issuedCount > maxIssuanceCount. Such values either permanently block issuance or expose impossible numbers in admin views, so both factories should share validation — maxIssuanceCount must be null or >= 1, issuedCount >= 0, and issuedCount <= maxIssuanceCount when max is non-null — and reject violations with CoreException. Add tests for create/restore that reject zero, negative, and overflow counts and allow unlimited (null) maxIssuanceCount. Based on learnings, validation logic should live in the domain layer so it is enforced regardless of entrypoint and supports complex domain rules.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/coupon/CouponTemplate.java`
around lines 35 - 50, Ensure CouponTemplate enforces issuance invariants in the
domain: in the static create(...) and restore(...) factories validate that
maxIssuanceCount is either null (meaning unlimited) or >= 1, issuedCount >= 0,
and when maxIssuanceCount != null then issuedCount <= maxIssuanceCount; if any
rule fails throw CoreException. Update the create(...) factory to reject
zero/negative maxIssuanceCount and the restore(...) factory to validate
issuedCount bounds and max/issued relation. Add unit tests for create and
restore to reject 0/negative/overflow counts and to allow null (unlimited)
maxIssuanceCount.
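The shared invariant for both factories can be sketched as a single check (class name and IllegalArgumentException are stand-ins; the project would throw its CoreException):

```java
public final class IssuanceInvariants {

    private IssuanceInvariants() {}

    // Shared by create(...) and restore(...):
    // max == null means unlimited; otherwise max >= 1, issued >= 0, issued <= max.
    public static void check(Integer maxIssuanceCount, Integer issuedCount) {
        if (maxIssuanceCount != null && maxIssuanceCount < 1) {
            throw new IllegalArgumentException("maxIssuanceCount must be null (unlimited) or >= 1");
        }
        if (issuedCount == null || issuedCount < 0) {
            throw new IllegalArgumentException("issuedCount must be >= 0");
        }
        if (maxIssuanceCount != null && issuedCount > maxIssuanceCount) {
            throw new IllegalArgumentException("issuedCount must not exceed maxIssuanceCount");
        }
    }
}
```

create(...) would call it with issuedCount fixed at 0; restore(...) with the persisted counts.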
@Override
public CouponIssueRequest save(CouponIssueRequest request) {
    CouponIssueRequestEntity entity = CouponIssueRequestEntity.create(request);
    return jpaRepository.save(entity).toDomain();
}
The save() method does not support updating an existing entity.
CouponIssueRequestEntity.create(request) always creates a new entity, so even when the domain object already has an ID (loaded from the DB for a status change), a new row is inserted and duplicate data can result. This breaks the scenario where the async consumer must update the status to SUCCESS/FAILED.
🔧 Suggested fix: branch on whether an ID is present

 @Override
 public CouponIssueRequest save(CouponIssueRequest request) {
-    CouponIssueRequestEntity entity = CouponIssueRequestEntity.create(request);
-    return jpaRepository.save(entity).toDomain();
+    if (request.getId() == null) {
+        CouponIssueRequestEntity entity = CouponIssueRequestEntity.create(request);
+        return jpaRepository.save(entity).toDomain();
+    }
+    CouponIssueRequestEntity entity = jpaRepository.findById(request.getId())
+        .orElseThrow(() -> new IllegalStateException("Entity not found for update"));
+    entity.updateStatus(request.getStatus(), request.getFailReason());
+    return jpaRepository.save(entity).toDomain();
 }

Applying this fix requires adding an updateStatus(status, failReason) method to CouponIssueRequestEntity.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestRepositoryImpl.java`
around lines 15 - 19, The save logic in CouponIssueRequestRepositoryImpl.save
currently always calls CouponIssueRequestEntity.create(request) which inserts a
new row even when the domain has an existing ID; change it to detect whether the
incoming CouponIssueRequest has an ID, and if so load the existing entity via
jpaRepository.findById(id), update its fields (use a new
CouponIssueRequestEntity.updateStatus(status, failReason) method as noted) and
then call jpaRepository.save(entity); if no ID is present, keep creating a new
entity via CouponIssueRequestEntity.create(request) and save it. Ensure you
reference CouponIssueRequestRepositoryImpl.save,
CouponIssueRequestEntity.create, CouponIssueRequestEntity.updateStatus,
jpaRepository.findById(...) and jpaRepository.save(...) when implementing the
fix.
@Comment("현재 발급된 수량")
@Column(name = "issued_count", nullable = false)
private Integer issuedCount = 0;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "Expect: a migration that adds max_issuance_count and backfills issued_count to 0 before NOT NULL enforcement"
rg -n --glob '*.sql' --glob '*.xml' --glob '*.yml' --glob '*.yaml' \
  'coupon_templates|max_issuance_count|issued_count|ALTER TABLE .*coupon_templates' .

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 193
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Find migration directories
echo "=== Finding migration directories ==="
fd -t d 'migration|db|flyway|liquibase' . 2>/dev/null | head -20
# Search for Java-based migration files
echo -e "\n=== Searching for Flyway/Liquibase Java migration files ==="
fd -t f 'V[0-9].*\.java' . 2>/dev/null | head -20
# Check for common migration paths
echo -e "\n=== Checking common migration paths ==="
for path in 'src/main/resources/db/migration' 'src/main/resources/db' 'src/test/resources/db/migration'; do
if [ -d "$path" ]; then
echo "Found: $path"
find "$path" -type f 2>/dev/null | head -10
fi
done

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2021
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify the actual CouponTemplateEntity file
echo "=== Reading CouponTemplateEntity.java ==="
find . -name 'CouponTemplateEntity.java' -type f | while read file; do
echo "File: $file"
wc -l "$file"
echo "---"
# Show lines around 48-50
sed -n '40,60p' "$file"
done

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2184
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Explore the db directory structure
echo "=== Contents of db directory ==="
find ./db -type f 2>/dev/null | head -30
echo -e "\n=== Check for any SQL files in entire repo ==="
fd -t f '\.(sql|SQL)$' . 2>/dev/null | head -20

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 211
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Check for Flyway/Liquibase configuration
echo "=== Searching for Flyway/Liquibase config ==="
fd -t f '\.(yml|yaml|properties)$' . 2>/dev/null | xargs rg -l 'spring\.jpa\.hibernate|flyway|liquibase' 2>/dev/null | head -10
# Also check for alternative migration strategies
echo -e "\n=== Checking pom.xml or gradle.build for migration tools ==="
fd -t f 'pom\.xml|build\.gradle' . 2>/dev/null | xargs grep -l 'flyway\|liquibase' 2>/dev/null || echo "No migration tools found in build files"
# List all SQL files in the repo to be thorough
echo -e "\n=== All files in db directory ==="
find ./db -type f

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 132
A NOT NULL column cannot be added without a migration file
The repository contains no Flyway or Liquibase migration files. Deploying issued_count declared NOT NULL will either fail the DDL on a production DB that has existing data or leave existing rows NULL and trigger application errors. The Java `= 0` default applies only to new rows; it does nothing for NULLs in rows that already exist.
Establish a migration strategy along these lines:
1. Add the issued_count column as nullable first
2. Backfill issued_count to 0 for every existing row
3. Add the NOT NULL constraint to the column
4. Write a test that applies the migration to a snapshot of existing data
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/coupon/CouponTemplateEntity.java`
around lines 48 - 50, deploying the issuedCount field declared on CouponTemplateEntity
as NOT NULL right away will cause DDL failures or NULL-value runtime errors on an
existing DB; instead, add a Flyway/Liquibase migration that: 1) adds the issued_count
column as nullable first, 2) backfills issued_count to 0 for every existing row, 3)
then adds the NOT NULL constraint, and 4) write an integration test that applies this
migration to a snapshot of existing data so the issuedCount field
(CouponTemplateEntity.issuedCount) change can be deployed safely.
@Min(value = 1, message = "최대 발급 수량은 1 이상이어야 합니다")
Integer maxIssuanceCount // null means unlimited
Putting validation for the new field in DTO Bean Validation splits the error-response contract again
Keeping the maxIssuanceCount rule here either ties it to the HTTP entrypoint or, wherever @Valid applies, surfaces as MethodArgumentNotValidException with a different error format. It is safer to enforce this at the domain boundary, e.g. in CouponTemplate.create(...), as a CoreException and remove the DTO annotation. Add tests asserting that maxIssuanceCount=0/-1 fails with the same error contract for both HTTP requests and direct service calls. Based on learnings, do not use Bean Validation annotations on DTOs in this project; move validation logic into the domain layer and route errors through CoreException to ApiControllerAdvice for a consistent response format.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/admin/coupon/AdminCouponV1Dto.java`
around lines 31 - 32, Remove the Bean Validation annotation from the DTO by
deleting `@Min` on AdminCouponV1Dto.maxIssuanceCount and instead enforce the rule
inside the domain creation flow (e.g., in CouponTemplate.create(...)); throw the
project-standard CoreException when maxIssuanceCount is invalid so errors flow
through ApiControllerAdvice rather than producing
MethodArgumentNotValidException from `@Valid` at the HTTP layer; update or add
tests to assert that both HTTP requests and direct service calls with
maxIssuanceCount 0 or negative values produce the same CoreException-backed API
error contract.
@Transactional
public void process(Long requestId, Long userId, Long templateId) {
    // prevent duplicate issuance
    if (couponJpaRepository.existsByRefUserIdAndRefTemplateId(userId, templateId)) {
        throw new CouponIssueException("이미 발급된 쿠폰입니다.");
    }

    // first-come-first-served quantity limit (atomic UPDATE)
    int updated = couponTemplateJpaRepository.tryIncrementIssuedCount(templateId);
    if (updated == 0) {
        throw new CouponIssueException("쿠폰 발급 수량이 초과되었습니다.");
    }

    // issue the coupon (persist a snapshot of the template)
    CouponTemplateEntity template = couponTemplateJpaRepository.findById(templateId)
        .orElseThrow(() -> new CouponIssueException("쿠폰 템플릿을 찾을 수 없습니다."));

    couponJpaRepository.save(new CouponEntity(
        userId,
        templateId,
        template.getName(),
        template.getType(),
        template.getDiscountValue(),
        template.getMinOrderAmount(),
        template.getExpiredAt()
    ));

    couponIssueRequestJpaRepository.updateStatus(requestId, CouponIssueRequestStatus.SUCCESS, null);
    log.info("쿠폰 발급 성공. requestId={}, userId={}, templateId={}", requestId, userId, templateId);
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
public void markFailed(Long requestId, String reason) {
    couponIssueRequestJpaRepository.updateStatus(requestId, CouponIssueRequestStatus.FAILED, reason);
Without validating the requestId's state first, a re-delivery can flip SUCCESS to FAILED
Kafka re-delivery is normal behavior, yet the code saves the coupon and increments the count before checking that the requestId exists, is PENDING, and matches the payload. As a result, when an already-successful message arrives again, the duplicate check fires and the consumer then calls markFailed(), overwriting SUCCESS -> FAILED; a non-existent requestId can likewise mint orphan coupons. At the start of process(), load the request and verify PENDING status plus userId/templateId match; if it is already SUCCESS/FAILED, exit as a no-op or idempotent success, and guard status updates with a condition such as WHERE status = PENDING. Add tests showing that status and issued_count do not change on re-delivery of a successful message, on a non-existent requestId, and on a payload-mismatched requestId.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/application/coupon/CouponIssueProcessor.java`
around lines 24 - 57, At process() start, load and validate the
CouponIssueRequest entity (use couponIssueRequestJpaRepository.findById or
equivalent) and assert it exists, has status PENDING, and that its
userId/templateId match the incoming payload; if status is SUCCESS or FAILED,
return no-op (idempotent) and do not change counts or save coupons. Only after
this validation, perform the duplicate coupon check
(existsByRefUserIdAndRefTemplateId) and then call tryIncrementIssuedCount; when
updating the request status (couponIssueRequestJpaRepository.updateStatus) make
the update conditional (e.g. WHERE id = ? AND status = PENDING) so
concurrent/duplicate deliveries cannot flip SUCCESS->FAILED, and apply the same
conditional guard for the markFailed() path. Add tests covering: re-delivery of
a SUCCESS message (no changes), delivery with non-existent requestId (no
coupon/count changes), and payload mismatch requestId (no coupon/count changes).
@Modifying
@Query("UPDATE CouponIssueRequestEntity e SET e.status = :status, e.failReason = :failReason WHERE e.id = :id")
void updateStatus(@Param("id") Long id, @Param("status") CouponIssueRequestStatus status, @Param("failReason") String failReason);
@Modifying needs clearAutomatically = true.
A @Modifying query bypasses the persistence context and updates the DB directly. If the entity is already loaded in the same transaction, the first-level cache still holds the pre-update state, and subsequent reads can return stale data.
Operational impact: during an incident, a re-read after the status update can return a value that differs from the actual DB, complicating debugging and misleading downstream logic.
🔧 Suggested fix

-@Modifying
+@Modifying(clearAutomatically = true)
 @Query("UPDATE CouponIssueRequestEntity e SET e.status = :status, e.failReason = :failReason WHERE e.id = :id")
 void updateStatus(@Param("id") Long id, @Param("status") CouponIssueRequestStatus status, @Param("failReason") String failReason);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/infrastructure/coupon/CouponIssueRequestJpaRepository.java`
around lines 11 - 13, The `@Modifying` update in CouponIssueRequestJpaRepository
(method updateStatus) can leave stale entities in the persistence context
because it bypasses the EntityManager; modify the `@Modifying` annotation to
include clearAutomatically = true so the persistence context is cleared after
the bulk update, ensuring subsequent loads of CouponIssueRequestEntity within
the same transaction reflect the DB-updated status and failReason.
public void consume(List<ConsumerRecord<Object, Object>> records, Acknowledgment acknowledgment) {
    for (ConsumerRecord<Object, Object> record : records) {
        try {
            processRecord(record);
        } catch (Exception e) {
            log.error("쿠폰 발급 요청 처리 실패. partition={}, offset={}", record.partition(), record.offset(), e);
        }
    }
    acknowledgment.acknowledge();
Swallowing unexpected exceptions and then ACKing the batch permanently loses requests
Lines 33-35 log-and-continue even through deserialization failures, field-parsing failures, and markFailed() failures, and line 37 then commits the offset for the entire batch. Exceptions thrown at lines 41-46 in particular occur before a requestId is available, so not even a failure state can be recorded. Exceptions whose state could not be finalized should be handed to the container error handler for retry or routed to a DLT, and acknowledge should run only after every record has either succeeded or been explicitly transitioned to failure. Add tests showing that a malformed payload, a missing requestId, and a markFailed() DB failure flow to retry or DLT instead of being acked.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/CouponIssueConsumer.java`
around lines 29 - 37, The consumer currently swallows any Exception from
processRecord and then always calls acknowledgment.acknowledge(), which can
permanently lose messages when the record's status wasn't finalized (e.g.,
deserialization error, missing requestId, or markFailed() DB failure); update
consume(List<ConsumerRecord<Object,Object>> records, Acknowledgment
acknowledgment) to: for each record call processRecord(record) and catch
exceptions only when you can deterministically finalize state (e.g., you
successfully called markFailed(requestId)); if you cannot determine/record a
terminal state (missing requestId, deserialization, or markFailed failure),
rethrow the exception so the container error handler/DLT can handle retries;
only call acknowledgment.acknowledge() after all records have been processed and
confirmed terminal, and add tests covering malformed payload, missing requestId,
and markFailed() DB failure to assert they are retried or sent to DLT rather
than being acked.
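The ack discipline this prompt describes can be sketched with plain stand-ins for the Kafka and Spring types (Msg, markFailed, and the failed list are hypothetical names for this sketch; it is not the project's actual consumer):

```java
import java.util.ArrayList;
import java.util.List;

public class AckDiscipline {
    // Stand-in for a consumed record; requestId may be null for malformed payloads.
    record Msg(Long requestId, boolean processable) {}

    static final List<Long> failed = new ArrayList<>(); // stands in for markFailed() DB writes

    static void markFailed(long requestId) { failed.add(requestId); }

    /**
     * Returns true if the whole batch may be acknowledged. Throws when a record
     * could not be driven to a terminal state (e.g. no requestId to mark failed),
     * so the container error handler can retry or route it to a DLT.
     */
    static boolean consume(List<Msg> records) {
        for (Msg m : records) {
            try {
                if (!m.processable()) throw new IllegalStateException("processing failed");
            } catch (RuntimeException e) {
                if (m.requestId() == null) {
                    throw e; // no terminal state possible -> rethrow, do not ack
                }
                markFailed(m.requestId()); // terminal state recorded -> safe to continue
            }
        }
        return true; // ack only after every record is terminal
    }
}
```

Only exceptions that were driven to a terminal state are swallowed; anything else propagates before any offset is committed.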
In addition to the eventId duplicate check (same Kafka message redelivered), block the case where the same logical event arrives late under a different offset; add entityId + occurredAt to event_handled; skip stale events when a newer event has already been processed. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Section 16: switch to per-domain topic design and explain why the PartitionKey guarantees ordering. Section 24: why event_handled is separated from the log table. Section 25: stale-event filtering — why the eventId duplicate check alone is insufficient. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 3
🧹 Nitpick comments (2)
modules/kafka/src/main/java/com/loopers/confg/kafka/KafkaConfig.java (2)
34-47: The Kafka topic replication factor (replicas) needs review for production.

Operational impact: with replicas(1), a broker failure loses that partition's data and interrupts service. This is fine for development, but production needs at least replicas(2).

Suggested fix: separate the setting per environment.

Proposal: environment-specific replication factor

```diff
+import org.springframework.beans.factory.annotation.Value;
+
 public class KafkaConfig {
+    @Value("${kafka.topic.replicas:1}")
+    private int topicReplicas;
+
     @Bean
     public NewTopic catalogEventsTopic() {
-        return TopicBuilder.name("catalog-events").partitions(3).replicas(1).build();
+        return TopicBuilder.name("catalog-events").partitions(3).replicas(topicReplicas).build();
     }
```

Also verify: the replication factor should be decided together with the broker count of the production Kafka cluster and its min.insync.replicas setting.
min.insync.replicas설정과 함께 복제 계수를 결정해야 한다.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/kafka/src/main/java/com/loopers/confg/kafka/KafkaConfig.java` around lines 34 - 47, The topic beans catalogEventsTopic(), orderEventsTopic(), and couponIssueRequestsTopic() currently hardcode replicas(1); change them to read a configurable replication factor (e.g., from an application property or environment variable like kafka.replication.factor) and use that value in TopicBuilder.replicas(...) so production can use >=2 while dev defaults to 1; update KafkaConfig to inject the property (or System.getenv) and validate/convert to int, and optionally log or throw if the configured replication factor exceeds the cluster broker count or contradicts min.insync.replicas.
1-1: Rename the package confg to config.

com.loopers.confg.kafka is a typo for config, and three files currently import this package (ProductMetricsConsumer, CouponIssueConsumer, DemoKafkaConsumer). Meanwhile the other modules (commerce-api, jpa, redis) all correctly use com.loopers.config, so the naming is inconsistent. The mismatched package name hurts maintainability and makes the code harder for new developers to follow.

How to fix:
- Rename the package to com.loopers.config.kafka
- Update the import statements in these 3 files:
  - apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java
  - apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/CouponIssueConsumer.java
  - apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/DemoKafkaConsumer.java
- Afterwards, verify the Consumer classes compile and behave as expected.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@modules/kafka/src/main/java/com/loopers/confg/kafka/KafkaConfig.java` at line 1, the typo package com.loopers.confg.kafka is in use; change the package declaration in KafkaConfig to com.loopers.config.kafka, and update the imports in ProductMetricsConsumer, CouponIssueConsumer, and DemoKafkaConsumer from com.loopers.confg.kafka to com.loopers.config.kafka; then verify with a build/simple run that KafkaConfig and the three consumers compile and their basic behavior (consumer initialization/subscription) still passes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/payment/PaymentFacade.java`:
- Line 107: OrderItem.refProductId is a Long but ProductMetricsConsumer
deserializes productIds as List<Integer>, risking silent overflow for IDs >
Integer.MAX_VALUE; update ProductMetricsConsumer (reference
ProductMetricsConsumer) to deserialize productIds as List<Long> or List<Number>
and handle them as Longs, or alternatively ensure the payload serialization that
builds the "productIds" field from OrderItem.refProductId preserves Long typing;
after the change add an integration test that sends a productId >
Integer.MAX_VALUE and verifies correct Long handling and metrics attribution.
In
`@apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java`:
- Line 50: The code in ProductMetricsConsumer currently casts
payload.get("productIds") to List<Integer>, which overflows for IDs >
Integer.MAX_VALUE; change the cast to List<Number> and convert each entry with
number.longValue() before using it (e.g., when mapping to productId or comparing
to OrderItem.refProductId), ensuring all downstream uses expect long/Long;
update any variable types from List<Integer>/int to List<Long>/long or use a
List<Long> built from the Number values and add an integration test for an
ORDER_CONFIRMED event containing productIds > Integer.MAX_VALUE.
- Around line 46-47: In ProductMetricsConsumer, the direct casts and parse of
payload fields cause NPE/DateTimeParseException when "type" or "occurredAt" are
missing; add defensive validation around payload.get("type") and
payload.get("occurredAt") before casting/parsing (e.g., verify non-null and
non-empty), handle parse failures for ZonedDateTime.parse((String)
payload.get("occurredAt")) with a try/catch or Optional/DateTimeFormatter
fallback, and when validation fails log a clear warning including the record
id/context and skip processing that message instead of allowing an exception to
propagate; ensure unit tests cover missing "type" and "occurredAt" cases to
assert graceful skip.
---
Nitpick comments:
In `@modules/kafka/src/main/java/com/loopers/confg/kafka/KafkaConfig.java`:
- Around line 34-47: The topic beans catalogEventsTopic(), orderEventsTopic(),
and couponIssueRequestsTopic() currently hardcode replicas(1); change them to
read a configurable replication factor (e.g., from an application property or
environment variable like kafka.replication.factor) and use that value in
TopicBuilder.replicas(...) so production can use >=2 while dev defaults to 1;
update KafkaConfig to inject the property (or System.getenv) and
validate/convert to int, and optionally log or throw if the configured
replication factor exceeds the cluster broker count or contradicts
min.insync.replicas.
- Line 1: The typo package com.loopers.confg.kafka is in use; change the package
declaration in KafkaConfig to com.loopers.config.kafka, and update the imports in
ProductMetricsConsumer, CouponIssueConsumer, and DemoKafkaConsumer from
com.loopers.confg.kafka to com.loopers.config.kafka; afterwards, verify via a
build/simple run that KafkaConfig and the three consumers compile and their basic
behavior (consumer initialization/subscription) passes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 803b337f-352a-4ae9-9c67-ad2713f8cd43
⛔ Files ignored due to path filters (1)
docs/07-event-driven.md is excluded by `!**/*.md` and included by `**`
📒 Files selected for processing (12)
- apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/payment/PaymentFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
- apps/commerce-streamer/src/main/java/com/loopers/application/metrics/ProductMetricsProcessor.java
- apps/commerce-streamer/src/main/java/com/loopers/domain/event/EventHandled.java
- apps/commerce-streamer/src/main/java/com/loopers/domain/event/EventHandledRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledEntity.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledJpaRepository.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledRepositoryImpl.java
- apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java
- modules/kafka/src/main/java/com/loopers/confg/kafka/KafkaConfig.java
✅ Files skipped from review due to trivial changes (1)
- apps/commerce-streamer/src/main/java/com/loopers/domain/event/EventHandled.java
🚧 Files skipped from review as they are similar to previous changes (5)
- apps/commerce-streamer/src/main/java/com/loopers/domain/event/EventHandledRepository.java
- apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
- apps/commerce-streamer/src/main/java/com/loopers/infrastructure/event/EventHandledJpaRepository.java
- apps/commerce-api/src/main/java/com/loopers/application/coupon/CouponFacade.java
| "type", "ORDER_CONFIRMED", | ||
| "orderId", order.getId(), | ||
| "userId", order.getRefUserId(), | ||
| "productIds", order.getItems().stream().map(OrderItem::refProductId).toList(), |
productId type mismatch risks silent overflow for large IDs.

Operational impact: OrderItem.refProductId is a Long, but ProductMetricsConsumer (line 50) deserializes productIds as List<Integer>. If a productId exceeds Integer.MAX_VALUE (2,147,483,647), a silent overflow increments the wrong product's metrics. Should auto-increment IDs pass that value in production, this becomes a data-integrity incident.

Suggested fix: deserialize as List<Long> or List<Number> on the consumer side, or explicitly preserve Long typing when serializing the payload.

Additional test: an integration test verifying correct handling of a productId above Integer.MAX_VALUE.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/payment/PaymentFacade.java`
at line 107, OrderItem.refProductId is a Long but ProductMetricsConsumer
deserializes productIds as List<Integer>, risking silent overflow for IDs >
Integer.MAX_VALUE; update ProductMetricsConsumer (reference
ProductMetricsConsumer) to deserialize productIds as List<Long> or List<Number>
and handle them as Longs, or alternatively ensure the payload serialization that
builds the "productIds" field from OrderItem.refProductId preserves Long typing;
after the change add an integration test that sends a productId >
Integer.MAX_VALUE and verifies correct Long handling and metrics attribution.
```java
String eventType = (String) payload.get("type");
ZonedDateTime occurredAt = ZonedDateTime.parse((String) payload.get("occurredAt"));
```
Missing payload fields cause an NPE.

Operational impact: if payload.get("type") or payload.get("occurredAt") is null or absent, the (String) cast or ZonedDateTime.parse() throws an NPE/DateTimeParseException. This can disrupt not only the offending record but the whole batch's processing flow.

Suggested fix: add defensive validation of the required fields.

Proposal: validate required fields
```diff
 private void processRecord(ConsumerRecord<Object, Object> record) throws Exception {
     String baseEventId = record.topic() + "-" + record.partition() + "-" + record.offset();
     byte[] bytes = (byte[]) record.value();
     Map<String, Object> payload = objectMapper.readValue(bytes, new TypeReference<>() {});
-    String eventType = (String) payload.get("type");
-    ZonedDateTime occurredAt = ZonedDateTime.parse((String) payload.get("occurredAt"));
+    String eventType = (String) payload.get("type");
+    String occurredAtStr = (String) payload.get("occurredAt");
+    if (eventType == null || occurredAtStr == null) {
+        log.warn("필수 필드 누락. eventId={}, payload={}", baseEventId, payload);
+        return;
+    }
+    ZonedDateTime occurredAt = ZonedDateTime.parse(occurredAtStr);
```

Additional test: a unit test verifying that a message with missing required fields is skipped without an exception.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
String eventType = (String) payload.get("type");
String occurredAtStr = (String) payload.get("occurredAt");
if (eventType == null || occurredAtStr == null) {
    log.warn("필수 필드 누락. eventId={}, payload={}", baseEventId, payload);
    return;
}
ZonedDateTime occurredAt = ZonedDateTime.parse(occurredAtStr);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java`
around lines 46 - 47, In ProductMetricsConsumer, the direct casts and parse of
payload fields cause NPE/DateTimeParseException when "type" or "occurredAt" are
missing; add defensive validation around payload.get("type") and
payload.get("occurredAt") before casting/parsing (e.g., verify non-null and
non-empty), handle parse failures for ZonedDateTime.parse((String)
payload.get("occurredAt")) with a try/catch or Optional/DateTimeFormatter
fallback, and when validation fails log a clear warning including the record
id/context and skip processing that message instead of allowing an exception to
propagate; ensure unit tests cover missing "type" and "occurredAt" cases to
assert graceful skip.
```java
ZonedDateTime occurredAt = ZonedDateTime.parse((String) payload.get("occurredAt"));

if ("ORDER_CONFIRMED".equals(eventType)) {
    List<Integer> productIds = (List<Integer>) payload.get("productIds");
```
Deserializing productIds as List<Integer> overflows IDs in the Long range.

Operational impact: PaymentFacade serializes OrderItem.refProductId as a Long, but this code casts to List<Integer>. A productId above Integer.MAX_VALUE gets mangled, and the wrong product's metrics are incremented.

Suggested fix: cast to List<Number> and convert with longValue().

Proposal: handle Long values safely

```diff
 if ("ORDER_CONFIRMED".equals(eventType)) {
-    List<Integer> productIds = (List<Integer>) payload.get("productIds");
-    for (Integer productId : productIds) {
+    @SuppressWarnings("unchecked")
+    List<Number> productIds = (List<Number>) payload.get("productIds");
+    for (Number productId : productIds) {
         String eventId = baseEventId + "-" + productId;
         processor.process(eventId, eventType, productId.longValue(), occurredAt);
     }
```

Additional test: an integration test verifying that an ORDER_CONFIRMED event containing a productId above Integer.MAX_VALUE is handled correctly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/interfaces/consumer/ProductMetricsConsumer.java`
at line 50, The code in ProductMetricsConsumer currently casts
payload.get("productIds") to List<Integer>, which overflows for IDs >
Integer.MAX_VALUE; change the cast to List<Number> and convert each entry with
number.longValue() before using it (e.g., when mapping to productId or comparing
to OrderItem.refProductId), ensuring all downstream uses expect long/Long;
update any variable types from List<Integer>/int to List<Long>/long or use a
List<Long> built from the Number values and add an integration test for an
ORDER_CONFIRMED event containing productIds > Integer.MAX_VALUE.
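The widening conversion the review asks for can be shown in isolation with plain Java (no Kafka or Jackson involved; the Integer/Long mix below mirrors what a JSON deserializer typically produces for small vs. large numbers, and ProductIdWidening is a hypothetical name):

```java
import java.util.List;

public class ProductIdWidening {
    /**
     * Widens a heterogeneous numeric list (Integer for small values, Long for
     * values past the Integer range) into uniform longs so no ID silently
     * overflows before reaching the metrics processor.
     */
    static List<Long> toLongIds(List<? extends Number> raw) {
        return raw.stream().map(Number::longValue).toList();
    }
}
```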
🏗️ Review points
🧭 Context & Decision
1. Introducing ApplicationEvent
Problem
PaymentFacade alone contained payment + coupon + aggregation + notification.
Problem 1 — everything shares one transaction, so a coupon failure rolls back the payment/order too.
Problem 2 — all post-processing runs synchronously, so responses are slow.
Problem 3 — direct cross-domain dependencies create tight coupling.
Options

Decision
C. @TransactionalEventListener(AFTER_COMMIT)
Rationale
```mermaid
sequenceDiagram
    participant TX as Payment transaction
    participant Listener as CouponListener
    TX->>TX: payment.markSuccess()
    TX->>TX: order.confirm()
    TX->>TX: publishEvent(OrderConfirmedEvent)
    TX->>TX: COMMIT
    Note over TX: transaction ends
    TX-->>Listener: AFTER_COMMIT runs (REQUIRES_NEW)
    Listener->>Listener: coupon.issue()
    Note over Listener: a failure here cannot touch the payment transaction
```

Post-processing runs only after the commit is final, so a post-processing failure cannot affect the payment.
Option B (@EventListener) only removes the dependency direction: it still runs synchronously inside the same transaction, so a listener exception rolls back the publisher's transaction. We judged it offered no real benefit.
Note that at AFTER_COMMIT the original transaction has already ended, so @Transactional(REQUIRES_NEW) must be declared alongside it.

2. Introducing the Transactional Outbox Pattern
Problem
Transaction separation is solved, but publishing directly to Kafka after AFTER_COMMIT creates a new problem:
the payment succeeds, yet the Kafka event is lost, so coupons/aggregation are never processed.
Not every event is equally critical; they differ in whether loss is tolerable:
Since ORDER_CONFIRMED is among them, a loss-prevention mechanism is needed.

Options
Decision
C. Transactional Outbox Pattern
Rationale
3. Outbox relay: Polling vs CDC
Problem
How to deliver the unpublished events accumulating in the outbox table to Kafka.
Options

Decision
A. Polling
Rationale
CDC requires its own server installation, configuration, and operations. For flows that tolerate a few seconds of latency (coupon issuance, aggregation) there was no reason to raise infrastructure complexity, and we judged Outbox + Polling solves the problem well enough.
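The polling relay can be sketched with in-memory stand-ins for the outbox table and Kafka (OutboxEvent, relayOnce, and sent are hypothetical names for this sketch; the real OutboxRelayScheduler runs the equivalent on a @Scheduled interval and marks rows via publishedAt):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class OutboxRelaySketch {
    record OutboxEvent(long id, String topic, String payload, boolean published) {}

    final Map<Long, OutboxEvent> table = new LinkedHashMap<>(); // stands in for outbox_events
    final List<String> sent = new ArrayList<>();                // stands in for Kafka

    void save(OutboxEvent e) { table.put(e.id(), e); }

    /** One polling tick: read unpublished rows, send each, then mark it published. */
    void relayOnce() {
        table.values().stream()
            .filter(e -> !e.published())
            .toList() // materialize before mutating the map
            .forEach(e -> {
                sent.add(e.topic() + ":" + e.payload()); // kafkaTemplate.send(...) in real code
                table.put(e.id(), new OutboxEvent(e.id(), e.topic(), e.payload(), true));
            });
    }
}
```

Because rows are only marked published after the send, a crash between send and mark leads to a re-send on the next tick — which is exactly why the consumer-side idempotency check below is needed.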
4. Completing outbox publication: publishedAt vs DELETE
Problem
How to handle outbox records after a successful Kafka publish.
Options
published_at IS NULL AND created_at < (5 minutes ago)

Decision

B. Update published_at
Rationale
5. Topic design: per event type vs per domain
Problem
The initial design used a separate topic per event type (
LIKED, UNLIKED, PRODUCT_VIEWED each). Even when trying to guarantee ordering with a PartitionKey, a different topic means a different partition.
Even if LIKED → UNLIKED are published in that order for the same productId, the consumer gets no ordering guarantee.
Options
Decision
B. Bundled per-domain topics
Event kinds are distinguished by the payload's
type field.

Rationale
```mermaid
graph TD
    subgraph "Option A — ordering broken"
        A1["LIKED topic\npartition 2"] --> A3["Consumer"]
        A2["UNLIKED topic\npartition 0"] --> A3
        A3 -. "no telling which is processed first" .-> A4[❌]
    end
    subgraph "Option B — ordering preserved"
        B1["catalog-events\npartition 2: LIKED"] --> B3["Consumer"]
        B2["catalog-events\npartition 2: UNLIKED"] --> B3
        B3 --> B4["processed in publish order ✅"]
    end
```

Kafka's partition-routing rule (
same key → same partition) applies only within a topic. With different topics, even the same key can be assigned to different partitions, so ordering cannot be guaranteed. A partition is an ordered queue: same topic + same key → same partition → the consumer processes events in publish order.
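The routing rule above can be illustrated with hash-style partitioner logic (a simplified stand-in for Kafka's murmur2-based default partitioner, not the real implementation):

```java
public class PartitionRouting {
    /**
     * Simplified key -> partition mapping: within one topic, the same key
     * always yields the same partition, which is what preserves per-key order.
     */
    static int partitionFor(String key, int partitionCount) {
        return Math.floorMod(key.hashCode(), partitionCount);
    }
}
```

Two events keyed by productId=42 sent to catalog-events therefore land on one partition and are consumed in publish order; the same key sent to a different topic carries no cross-topic ordering guarantee.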
6. Consumer idempotency: a two-stage check
Problem
Kafka guarantees at-least-once delivery: the same message can arrive more than once.
Is a duplicate check on eventId (topic-partition-offset) alone enough? No. The outbox relay can publish the logically same event as two different Kafka messages; their offsets differ, so their eventIds differ and they pass the duplicate check.
Options

Decision
B. Two-stage check
Rationale
```mermaid
flowchart TD
    A[event received] --> B{"eventId\nduplicate check"}
    B -->|hit| C["skip\nnetwork-level duplicate"]
    B -->|miss| D{"occurredAt\nstale check\nentityId + eventType + occurredAt ≥ ?"}
    D -->|hit| E["skip\nlogical duplicate"]
    D -->|miss| F["process, then record"]
    style C fill:#888,color:#fff
    style E fill:#888,color:#fff
    style F fill:#2d8a2d,color:#fff
```

The two checks block different problems:
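A minimal in-memory sketch of the two checks (the real implementation persists this state in the event_handled table; TwoStageCheck and its field names are hypothetical):

```java
import java.time.ZonedDateTime;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TwoStageCheck {
    final Set<String> handledEventIds = new HashSet<>();               // stage 1: network-level duplicates
    final Map<String, ZonedDateTime> latestByEntity = new HashMap<>(); // stage 2: eventType+entityId -> newest occurredAt

    /** Returns true if the event should be processed; records it as handled. */
    boolean shouldProcess(String eventId, String eventType, long entityId, ZonedDateTime occurredAt) {
        if (!handledEventIds.add(eventId)) return false;               // same Kafka message redelivered
        String key = eventType + ":" + entityId;
        ZonedDateTime newest = latestByEntity.get(key);
        if (newest != null && !occurredAt.isAfter(newest)) return false; // stale logical duplicate
        latestByEntity.put(key, occurredAt);
        return true;
    }
}
```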
7. First-come-first-served coupon stock control
Problem
When 1,000 users hit a coupon capped at 100 concurrently,
issued_count must be controlled safely.

Options
A. Redis INCR
Grab slots fast with Redis's atomic counter. But the count then lives in two places: the Redis counter and the actual issued quantity in the DB. If a consumer fails, you can end up with Redis=100 but DB=98.
B. Pessimistic Lock
Lock the row with SELECT FOR UPDATE. Under a burst of concurrent requests, every consumer waits on the same row lock, creating a bottleneck.

C. Atomic UPDATE
If affected rows = 0, the quota is exhausted.
C. Atomic UPDATE
근거
sequenceDiagram participant C1 as Consumer1 participant C2 as Consumer2 participant DB as DB (issued_count=99, max=100) C1->>DB: UPDATE SET count=count+1 WHERE count < 100 C2->>DB: UPDATE SET count=count+1 WHERE count < 100 DB-->>C1: affected=1 ✅ (count=100) DB-->>C2: affected=0 ❌ 수량 초과DB 하나가 진실의 원천 → Redis-DB 정합성 문제 없음.
명시적 lock 없이 DB 엔진이 원자적으로 처리 → 경합 시에도 병목 없음.
🏗️ 주요 컴포넌트
commerce-api — publishing side
- LikedEvent / UnlikedEvent
- OrderConfirmedEvent
- ProductViewedEvent / ProductLikedEvent / ProductUnlikedEvent / OrderRequestedEvent
- LikeEventListener — consumes LikedEvent → updates like aggregates (AFTER_COMMIT)
- OrderConfirmedEventListener — consumes OrderConfirmedEvent → issues the coupon (AFTER_COMMIT)
- UserActionLogListener
- OutboxEvent / OutboxEventRepository / OutboxRelayScheduler / OutboxEventHelper

commerce-streamer — consuming side
- ProductMetricsConsumer — batch-consumes catalog-events / order-events → parses payload → delegates to the processor → ACK
- CouponIssueConsumer — consumes coupon-issue-requests → delegates to the processor → handles success/failure → ACK
- ProductMetricsProcessor / CouponIssueProcessor
- EventHandled / EventHandledRepository

🔁 Flow Diagram
```mermaid
graph TD
    subgraph commerce-api
        A1["LikeFacade\nProductFacade"] -->|"LIKED/UNLIKED\nPRODUCT_VIEWED"| OB[(outbox_events)]
        A2["PaymentFacade"] -->|"ORDER_CONFIRMED"| OB
        OB -->|"@Scheduled 1s"| RELAY[OutboxRelayScheduler]
    end
    RELAY -->|catalog-events| K1[Kafka]
    RELAY -->|order-events| K1
    subgraph "commerce-api (coupon request)"
        A3["UserCouponV1Controller"] -->|PENDING INSERT| DB2[(coupon_issue_requests)]
        A3 -->|coupon-issue-requests| K2[Kafka]
    end
    subgraph commerce-streamer
        K1 --> MC[ProductMetricsConsumer]
        K2 --> CC[CouponIssueConsumer]
        MC --> MP["ProductMetricsProcessor\naggregate updates"]
        CC --> CP["CouponIssueProcessor\ncoupon issuance"]
        MP --> EH[(event_handled)]
        CP --> EH
    end
```

Purpose of the change
Introduce a Kafka-based event-driven architecture and the Outbox pattern so that business events such as payment confirmation, likes, and product views are processed asynchronously; real-time product-metrics aggregation improves scalability and responsiveness and achieves loose coupling between domains.
Key changes
commerce-api: adds the Outbox domain, entity, and JPA repository, writes OutboxEvents (payment confirmation, likes, views, etc.), switches behavior to ApplicationEvent publication, and relays to Kafka via an OutboxRelayScheduler on a 5-second cycle; the Like flow is refactored onto event publication. modules/kafka: adds the topics (catalog-events, order-events, coupon-issue-requests) and producer idempotence/acks settings. commerce-streamer: adds batch Kafka consumers (ProductMetricsConsumer, CouponIssueConsumer), the EventHandled and ProductMetrics domains, entities, and repositories, plus ProductMetricsProcessor and CouponIssueProcessor (first-come-first-served issuance based on tryIncrementIssuedCount), implementing consumer-side processing, idempotency, and contention control.
Risks / cautions
Testing / verification
Verify idempotency when duplicate and out-of-order events are redelivered (same-eventId and entityId+occurredAt scenarios); retry and backlog-drain scenarios for OutboxRelay failures (observe logs and monitoring metrics); Kafka consumer batch processing, exception recovery, and ack-timing tests; coupon concurrent-issuance load tests (tryIncrementIssuedCount behavior, contention rate, performance); and validation of alerting on operational metrics (outbox queue length, publish failure rate, consumer lag).
(Question) Is there a defined message format/schema (spec and versioning) for events written to the outbox, and an alerting/reprocessing policy for failures?