Handling Message Duplication in Distributed Systems with At-Least-Once Delivery
A production issue recently surfaced where duplicate data entries caused processing failures. Investigation revealed missing idempotency checks and the absence of a unique constraint on order numbers, either of which would have prevented the duplicate inserts.
Upstream systems denied sending duplicate orders, and their logs showed only one call, yet our system received two requests. The culprit was message queue behavior: SofaMQ's "at-least-once" delivery guarantee prioritizes reliable delivery over deduplication, so the broker may redeliver a message it is unsure was consumed.
Message delivery semantics typically offer three levels:
- At-most-once: Messages may be lost but never duplicated
- At-least-once: Messages won't be lost but may be duplicated
- Exactly-once: Each message is delivered precisely once
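The practical consequence of at-least-once delivery can be shown with a minimal sketch (class and method names here are hypothetical, not from the original incident): a consumer with no deduplication processes a redelivered message twice, while one that tracks seen message IDs processes it once.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical demo: the broker redelivers the same message under
// at-least-once semantics; only the idempotent consumer is unaffected.
public class AtLeastOnceDemo {
    int naiveCount = 0;       // times the naive consumer ran business logic
    int idempotentCount = 0;  // times the idempotent consumer ran it
    private final Set<String> seen = new HashSet<>();

    void naiveConsume(String messageId) {
        naiveCount++; // no dedup check: a redelivery is processed again
    }

    void idempotentConsume(String messageId) {
        if (seen.add(messageId)) { // add() returns false for a duplicate
            idempotentCount++;
        }
    }

    public static void main(String[] args) {
        AtLeastOnceDemo demo = new AtLeastOnceDemo();
        // Simulate the broker delivering "order-1" twice
        for (String id : new String[] {"order-1", "order-1"}) {
            demo.naiveConsume(id);
            demo.idempotentConsume(id);
        }
        System.out.println("naive=" + demo.naiveCount
                + " idempotent=" + demo.idempotentCount);
    }
}
```

In a real consumer the `seen` set would live in durable storage, which is exactly what the tracking-table approach later in this article provides.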
For payment processing scenarios where duplicate deductions must be prevented, implementing idempotency is crucial. The standard approach uses unique transaction IDs with database constraints:
Payment payment = paymentRepository.findByTransactionId(txId);
if (payment == null) {                    // check...
    paymentRepository.save(newPayment);   // ...then insert: not atomic
}
This check-then-insert pattern fails under concurrency: two threads can both observe null and both insert, so a database-level unique constraint is still required as the backstop. While effective, that approach mixes business logic with technical concerns. A cleaner solution introduces a dedicated message tracking table:
CREATE TABLE message_consumption (
    message_id VARCHAR(255) PRIMARY KEY,
    status ENUM('PENDING', 'PROCESSING', 'COMPLETED'),
    created_at TIMESTAMP
);
The consumption flow becomes:
if (messageTracker.tryInsert(messageId)) {
    processPayment(paymentRequest);
}
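The guard only works if tryInsert claims the message ID atomically. In production that atomicity comes from the database: attempt the INSERT and treat a duplicate-key error on the message_id primary key as "already consumed". The sketch below (hypothetical MessageTracker class) substitutes ConcurrentMap.putIfAbsent for the database constraint so the idea is runnable in isolation.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch only: putIfAbsent stands in for an INSERT guarded by the
// message_id primary key in the message_consumption table.
public class MessageTracker {
    private final ConcurrentMap<String, String> consumed = new ConcurrentHashMap<>();

    /** Returns true exactly once per messageId, even under concurrency. */
    public boolean tryInsert(String messageId) {
        // putIfAbsent is atomic: only the first caller observes null
        return consumed.putIfAbsent(messageId, "PENDING") == null;
    }
}
```

With JDBC the same shape is an INSERT wrapped in a catch of SQLIntegrityConstraintViolationException: a caught violation means another consumer already claimed the message, so return false.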
For atomicity without transactions, consider state machines:
- Insert with 'PENDING' status
- Update to 'PROCESSING' before business logic
- Mark 'COMPLETED' after success
- Reject duplicates by checking status
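The steps above can be sketched as compare-and-set transitions (class and method names are hypothetical). In a database each transition would be an UPDATE with the expected status in the WHERE clause, checked via the affected-row count; here ConcurrentMap.replace plays that role.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the message_consumption state machine. Each transition
// succeeds for at most one caller, which is what rejects duplicates.
public class ConsumptionStateMachine {
    enum Status { PENDING, PROCESSING, COMPLETED }

    private final ConcurrentMap<String, Status> table = new ConcurrentHashMap<>();

    /** Step 1: insert with PENDING; false means the message was seen before. */
    boolean insertPending(String messageId) {
        return table.putIfAbsent(messageId, Status.PENDING) == null;
    }

    /** Step 2: claim for processing; only one caller sees PENDING. */
    boolean markProcessing(String messageId) {
        return table.replace(messageId, Status.PENDING, Status.PROCESSING);
    }

    /** Step 3: record success after the business logic completes. */
    boolean markCompleted(String messageId) {
        return table.replace(messageId, Status.PROCESSING, Status.COMPLETED);
    }
}
```

A consumer that crashes between PROCESSING and COMPLETED leaves the row stuck, so a real system would also sweep stale PROCESSING rows back to PENDING after a timeout.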
This separates technical message handling from business logic while maintaining reliability.