Implementing Distributed Transaction Compensation with XXL-Job
In distributed systems, ensuring data consistency across multiple services often requires a compensation mechanism. By leveraging a distributed task scheduler such as XXL-Job, a system can periodically scan for pending or failed transaction records (typically kept in an intermediate store such as Redis) and re-execute the business logic until eventual consistency is reached.
Below is an implementation of a compensation task handler. This job retrieves serialized transaction logs from Redis, checks their status, and performs idempotent order creation and inventory updates.
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.List;

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.util.CollectionUtils;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.xxl.job.core.handler.annotation.XxlJob;

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;

@Component
@Slf4j
@RequiredArgsConstructor
public class TransactionCompensationHandler {

    private static final DateTimeFormatter DATE_TIME_FORMATTER =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
    private static final String PENDING_STATUS = "PENDING";
    private static final String SUCCESS_STATUS = "COMPLETED";

    private final KafkaTemplate<String, String> kafkaTemplate;
    private final OrderService orderService;
    private final ProductService productService;
    private final RedisService redisService;
    private final ObjectMapper objectMapper;

    @XxlJob("orderCompensationJobHandler")
    public void reconcileTransactions() {
        List<String> pendingRecords = redisService.getHashValues("transaction_log_buffer");
        if (CollectionUtils.isEmpty(pendingRecords)) {
            return;
        }
        for (String recordJson : pendingRecords) {
            try {
                TransactionLog txLog = objectMapper.readValue(recordJson, TransactionLog.class);
                // Skip records that are already finalized or carry an unexpected status
                if (!PENDING_STATUS.equals(txLog.getStatus())) {
                    continue;
                }
                String transactionId = txLog.getTransactionId();
                // Idempotency check: ensure the order has not already been created.
                // Caveat: if a previous run created the order but failed the stock
                // update, this branch marks the record completed without retrying
                // the stock change; track the two steps separately if that matters.
                if (orderService.existsByTransactionId(transactionId)) {
                    log.info("Transaction {} already processed at {}",
                            transactionId, LocalDateTime.now().format(DATE_TIME_FORMATTER));
                    markAsCompleted(txLog);
                    continue;
                }
                // Construct domain objects
                Order order = buildOrder(txLog);
                Product productUpdate = buildProductUpdate(txLog);
                // Execute business logic
                int orderSaved = orderService.createOrder(order);
                int inventoryUpdated = productService.updateStock(productUpdate);
                // Confirm transaction success and notify downstream consumers
                if (orderSaved > 0 && inventoryUpdated > 0) {
                    markAsCompleted(txLog);
                    kafkaTemplate.send("order_completed_topic", objectMapper.writeValueAsString(order));
                }
            } catch (Exception e) {
                // Leave the record PENDING so the next scheduled run retries it
                log.error("Failed to process compensation record: {}", recordJson, e);
            }
        }
    }

    private void markAsCompleted(TransactionLog txLog) throws JsonProcessingException {
        txLog.setStatus(SUCCESS_STATUS);
        String updatedLog = objectMapper.writeValueAsString(txLog);
        redisService.putHash("transaction_log_buffer", txLog.getKey(), updatedLog);
    }

    private Order buildOrder(TransactionLog txLog) {
        Order order = new Order();
        order.setId(txLog.getTransactionId());
        order.setUserId(txLog.getUserId());
        order.setProductId(txLog.getProductId());
        order.setOrderName(txLog.getOrderName());
        return order;
    }

    private Product buildProductUpdate(TransactionLog txLog) {
        Product product = new Product();
        product.setId(txLog.getProductId());
        product.setStock(txLog.getStock());
        return product;
    }
}
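For completeness, here is a hypothetical sketch of the TransactionLog payload the handler deserializes. The field names are inferred from the getters and setters used above; the actual class in a real codebase may differ in types and naming.

```java
// Hypothetical DTO mirroring the accessors used by the compensation handler.
public class TransactionLog {
    private String key;           // Redis hash field under "transaction_log_buffer"
    private String transactionId; // globally unique id, reused as the order id
    private Long userId;
    private Long productId;
    private String orderName;
    private Integer stock;        // quantity to deduct from inventory
    private String status;        // "PENDING" until compensation succeeds

    public String getKey() { return key; }
    public void setKey(String key) { this.key = key; }
    public String getTransactionId() { return transactionId; }
    public void setTransactionId(String transactionId) { this.transactionId = transactionId; }
    public Long getUserId() { return userId; }
    public void setUserId(Long userId) { this.userId = userId; }
    public Long getProductId() { return productId; }
    public void setProductId(Long productId) { this.productId = productId; }
    public String getOrderName() { return orderName; }
    public void setOrderName(String orderName) { this.orderName = orderName; }
    public Integer getStock() { return stock; }
    public void setStock(Integer stock) { this.stock = stock; }
    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
}
```

Serialized with Jackson, an instance of this class round-trips cleanly through the Redis hash used as the transaction log buffer.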
XXL-Job Overview
XXL-Job is a distributed task scheduling platform designed for large-scale applications. It decouples the scheduling logic from the execution logic, providing a centralized administration center for managing tasks across a cluster.
Core Capabilities
- Distributed Execution: The platform supports horizontal scaling of executor nodes. Tasks can be routed to different nodes based on strategies like round-robin or sharding, which is essential for handling high-volume compensation tasks.
- Centralized Management: A web-based console allows developers to configure triggers, view execution logs, and manage task lifecycles without redeploying applications.
- Failure Handling: It incorporates built-in retry mechanisms and failure alarm callbacks (email, DingTalk, etc.), ensuring that transient issues in the compensation logic do not go unnoticed.
- Diverse Task Types: Supports Bean mode (Java classes), GLUE mode (inline scripts), and standard shell/HTTP tasks, offering flexibility for various maintenance scenarios.
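The sharding strategy mentioned above lets each executor node process a disjoint subset of records. Inside a handler, XXL-Job exposes the current shard via `XxlJobHelper.getShardIndex()` and `XxlJobHelper.getShardTotal()`; the routing arithmetic itself is a simple modulo filter, sketched here in plain Java with the shard parameters passed in explicitly so it stands alone:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ShardingDemo {
    // Keeps only the records assigned to this shard: a record belongs to
    // shard (id % shardTotal). In a real handler the two parameters would
    // come from XxlJobHelper.getShardIndex() / getShardTotal().
    static List<Long> selectShard(List<Long> recordIds, int shardIndex, int shardTotal) {
        return recordIds.stream()
                .filter(id -> id % shardTotal == shardIndex)
                .collect(Collectors.toList());
    }
}
```

With three executor nodes, node 1 would pick ids 1 and 4 out of 0..5, while nodes 0 and 2 cover the rest, so the full set is processed exactly once across the cluster.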
Comparison with Other Scheduling Solutions
| Framework | Scope | Management UI | Distributed Support |
|---|---|---|---|
| XXL-Job | Distributed | Rich, built-in UI | Native support with sharding |
| Quartz | Single Node / Cluster (via DB) | Requires third-party tools | Requires database locking for clustering |
| Spring @Scheduled | Single Node | None | None (requires custom distributed locks) |
| Elastic-Job | Distributed | Basic UI | Zookeeper based coordination |
While Spring's @Scheduled is sufficient for simple monolithic applications, it lacks coordination capabilities in a clustered environment, leading to duplicate executions. Quartz is powerful but often carries a heavier operational burden for cluster configuration. XXL-Job bridges this gap by offering a lightweight, out-of-the-box solution for distributed scheduling with visual monitoring.
Integration
To integrate XXL-Job, add the core dependency to the project:
<dependency>
    <groupId>com.xuxueli</groupId>
    <artifactId>xxl-job-core</artifactId>
    <version>2.4.0</version>
</dependency>
A method can be registered as a job handler using the @XxlJob annotation, allowing the scheduling center to trigger it via CRON expressions.
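As a sketch of the executor-side wiring, the official xxl-job sample project conventionally externalizes executor settings as properties like the following (the keys and values shown are assumptions following that sample; the library itself consumes them through a user-defined `XxlJobSpringExecutor` bean rather than binding them automatically):

```properties
# Admin console address (assumed local deployment)
xxl.job.admin.addresses=http://127.0.0.1:8080/xxl-job-admin
# Executor identity displayed in the admin console
xxl.job.executor.appname=order-compensation-executor
# Port the embedded executor server listens on
xxl.job.executor.port=9999
```

Once the executor registers with the admin center, the `orderCompensationJobHandler` above can be bound to a CRON trigger (for example, every minute) entirely from the console, with no application redeploy.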