Building Cluster-Safe Once-Only Methods with Locks, Java and Postgres or DynamoDB

In distributed systems, ensuring that certain operations execute only once across multiple instances is a critical requirement. Whether you’re processing payments, sending notifications, or performing data migrations, you need guarantees that these operations don’t accidentally run multiple times. This article explores how to use PostgreSQL advisory locks through the lock-postgres utility to create cluster-safe once-only methods in Java.

See here for our OSS implementation of cluster locks using PostgreSQL. We also have an implementation using DynamoDB here (same API).

The Challenge of Distributed Execution

Consider a common scenario: you have multiple instances of your application running in a cluster, and each instance processes scheduled tasks. Without proper coordination, you might end up with:

  • Duplicate payment processing
  • Multiple notification emails sent
  • Race conditions in data migrations
  • Resource contention issues

Traditional Java synchronization mechanisms like synchronized blocks or ReentrantLock only work within a single JVM. For cluster-wide coordination, you need a distributed locking mechanism.

Solution 1: Using PostgreSQL Advisory Locks

PostgreSQL provides advisory locks – lightweight, application-level locks that don’t interfere with table-level locks. These locks are perfect for coordinating application logic across multiple instances.

The lock-postgres utility leverages PostgreSQL’s advisory lock functions:

  • pg_try_advisory_xact_lock(key) – Non-blocking lock attempt
  • pg_advisory_xact_lock(key) – Blocking lock acquisition

These locks are automatically released when the database transaction commits or rolls back, making them ideal for transactional operations.
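
You can see this transaction-scoped behaviour directly in psql. The snippet below is a minimal sketch using an arbitrary key of 42; the lock-postgres utility derives the numeric key from your lock name for you:

-- Minimal sketch: attempt a non-blocking, transaction-scoped advisory lock
BEGIN;
SELECT pg_try_advisory_xact_lock(42);  -- true if acquired, false if another session holds it
-- ... perform the once-only work within this transaction ...
COMMIT;  -- the advisory lock is released automatically on commit or rollback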

Solution 2: Using DynamoDB and the AWS AmazonDynamoDBLockClient

We also provide a DynamoDB solution that uses the AWS AmazonDynamoDBLockClient as the implementation of the same LockService API used in the examples throughout this article.

Setting Up the Dependencies

First, add the required dependencies to your project:

PostgreSQL implementation:

<dependency>
  <groupId>com.limemojito.oss.standards.lock</groupId>
  <artifactId>lock-postgres</artifactId>
  <version>15.3.2</version>
</dependency>

DynamoDB implementation:

<dependency>
  <groupId>com.limemojito.oss.standards.lock</groupId>
  <artifactId>lock-dynamodb</artifactId>
  <version>15.3.2</version>
</dependency>

Basic Usage Pattern

The PostgresLockService implements the LockService interface and provides two primary methods for lock acquisition:

@Service
@RequiredArgsConstructor
@Slf4j
public class OnceOnlyService {
    private final LockService lockService;
    private final PaymentProcessor paymentProcessor;

    @Transactional
    public void processPaymentOnceOnly(String paymentId) {
        String lockName = "payment-processing-" + paymentId;
        // Try to acquire the lock - non-blocking
        Optional<DistributedLock> lock = lockService.tryAcquire(lockName);
        if (lock.isPresent()) {
            try (DistributedLock distributedLock = lock.get()) {
                // Only one instance will execute this block
                paymentProcessor.process(paymentId);
                log.info("Payment {} processed successfully", paymentId);
            }
        } else {
            log.info("Payment {} is already being processed by another instance", paymentId);
        }
    }
}

Blocking vs Non-Blocking Lock Acquisition

The lock service provides two approaches:

1. Non-Blocking (tryAcquire)

@Transactional
public void tryProcessOnceOnly(String taskId) {
    Optional<DistributedLock> lock = lockService.tryAcquire("task-" + taskId);
    if (lock.isPresent()) {
        try (DistributedLock distributedLock = lock.get()) {
            // Process the task
            performCriticalOperation(taskId);
        }
    } else {
        // Task is being processed elsewhere, skip or handle accordingly
        log.info("Task {} is already being processed", taskId);
    }
}

2. Blocking (acquire)

@Transactional
public void waitAndProcessOnceOnly(String taskId) {
    // This will wait until the lock becomes available
    try (DistributedLock lock = lockService.acquire("task-" + taskId)) {
        // Guaranteed to execute once the lock is acquired
        performCriticalOperation(taskId);
    }
    // Lock is automatically released when the transaction commits
}

Real-World Example: Daily Report Generation

Let’s implement a practical example where multiple application instances need to coordinate daily report generation:

@Component
@RequiredArgsConstructor
@Slf4j
public class DailyReportService {
    private final LockService lockService;
    private final ReportRepository reportRepository;
    private final NotificationService notificationService;

    @Scheduled(cron = "0 0 2 * * *") // Run at 2 AM daily
    @Transactional
    public void generateDailyReport() {
        String today = LocalDate.now().toString();
        String lockName = "daily-report-" + today;
        Optional<DistributedLock> lock = lockService.tryAcquire(lockName);
        if (lock.isPresent()) {
            try (DistributedLock distributedLock = lock.get()) {
                log.info("Starting daily report generation for {}", today);

                // Check if report already exists (additional safety)
                if (reportRepository.existsByDate(today)) {
                    log.info("Report for {} already exists, skipping", today);
                    return;
                }
 
                // Generate the report
                Report report = generateReport(today);
                reportRepository.save(report);
 
                // Send notifications
                notificationService.sendReportGeneratedNotification(report);
                log.info("Daily report for {} generated successfully", today);
            }
        } else {
            log.info("Daily report for {} is being generated by another instance", today);
        }
    }

    private Report generateReport(String date) {
        // Implementation of report generation logic
        return new Report(date, collectDailyMetrics());
    }
}

Advanced Patterns

1. Lock with Timeout Handling

For blocking locks, you can implement timeout handling using Spring’s @Transactional timeout:

@Transactional(timeout = 30) // 30-second timeout
public void processWithTimeout(String taskId) {
    try (DistributedLock lock = lockService.acquire("timeout-task-" + taskId)) {
        performLongRunningOperation(taskId);
    } catch (DataAccessException e) {
        log.error("Failed to acquire lock within timeout period", e);
        throw new LockTimeoutException("Could not acquire lock for task: " + taskId);
    }
}

2. Hierarchical Locking

Create hierarchical locks for complex operations. Acquire them in a consistent order across every code path (here, customer before order); if different instances acquire them in different orders they can deadlock until a lock or transaction timeout intervenes:

@Transactional
public void processOrderWithHierarchy(String customerId, String orderId) {
    // First acquire customer-level lock
    try (DistributedLock customerLock = lockService.acquire("customer-" + customerId)) {
        // Then acquire order-level lock
        try (DistributedLock orderLock = lockService.acquire("order-" + orderId)) {
            processOrderSafely(customerId, orderId);
        }
    }
}

3. Conditional Processing with Fallback

@Transactional
public ProcessingResult processWithFallback(String taskId) {
    Optional<DistributedLock> lock = lockService.tryAcquire("primary-task-" + taskId);
    if (lock.isPresent()) {
        try (DistributedLock distributedLock = lock.get()) {
            return performPrimaryProcessing(taskId);
        }
    } else {
        // Primary processing is happening elsewhere, perform alternative action
        return performAlternativeProcessing(taskId);
    }
}

Configuration and Best Practices

1. Database Configuration

Ensure your PostgreSQL database is properly configured for advisory locks:

-- Check current lock status
SELECT * FROM pg_locks WHERE locktype = 'advisory';
-- Set appropriate connection and statement timeouts
SET statement_timeout = '30s';
SET lock_timeout = '10s';

2. Spring Configuration

Configure your PostgresLockService bean:

@Configuration
public class LockConfiguration {
    @Bean
    public LockService lockService(JdbcTemplate jdbcTemplate) {
        return new PostgresLockService(jdbcTemplate);
    }
}

Key Benefits and Considerations

Benefits:

  • Cluster-safe: Works across multiple JVM instances
  • Transactional: Automatically releases locks on transaction completion
  • Lightweight: No additional infrastructure required
  • Reliable: Leverages PostgreSQL’s proven lock mechanisms
  • Flexible: Supports both blocking and non-blocking approaches

Considerations:

  • Database dependency: Requires PostgreSQL database connection
  • Transaction requirement: Locks must be used within database transactions
  • Lock key collision: Different lock names with the same hash could collide (see the sketch after this list)
  • Connection pooling: Consider impact on database connection pools
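
To illustrate the collision consideration: PostgreSQL advisory locks key on a 64-bit integer, so a String lock name must be reduced to a number at some point. The mapping below is a hypothetical one for illustration only (the lock-postgres utility's actual scheme may differ), but it shows why two distinct names can end up contending for the same key:

import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class LockKeySketch {
    // Hypothetical name-to-key mapping: a 32-bit CRC widened to a long.
    // Distinct lock names can produce the same checksum, so they would
    // share (and contend for) the same advisory lock key.
    static long keyFor(String lockName) {
        CRC32 crc = new CRC32();
        crc.update(lockName.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
    }
}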

Conclusion

PostgreSQL advisory locks provide a robust foundation for implementing cluster-safe once-only methods in Java applications. The lock-postgres utility simplifies this implementation by providing a clean API that integrates seamlessly with Spring’s transaction management.

By using these distributed locks, you can ensure that critical operations execute exactly once across your entire cluster, preventing data inconsistencies and duplicate processing. The transaction-based approach ensures that locks are automatically cleaned up, even in failure scenarios, making your distributed system more reliable and maintainable.

Whether you’re processing financial transactions, generating reports, or coordinating data migrations, PostgreSQL advisory locks offer a battle-tested solution for distributed coordination without the complexity of additional infrastructure components.

Using AWS Cognito, API Gateway and Spring Cloud Function Lambda for security authorisation.

This article explains how to use our OSS lambda-utilities to configure a Spring Cloud Function Java Lambda for method-level authorisation with API Gateway and Cognito.

See our OSS repository here.

Architecture Overview

The security setup integrates three key AWS services:

  1. AWS Cognito – Identity provider and JWT issuer
  2. AWS API Gateway – HTTP API with JWT authorizer
  3. AWS Lambda – Function execution environment

Key Components

1. ApiGatewayResponseDecoratorFactory

This is the central factory that creates decorated Spring Cloud Functions with security and error handling:

@Service
public class ApiGatewayResponseDecoratorFactory {
    // Creates decorated functions that handle security, errors, and responses
    public <Input, Output> Function<Input, APIGatewayV2HTTPResponse> create(Function<Input, Output> function)
}

Purpose:

  • Wraps your business logic functions
  • Automatically handles authentication extraction from API Gateway events
  • Converts exceptions to proper HTTP responses
  • Manages Spring Security context

2. Security Configuration Setup

The security is configured through AwsCloudFunctionSpringSecurityConfiguration:

@EnableMethodSecurity
@Configuration
@Import({LimeJacksonJsonConfiguration.class, ApiGatewayResponseDecoratorFactory.class})
@ComponentScan(basePackageClasses = ApiGatewayAuthenticationMapper.class)
public class AwsCloudFunctionSpringSecurityConfiguration

This enables:

  • Method-level security (@PreAuthorize, @Secured, etc.)
  • Automatic authentication mapping
  • Exception handling for security violations

3. Authentication Flow

The authentication process works as follows:

  1. API Gateway receives request with JWT token in Authorization header
  2. JWT Authorizer validates the token against Cognito
  3. API Gateway forwards the validated JWT claims in the request context
  4. ApiGatewayAuthenticationMapper extracts authentication from the event: 
    • Reads JWT claims from the request context
    • Creates an ApiGatewayAuthentication object
    • Maps Cognito groups to Spring Security authorities
  5. Spring Security context is populated for method-level security
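
Conceptually, the claims are already validated and attached to the version 2 HTTP event by the time your Lambda runs. The sketch below shows where they live on the event; it is illustrative only, not the mapper's actual implementation, and it assumes the default cognito:groups claims key (configurable, as shown under Configuration Properties below):

import com.amazonaws.services.lambda.runtime.events.APIGatewayV2HTTPEvent;
import java.util.Map;

public class JwtClaimsSketch {
    // The JWT authorizer places the validated claims on the request context.
    static Map<String, String> claimsOf(APIGatewayV2HTTPEvent event) {
        return event.getRequestContext().getAuthorizer().getJwt().getClaims();
    }

    // Assumes the default "cognito:groups" claims key used for authority mapping.
    static String rawGroupsOf(APIGatewayV2HTTPEvent event) {
        return claimsOf(event).getOrDefault("cognito:groups", "");
    }
}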

AWS Infrastructure Setup

API Gateway Configuration

# Example CDK/CloudFormation for HTTP API with JWT Authorizer
HttpApi:
  Type: AWS::ApiGatewayV2::Api
  Properties:
    Name: MySecureApi
    ProtocolType: HTTP

JwtAuthorizer:
  Type: AWS::ApiGatewayV2::Authorizer
  Properties:
    ApiId: !Ref HttpApi
    AuthorizerType: JWT
    IdentitySource:
      - $request.header.Authorization
    JwtConfiguration:
      Audience:
        - your-cognito-client-id
      Issuer: https://cognito-idp.{region}.amazonaws.com/{user-pool-id}

Route:
  Type: AWS::ApiGatewayV2::Route
  Properties:
    ApiId: !Ref HttpApi
    RouteKey: POST /secure-endpoint
    Target: !Sub integrations/${LambdaIntegration}
    AuthorizerId: !Ref JwtAuthorizer
    AuthorizationType: JWT

Cognito Configuration

UserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    UserPoolName: MyAppUsers
    Schema:
      - Name: email
        AttributeDataType: String
        Required: true
    Policies:
      PasswordPolicy:
        MinimumLength: 8

UserPoolClient:
  Type: AWS::Cognito::UserPoolClient
  Properties:
    UserPoolId: !Ref UserPool
    ClientName: MyAppClient
    GenerateSecret: false
    ExplicitAuthFlows:
      - ADMIN_NO_SRP_AUTH
      - USER_PASSWORD_AUTH

Implementation Example

1. Create Your Business Function

@Component
public class SecureBusinessLogic {

    public String processSecureData(MyRequest request) {
        // Your business logic here
        return "Processed: " + request.getData();
    }
}

2. Create the Lambda Handler

@Configuration
@Import(LimeAwsLambdaConfiguration.class)
public class LambdaConfiguration {

    @Autowired
    private ApiGatewayResponseDecoratorFactory decoratorFactory;

    @Autowired
    private SecureBusinessLogic businessLogic;

    @Bean
    public Function<APIGatewayV2HTTPEvent, APIGatewayV2HTTPResponse> secureFunction() {
        return decoratorFactory.create(event -> {
            // Extract request body
            MyRequest request = parseRequest(event.getBody());

            // Business logic with automatic security context
            return businessLogic.processSecureData(request);
        });
    }
}

3. Add Method-Level Security

@Component
public class SecureBusinessLogic {

    @PreAuthorize("hasAuthority('ADMIN')")
    public String processAdminData(MyRequest request) {
        return "Admin processed: " + request.getData();
    }

    @PreAuthorize("hasAuthority('USER') or hasAuthority('ADMIN')")
    public String processUserData(MyRequest request) {
        return "User processed: " + request.getData();
    }
}

4. Access Current User Context

@Bean
public Function<APIGatewayV2HTTPEvent, APIGatewayV2HTTPResponse> contextAwareFunction() {
    return decoratorFactory.create(event -> {
        // Access current authentication
        ApiGatewayContext context = decoratorFactory.getCurrentApiGatewayContext();
        ApiGatewayAuthentication auth = context.getAuthentication();

        if (auth.isAuthenticated()) {
            String username = auth.getPrincipal().getName();
            Set<String> groups = auth.getAuthorities()
                    .stream()
                    .map(GrantedAuthority::getAuthority)
                    .collect(Collectors.toSet());

            return new UserResponse(username, groups, "Success");
        } else {
            return new UserResponse("anonymous", Set.of("ANONYMOUS"), "Limited access");
        }
    });
}

Configuration Properties

The authentication mapper supports several configuration properties:

com:
  limemojito:
    aws:
      lambda:
        security:
          claimsKey: "cognito:groups" # Cognito groups claim
          anonymous:
            sub: "ANONYMOUS"
            userName: "anonymous"
            authority: "ANONYMOUS"

Security Benefits

  1. Automatic JWT Validation: API Gateway validates tokens before reaching Lambda
  2. Claims Extraction: Automatic mapping of Cognito user groups to Spring authorities
  3. Method Security: Use standard Spring Security annotations
  4. Exception Handling: Automatic conversion of security exceptions to HTTP responses
  5. Context Access: Easy access to user information and claims
  6. Anonymous Support: Graceful handling of unauthenticated requests

Error Handling

The decorator automatically handles:

  • Authentication failures → 401 Unauthorized
  • Authorization failures → 403 Forbidden
  • Validation errors → 400 Bad Request
  • General exceptions → 500 Internal Server Error

This architecture provides a robust, scalable security solution that leverages AWS managed services while maintaining clean separation of concerns in your Spring Cloud Function implementation.

Debugging Maven Projects with Conflicting JAR Versions

Maven dependency conflicts are one of the most frustrating issues developers encounter when building Java applications. When multiple versions of the same library exist in your classpath, it can lead to runtime errors, unexpected behavior, and difficult-to-debug issues. This article provides a comprehensive guide to identifying, understanding, and resolving JAR version conflicts in Maven projects.

Understanding Dependency Conflicts

What Are Dependency Conflicts?

Dependency conflicts occur when your project’s dependency tree contains multiple versions of the same artifact (same groupId and artifactId but different versions). Maven’s dependency resolution mechanism will choose one version based on its rules, but this choice might not be compatible with all parts of your application.

Common Symptoms

  • ClassNotFoundException or NoClassDefFoundError at runtime
  • NoSuchMethodError or AbstractMethodError
  • IncompatibleClassChangeError
  • Unexpected behavior in libraries that worked in isolation
  • Different behavior between development and production environments

Identifying Conflicts

1. Using Maven Dependency Plugin

The most effective way to identify conflicts is using Maven’s built-in dependency plugin:

mvn dependency:tree

This command shows your complete dependency tree. Look for multiple versions of the same artifact:

[INFO] +- com.fasterxml.jackson.core:jackson-core:jar:2.13.0:compile
[INFO] +- com.fasterxml.jackson.core:jackson-databind:jar:2.13.0:compile
[INFO] |  \- com.fasterxml.jackson.core:jackson-core:jar:2.12.0:compile (omitted for conflict with 2.13.0)

2. Analyzing Conflicts with Verbose Output

For more detailed conflict analysis:

mvn dependency:tree -Dverbose

This shows which dependencies are omitted due to conflicts and why Maven chose specific versions.

3. Using the Dependency Analyze Goal

mvn dependency:analyze

This command identifies:

  • Used undeclared dependencies
  • Unused declared dependencies
  • Potential conflicts

4. IDE-Based Analysis

Most modern IDEs provide visual dependency analysis:

  • IntelliJ IDEA: Right-click on pom.xml → Analyze Dependencies
  • Eclipse: Project Properties → Java Build Path → Libraries → Maven Dependencies

Understanding Maven’s Resolution Strategy

Maven uses these rules to resolve conflicts:

  1. Nearest Definition: Dependencies closer to the root in the dependency tree win
  2. First Declaration: If dependencies are at the same depth, the first one declared wins
  3. Version Range: Explicit version ranges override transitive dependencies
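
For example, under the nearest-definition rule a version declared directly in your POM beats one that arrives transitively, even if the transitive one is newer. A hypothetical tree (lib-a is an invented artifact; the jackson-core versions are reused from the earlier output) illustrates this:

my-app
+- lib-a:1.0
|  \- jackson-core:2.13.0   (depth 2 - omitted for conflict)
\- jackson-core:2.12.0      (depth 1 - nearest to the root, so this older version wins)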

Resolution Strategies

1. Explicit Dependency Declaration

The most straightforward approach is to explicitly declare the version you want:

<dependencies>    
  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.13.0</version>
  </dependency>
</dependencies>

2. Dependency Management Section

Use the <dependencyManagement> section to centrally manage versions:

<dependencyManagement>    
  <dependencies>        
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId> 
      <version>2.13.0</version>   
    </dependency>
  </dependencies>
</dependencyManagement>

3. Excluding Transitive Dependencies

Exclude problematic transitive dependencies:

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-web</artifactId>
  <version>5.3.0</version>
  <exclusions>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>

4. Using Maven Enforcer Plugin

Prevent conflicts by failing the build when they occur:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <id>enforce-no-duplicate-dependencies</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <dependencyConvergence/>
          <requireNoRepositories/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>

Advanced Debugging Techniques

1. Creating a Dependency Report

Generate detailed dependency reports:

mvn project-info-reports:dependencies

This creates an HTML report showing all dependencies and their relationships.

2. Using Maven’s Debug Output

Run Maven with debug output to see detailed resolution information:

mvn -X dependency:tree

3. Checking Effective POM

View the effective POM to see resolved dependencies:

mvn help:effective-pom

Best Practices

1. Use Bill of Materials (BOM)

Import BOMs for consistent dependency versions:

<dependencyManagement>  
  <dependencies>       
    <dependency>       
      <groupId>org.springframework.boot</groupId>    
      <artifactId>spring-boot-dependencies</artifactId>      
      <version>2.7.0</version>     
      <type>pom</type>      
      <scope>import</scope>      
    </dependency>    
  </dependencies>
</dependencyManagement>

2. Regular Dependency Updates

Keep dependencies up to date and use tools like:

  • mvn versions:display-dependency-updates
  • mvn versions:use-latest-releases

3. Minimize Direct Dependencies

Reduce the number of direct dependencies to minimize conflict opportunities.

4. Use Dependency Scopes Appropriately

  • compile: Default scope
  • provided: Available at compile time but not packaged
  • runtime: Not needed for compilation but required at runtime
  • test: Only available during testing
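
For example, a compile-time-only dependency such as an annotation processor is typically marked provided so it never ships in your artifact and cannot introduce runtime conflicts (Lombok shown here purely as an illustration):

<dependency>
  <groupId>org.projectlombok</groupId>
  <artifactId>lombok</artifactId>
  <version>1.18.30</version>
  <scope>provided</scope>
</dependency>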

Preventing Future Conflicts

1. Establish Dependency Governance

  • Create a team-wide dependency management strategy
  • Use parent POMs for version consistency
  • Regular dependency audits
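
A team parent POM keeps every module on the same agreed versions; modules then declare dependencies without versions and inherit them from the parent's dependencyManagement. A minimal sketch with invented coordinates:

<parent>
  <groupId>com.example</groupId>
  <artifactId>team-parent</artifactId>
  <version>1.0.0</version>
</parent>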

2. Automated Conflict Detection

Integrate conflict detection into your CI/CD pipeline:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>analyze-only</goal>
      </goals>
      <configuration>
        <failOnWarning>true</failOnWarning>
      </configuration>
    </execution>
  </executions>
</plugin>

3. Version Range Strategy

Be cautious with version ranges. Prefer specific versions for stability:

<!-- Avoid -->
<version>[1.0,2.0)</version>

<!-- Prefer -->
<version>1.5.2</version>

Common Conflict Scenarios

Spring Framework Conflicts

Spring projects often have complex dependency trees. Use Spring Boot’s dependency management or Spring Framework BOM.

Logging Framework Conflicts

Multiple logging frameworks (Log4j, Logback, Commons Logging) often conflict. Use SLF4J as a facade and bridge other frameworks.
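
One common way to do this is to exclude the legacy framework and route its calls through an SLF4J bridge, for example bridging Commons Logging with jcl-over-slf4j:

<!-- Routes Commons Logging calls into SLF4J (exclude commons-logging itself elsewhere) -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>jcl-over-slf4j</artifactId>
  <version>1.7.36</version>
</dependency>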

Jackson Library Conflicts

Jackson modules must use compatible versions. Manage them centrally in dependencyManagement.
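
A convenient way to manage them centrally is Jackson's own BOM, which pins every Jackson module to a matching version:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson</groupId>
      <artifactId>jackson-bom</artifactId>
      <version>2.13.0</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>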

Conclusion

Debugging Maven dependency conflicts requires a systematic approach:

  1. Identify conflicts using Maven tools
  2. Understand Maven’s resolution strategy
  3. Apply appropriate resolution techniques
  4. Prevent future conflicts with good practices

The key is to be proactive rather than reactive. Establish good dependency management practices early in your project lifecycle, and use automated tools to catch conflicts before they reach production.

Remember that dependency conflicts are often symptoms of deeper architectural issues. Sometimes the best solution is to refactor your application to reduce complex dependency chains rather than working around conflicts with exclusions and forced versions.

By following these practices and using the tools outlined in this article, you’ll be well-equipped to handle even the most complex dependency conflict scenarios in your Maven projects.