Deploying Java Lambdas with LocalStack

We deploy and debug our Java Lambdas on development machines using LocalStack to emulate an Amazon Web Services (AWS) account. This article walks through the architecture, deployment to LocalStack using our open-source Java framework, and enabling a debug mode for remote debugging from any Java integrated development environment (IDE).

These capabilities live in our test-utilities module, LambdaSupport.java.

LocalStack development architecture

Our build framework uses Docker to deploy a LocalStack image, then uses AWS API calls to deploy a zip of our Lambda Java classes to the LocalStack Lambda engine. Due to the size of the zip files, we need to deploy the Lambda using an S3 URL, so we use LocalStack’s S3 implementation to emulate that part of the process.

When the Lambda is deployed, the LocalStack Lambda engine pulls the AWS Lambda Runtime image from the public ECR and then performs the deployment steps. Using the LocalStack endpoint for Lambda, we then have a full environment where we can perform a lambda.invoke to test the deployed function.

Figure 1: Development architecture using LocalStack for lambda deployment

Viewing lambda logs

With the appropriate LocalStack configuration we can view Lambda logs for both startup and invocation. Note that these logs appear in the Docker logs for the AWS Lambda Runtime container, which spins up when the Lambda is deployed.

The easiest method we use to see the logs is to:

  1. Run the JUnit test in debug mode, with a breakpoint after the Lambda invoke.
  2. When the breakpoint is hit, use docker ps and docker logs to see the output of the Lambda Runtime container.
  3. In IntelliJ IDEA Ultimate, you can see the deployed containers via the Services pane after connecting to your Docker daemon.

Using the architecture in debug mode

We can use this architecture to remote-debug the deployed Lambda. Our LambdaSupport class includes deploy-time configuration to enable debug mode as per the LocalStack documentation at https://docs.localstack.cloud/user-guide/lambda-tools/debugging/. With our support class you simply switch from java() to javaDebug() and the deploy will configure the runtime for debug mode (port 5050 by default).

In your docker-compose.yml, set the environment variable LAMBDA_DOCKER_FLAGS=-p 127.0.0.1:5050:5050 -e LS_LOG=debug.

This enables port passthrough for the Java debugger from localhost to port 5050 of the container (assuming that is the port the JVM debug agent is configured to listen on).
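As a sketch, a minimal docker-compose.yml for this setup might look like the following. The LAMBDA_DOCKER_FLAGS value is from the section above; the service name, image tag, the standard LocalStack edge port 4566 and the Docker socket mount (which LocalStack's Lambda engine needs to spawn runtime containers) are assumptions beyond the original text:

```yaml
services:
  localstack:
    image: localstack/localstack
    ports:
      - "127.0.0.1:4566:4566"   # LocalStack edge port
    environment:
      # pass the JVM debug port through to the Lambda runtime container
      - LAMBDA_DOCKER_FLAGS=-p 127.0.0.1:5050:5050 -e LS_LOG=debug
    volumes:
      # the Lambda engine uses the host Docker daemon to run runtime containers
      - "/var/run/docker.sock:/var/run/docker.sock"
```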

Do not commit this code as it will BLOCK test threads until a debugger is connected (port 5050 by default).

Figure 2: LocalStack Java Lambda debug architecture

Code examples

See https://github.com/LimeMojito/oss-maven-standards/blob/master/development-test/jar-lambda-poc/src/test/java/ApplicationIT.java for a full example.

Adding test-utilities to your maven project

These are included by default if you use our jar-lambda-development parent POM.

See our post about using our build system for maven.

Otherwise, you can manually add the support as below (versions omitted):

<dependency>
    <groupId>com.limemojito.oss.test</groupId>
    <artifactId>test-utilities</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <!-- Access for LambdaSupport -->
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>lambda</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <!-- Access for LambdaSupport -->
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <scope>test</scope>
</dependency>

Loading the lambda as a static variable in a unit test

We recommend a static field, initialised once in a JUnit setup method, due to the time taken to deploy the lambda.

The LambdaSupport.java method deploys the supplied module’s zip to LocalStack S3, then invokes the AWS Lambda API to confirm that the lambda has started cleanly (state == Active).

private static Lambda LAMBDA;
...
// environment variables for the lambda configuration
final Map<String, String> environment = Map.of(
        "SPRING_PROFILES_ACTIVE", "integration-test",
        "SPRING_CLOUD_FUNCTION_DEFINITION", "get"
);
// using the lambda zip that was built in module ../jar-lambda-poc
LAMBDA = lambdaSupport.java("../jar-lambda-poc",
                            LimeAwsLambdaConfiguration.LAMBDA_HANDLER,
                            environment);

Invoking the lambda for black box testing

This example uses a static variable for the Lambda, JUnit 5 and AssertJ. An AWS API Gateway event JSON is loaded and sent to the deployed lambda, and the result is asserted.

The full example is in our oss-maven-standards repository as an integration test (IT, run by failsafe).

@Test
public void shouldCallTransactionPostOkApiGatewayEvent() {
    final APIGatewayV2HTTPEvent event = json.loadLambdaEvent("/events/postApiEvent.json",
                                                             APIGatewayV2HTTPEvent.class);

    final APIGatewayV2HTTPResponse response = lambdaSupport.invokeLambdaEvent(LAMBDA,
                                                                              event,
                                                                              APIGatewayV2HTTPResponse.class);

    assertThat(response.getStatusCode()).isEqualTo(200);
    String output = json.parse(response.getBody(), String.class);
    assertThat(output).isEqualTo("world");
}

LocalStack lambda deployment debug example

We alter the setup to use the deprecated javaDebug function. Do not commit this code, as it will BLOCK test threads until a debugger is connected (port 5050 by default).

For a clean setup in IntelliJ that waits for the lambda to start in debug mode, see the excellent LocalStack article at https://docs.localstack.cloud/user-guide/lambda-tools/debugging/, section “Configuring IntelliJ IDEA for remote JVM debugging”.
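Under the hood, remote JVM debugging relies on the standard JDWP agent. A debug-mode deployment ends up starting the Lambda JVM with something equivalent to the flag below (a sketch; the exact flags LocalStack injects may differ). The suspend=y option is what makes the runtime block until a debugger attaches, which is why the test threads stall:

```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=*:5050
```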

// using the lambda zip that was built in module ../jar-lambda-poc
LAMBDA = lambdaSupport.javaDebug("../jar-lambda-poc",
                                 LimeAwsLambdaConfiguration.LAMBDA_HANDLER,
                                 environment);

Bending Maven with Ant

Need to bend Maven to your will without writing a Maven plugin? Some hackery with Ant and Ant-Contrib’s if task can solve many problems.

Lime Mojito’s approach to avoiding multiple build script technologies

We use Maven at Lime Mojito for most of our builds due to the wealth of Maven plugins available, including “hardened” plugins suited to our “fail fast” approach to builds. CheckStyle, Enforcer and JaCoCo coverage are a few of the very useful plugins we use. However, sometimes you need some custom build scripting, and doing that in Maven without using exec and a shell script can be tricky.

For more details see our post on Maintainable Builds with Maven to see how we keep our “application level” pom files so small.

We try to avoid having logic “spread” amongst different implementation technologies, and reduce cognitive load when maintaining software by having the logic in one place. Usually this is a parent POM from a service’s perspective, but we try to avoid the “helper script” pattern as much as possible. We also strive to reduce the number of technologies in play so that maintaining services doesn’t require learning 47 different technologies just to deploy a cloud service.

So how can you program Maven?

Not easily. Maven is “declarative” – you declare how the plugins execute, in order, inside a source module’s pom.xml. If you want control statements, conditionals and the like from a programming language, Maven is not the technology to do it in.

However, there is a Maven plugin, antrun, which allows us to embed Ant tasks, and their “evil” logic companion Ant Contrib, into our Maven build.

Ant in Maven! Why would you do this?

Because a Maven POM is essentially an XML file. Ant instructions are also XML, so embedding them in the Maven POM maintains a flow while editing. Property replacement between Maven and Ant is almost seamless, which gives a good maintenance experience.

And yes, the drawback is that XML can become quite verbose. If our XML gets too big, we consider it a “code smell” indicating we may need to write a proper Maven plugin.

See our post on maintainable maven for tips on how we keep our service pom files so small.

Setting up the AntRun Maven plugin to use Ant Contrib

We have this in our base pom.xml – see our GitHub repository.

We configure the Maven plugin, but add a dependency for the ant-contrib library. That library is older and has a poor dependency on ant in its POM, so we exclude the ant jar as below. Once enabled, we can add any ant-contrib task using the XML namespace antlib:net.sf.antcontrib.

For a quick tutorial on XML and namespaces, see W3 Schools here.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>3.1.0</version>
    <!--
      This allows us to use things like if in ant tasks.  We use ant to add some control flow to
      maven builds.

      <configuration>
          <target xmlns:ac="antlib:net.sf.antcontrib">
              <ac:if>
                  ...
  -->
    <dependencies>
        <dependency>
            <groupId>ant-contrib</groupId>
            <artifactId>ant-contrib</artifactId>
            <version>1.0b3</version>
            <exclusions>
                <exclusion>
                    <groupId>ant</groupId>
                    <artifactId>ant</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
</plugin>

Example one – Calculation to set a property in the pom.xml

Here we configure the ant plugin’s execution to calculate a property that is set in the Maven POM. This allows later tasks, and even other Maven plugins, to use the property to alter their execution. We use skip configurations a lot to separate our “full” builds from “fast” builds, where a fast build might skip most checks and just produce a deliverable for manual debugging. This plugin execution runs in the Maven process-resources phase – a solid understanding of the Maven Lifecycle is required before writing executions.

Because the file links back to our parent pom, we do not need to repeat version, ant-contrib setup, etc. This example does not need ant-contrib.

The main trick is that we set exportAntProperties on the plugin execution so that properties set in Ant are also set in the Maven project object model. The Maven property <test.compose.location> is set in the <properties> section of the POM, and is seamlessly replaced in the Ant script by the maven-antrun-plugin before the script executes.

Note that the XML inside the <target> tag is Ant XML. We are using the condition task with an available task to check whether a file exists. If the compose file is missing (or lime.fast-build is true), the property docker.compose.skip is set to true; otherwise it is set to false.

This example is in our java-development/pom.xml which is the base POM for all our various base POMs for jars, spring boot, Java AWS Lambdas, etc.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
        <execution>
            <id>identify-docker-compose</id>
            <phase>process-resources</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <exportAntProperties>true</exportAntProperties>
                <target>
                    <condition property="docker.compose.skip" else="false">
                        <or>
                            <equals arg1="${lime.fast-build}" arg2="true" />
                            <not>
                                <available file="${test.compose.location}" />
                            </not>
                        </or>
                    </condition>
                    <!--suppress UnresolvedMavenProperty -->
                    <echo level="info" message="docker.compose.skip is ${docker.compose.skip}" />
                </target>
            </configuration>
        </execution>
        ...
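The skip calculation above can be read as this plain-Java equivalent (a sketch for illustration only; the names mirror the POM properties):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ComposeSkip {

    // Mirrors the Ant <condition>: skip docker compose when this is a
    // fast build, or when the module has no compose file to run.
    static boolean composeSkip(boolean fastBuild, String composeLocation) {
        return fastBuild || !Files.exists(Path.of(composeLocation));
    }

    public static void main(String[] args) {
        System.out.println(composeSkip(true, "src/test/docker/docker-compose.yml"));
        System.out.println(composeSkip(false, "no-such-compose-file.yml"));
    }
}
```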

Example 2 – If we are not skipping, wait for Docker Compose up before integration tests start

We use another plugin to manage Docker Compose before our integration-test phase, which runs our Java integration tests with failsafe. This older plugin predates Docker’s healthcheck support in compose files – we recommend the compose healthcheck approach in modern development. Our configuration of this plugin uses the docker.compose.skip property to skip execution when it is set to true.

However, we can specify a port in our Maven pom.xml and the build will wait until that port responds 200 on http://localhost. As antrun is declared before the failsafe plugin, its execution happens before the failsafe test run.

Note that the XML inside the <target> tag is Ant XML. The <ac:if> uses the namespace defined in the <target> element, which tells Ant to use the ant-contrib jar for the task. We use the ant-contrib if task to only perform a waitfor if the docker.compose.skip property is set to false; the property was set earlier in the lifecycle by the example above.

This example is in our java-development/pom.xml which is the base POM for all our various base POMs for jars, spring boot, Java AWS Lambdas, etc.

<execution>
    <id>wait-for-docker</id>
    <phase>integration-test</phase>
    <goals>
        <goal>run</goal>
    </goals>
    <configuration>
        <target xmlns:ac="antlib:net.sf.antcontrib">
            <ac:if>
                <!--suppress UnresolvedMavenProperty -->
                <equals arg1="${docker.compose.skip}" arg2="false" />
                <then>
                    <echo level="info" message="Waiting for Docker on port ${docker.compose.port}" />
                    <waitfor maxWait="2" maxWaitUnit="minute">
                        <http url="http://localhost:${docker.compose.port}" />
                    </waitfor>
                </then>
            </ac:if>
        </target>
    </configuration>
</execution>
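The <waitfor> behaviour can be sketched in plain Java using only the JDK (illustration only – the real build uses the Ant task above): poll the URL until it returns HTTP 200 or a deadline passes. The demo main stands up a tiny JDK HttpServer to play the part of the compose service:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;

public class WaitForHttp {

    /**
     * Poll the URL until it returns HTTP 200 or the timeout elapses.
     * Mirrors what the Ant <waitfor><http .../></waitfor> pair does.
     */
    static boolean waitForOk(String url, Duration timeout) throws InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            try {
                HttpResponse<Void> response =
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                if (response.statusCode() == 200) {
                    return true;
                }
            } catch (IOException e) {
                // service not up yet; fall through and retry
            }
            Thread.sleep(500);
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // Demo: a tiny local server stands in for the docker-compose service.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();
        boolean up = waitForOk("http://localhost:" + port + "/", Duration.ofSeconds(10));
        System.out.println("service up: " + up);
        server.stop(0);
    }
}
```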

Conclusion

This approach of Ant hackery can produce small pieces of functionality in a Maven build that smooth the use of other plugins. Modern Ant has some support for if in a task-dependency manner, but the older contrib tasks add a procedural approach that, in our opinion, makes the build cleaner.

Why not Gradle? We have a lot of code in Maven, and most of our projects fall into our standard deliverables of jars, Spring Boot jars or AWS Java Lambdas, all of which are easy to build in Maven. Our use of the Java AWS CDK also uses Maven, so it ties together nicely from the perspective of limiting the number of technologies. Given our service POMs are so small due to our Maintainable Maven approach, the benefits of Gradle seem minor.

References

  • Maven – https://maven.apache.org
  • Maven AntRun Plugin – https://maven.apache.org/plugins/maven-antrun-plugin/
  • Ant – https://ant.apache.org/manual/index.html
  • Ant Contrib tasks – https://ant-contrib.sourceforge.net/tasks/tasks/index.html
  • GitHub OSS standards – https://github.com/LimeMojito/oss-maven-standards

AWS Development: LocalStack or an AWS Account per Developer

To test a highly AWS-integrated solution, such as deployments on AWS Lambda, you can test against an AWS “stub” such as LocalStack, or use an AWS account per developer (or even per solution). Shared AWS account models are flawed for development, as the environment cannot be effectively shared by multiple developers without adding a lot of deployment complexity, such as naming conventions.

What are the pros and cons of choosing a stub such as LocalStack versus an account management policy such as an AWS account per developer?

When is LocalStack a good approach?

LocalStack allows configuration of AWS endpoints to point at a local service running stub AWS endpoints. These services implement most of the AWS API, allowing a developer to check that their cloud implementations have basic functionality before deploying to a real AWS account. LocalStack runs on a developer’s machine, standalone or as a Docker container.

For example, you can deploy a Lambda implementation that uses SQS, S3, SNS, etc and test that connectivity works including subscriptions and file writes on LocalStack.

As LocalStack mimics the AWS API, it can be used with AWS-CLI, AWS SDKs, Cloudformation, CDK, etc.

LocalStack (as at 28th July 2024) does not implement IAM security rules, so a developer’s deployment code will not be tested for the enforcement of IAM policies.

Some endpoints (such as S3) require configuration so that the AWS API responds with URLs that can be used by the application correctly.

Using a “fresh” environment for development pipelines can be simulated by running a fresh LocalStack container. For example, you can create a System Test environment by starting a new container, provisioning it, and then running system testing.

If you have a highly managed and siloed corporate deployment environment, it may be easier, quicker and more pragmatic to configure LocalStack for your development team than to have multiple accounts provisioned and managed by multiple specialist teams.

When is an AWS Account per developer a good approach?

An AWS account per developer can remove a lot of complexity from the build process. Rather than managing stub endpoints and configuration, developers can be required to deploy with security rules such as IAM roles, and to consider the cost of services as part of the development process.

However this requires a high level of account provisioning and policy automation. Accounts need to be monitored for cost control and features such as account destruction and cost saving shutdowns need to be implemented. Security scans for policy issues, etc can be implemented across accounts and policies for AWS technology usage can be controlled using AWS Organisations.

An account per developer opens a small step to account per environment which allows the provisioning of say System Test environments on an ad hoc basis. AWS best practices for security also suggest account per service to limit blast radius and maintain separate controls for critical services such as payment processing.

If the organisation already has centralised account policy management and a strong provisioning team(s), this may be an effective approach to reduce the complexity in development while allowing modern automated pipeline practices.

Conclusion

LocalStack

Pros:

  • Can be deployed on a developer’s machine.
  • Does not require using managed environments in a corporate setting.
  • Can be used with AWS SDKs, AWS-CLI, Cloudformation, SAM and CDK for deployments.
  • Development environments are separated without naming conventions in shared accounts, etc.
  • Fresh LocalStacks can be used to mimic environments for various development pipeline stages.
  • Development environment control stays within the development team.

Cons:

  • Requires application configuration to use LocalStack.
  • Does not test security policies before deployment.
  • May have incomplete behaviours compared to production, though the most common use cases are functionally covered.
  • Developers are not forced to be aware of cost issues in implementations.
  • Developers may implement solutions and then find policy issues when deploying to real AWS accounts.

AWS Account per Developer

Pros:

  • Removes stubbing and configuration effort other than setting the AWS Account Id.
  • Development environments are separated without naming conventions in shared accounts, etc.
  • Forces implementations of IAM policies to be checked in the development cycle.
  • Opens account per environment as an option for various development pipeline stages.
  • Developers need to be more aware of costs when designing.
  • Development environment control stays within the development team, with policies from the provisioning team.

Cons:

  • Requires automated AWS account creation.
  • Requires shared AWS Organisation policy enforcement.
  • Requires ongoing monitoring and management of account usage.
  • Requires cost monitoring if heavyweight deployments such as EC2, ECS or EKS are used.

LocalStack or an AWS Account per developer: Summary

Maintainable builds – with Maven!

Maven is known to be a verbose, opinionated framework for building applications, primarily for a Java stack. In this article we discuss Lime Mojito’s view of Maven and how we use it to produce maintainable, repeatable builds using modern features such as automated testing, AWS stubbing (LocalStack) and deployment. We have OSS standards you can use in your own Maven builds at https://bitbucket.org/limemojito/oss-maven-standards/src/master/ and POMs on Maven Central.

Before we look at our standards, we set the context of what drives our build design by looking at our technology choices. We’ll cover why our developer builds are set up this way, but not how our Agile Continuous Integration works, in this post.

Lime Mojito’s Technology Choices

Lime Mojito uses a Java based technology stack with Spring, provisioned on AWS. We use AWS CDK (Java) for provisioning and our lone exception is for web based user interfaces (UI), where we use Typescript and React with Material UI and AWS Amplify.

Our build system is developer machine first focused, using Maven as the main build system for all components other than the UI.

Build Charter

  • The build enforces our development standards to reduce the code review load.
  • The build must have a simple developer interface – mvn clean install.
  • If the clean install passes – we can move to source Pull Request (PR).
    • PR is important, as when a PR is merged we may automatically deploy to production.
  • Creating a new project or module must not require a lot of configuration (“xml hell”).
  • A module must not depend on another running Lime Mojito module for testing.
  • Any stub resources for testing must be a docker image.
  • Stubs will be managed by the build process for integration test phase.
  • The build will handle style and code metric checks (CheckStyle, Maven Enforcer, etc) so that we do not waste time in PR reviews.
  • For open source, we will post to Maven Central on a Release Build.

Open Source Standards For Our Maven Builds

Our very “top” level of build standards is open source and available for others to use or be inspired by:

Bitbucket: https://bitbucket.org/limemojito/oss-maven-standards/src/master/

The base POM files are also available on the Maven Central Repository if you want to use our approach in your own builds.

https://repo.maven.apache.org/maven2/com/limemojito/oss/standards/

Maven Example pom.xml for building a JAR library

This example will do all the below with only 6 lines of extra XML in your maven pom.xml file:

  • Enforce your dependencies are a single Java version
  • Resolve dependencies via the Bill of Materials (BOM) library that we use to smooth out our Spring + Spring Boot + Spring Cloud + Spring Function + AWS SDK(s) dependency web
  • Enable Project Lombok for easier Java development with less boilerplate at compile time
  • Configure code signing
  • Configure Maven repository deployment locations (we suggest overriding these for your own deployments!)
  • Configure CheckStyle for code style checking against our standards at http://standards.limemojito.com/oss-checkstyle.xml
  • Configure optional support for loading docker images before the integration-test phase
  • Configure logging support with SLF4J
  • Build a jar with completed MANIFEST.MF information, including version numbers
  • Build javadoc and source jars on a release build

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>my.dns.reversed.project</groupId>
    <artifactId>my-library</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <parent>
        <groupId>com.limemojito.oss.standards</groupId>
        <artifactId>jar-development</artifactId>
        <version>13.0.4</version>
        <relativePath/>
    </parent>
</project>

When you add dependencies, common ones that are in, or resolved via, our library pom.xml do not need version numbers, as they are managed by our modern Bill of Materials (BOM) style dependency setup.

Example using the AWS SNS SDK as part of the jar:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>my.dns.reversed.project</groupId>
    <artifactId>my-library</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <parent>
        <groupId>com.limemojito.oss.standards</groupId>
        <artifactId>jar-development</artifactId>
        <version>13.0.4</version>
        <relativePath/>
    </parent>

    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>sns</artifactId>
        </dependency>
    </dependencies>
</project>

Our Open Source Standards library supports the following module types (archetypes) out of the box:

  • java-development – Base POM used to configure deployment locations, checkstyle, enforcer, docker, plugin versions, profiles, etc. Designed to be extended for different archetypes (JAR, WAR, etc.).
  • jar-development – Build a jar file with test and docker support.
  • jar-lambda-development – Build a Spring Boot Cloud Function jar suitable for lambda use (Java 17 runtime) with AWS dependencies added by default. The jar is shaded for simple upload.
  • spring-boot-development – Spring Boot jar constructed with the base spring-boot-starter and the Lime Mojito aws-utilities for LocalStack support.

Available Module Development Types

We hope that you might find these standards interesting to try out.