Optimising AWS SnapStart and Spring Boot Java Lambdas

This article looks at optimising a Java Spring Boot application (Cloud Function style) with AWS SnapStart, and covers advanced optimisation via lifecycle management of the pre-snapshot and post-restore phases of the application image under AWS SnapStart. We cover optimising a lambda for persistent, conversational network resources such as an RDBMS connection or a legacy messaging framework.

How SnapStart Works

To improve start up times for a cold start, SnapStart snapshots a virtual machine after initialisation and restores that snapshot rather than paying the whole JVM and library startup cost. For Java applications built on frameworks such as Spring Boot, this provides order of magnitude reductions in cold start time. For a comparison of raw, SnapStart and Graal Native performance see our article here.

What frameworks do we use with Spring Boot?

For our Java Lambdas we use Spring Cloud Function with the AWS Lambda Adaptor. For an example of how we set this up, and links to our development frameworks and code, see our article AWS SnapStart for Faster Java Lambdas.

Default SnapStart: Simple Optimisation of the Lambda INIT phase

When the lambda version is published, SnapStart runs up the Java application to the point that the lambda is initialised. For a Spring Cloud Function application, this completes the Spring Boot lifecycle to the Container Started phase. In short, all your beans will be constructed, injected and started from a Spring Container perspective.

AWS: Lambda Execution Lifecycle

SnapStart then snapshots the virtual machine with all the loaded information. When the image is restored, the exact memory layout of all classes and data in the JVM is restored. Thus any data loaded in this phase as part of a Spring bean constructor, a @PostConstruct annotated method or a ContextRefreshedEvent handler will have been reloaded as part of the restore.
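
For example, a bean along the lines of this hedged sketch (the class name and data are hypothetical) pays its load cost during the INIT phase, so restored lambdas start with the cache already populated:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.stereotype.Component;

@Component
public class ReferenceDataCache {

    // populated during the lambda INIT phase; the filled map is captured in
    // the SnapStart snapshot and restored with the VM, so no reload occurs.
    private final Map<String, String> referenceData = new ConcurrentHashMap<>();

    public ReferenceDataCache() {
        // stand-in for an expensive load (S3 fetch, classpath scan, etc).
        referenceData.put("example-key", "example-value");
    }

    public String lookup(String key) {
        return referenceData.get(key);
    }
}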

Issues with persistent network connections

Where this breaks down is if you wish to use a “persistent” network connection style resource, such as an RDBMS connection. In a Spring Boot application a DataSource is usually configured and its network connections initialised before container start. This can cause significant slow downs when restoring an image, perhaps weeks after its creation, as all the network connections will be broken.

For a self healing data source, when a connection is requested the connection will be checked, time out, and have to reconnect, potentially starting a new transaction, for each of the configured connections in the pool. Even if you smartly set the pool size to one, given the single threaded lambda execution model, that connection timeout and reconnect may take significant time depending on network and database settings.

Advanced Java SnapStart: CRaC Lifecycle Management

Project CRaC (Coordinated Restore at Checkpoint) is a JVM project that lets an application respond to the host operating system taking a checkpoint before a snapshot operation, and to the signal that an operating system restore has occurred. The AWS Java Runtime supports integration with CRaC so that you can optimise your cold starts even further under SnapStart.

At the time of our integration, we used the CRaC library to create a base class that can handle “manual” tailoring of preSnapshot and postRestore events. Newer versions of Spring Boot are integrating CRaC support – see here for details.

We have created a base class, SnapStartOptimizer, that can be used to create a Spring bean that responds to preSnapshot and postRestore events. This gives us two hooks into the lifecycle:

  1. Load more data into memory before the snapshot occurs.
  2. Restore data and connections after we are running again.
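
A minimal sketch of such a base class, assuming the org.crac coordination library (our production SnapStartOptimizer adds helpers such as swallowError and the Spring Cloud Function bean check), might look like this:

import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// Hedged sketch of a CRaC lifecycle base class, not our full implementation.
public abstract class SnapStartOptimizerSketch implements Resource {

    protected SnapStartOptimizerSketch() {
        // register with the JVM's global CRaC context so we receive events.
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        performBeforeCheckpoint();
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        performAfterRestore();
    }

    /** Override to warm classes, caches and data before the snapshot. */
    protected void performBeforeCheckpoint() throws Exception {
    }

    /** Override to re-establish connections after restore. */
    protected void performAfterRestore() throws Exception {
    }
}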

Optimising pre snapshot

In this example we have a simple Spring Component that we use to exercise some functionality (HTTP based) to load any lazily initialised classes, data, etc. We also exercise the lookup of our Spring Cloud Function definition bean.

@Component
@RequiredArgsConstructor
public class SnapStartOptimisation extends SnapStartOptimizer {

    private final UserManager userManager;
    private final TradingAccountManager accountManager;
    private final TransactionManager transactionManager;

    @Override
    protected void performBeforeCheckpoint() {
        // exercise each manager with throwaway inputs so that lazy classes,
        // caches and serialisers are loaded before the snapshot is taken.
        swallowError(() -> userManager.fetchUser("thisisnotatoken"));
        swallowError(() -> accountManager.accountsFor(new TradingUser("bob", "sub")));
        final int previous = 30;
        final int pageSize = 10;
        swallowError(() -> transactionManager.query("435345345",
                                                    Instant.now().minusSeconds(previous),
                                                    Instant.now(),
                                                    PaginatedRequest.of(pageSize)));
        // confirm the Spring Cloud Function definition bean resolves.
        checkSpringCloudFunctionDefinitionBean();
    }
}

Optimising post restore – the LambdaSqlConnection class

In this example we highlight our LambdaSqlConnection class, which is already optimised for SnapStart. This class exercises a delegated java.sql.Connection instance preSnapshot to confirm connectivity, but replaces the connection on postRestore. This class is used to implement a bean of type java.sql.Connection, allowing you to write raw JDBC in lambdas using a single RDBMS connection for the lambda instance.

Note: do not use the default Spring Boot JDBC templates, JPA, Hibernate, etc in lambdas. The overhead of the default multi-connection pools is inappropriate for lambda use. For heavy batch processing a “Run Task” ECS image is more appropriate, and does not have the 15 minute lambda timeout constraint.

So how does it work?

Instances and interfaces managed by LambdaSqlConnection
  1. The LambdaSqlConnection class manages the Connection bean instance.
  2. When preSnapshot occurs, LambdaSqlConnection closes the Connection instance.
  3. When postRestore occurs, LambdaSqlConnection reconnects the Connection instance.

Because LambdaSqlConnection creates a dynamic proxy as the Connection instance, it can manage the delegated connection “behind” the proxy without your injected Connection instance changing.
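
As a hedged sketch of the proxy technique (the class and method names here are illustrative, not the real LambdaSqlConnection internals), a JDK dynamic proxy lets the delegate be swapped without the handed-out reference changing:

import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SwappableConnection {

    private volatile Connection delegate;

    public SwappableConnection(String url, String user, String password) throws SQLException {
        delegate = DriverManager.getConnection(url, user, password);
    }

    /** The proxy handed out as the Connection bean; callers keep this reference forever. */
    public Connection proxy() {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[]{Connection.class},
                // forward every call to whatever the current delegate is
                // (a real implementation would also unwrap InvocationTargetException).
                (proxyObject, method, args) -> method.invoke(delegate, args));
    }

    /** preSnapshot: close the real connection cleanly. */
    public void close() throws SQLException {
        delegate.close();
    }

    /** postRestore: reconnect; the proxy now forwards to the new connection. */
    public void reconnect(String url, String user, String password) throws SQLException {
        delegate = DriverManager.getConnection(url, user, password);
    }
}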

Using Our SQL Connection replacement in Spring Boot

See the code at https://github.com/LimeMojito/oss-maven-standards/tree/master/utilities/aws-utilities/lambda-sql.

Maven dependency:

<dependency>
   <groupId>com.limemojito.oss.standards.aws</groupId>
   <artifactId>lambda-sql</artifactId>
   <version>15.0.2</version>
</dependency>

Importing our java.sql.Connection interceptor

@Import(LambdaSqlConnection.class)
@SpringBootApplication
public class MySpringBootApplication {
    // ... your usual Spring Boot application class.
}

You can now remove any code that creates a java.sql.Connection and simply use a standard java.sql.Connection instance injected as a dependency in your code. This configuration creates a java.sql.Connection compatible bean that is optimised for SnapStart and delegates to a real SQL connection.

Configuring your (real) DB connection

Example with Postgres driver.

lime:
  jdbc:
    driver:
      classname: org.postgresql.Driver
    url: 'jdbc:postgresql://localhost:5432/postgres'
    username: postgres
    password: postgres

Example spring bean using SQL

@Service
@RequiredArgsConstructor
public class MyService {
    private final Connection connection;

    @SneakyThrows
    public int fetchCount() {
        try (Statement statement = connection.createStatement()) {
            try (ResultSet results = statement.executeQuery("select count(1) from some_table")) {
                results.next();
                return results.getInt(1);
            }
        }
    }
}

Deploying Java Lambda with Localstack

We deploy and debug our Java Lambdas on development machines using Localstack to emulate an Amazon Web Services (AWS) account. This article walks through the architecture, deployment to Localstack using our open source Java framework, and enabling a debug mode for remote debugging using any Java integrated development environment (IDE).

These capabilities live in our test-utilities module, LambdaSupport.java.

Localstack development architecture

Our build framework uses Docker to deploy a Localstack image, then we use AWS API calls to deploy a zip of our lambda Java classes to the Localstack lambda engine. Due to the size of the zip files, we need to deploy the lambda using an S3 URL. We use Localstack’s S3 implementation to emulate the process.

When the lambda is deployed, the Localstack Lambda engine pulls the AWS Lambda Runtime image from public ECR and then performs the deployment steps. Using the Localstack endpoint for lambda we now have a full environment where we can perform a lambda.invoke to test the deployed function.

Figure 1: Development architecture using Localstack for lambda deployment

Viewing lambda logs

With the appropriate Localstack configuration we can view lambda logs for both startup and run of the lambda. Note these logs appear in the docker logs for the AWS Lambda Runtime Container. This container spins up when the lambda is deployed.

The easiest method we use to see the logs is to:

  1. Run the JUnit test in debug, with a breakpoint after the lambda invoke.
  2. When the breakpoint is hit, use docker ps and docker logs to see the output of the Lambda Runtime.
  3. In IntelliJ Ultimate, you can see the containers deployed via the Services pane after connecting to your docker daemon.

Using the architecture in debug mode

We can use this architecture to remote debug the deployed lambda. Our LambdaSupport class includes configuration on deploy to enable debug mode as per the Localstack documentation https://docs.localstack.cloud/user-guide/lambda-tools/debugging/. With our support class you simply switch from java() to javaDebug() and the deploy will configure the runtime for debug mode (port 5050 by default).

In your docker-compose.yml, set the environment variable LAMBDA_DOCKER_FLAGS=-p 127.0.0.1:5050:5050 -e LS_LOG=debug.

This enables port passthrough for the Java debugger from localhost to port 5050 of the container (assuming that is the port the JVM debugger is configured on).
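
For example, a docker-compose.yml fragment along these lines (the service name is assumed) enables the passthrough:

services:
  localstack:
    image: localstack/localstack
    environment:
      # expose the lambda container's JVM debug port to localhost
      - LAMBDA_DOCKER_FLAGS=-p 127.0.0.1:5050:5050 -e LS_LOG=debug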

Do not commit this code as it will BLOCK test threads until a debugger is connected (port 5050 by default).

Figure 2: Localstack Java Lambda debug architecture

References:

Code examples

See https://github.com/LimeMojito/oss-maven-standards/blob/master/development-test/jar-lambda-poc/src/test/java/ApplicationIT.java for a full example.

Adding test-utilities to your maven project

These are included by default if you use our jar-lambda-development parent POM.

See our post about using our build system for maven.

Otherwise you can manually add the support as below (version omitted):

<dependency>
    <groupId>com.limemojito.oss.test</groupId>
    <artifactId>test-utilities</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <!-- Access for LambdaSupport -->
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>lambda</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <!-- Access for LambdaSupport -->
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <scope>test</scope>
</dependency>

Loading the lambda as a static variable in a unit test.

We recommend a static variable, initialised once in a JUnit setup function, due to the time taken to deploy the lambda.

The LambdaSupport java() method deploys the supplied module zip to Localstack S3, then invokes the AWS Lambda API to confirm that the lambda has started cleanly (state == Active).

private static Lambda LAMBDA;
...
// environment variables for the lambda configuration
final Map<String, String> environment = Map.of(
                    "SPRING_PROFILES_ACTIVE", "integration-test",
                    "SPRING_CLOUD_FUNCTION_DEFINITION", "get"
            );
// using the lambda zip that was built in module ../jar-lambda-poc
LAMBDA = lambdaSupport.java("../jar-lambda-poc",
                            LimeAwsLambdaConfiguration.LAMBDA_HANDLER,
                            environment);

Invoking the lambda for black box testing

This example uses a static variable for the Lambda, JUnit 5 and AssertJ. An AWS API Gateway event JSON is loaded and sent to the deployed lambda, and the result is asserted.

The full example is in our oss-maven-standards repository as an integration test (IT, run by failsafe).

@Test
public void shouldCallTransactionPostOkApiGatewayEvent() {
    final APIGatewayV2HTTPEvent event = json.loadLambdaEvent("/events/postApiEvent.json",
                                                             APIGatewayV2HTTPEvent.class);

    final APIGatewayV2HTTPResponse response = lambdaSupport.invokeLambdaEvent(LAMBDA,
                                                                              event,
                                                                              APIGatewayV2HTTPResponse.class);

    assertThat(response.getStatusCode()).isEqualTo(200);
    String output = json.parse(response.getBody(), String.class);
    assertThat(output).isEqualTo("world");
}

Localstack lambda deployment debug example

We alter the setup to use the deprecated javaDebug function. Do not commit this code as it will BLOCK test threads until a debugger is connected (port 5050 by default).

For a clean setup in IntelliJ that waits for the lambda to start in debug mode, see the excellent Localstack article https://docs.localstack.cloud/user-guide/lambda-tools/debugging/ “Configuring IntelliJ IDEA for remote JVM debugging”.

// using the lambda zip that was built in module ../jar-lambda-poc
LAMBDA = lambdaSupport.javaDebug("../jar-lambda-poc",
                                 LimeAwsLambdaConfiguration.LAMBDA_HANDLER,
                                 environment);

Bending Maven with Ant

Need to bend Maven to your will without writing a maven plugin? Some hackery with Ant and Ant-Contrib’s if task can solve many problems.

Lime Mojito’s approach to avoiding multiple build script technologies

We use Maven at Lime Mojito for most of our builds due to the wealth of “hardened” maven plugins available that suit our “fail fast” approach to builds. CheckStyle, Enforcer and JaCoCo coverage are a few of the very useful plugins we use. However, sometimes you need some custom build script, and doing that in maven without using exec and a shell script can be tricky.

For more details see our post on Maintainable Builds with Maven to see how we keep our “application level” pom files so small.

We try to avoid having logic “spread” amongst different implementation technologies, and reduce cognitive load when maintaining software by keeping the logic in one place. Usually this is a parent POM from a service’s perspective, and we avoid the “helper script” pattern as much as possible. We also strive to reduce the number of technologies in play so that maintaining services doesn’t require learning 47 different technologies to simply deploy a cloud service.

So how can you program Maven?

Not easily. Maven is “declarative” – you declare how the plugins are executed, in order, inside maven’s pom.xml for a source module. If you want control statements, conditionals, etc like a programming language, maven is not the technology to do this in.

However, there is a maven plugin, AntRun, which allows us to embed Ant tasks and their “evil” logic companion, Ant Contrib, into our maven build.

Ant in Maven! Why would you do this?

Because a maven POM is essentially an XML file. Ant instructions are also XML, so embedding them in the maven POM maintains a flow while editing. Property replacement between maven and ant is almost seamless, which gives a good maintenance experience.

And yes, the drawback is that XML can become quite verbose. If our XML gets too big we consider it a “code smell” that we may need to write a proper maven plugin.

See our post on maintainable maven for tips on how we keep our service pom files so small.

Setting up the AntRun Maven plugin to use Ant Contrib.

We have this in our base pom.xml – see our Github Repository.

We configure the maven plugin, but add a dependency for the ant-contrib library. That library is older and has a poor ant dependency in its POM, so we exclude the ant jar as below. Once enabled, we can add any ant-contrib task using the XML namespace antlib:net.sf.antcontrib.

For a quick tutorial on XML and namespaces, see W3 Schools here.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <version>3.1.0</version>
    <!--
      This allows us to use things like if in ant tasks.  We use ant to add some control flow to
      maven builds.

      <configuration>
          <target xmlns:ac="antlib:net.sf.antcontrib">
              <ac:if>
                  ...
  -->
    <dependencies>
        <dependency>
            <groupId>ant-contrib</groupId>
            <artifactId>ant-contrib</artifactId>
            <version>1.0b3</version>
            <exclusions>
                <exclusion>
                    <groupId>ant</groupId>
                    <artifactId>ant</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
</plugin>

Example one – Calculation to set a property in the pom.xml

Here we configure the ant plugin’s execution to calculate a property to be set in the maven pom.xml. This allows later tasks, and even other maven plugins, to use the set property to alter their execution. We use skip configurations a lot to separate our “full” builds from “fast” builds, where a fast build might skip most checks and just produce a deliverable for manual debugging. This plugin’s execution runs in the Maven process-resources phase – a solid understanding of the Maven Lifecycle is required before you write executions.

Because the file links back to our parent pom, we do not need to repeat version, ant-contrib setup, etc. This example does not need ant-contrib.

The main trick is that we set exportAntProperties on the plugin execution so that properties we set in ant are visible in the maven project object model. The Maven property <test.compose.location> is set in the <properties> section of the POM, and is seamlessly replaced in the ant script by the maven-antrun-plugin before the script is executed.

Note that the XML inside the <target> tag is Ant XML. We are using the condition task with a nested available task to see if a file exists. If the file does not exist, or a fast build is requested, we set the property docker.compose.skip to true.

This example is in our java-development/pom.xml which is the base POM for all our various base POMs for jars, spring boot, Java AWS Lambdas, etc.

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-antrun-plugin</artifactId>
    <executions>
        <execution>
            <id>identify-docker-compose</id>
            <phase>process-resources</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <exportAntProperties>true</exportAntProperties>
                <target>
                    <condition property="docker.compose.skip" else="false">
                        <or>
                            <equals arg1="${lime.fast-build}" arg2="true" />
                            <not>
                                <available file="${test.compose.location}" />
                            </not>
                        </or>
                    </condition>
                    <!--suppress UnresolvedMavenProperty -->
                    <echo level="info" message="docker.compose.skip is ${docker.compose.skip}" />
                </target>
            </configuration>
        </execution>
        ...

Example 2 – If we are not skipping, wait for Docker Compose up before integration test start

We use another plugin to manage docker compose before our integration test phase, which runs our Java integration tests with failsafe. This older plugin predates docker’s healthcheck support in compose files – we recommend the compose healthcheck approach in modern development. Our configuration of this plugin uses the docker.compose.skip property to skip execution when set to true.

However we can specify a port in our Maven pom.xml and the build will wait until that port responds with a 200 on http://localhost. As ant-run is declared before the failsafe plugin, its execution happens before the failsafe test run.

Note that the XML inside the <target> tag is Ant XML. The <ac:if> uses the namespace defined in the <target> element, which tells ant to use the ant-contrib jar for the task. We use the ant-contrib if task to only perform a waitfor if the docker.compose.skip property is set to false. The property was set earlier in the lifecycle by the example above.

This example is in our java-development/pom.xml which is the base POM for all our various base POMs for jars, spring boot, Java AWS Lambdas, etc.

<execution>
    <id>wait-for-docker</id>
    <phase>integration-test</phase>
    <goals>
        <goal>run</goal>
    </goals>
    <configuration>
        <target xmlns:ac="antlib:net.sf.antcontrib">
            <ac:if>
                <!--suppress UnresolvedMavenProperty -->
                <equals arg1="${docker.compose.skip}" arg2="false" />
                <then>
                    <echo level="info" message="Waiting for Docker on port ${docker.compose.port}" />
                    <waitfor maxWait="2" maxWaitUnit="minute">
                        <http url="http://localhost:${docker.compose.port}" />
                    </waitfor>
                </then>
            </ac:if>
        </target>
    </configuration>
</execution>

Conclusion

This approach of ant hackery can produce small pieces of functionality in a maven build that smooth the use of other plugins. Modern Ant has some support for if in a task dependency manner, but the older contrib tasks add a procedural approach that, in our opinion, makes the build cleaner.

Why not Gradle? We have a lot of code in maven, and most of our projects fall into our standard deliverables of jars, Spring Boot jars or AWS Java Lambdas, which are all easy to build in Maven. Our use of Java AWS CDK also uses maven, so it ties together nicely from the perspective of limiting the number of technologies. Given our service poms are so small due to our Maintainable Maven approach, the benefits of Gradle seem small.

References

Maven – https://maven.apache.org
Maven AntRun Plugin – https://maven.apache.org/plugins/maven-antrun-plugin/
Ant – https://ant.apache.org/manual/index.html
Ant Contrib tasks – https://ant-contrib.sourceforge.net/tasks/tasks/index.html
GitHub OSS standards – https://github.com/LimeMojito/oss-maven-standards

AWS Development: LocalStack or an AWS Account per Developer

To test a highly AWS integrated solution, such as deployments on AWS Lambda, you can test deployments against an AWS “stub” such as LocalStack, or use an AWS account per developer (or even per solution). Shared AWS account models are flawed for development, as the environment cannot be effectively shared by multiple developers without adding a lot of deployment complexity such as naming conventions.

What are the pros and cons of choosing a stub such as LocalStack versus an account management policy such as an AWS account per developer?

When is LocalStack a good approach?

LocalStack allows a configuration of AWS endpoints to point to a local service running stub AWS endpoints. These services implement most of the AWS API allowing a developer to check that their cloud implementations have basic functionality before deploying to a real AWS Account. LocalStack runs on a developer’s machine standalone or as a Docker container.

For example, you can deploy a Lambda implementation that uses SQS, S3, SNS, etc and test that connectivity works including subscriptions and file writes on LocalStack.

As LocalStack mimics the AWS API, it can be used with AWS-CLI, AWS SDKs, Cloudformation, CDK, etc.
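
For example, pointing an AWS SDK for Java v2 client at LocalStack is a one-line endpoint override. A hedged sketch, assuming LocalStack’s default edge port of 4566 and dummy credentials (LocalStack does not enforce IAM):

import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class LocalStackClientExample {
    public static void main(String[] args) {
        // point the S3 client at the LocalStack edge endpoint instead of AWS.
        S3Client s3 = S3Client.builder()
                              .endpointOverride(URI.create("http://localhost:4566"))
                              .region(Region.US_EAST_1)
                              .credentialsProvider(StaticCredentialsProvider.create(
                                      AwsBasicCredentials.create("test", "test")))
                              .forcePathStyle(true) // S3 URL style that works locally
                              .build();
        s3.listBuckets().buckets().forEach(bucket -> System.out.println(bucket.name()));
    }
}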

LocalStack (at 28th July 2024) does not implement IAM security rules so a developer’s deployment code will not be tested for the enforcement of IAM policies.

Some endpoints (such as S3) require configuration so that the AWS API responds with URLs that can be used by the application correctly.

Using a “fresh” environment for development pipelines can be simulated by running a “fresh” LocalStack container. For example you can do a System Test environment by using a new container, provisioning and then running system testing.

If you have a highly managed and siloed corporate deployment environment, it may be easier, quicker and more pragmatic to configure LocalStack for your development team than attempt to have multiple accounts provisioned and managed by multiple specialist teams.

When is an AWS Account per developer a good approach?

An AWS account per developer can remove a lot of complexity in the build process. Rather than managing the stub endpoints and configuration, developers can be forced to deploy with security rules such as IAM roles and consider costing of services as part of the development process.

However this requires a high level of account provisioning and policy automation. Accounts need to be monitored for cost control and features such as account destruction and cost saving shutdowns need to be implemented. Security scans for policy issues, etc can be implemented across accounts and policies for AWS technology usage can be controlled using AWS Organisations.

An account per developer opens a small step to account per environment which allows the provisioning of say System Test environments on an ad hoc basis. AWS best practices for security also suggest account per service to limit blast radius and maintain separate controls for critical services such as payment processing.

If the organisation already has centralised account policy management and a strong provisioning team(s), this may be an effective approach to reduce the complexity in development while allowing modern automated pipeline practices.

Conclusion

LocalStack

Pros:
  • Can be deployed on a developer’s machine.
  • Does not require using managed environments in a corporate setting.
  • Can be used with AWS-SDKs, AWS-CLI, Cloudformation, SAM and CDK for deployments.
  • Development environments are separated without naming conventions in shared accounts, etc.
  • Fresh LocalStacks can be used to mimic environments for various development pipeline stages.
  • Development environment control sits within the development team.

Cons:
  • Requires application configuration to use LocalStack.
  • Does not test security policies before deployment.
  • May have incomplete behaviours compared to production, though the most common use cases are functionally covered.
  • Developers are not forced to be aware of cost issues in implementations.
  • Developers may implement solutions and then find policy issues when deploying to real AWS accounts.

AWS Account per Developer

Pros:
  • Removes stubbing and configuration effort other than setting the AWS Account Id.
  • Development environments are separated without naming conventions in shared accounts, etc.
  • Forces implementations of IAM policies to be checked in the development cycle.
  • Opens account per environment as an option for various development pipeline stages.
  • Developers need to be more aware of costs when designing.
  • Development environment control sits within the development team, with policies from the provisioning team.

Cons:
  • Requires automated AWS account creation.
  • Requires shared AWS Organisation policy enforcement.
  • Requires ongoing monitoring and management of account usage.
  • Requires cost monitoring if heavyweight deployments such as EC2, ECS or EKS are used.

LocalStack or an AWS Account per developer: Summary

Maintainable builds – with Maven!

Maven is known to be a verbose, opinionated framework for building applications, primarily for a Java stack. In this article we discuss Lime Mojito’s view on maven, and how we use it to produce maintainable, repeatable builds using modern features such as automated testing, AWS stubbing (LocalStack) and deployment. We have OSS standards you can use in your own maven builds at https://bitbucket.org/limemojito/oss-maven-standards/src/master/ and POMs on Maven Central.

Before we look at our standards, we set the context of what drives our build design by looking at our technology choices. We’ll cover why our developer builds are set up this way, but not how our Agile Continuous Integration works, in this post.

Lime Mojito’s Technology Choices

Lime Mojito uses a Java based technology stack with Spring, provisioned on AWS. We use AWS CDK (Java) for provisioning and our lone exception is for web based user interfaces (UI), where we use Typescript and React with Material UI and AWS Amplify.

Our build system is developer machine first focused, using Maven as the main build system for all components other than the UI.

Build Charter

  • The build enforces our development standards to reduce the code review load.
  • The build must have a simple developer interface – mvn clean install.
  • If the clean install passes – we can move to source Pull Request (PR).
    • PR is important, as when a PR is merged we may automatically deploy to production.
  • Creating a new project or module must not require a lot of configuration (“xml hell”).
  • A module must not depend on another running Lime Mojito module for testing.
  • Any stub resources for testing must be a docker image.
  • Stubs will be managed by the build process for integration test phase.
  • The build will handle style and code metric checks (CheckStyle, Maven Enforcer, etc) so that we do not waste time in PR reviews.
  • For open source, we will post to Maven Central on a Release Build.

Open Source Standards For Our Maven Builds

Our very “top” level of build standards is open source and available for others to use or be inspired by:

Bitbucket: https://bitbucket.org/limemojito/oss-maven-standards/src/master/

The base POM files are also available on the Maven Central Repository if you want to use our approach in your own builds.

https://repo.maven.apache.org/maven2/com/limemojito/oss/standards/

Maven Example pom.xml for building a JAR library

This example will do all the below with only 6 lines of extra XML in your maven pom.xml file:

  • Enforce that your dependencies use a single java version.
  • Resolve dependencies via the Bill of Materials (BOM) library that we use to smooth out our Spring + Spring Boot + Spring Cloud + Spring Function + AWS SDK(s) dependency web.
  • Enable Project Lombok for easier java development with less boilerplate at compile time.
  • Configure code signing.
  • Configure maven repository deployment locations (I suggest overriding these for your own deployments!).
  • Configure CheckStyle for code style checking against our standards at http://standards.limemojito.com/oss-checkstyle.xml.
  • Configure optional support for docker image loading before the integration-test phase.
  • Configure logging support with SLF4J.
  • Build a jar with completed MANIFEST.MF information including version numbers.
  • Build javadoc and source jars on a release build.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>my.dns.reversed.project</groupId>
    <artifactId>my-library</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <parent>
        <groupId>com.limemojito.oss.standards</groupId>
        <artifactId>jar-development</artifactId>
        <version>13.0.4</version>
        <relativePath/>
    </parent>
</project>

When you add dependencies, common ones that are in or resolved via our library pom.xml do not need version numbers as they are managed by our modern Bill of Materials (BOM) style dependency setup.

Example using the AWS SNS sdk as part of the jar:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>my.dns.reversed.project</groupId>
    <artifactId>my-library</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <parent>
        <groupId>com.limemojito.oss.standards</groupId>
        <artifactId>jar-development</artifactId>
        <version>13.0.4</version>
        <relativePath/>
    </parent>

    <dependencies>
        <dependency>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>sns</artifactId>
        </dependency>
    </dependencies>
</project>

Our Open Source Standards library supports the following module types (archetypes) out of the box:

  • java-development – Base POM used to configure deployment locations, checkstyle, enforcer, docker, plugin versions, profiles, etc. Designed to be extended for different archetypes (JAR, WAR, etc.).
  • jar-development – Build a jar file with test and docker support.
  • jar-lambda-development – Build a Spring Boot Cloud Function jar suitable for lambda use (Java 17 runtime) with AWS dependencies added by default. The jar is shaded for simple upload.
  • spring-boot-development – Spring Boot jar constructed with the base spring-boot-starter and Lime Mojito aws-utilities for LocalStack support.

Available Module Development Types

We hope that you might find these standards interesting to try out.

CPU Throttling – Scale by restricting work

We have a web service responding to web requests. The service has a thread pool where each web request uses one operating system thread. The requests are then managed by a multi-core CPU that time-slices between the various threads using the operating system scheduler.

This example is very similar to how Tomcat (Spring Boot MVC) works out of the box when servicing requests with servlets in the Java web server space. The Java VM (v17) matches a Java Thread to an operating system thread that is then scheduled for execution by a core.

So what happens when we have a lot of requests?

Many threads are now sliced between the four cores. This slicing, where a core works on one thread for a while then context switches to another, can scale to any level. However, there is an expense in CPU time to switch from one thread to another, as a context switch involves both memory and CPU manipulation.

Given enough threads, the CPU cores can quickly spend a significant amount of time context switching when compared to the actual amount of time processing the request.

How do we reduce context switching?

We can trade off context switching for latency by blocking a request thread until a vCPU is available to do the work. Provided the work is largely CPU bound this may reduce the overall throughput time if the context switching has become a major use of the available vCPU resources.

For our Java spring boot based application we introduce one of the standard Executors to provide a blocking task service. We use a WorkStealingPool which is an executor that defaults the worker threads to the number of CPUs available with an unlimited queue depth.

We now move the CPU heavy process into a task that can be scheduled onto the executor by a given thread. The thread will then block on the Future returned from submitting the task – this blocking occurs until a worker thread has completed the task’s job and returned a result.
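
A minimal sketch of this pattern (the class and the heavyComputation body are illustrative, not our production service):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThrottledProcessor {

    // one worker thread per available vCPU, with an unbounded task queue.
    private final ExecutorService cpuBound = Executors.newWorkStealingPool();

    /** Called by many request threads; each blocks until a worker is free. */
    public long process(long input) throws Exception {
        Future<Long> result = cpuBound.submit(() -> heavyComputation(input));
        return result.get(); // block the request thread, not a core.
    }

    private long heavyComputation(long input) {
        long acc = input;
        for (int i = 0; i < 10_000_000; i++) {
            acc = 31 * acc + i; // stand-in for the real CPU bound work.
        }
        return acc;
    }
}

Requests now queue as tasks, with at most one task per vCPU running at a time, so the cores spend their time computing rather than context switching.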

On our application, this returned a 5X improvement to average throughput times for the same work being submitted to a single microservice performing the request processing. This goes to show that in our situation the majority of CPU was being spent on context switching between requests rather than servicing the CPU intensive task for each request.

In our case this translated to 5X less CPU required, and a similar reduction in our AWS EC2 costs for this service as we needed fewer instances provisioned to support the same load.

AWS Snap Start for faster Java Lambda

After finding Native Java Lambda too fragile for our runtimes, we investigated AWS Snap Start to speed up cold starts for Java Lambda. While not as fast as native, Snap Start is a supported AWS runtime mode for Lambda and is far easier to build and deploy compared to the requirements for native lambda.

How does Snap Start Work?

Snap Start runs up your Java lambda in the initialisation phase, then takes a VM snapshot. That snapshot becomes the starting point for a cold start when the lambda initialises, rather than the startup time of your java application.

With Spring Boot this shows a large decrease in cold start time as the JVM initialisation, reflection and general image setup is happening before the first request is sent.

Snap Start is configured by saving a Version of your lambda. The version publish phase takes the VM snapshot, and that snapshot is loaded instead of running the standard java runtime initialisation phase. The runtime required is the official Amazon Lambda Runtime and no custom images are required.

What are the trade offs for Snap Start?

Version Publishing needs to be added to the lambda deployment. The deployment time is longer as that image needs to be taken when the version is published.

VM shared resources may behave differently to development as they are re-hydrated before use in the cold start case. For example, DB connection pools will need to fail and reconnect, as they begin at request time in a disconnected state. See AWS RDS Proxy for this serverless use case.

As at 26th August 2023 SnapStart is limited to the x86 Architecture for Lambda runtimes.

What are the speed differences?

After warm up there was no difference between a hot JVM and the native compiled hello world program. Cold start however showed a marked difference from memory settings of 512MB and higher due to the proportional allocation of more vCPU.

Times below are in milliseconds.

Architecture    256        512        1024
Java            5066       4054       3514
SnapStart       4689.22    2345.2     1713.82
Native          1002       773        670
Comparison of Architecture v Lambda Memory Configuration
Graph of Lambda Cold Start timings

At 1GB we have approximately 1 vCPU for the lambda runtime, which makes a significant difference to the cold start times. Memory settings beyond 1 vCPU had little effect.

While native is over twice as fast as SnapStart, the fragility of deployment for lambda and the massive increase in build times and agent CPU requirements due to compilation was unproductive for our use cases.

Snap Start adds around 3 minutes to deployments to take the version snapshot (on AWS resources), which we consider acceptable compared to the build agent increase we needed for native (6 vCPU and 8GB). As we are back to Java and scripting, our agents are back down to 2 vCPU and 2GB with build times under 10 minutes.

How do you integrate Snap Start with AWS CDK?

This is a little tricky as there are no specific CDK Function props to enable SnapStart (as at 26th August 2023). With CDK we have to fall back to a CloudFormation primitive to enable Snap Start, and then take a version.

Code example from our open source Spring Boot framework below.

final IFunction function = new Function(this,
                                       LAMBDA_FUNCTION_ID,
                                       FunctionProps.builder()
                                                    .functionName(LAMBDA_FUNCTION_ID)
                                                    .description("Lambda example with Java 17")
                                                    .role(role)
                                                    .timeout(Duration.seconds(timeoutSeconds))
                                                    .memorySize(memorySize)
                                                    .environment(Map.of())
                                                    .code(assetCode)
                                                    .runtime(JAVA_17)
                                                    .handler(LAMBDA_HANDLER)
                                                    .logRetention(RetentionDays.ONE_DAY)
                                                    .architecture(X86_64)
                                                    .build());
CfnFunction cfnFunction = (CfnFunction) function.getNode().getDefaultChild();
cfnFunction.setSnapStart(CfnFunction.SnapStartProperty.builder()
                                                      .applyOn("PublishedVersions")
                                                      .build());
IFunction snapstartVersion = new Version(this,
                                         LAMBDA_FUNCTION_ID + "-snap",
                                         VersionProps.builder()
                                                     .lambda(function)
                                                     .description("Snapstart Version")
                                                     .build());

In CDK because Version and Function both implement IFunction, you can pass a Version to route constructs as below.

String apiId = LAMBDA_FUNCTION_ID + "-api";
HttpApi api = new HttpApi(this, apiId, HttpApiProps.builder()
                                                   .apiName(apiId)
                                                   .description("Public API for %s".formatted(LAMBDA_FUNCTION_ID))
                                                   .build());
HttpLambdaIntegration integration = new HttpLambdaIntegration(LAMBDA_FUNCTION_ID + "-integration",
                                                              snapstartVersion,
                                                              HttpLambdaIntegrationProps.builder()
                                                                                        .payloadFormatVersion(
                                                                                                VERSION_2_0)
                                                                                        .build());
HttpRoute build = HttpRoute.Builder.create(this, LAMBDA_FUNCTION_ID + "-route")
                                   .routeKey(HttpRouteKey.with("/" + LAMBDA_FUNCTION_ID, HttpMethod.GET))
                                   .httpApi(api)
                                   .integration(integration)
                                   .build();

Note in the HttpLambdaIntegration that we pass a Version rather than the Function object. This produces the Cloudformation that links the API Gateway integration to your published Snap Start version of the Java Lambda.

Native Java AWS Lambda with Graal VM

Update 20/8/2023: After the CDK announcement that node 16 is no longer supported after September 2023, we realised that we can’t run CDK and node on Amazon Linux 2 for our build agents. We upgraded our agents to AL2023 and found that the native build produces incompatible binaries due to GLIBC upgrades, and Lambda does not support AL2023 runtimes.
We have given up with this native approach due to the fragility of the platform and are investigating AWS Snapstart which now has Java 17 support.

Update: 02/9/2023: We have switched to AWS Snap Start as it appears to be a better trade off for application portability. Short builds and no more binary compatibility issues.

Native Java AWS Lambda refers to a Java program that has been compiled down to native instructions so we can get faster “cold start” times on AWS Lambda deployments.

Cold start is the initial time spent in a Lambda Function when it is first deployed by AWS and run up to respond to a request. These cold start times are visible to a caller as higher latency on the first lambda request. Java applications are known for their high cold start times due to the time taken to spin up the Java Virtual Machine and load the various java libraries.

We built a small framework that can assemble either an AWS Lambda Java runtime zip, or a provided container implementation of a hello world function. The container provided version is an Amazon Linux 2 Lambda Runtime with a bootstrap shell script that runs our native Java implementation.

These example lambdas are available (open source) at https://bitbucket.org/limemojito/spring-boot-framework/src/master/development-test/

Note that these timings were against the raw hello java lambda (not the spring cloud function version).

import com.amazonaws.services.lambda.runtime.Context;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class MethodHandler {
    public String handleRequest(String input, Context context) {
        log.info("Input: " + input);
        return "Hello World - " + input;
    }
}

Native Java AWS Lambda timings

We open with a “Cold Start” – the time taken to provision the Lambda Function and run the first request. Then a single request to the hot lambda measures the pre-JIT (Just-In-Time compiler) latency, followed by ten requests to warm the lambda further so we have some JIT activity. Max memory use is also shown to get a feel for system usage. We run up to 1GB memory sizing to approach 1 vCPU, as per various discussions online.

Note that we run the lambda at various AWS lambda memory settings as there is a direct proportional link between vCPU allocation and the amount of memory allocated to a lambda (see AWS documentation).

This first set of timings is for a Java 17 Lambda Runtime container running a zip of the hello world function. Times are in milliseconds.

Java Container    128     256     512     1024
Cold Start        6464    5066    4054    3514
1 request         90      52      16      5
10X requests      60      30      5       4
Max Mem (MB)      126     152     150     150
Java Container Results

Native Java       128     256     512     1024
Cold Start        1427    1002    773     670
1 request         10      4       4       5
10X requests      4       4       3       3
Max Mem (MB)      111     119     119     119
Native Java Results

The comparison of the times shows the large performance gains for cold start.

Conclusion

From our results we have a 6X performance improvement in cold starts leading to sub second performance for the initial request.

The native version shows more consistent warm lambda behaviour due to the native compilation process. Note that the execution times for both Java and native trend down to sub 10ms response times.

While there is a reduction in memory usage this is of no realisable benefit as we configure a larger memory size to get more of a vCPU allocation.

However, be aware that build times increased markedly due to the compilation phase (from 2 minutes to 8 for a hello world application). This compilation phase is very CPU and memory intensive, so we had to increase our build agents to 6 vCPU and 8GB for compiles to work.

Integrate AWS Cognito and Spring Security

How to integrate AWS Cognito and Spring Security using JSON Web Tokens (JWT), Cognito groups and mapping to Spring Security Roles. Annotations are used to secure Java methods.

The various software components of the authorisation flow.
Authorisation flow for a web request.

AWS Cognito Configuration

  1. Configure a user pool.
  2. Apply a web client
  3. Create a user with a group.

The user pool can be created from the AWS web console. The User Pool represents a collection of users with attributes, for more information see the amazon documentation.

An app client should be created that can generate JWT tokens on authentication. An example client configuration is below, and can be created from the pool settings in the Amazon web console. This client uses a simple username/password flow to generate id, access and refresh tokens on a successful auth.

Note this form of client authentication flow is not recommended for production use.

User Password Auth Client

We can now add a group so that we can bind new users to a group membership. This is added from the group tab on the user pool console.

Creating a user

We can easily create a user using the aws command line.

aws cognito-idp admin-create-user --user-pool-id us-west-2_XXXXXXXX --username hello
aws cognito-idp admin-set-user-password --user-pool-id us-west-2_XXXXXXXX --username hello --password testtestTest1! --permanent
aws cognito-idp admin-add-user-to-group --user-pool-id us-west-2_XXXXXXXX --username hello --group-name Admin 

Fetching a JWT token

The AWS CLI example below will generate a token for our hello test user. Note that you will need to adjust the region to match your user pool, and the client id as required. The client ID can be retrieved from the App Client Information page in the AWS Cognito web console.

aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id NOT_A_REAL_ID --auth-parameters USERNAME=hello,PASSWORD=testtestTest1!

Example access token

eyJraWQiOiJLeUhCMkYzNmRyc0QrNXdNT0x4NTJlQVNUNG5ZSmJTczB4NjJWT1pJNE9FPSIsImFsZyI6IlJTMjU2In0.eyJzdWIiOiJlODAwNGYyNy1lMGVjLTQ0YTMtOGRlZC0yYmE1M2UzOWZkZDMiLCJjb2duaXRvOmdyb3VwcyI6WyJBZG1pbiJdLCJpc3MiOiJodHRwczpcL1wvY29nbml0by1pZHAudXMtd2VzdC0yLmFtYXpvbmF3cy5jb21cL3VzLXdlc3QtMl82bkpHeGZKdkQiLCJjbGllbnRfaWQiOiJzZzRraTkyNDByNnBsMTlhdjRwYjA4N3JlIiwiZXZlbnRfaWQiOiIyZDM2MGQ1NS0yYjNiLTRlZjYtODM1ZC0xODZhYjE4ODAzZTMiLCJ0b2tlbl91c2UiOiJhY2Nlc3MiLCJzY29wZSI6ImF3cy5jb2duaXRvLnNpZ25pbi51c2VyLmFkbWluIiwiYXV0aF90aW1lIjoxNjg1ODc1MjY0LCJleHAiOjE2ODU4Nzg4NjQsImlhdCI6MTY4NTg3NTI2NCwianRpIjoiMWZhNTkyNzgtMGVhOC00N2E5LTg4OGYtMGJjNTQ4OWQwYzk4IiwidXNlcm5hbWUiOiJ0ZXN0In0.BZIH55ud1zCduw3WiMBbSlfEuVC4XPT6ND5CmhpbAqOI4_NghX-Y8ghW9FdIDch1bO0vDREChSEEKfPoWIe7MScsM3Gb6uhMjiE3cJBdquolY5T6JnFMS4JduREnGvlNXUx9H19DLV3zxauwciag6gSajGedGb8418T6X_qSiPgTOQqKS7J_WdodBtZ6k1_XCiTekFIc9WIkiRQdL6mo3yowSQJB4YJ7bCOrWquDkfCnoPvllbqCov7RGr8RUbGVmtZR14dm82RU_tu-AAdMDFshmVvYpfS5ZQProH97y05LlxDjJQ9t0TZwRcrfaMCAxfehfhBUViVNpr5DBgfcuA

If you decode the access token, you will see we have the claim cognito:groups set to an array containing the group Admin. See https://jwt.io

Spring Configuration

Our example uses Spring Boot 2.7.x and the following maven dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-resource-server</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>

We start by configuring a Spring Security OAuth 2.0 Resource server. This resource server represents our service and will be guarded by the AWS Cognito access token. This JWT contains the cognito claims as configured in the Cognito User Pool.

This configuration simply points the issuer URL (JWT iss claim) to the Cognito issuer URL for your User Pool.

spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: https://cognito-idp.us-west-2.amazonaws.com/us-west-2_xxxxxxxxx

The following security configuration enables Spring Security method level authorisation using annotations, and configures the Resource Server to split the Cognito Groups claim into a set of roles that can be mapped by the Spring Security Framework.

This Spring Security configuration maps a default role, “USER”, to all valid tokens, and each of the group names in the JWT claim cognito:groups is mapped to a Spring role of the same name. As per Spring naming conventions, each role name is prefixed with “ROLE_”. We also allow the Spring Boot actuator to function without any authentication in this example, which gives us a health endpoint, etc. In production you will want to bar access to these URLs.

@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true, jsr250Enabled = true)
@Slf4j
public class SecurityConfig {

    public static final String ROLE_USER = "ROLE_USER";
    public static final String CLAIM_COGNITO_GROUPS = "cognito:groups";

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        return http
                // actuator permit all
                .authorizeRequests((authz) -> authz.antMatchers("/actuator/**")
                                                   .permitAll())
                // configuration access is secured.
                .authorizeRequests((authz) -> authz.anyRequest().authenticated())
                // oauth authority conversion
                .oauth2ResourceServer(this::oAuthRoleConversion)
                .build();
    }

    private void oAuthRoleConversion(OAuth2ResourceServerConfigurer<HttpSecurity> oauth2) {
        oauth2.jwt(this::jwtToGrantedAuthExtractor);
    }

    private void jwtToGrantedAuthExtractor(OAuth2ResourceServerConfigurer<HttpSecurity>.JwtConfigurer jwtConfigurer) {
        jwtConfigurer.jwtAuthenticationConverter(grantedAuthoritiesExtractor());
    }

    private Converter<Jwt, ? extends AbstractAuthenticationToken> grantedAuthoritiesExtractor() {
        JwtAuthenticationConverter converter = new JwtAuthenticationConverter();
        converter.setJwtGrantedAuthoritiesConverter(this::userAuthoritiesMapper);
        return converter;
    }

    @SuppressWarnings("unchecked")
    private Collection<GrantedAuthority> userAuthoritiesMapper(Jwt jwt) {
        return mapCognitoAuthorities((List<String>) jwt.getClaims().getOrDefault(CLAIM_COGNITO_GROUPS, Collections.<String>emptyList()));
    }

    private List<GrantedAuthority> mapCognitoAuthorities(List<String> groups) {
        log.debug("Found cognito groups {}", groups);
        List<GrantedAuthority> mapped = new ArrayList<>();
        mapped.add(new SimpleGrantedAuthority(ROLE_USER));
        groups.stream().map(role -> new SimpleGrantedAuthority("ROLE_" + role)).forEach(mapped::add);
        log.debug("Roles: {}", mapped);
        return mapped;
    }
}

And now a code example of the annotations used to secure a method. The method below, annotated with @PreAuthorize, requires a group of Admin to be linked to the user calling the method. Note that the role “Admin” maps to the Spring Security role “ROLE_Admin”, which is sourced from the Cognito group membership of “Admin” as configured in our Cognito setup above.

@PreAuthorize("hasRole('Admin')")
@PostMapping
public Mono<JobInfo<TickDataLoadRequest>> create(@RequestBody TickDataLoadRequest tickDataLoadRequest) {
   return client.getTickDataLoadClient().create(tickDataLoadRequest);
}

That’s it! You now have a working example for configuring Cognito and Spring Security to work together. As this is based on the Authorisation header with a bearer token, it will work with minimal configuration of API Gateway, Lambda, etc.

Convert Dukascopy Tick Data to Bars in Excel

NZDUSD Excel Bar M5

Summary

This post looks at using Trading Data Stream to convert Dukascopy tick data to bars in Excel using a Comma Separated Values (CSV) format. For more information on using Trading Data Stream, see our previous post Reading Dukascopy bi5 Tick History with the TradingData Stream Library for Java.

Want to use FX bar data in a JSON format rather than the Dukascopy binary format? We have produced a java library, Trading Data Stream, that has a lot of functions for working with Dukascopy data.
See our source code at https://bitbucket.org/limemojito/trading-data-stream.

Our Trading Data Stream library includes a sample console program, example-cli, as a Spring Boot Application. This console program accepts:

  • a FX Pair
  • bar period
  • date range
  • CSV Output file name

producing a CSV of the bar period data for the time range. The bar data is built up from pair tick information (bid price) sourced from the public Dukascopy tick data set. The tick data is dynamically downloaded and cached on your local machine, so repeated queries are very fast.

Quickstart

  1. Download or clone Trading Data Stream from https://bitbucket.org/limemojito/trading-data-stream.
  2. Install Java 11 or higher, Maven 3.9.1 or higher
  3. Open a shell or command prompt at the folder you’ve installed to.
  4. Perform a maven build to assemble and install the code
    mvn clean install -DskipTests
  5. You should see BUILD SUCCESS
  6. We now run the example CLI to generate a CSV.
    • Note this post was written against version 2.1.3
    • Linux/Mac:
      java -jar example-cli/target/example-cli-2.1.3.jar --symbol=NZDUSD --period=M5 --start=2018-01-02T00:00:00Z --end=2018-01-02T00:59:59Z --output=test-nz.csv
    • Windows:
      java -jar example-cli\target\example-cli-2.1.3.jar --symbol=NZDUSD --period=M5 --start=2018-01-02T00:00:00Z --end=2018-01-02T00:59:59Z --output=test-nz.csv
    • Note this first run may take a little time as it is downloading data.
    • The final output should look like
  • CSV file output:
Epoch Time (UTC),Symbol,Period,Open,High,Low,Close
2018-01-02 00:00:00,NZDUSD,M5,70867,70897,70867,70879
2018-01-02 00:05:00,NZDUSD,M5,70879,70889,70877,70888
2018-01-02 00:10:00,NZDUSD,M5,70890,70896,70869,70891
2018-01-02 00:15:00,NZDUSD,M5,70893,70925,70890,70913
2018-01-02 00:20:00,NZDUSD,M5,70915,70974,70915,70972
2018-01-02 00:25:00,NZDUSD,M5,70971,70984,70955,70984
2018-01-02 00:30:00,NZDUSD,M5,70984,71015,70973,70992
2018-01-02 00:35:00,NZDUSD,M5,70992,71008,70972,71008
2018-01-02 00:40:00,NZDUSD,M5,71011,71026,71002,71023
2018-01-02 00:45:00,NZDUSD,M5,71024,71041,71003,71035
2018-01-02 00:50:00,NZDUSD,M5,71036,71042,71016,71024
2018-01-02 00:55:00,NZDUSD,M5,71024,71043,71022,71025

If you open this in Excel you’ll get a clean spreadsheet.

Image of excel spreadsheet showing bar data.