AWS SnapStart for Faster Java Lambda

After finding native Java Lambda too fragile for our runtimes, we investigated AWS SnapStart to speed up cold starts for Java Lambda. While not as fast as native, SnapStart is a supported AWS runtime mode for Lambda, and it is far easier to build and deploy than a native lambda.

How does SnapStart work?

SnapStart runs up your Java lambda during the initialisation phase, then takes a VM snapshot. That snapshot, rather than a full startup of your Java application, becomes the starting point for a cold start.

With Spring Boot this shows a large decrease in cold start time, as JVM initialisation, reflection and general image setup happen before the first request is sent.

SnapStart is configured by publishing a Version of your lambda. Publishing the version takes the VM snapshot, which is then loaded instead of running the standard Java runtime initialisation phase. The runtime required is the official Amazon Lambda runtime; no custom images are needed.

What are the trade-offs for SnapStart?

Version publishing needs to be added to the lambda deployment. Deployment takes longer, as the snapshot is taken when the version is published.

VM shared resources may behave differently to development, as they are re-hydrated before use in the cold start case. For example, DB connection pools will need to fail and reconnect, as they begin in a disconnected state at request time. See AWS RDS Proxy for this serverless use case.
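
Lambda SnapStart exposes the CRaC (Coordinated Restore at Checkpoint) hooks for exactly this case. Below is a minimal sketch using the org.crac API; PoolLifecycle and MyConnectionPool are hypothetical names standing in for your own pool wrapper.

import org.crac.Context;
import org.crac.Core;
import org.crac.Resource;

// Hypothetical wrapper around an application connection pool.
public class PoolLifecycle implements Resource {

    private final MyConnectionPool pool; // hypothetical application class

    public PoolLifecycle(MyConnectionPool pool) {
        this.pool = pool;
        // Register with the global CRaC context so Lambda calls our hooks.
        Core.getGlobalContext().register(this);
    }

    @Override
    public void beforeCheckpoint(Context<? extends Resource> context) throws Exception {
        pool.closeAll(); // drop sockets before the VM snapshot is taken
    }

    @Override
    public void afterRestore(Context<? extends Resource> context) throws Exception {
        pool.reconnect(); // re-establish connections when the snapshot is restored
    }
}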

As at 26th August 2023, SnapStart is limited to the x86 architecture for Lambda runtimes.

What are the speed differences?

After warm-up there was no difference between a hot JVM and the natively compiled hello world program. Cold starts, however, showed a marked improvement from memory settings of 512MB and higher, due to the proportional allocation of more vCPU.

Times below are in milliseconds.

Architecture    256       512       1024
Java            5066      4054      3514
SnapStart       4689.22   2345.2    1713.82
Native          1002      773       670

Comparison of Architecture v Lambda Memory Configuration
Graph of Lambda Cold Start timings

At 1GB we have approximately 1 vCPU for the lambda runtime, which makes a significant difference to the cold start times. Memory settings above 1 vCPU had little effect.

While native is over twice as fast as SnapStart, the fragility of native lambda deployment, and the large increase in build times and build agent CPU requirements due to compilation, was unproductive for our use cases.

SnapStart adds around 3 minutes to deployments to take the version snapshot (on AWS resources), which we consider acceptable compared to the build agent increase we needed for native (6 vCPU and 8GB). Now that we are back to Java and scripting, our agents are back down to 2 vCPU and 2GB with build times under 10 minutes.

How do you integrate SnapStart with AWS CDK?

This is a little tricky, as there are no specific CDK Function props to enable SnapStart (as at 26th August 2023). With CDK we have to fall back to a CloudFormation primitive to enable SnapStart, and then publish a version.

Code example from our Open Source Spring Boot framework below.

final IFunction function = new Function(this,
                                       LAMBDA_FUNCTION_ID,
                                       FunctionProps.builder()
                                                    .functionName(LAMBDA_FUNCTION_ID)
                                                    .description("Lambda example with Java 17")
                                                    .role(role)
                                                    .timeout(Duration.seconds(timeoutSeconds))
                                                    .memorySize(memorySize)
                                                    .environment(Map.of())
                                                    .code(assetCode)
                                                    .runtime(JAVA_17)
                                                    .handler(LAMBDA_HANDLER)
                                                    .logRetention(RetentionDays.ONE_DAY)
                                                    .architecture(X86_64)
                                                    .build());
// Fall back to the CloudFormation primitive to switch on SnapStart for published versions.
CfnFunction cfnFunction = (CfnFunction) function.getNode().getDefaultChild();
cfnFunction.setSnapStart(CfnFunction.SnapStartProperty.builder()
                                                      .applyOn("PublishedVersions")
                                                      .build());
// Publishing a Version is what triggers the snapshot to be taken.
IFunction snapstartVersion = new Version(this,
                                         LAMBDA_FUNCTION_ID + "-snap",
                                         VersionProps.builder()
                                                     .lambda(function)
                                                     .description("Snapstart Version")
                                                     .build());

Because Version and Function both implement IFunction in CDK, you can pass a Version to route constructs, as below.

String apiId = LAMBDA_FUNCTION_ID + "-api";
HttpApi api = new HttpApi(this, apiId, HttpApiProps.builder()
                                                   .apiName(apiId)
                                                   .description("Public API for %s".formatted(LAMBDA_FUNCTION_ID))
                                                   .build());
HttpLambdaIntegration integration = new HttpLambdaIntegration(LAMBDA_FUNCTION_ID + "-integration",
                                                              snapstartVersion,
                                                              HttpLambdaIntegrationProps.builder()
                                                                                        .payloadFormatVersion(
                                                                                                VERSION_2_0)
                                                                                        .build());
HttpRoute build = HttpRoute.Builder.create(this, LAMBDA_FUNCTION_ID + "-route")
                                   .routeKey(HttpRouteKey.with("/" + LAMBDA_FUNCTION_ID, HttpMethod.GET))
                                   .httpApi(api)
                                   .integration(integration)
                                   .build();

Note that in the HttpLambdaIntegration we pass the Version rather than the Function object. This produces the CloudFormation that links the API Gateway integration to your published SnapStart version of the Java lambda.
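
Although not part of the stack above, a common companion (sketched here under the assumption that the earlier snapstartVersion object is the concrete Version) is a Lambda Alias, so the integration targets a stable name while new versions are published. Alias and AliasProps come from software.amazon.awscdk.services.lambda.

// Optional: front the published version with an alias for a stable target.
Alias live = new Alias(this,
                       LAMBDA_FUNCTION_ID + "-live",
                       AliasProps.builder()
                                 .aliasName("live")
                                 .version((Version) snapstartVersion)
                                 .build());
// Alias also implements IFunction, so it can be passed to
// HttpLambdaIntegration in place of the raw version.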

References

Native Java AWS Lambda with Graal VM

Update: 20/8/2023: After the CDK announcement that Node 16 is no longer supported after September 2023, we realised that we can’t run CDK and Node on Amazon Linux 2 for our build agents. We upgraded our agents to AL2023 and found that the native build produces incompatible binaries due to GLIBC upgrades, and Lambda does not support AL2023 runtimes.
We have given up on this native approach due to the fragility of the platform and are investigating AWS SnapStart, which now has Java 17 support.

Update: 02/9/2023: We have switched to AWS SnapStart as it appears to be a better trade-off for application portability: short builds and no more binary compatibility issues.

Native Java AWS Lambda refers to a Java program that has been compiled down to native instructions so that we can get faster “cold start” times on AWS Lambda deployments.

Cold start is the initial time spent in a Lambda Function when it is first deployed by AWS and run up to respond to a request. These cold start times are visible to a caller as higher latency on the first lambda request. Java applications are known for their high cold start times due to the time taken to spin up the Java Virtual Machine and load the various Java libraries.

We built a small framework that can assemble either an AWS Lambda Java runtime zip, or a provided-runtime container implementation of a hello world function. The container version is an Amazon Linux 2 Lambda runtime with a bootstrap shell script that runs our native Java implementation.

These example lambdas are available (open source) at https://bitbucket.org/limemojito/spring-boot-framework/src/master/development-test/

Note that these timings were against the raw hello java lambda (not the spring cloud function version).

import com.amazonaws.services.lambda.runtime.Context;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class MethodHandler {
    public String handleRequest(String input, Context context) {
        log.info("Input: " + input);
        return "Hello World - " + input;
    }
}

Native Java AWS Lambda timings

We open with a “Cold Start” – the time taken to provision the Lambda Function and run the first request. Then a single request is made to the hot lambda to get the pre-JIT (Just-In-Time compiler) latency. Then ten requests are made to warm the lambda further so we have some JIT activity. Max memory use is also shown to get a feel for system usage. We run up to 1GB memory sizing to approach 1 vCPU, as per various discussions online.

Note that we run the lambda at various AWS Lambda memory settings, as there is a directly proportional link between vCPU allocation and the amount of memory allocated to a lambda (see the AWS documentation).
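
For reference, a minimal client-side timing harness using the AWS SDK for Java v2 is sketched below. The function name is an assumption, and client-side round trips include network latency, so treat the output as indicative only.

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import software.amazon.awssdk.services.lambda.model.InvokeResponse;

public class LambdaTimingHarness {
    public static void main(String[] args) {
        // Hypothetical function name; substitute your deployed hello world lambda.
        final String functionName = "hello-world-java";
        try (LambdaClient lambda = LambdaClient.create()) {
            // Request 0 hits the cold start; 1 gives pre-JIT latency; the rest warm the JIT.
            for (int i = 0; i < 12; i++) {
                long start = System.nanoTime();
                InvokeResponse response = lambda.invoke(InvokeRequest.builder()
                                                                     .functionName(functionName)
                                                                     .payload(SdkBytes.fromUtf8String("\"timing\""))
                                                                     .build());
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("request %d: %d ms (status %d)%n", i, elapsedMs, response.statusCode());
            }
        }
    }
}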

This first set of timings is for a Java 17 Lambda Runtime container running a zip of the hello world function. Times are in milliseconds.

Java Container   128    256    512    1024
Cold Start       6464   5066   4054   3514
1 Request        90     52     16     5
10X              60     30     5      4
Max Mem (MB)     126    152    150    150

Java Container Results

Native Java      128    256    512    1024
Cold Start       1427   1002   773    670
1 Request        10     4      4      5
10X              4      4      3      3
Max Mem (MB)     111    119    119    119

Native Java Results

The comparison of the times above shows the large performance gains for cold start.

Conclusion

From our results we have a 6X performance improvement in cold starts, leading to sub-second performance for the initial request.

The native version shows more consistent warm lambda behaviour due to the native compilation process. Note that the execution times for both Java and native seem to trend down to sub-10ms responses.

While there is a reduction in memory usage, this is of no realisable benefit, as we configure a larger memory size to get more of a vCPU allocation.

However, be aware that build times increased markedly due to the compilation phase (from 2 minutes to 8 for a hello world application). This compilation phase is very CPU and memory intensive, so we had to increase our build agents to 6 vCPU and 8GB for compiles to work.

Integrate AWS Cognito and Spring Security

How to integrate AWS Cognito and Spring Security using JSON Web Tokens (JWT), Cognito groups and mapping to Spring Security Roles. Annotations are used to secure Java methods.

Authorisation flow for a web request.

AWS Cognito Configuration

  1. Configure a user pool.
  2. Add an app client.
  3. Create a user with a group.

The user pool can be created from the AWS web console. The User Pool represents a collection of users with attributes; for more information see the Amazon documentation.

An app client should be created that can generate JWT tokens on authentication. An example client configuration is below, and can be created from the pool settings in the Amazon web console. This client uses a simple username/password flow to generate id, access and refresh tokens on a successful auth.

Note this form of client authentication flow is not recommended for production use.

User Password Auth Client

We can now add a group so that we can bind new users to a group membership. This is added from the group tab on the user pool console.

Creating a user

We can easily create a user using the AWS command line.

aws cognito-idp admin-create-user --user-pool-id us-west-2_XXXXXXXX --username hello
aws cognito-idp admin-set-user-password --user-pool-id us-west-2_XXXXXXXX --username hello --password testtestTest1! --permanent
aws cognito-idp admin-add-user-to-group --user-pool-id us-west-2_XXXXXXXX --username hello --group-name Admin 

Fetching a JWT token

The AWS CLI example below will generate a token for our hello test user. Note that you will need to adjust the region to the one your user pool is in, and the client id as required. The client ID can be retrieved from the App Client Information page in the AWS Cognito web console.

aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id NOT_A_REAL_ID --auth-parameters USERNAME=hello,PASSWORD=testtestTest1!

Example access token

eyJraWQiOiJLeUhCMkYzNmRyc0QrNXdNT0x4NTJlQVNUNG5ZSmJTczB4NjJWT1pJNE9FPSIsImFsZyI6IlJTMjU2In0.eyJzdWIiOiJlODAwNGYyNy1lMGVjLTQ0YTMtOGRlZC0yYmE1M2UzOWZkZDMiLCJjb2duaXRvOmdyb3VwcyI6WyJBZG1pbiJdLCJpc3MiOiJodHRwczpcL1wvY29nbml0by1pZHAudXMtd2VzdC0yLmFtYXpvbmF3cy5jb21cL3VzLXdlc3QtMl82bkpHeGZKdkQiLCJjbGllbnRfaWQiOiJzZzRraTkyNDByNnBsMTlhdjRwYjA4N3JlIiwiZXZlbnRfaWQiOiIyZDM2MGQ1NS0yYjNiLTRlZjYtODM1ZC0xODZhYjE4ODAzZTMiLCJ0b2tlbl91c2UiOiJhY2Nlc3MiLCJzY29wZSI6ImF3cy5jb2duaXRvLnNpZ25pbi51c2VyLmFkbWluIiwiYXV0aF90aW1lIjoxNjg1ODc1MjY0LCJleHAiOjE2ODU4Nzg4NjQsImlhdCI6MTY4NTg3NTI2NCwianRpIjoiMWZhNTkyNzgtMGVhOC00N2E5LTg4OGYtMGJjNTQ4OWQwYzk4IiwidXNlcm5hbWUiOiJ0ZXN0In0.BZIH55ud1zCduw3WiMBbSlfEuVC4XPT6ND5CmhpbAqOI4_NghX-Y8ghW9FdIDch1bO0vDREChSEEKfPoWIe7MScsM3Gb6uhMjiE3cJBdquolY5T6JnFMS4JduREnGvlNXUx9H19DLV3zxauwciag6gSajGedGb8418T6X_qSiPgTOQqKS7J_WdodBtZ6k1_XCiTekFIc9WIkiRQdL6mo3yowSQJB4YJ7bCOrWquDkfCnoPvllbqCov7RGr8RUbGVmtZR14dm82RU_tu-AAdMDFshmVvYpfS5ZQProH97y05LlxDjJQ9t0TZwRcrfaMCAxfehfhBUViVNpr5DBgfcuA

If you decode the access token, you will see we have the claim cognito:groups set to an array containing the group Admin. See https://jwt.io
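
If you would rather inspect the payload offline, the middle Base64URL segment of the token is plain JSON. The sketch below decodes it without verifying the signature, so use it for debugging only.

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {
    public static void main(String[] args) {
        // A JWT is header.payload.signature; the payload is Base64URL-encoded JSON.
        String accessToken = args[0];
        String payloadJson = new String(Base64.getUrlDecoder().decode(accessToken.split("\\.")[1]),
                                        StandardCharsets.UTF_8);
        System.out.println(payloadJson); // contains the cognito:groups claim
    }
}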

Spring Configuration

Our example uses Spring Boot 2.7.x and the following Maven dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-oauth2-resource-server</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>

We start by configuring a Spring Security OAuth 2.0 Resource server. This resource server represents our service and will be guarded by the AWS Cognito access token. This JWT contains the cognito claims as configured in the Cognito User Pool.

This configuration simply points the issuer URL (JWT iss claim) to the Cognito issuer URL for your User Pool.

spring:
  security:
    oauth2:
      resourceserver:
        jwt:
          issuer-uri: https://cognito-idp.us-west-2.amazonaws.com/us-west-2_xxxxxxxxx

The following security configuration enables Spring Security method level authorisation using annotations, and configures the Resource Server to split the Cognito Groups claim into a set of roles that can be mapped by the Spring Security Framework.

This Spring Security configuration maps a default role, “USER”, to all valid tokens; in addition, each of the group names in the JWT claim cognito:groups is mapped to a Spring role of the same name. As per Spring naming conventions, each role name is prefixed with “ROLE_”. We also allow Spring Boot Actuator in this example to function without any authentication, which gives us a health endpoint, etc. In production you will want to bar access to these URLs.

@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true, jsr250Enabled = true)
@Slf4j
public class SecurityConfig {

    public static final String ROLE_USER = "ROLE_USER";
    public static final String CLAIM_COGNITO_GROUPS = "cognito:groups";

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        return http
                // actuator permit all
                .authorizeRequests((authz) -> authz.antMatchers("/actuator/**")
                                                   .permitAll())
                // configuration access is secured.
                .authorizeRequests((authz) -> authz.anyRequest().authenticated())
                // oauth authority conversion
                .oauth2ResourceServer(this::oAuthRoleConversion)
                .build();
    }

    private void oAuthRoleConversion(OAuth2ResourceServerConfigurer<HttpSecurity> oauth2) {
        oauth2.jwt(this::jwtToGrantedAuthExtractor);
    }

    private void jwtToGrantedAuthExtractor(OAuth2ResourceServerConfigurer<HttpSecurity>.JwtConfigurer jwtConfigurer) {
        jwtConfigurer.jwtAuthenticationConverter(grantedAuthoritiesExtractor());
    }

    private Converter<Jwt, ? extends AbstractAuthenticationToken> grantedAuthoritiesExtractor() {
        JwtAuthenticationConverter converter = new JwtAuthenticationConverter();
        converter.setJwtGrantedAuthoritiesConverter(this::userAuthoritiesMapper);
        return converter;
    }

    @SuppressWarnings("unchecked")
    private Collection<GrantedAuthority> userAuthoritiesMapper(Jwt jwt) {
        return mapCognitoAuthorities((List<String>) jwt.getClaims().getOrDefault(CLAIM_COGNITO_GROUPS, Collections.<String>emptyList()));
    }

    private List<GrantedAuthority> mapCognitoAuthorities(List<String> groups) {
        log.debug("Found cognito groups {}", groups);
        List<GrantedAuthority> mapped = new ArrayList<>();
        mapped.add(new SimpleGrantedAuthority(ROLE_USER));
        groups.stream().map(role -> new SimpleGrantedAuthority("ROLE_" + role)).forEach(mapped::add);
        log.debug("Roles: {}", mapped);
        return mapped;
    }
}

And now a code example of the annotations used to secure a method. The method below, annotated with PreAuthorize, requires the calling user to be in the Admin group. Note that the role “Admin” maps to the Spring Security role “ROLE_Admin”, which is sourced from the Cognito group membership of “Admin” as configured in our Cognito setup above.

@PreAuthorize("hasRole('Admin')")
@PostMapping
public Mono<JobInfo<TickDataLoadRequest>> create(@RequestBody TickDataLoadRequest tickDataLoadRequest) {
   return client.getTickDataLoadClient().create(tickDataLoadRequest);
}

That’s it! You now have a working example for configuring cognito and Spring Security to work together. As this is based on the Authorisation header with a bearer token, it will work with minimal configuration of API Gateway, Lambda, etc.
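
To sanity-check the round trip, below is a minimal sketch using the JDK 11 HttpClient. The localhost URL and the empty JSON payload are assumptions; adjust them to your controller mapping and request body.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SecuredEndpointCheck {
    public static void main(String[] args) throws Exception {
        String accessToken = args[0];
        // Hypothetical URL and payload; adjust to your controller's mapping.
        HttpRequest request = HttpRequest.newBuilder()
                                         .uri(URI.create("http://localhost:8080/tick-data-load"))
                                         .header("Authorization", "Bearer " + accessToken)
                                         .header("Content-Type", "application/json")
                                         .POST(HttpRequest.BodyPublishers.ofString("{}"))
                                         .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                                                  .send(request, HttpResponse.BodyHandlers.ofString());
        // Expect 200 when the token carries the Admin group, 403 when it does not.
        System.out.println(response.statusCode() + " " + response.body());
    }
}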

Convert Dukascopy Tick Data to Bars in Excel

NZDUSD Excel Bar M5

Summary

This post looks at using Trading Data Stream to convert Dukascopy tick data to bars in Excel using a Comma Separated Value (CSV) format. For more information on using Trading Data Stream, see our previous post Reading Dukascopy bi5 Tick History with the TradingData Stream Library for Java.

Want to use FX bar data, but in a JSON format rather than the Dukascopy binary format? We have produced a Java library, Trading Data Stream, with a lot of functions for working with Dukascopy data.
See our source code at https://bitbucket.org/limemojito/trading-data-stream.

Our Trading Data Stream library includes a sample console program, example-cli, as a Spring Boot application. This console program accepts:

  • an FX pair
  • a bar period
  • a date range
  • a CSV output file name

producing a CSV of the bar period data for the time range. The bar data is built up from pair tick information (bid price) sourced from the public Dukascopy tick data set. The tick data is dynamically downloaded and cached on your local machine, so repeated queries are very fast.

Quickstart

  1. Download or clone Trading Data Stream from https://bitbucket.org/limemojito/trading-data-stream.
  2. Install Java 11 or higher and Maven 3.9.1 or higher.
  3. Open a shell or command prompt at the folder you’ve installed to.
  4. Perform a maven build to assemble and install the code
    mvn clean install -DskipTests
  5. You should see BUILD SUCCESS.
  6. We now run the example CLI to generate a CSV.
    • Note this post was written against version 2.1.3
    • Linux/Mac:
      java -jar example-cli/target/example-cli-2.1.3.jar --symbol=NZDUSD --period=M5 --start=2018-01-02T00:00:00Z --end=2018-01-02T00:59:59Z --output=test-nz.csv
    • Windows:
      java -jar example-cli\target\example-cli-2.1.3.jar --symbol=NZDUSD --period=M5 --start=2018-01-02T00:00:00Z --end=2018-01-02T00:59:59Z --output=test-nz.csv
    • Note this first run may take a little time as it is downloading data.
    • The final output should look like the CSV file below:
Epoch Time (UTC),Symbol,Period,Open,High,Low,Close
2018-01-02 00:00:00,NZDUSD,M5,70867,70897,70867,70879
2018-01-02 00:05:00,NZDUSD,M5,70879,70889,70877,70888
2018-01-02 00:10:00,NZDUSD,M5,70890,70896,70869,70891
2018-01-02 00:15:00,NZDUSD,M5,70893,70925,70890,70913
2018-01-02 00:20:00,NZDUSD,M5,70915,70974,70915,70972
2018-01-02 00:25:00,NZDUSD,M5,70971,70984,70955,70984
2018-01-02 00:30:00,NZDUSD,M5,70984,71015,70973,70992
2018-01-02 00:35:00,NZDUSD,M5,70992,71008,70972,71008
2018-01-02 00:40:00,NZDUSD,M5,71011,71026,71002,71023
2018-01-02 00:45:00,NZDUSD,M5,71024,71041,71003,71035
2018-01-02 00:50:00,NZDUSD,M5,71036,71042,71016,71024
2018-01-02 00:55:00,NZDUSD,M5,71024,71043,71022,71025

If you open this in Excel you’ll get a clean spreadsheet.

Image of excel spreadsheet showing bar data.
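
If you would rather consume the CSV programmatically, a minimal sketch is below. It assumes the test-nz.csv from the run above, and that prices are quoted in points with a 5-decimal point value, so 70867 reads as 0.70867.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class BarCsvReader {
    public static void main(String[] args) throws IOException {
        // Assumes the test-nz.csv produced by the example-cli run above.
        try (Stream<String> lines = Files.lines(Path.of("test-nz.csv"))) {
            lines.skip(1) // header row
                 .map(line -> line.split(","))
                 // Columns: time, symbol, period, open, high, low, close.
                 // Prices are in points; NZDUSD uses a 5-decimal point value.
                 .forEach(col -> System.out.printf("%s close %.5f%n",
                                                   col[0],
                                                   Integer.parseInt(col[6]) / 100_000.0));
        }
    }
}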

Reading Dukascopy bi5 Tick History with the TradingData Stream Library for Java

This Java library reads the publicly available binary format bi5 Dukascopy Bank tick history files and converts them to a Java InputStream to be used with your applications.

https://bitbucket.org/limemojito/trading-data-stream

TradingDataStream FX Data model library

This library supports:

  • high level search APIs for Tick and Bar streams, backed by cached Dukascopy files
  • on-demand fetch from Dukascopy
  • local filesystem caching
  • Amazon Web Services S3 caching
  • Bar aggregation from the tick data
  • Bar search queries by barCount or date-time range (UTC)
  • stream -> CSV file conversion
  • stream -> JSON file conversion
  • “Standalone” configuration for quick scripts
  • Spring bean configurations and customisation for use in large applications

Provided under the Apache 2.0 License, please refer to LICENSE.txt and DATA_DISCLAIMER.txt in our software code repository. This software is supplied as-is, use at your own risk and information from using this software does NOT constitute financial advice.

Please note we are not affiliated with Dukascopy in any way. This project was a clean room engineering effort to read the Dukascopy files. This library was inspired by the C++ binding at https://github.com/ninety47/dukascopy.

Fetching Tick data using Dukascopy bi5 publicly available history data

Using TradingDataStream with a maven project

Add the following to the dependencies section of your pom.xml

<dependency>
  <groupId>com.limemojito.oss.trading.trading-data-stream</groupId>
  <artifactId>model</artifactId>
  <version>2.1.2</version>
</dependency>

TradingDataStream: Using the high level TradingSearch API for Tick data

This high level API allows you to query by time to retrieve ticks. An appropriate number of bi5 files are retrieved from Dukascopy to answer the query, and the results are trimmed to fit within the query parameters.

The standalone setup here caches retrieved bi5 files in your user’s home directory under .dukascopy-cache, increasing the speed of repeated searches.

TradingSearch search = TradingDataStreamConfiguration.standaloneSetup();
try (TradingInputStream<Tick> ticks = search.search("EURUSD", "2020-01-02T00:00:00Z", "2020-01-02T00:59:59Z")) {
    ticks.stream()
         .forEach(t -> log.info("{} {} bid: {}", t.getMillisecondsUtc(), t.getSymbol(), t.getBid()));
}

TradingDataStream: Reading an existing Dukascopy bi5 FX Tick History file with Java

We recommend using the TradingSearch APIs as these work with configured caches to reduce the load on the Dukascopy servers. Our low level APIs can read individual file data streams as below.

The separation of “path” and the file data is due to the naming convention of the data in the Dukascopy repository.

Symbol/Year/Month (0 indexed)/DayOfMonth/{24hourOfDay}h_ticks.bi5

String path = "EURUSD/2018/06/05/05h_ticks.bi5";
try (FileInputStream fileStream = new FileInputStream(path);
     TradingInputStream<Tick> ticks = new DukascopyTickInputStream(VALIDATOR, path, fileStream)) {
    ticks.stream().forEach(t -> log.info("{} {} bid: {}", t.getMillisecondsUtc(), t.getSymbol(), t.getBid()));
}

Dukascopy Tick File Format

Note that Dukascopy data is at UTC+0 offset, so no time adjustment is necessary.

The files I downloaded are named something like ‘00h_ticks.bi5’. These ‘bi5’ files are LZMA compressed binary data files. The binary data files are formatted into 20-byte rows.

  • 32-bit integer: milliseconds since epoch
  • 32-bit float: Ask price
  • 32-bit float: Bid price
  • 32-bit float: Ask volume
  • 32-bit float: Bid volume

The ask and bid prices need to be multiplied by the point value for the symbol/currency pair. The epoch is extracted from the URL (and the folder structure I’ve used to store the files on disk). It represents the point in time that the file starts from, e.g. 2013/01/14/00h_ticks.bi5 has the epoch of midnight on 14 January 2013. An example using C++ to work with the file format, including the computation of the “epoch time”, is in the ninety47 binding linked above.

LZMA compression/decompression can be done with Apache Commons Compress:

https://commons.apache.org/proper/commons-compress/

After experimentation, we found the following layout to be valid (note that the prices are integers here, not the floats described above).

[   TIME  ] [   ASKP  ] [   BIDP  ] [   ASKV  ] [   BIDV  ]
[0000 0800] [0002 2f51] [0002 2f47] [4096 6666] [4013 3333]
  • TIME is a 32-bit big-endian integer representing the number of milliseconds that have passed since the beginning of this hour.
  • ASKP is a 32-bit big-endian integer representing the asking price of the pair, multiplied by 100,000.
  • BIDP is a 32-bit big-endian integer representing the bidding price of the pair, multiplied by 100,000.
  • ASKV is a 32-bit big-endian floating point number representing the asking volume, divided by 1,000,000.
  • BIDV is a 32-bit big-endian floating point number representing the bidding volume, divided by 1,000,000.
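
Putting the layout together, below is a minimal sketch that decompresses a single bi5 file with Apache Commons Compress and walks the 20-byte rows. The file name and the point value of 100,000 (a 5-decimal pair such as EURUSD) are assumptions.

import org.apache.commons.compress.compressors.lzma.LZMACompressorInputStream;

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;

public class Bi5Decode {
    public static void main(String[] args) throws IOException {
        // Point value assumed for a 5-decimal pair such as EURUSD.
        final double pointValue = 100_000.0;
        try (InputStream in = new LZMACompressorInputStream(
                new BufferedInputStream(new FileInputStream("00h_ticks.bi5")))) {
            byte[] row = new byte[20];
            while (in.readNBytes(row, 0, 20) == 20) {
                ByteBuffer buffer = ByteBuffer.wrap(row); // big-endian by default
                int millisSinceHour = buffer.getInt();
                double ask = buffer.getInt() / pointValue;
                double bid = buffer.getInt() / pointValue;
                float askVolume = buffer.getFloat();
                float bidVolume = buffer.getFloat();
                System.out.printf("+%dms ask %.5f bid %.5f askV %.2f bidV %.2f%n",
                                  millisSinceHour, ask, bid, askVolume, bidVolume);
            }
        }
    }
}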

Tick Data JSON Format

Note that epochMilliseconds is relative to the UTC timezone. The source field is either live or historical.

{
    "epochMilliseconds": 94875945798,
    "symbol": "EURUSD",
    "bid": 134567,
    "ask": 134520,
    "source": "live",
    "streamId": "00000000-0000-0000-0000-000000000000"
}

Spring Cloud Config to AWS Parameter Store easy conversion tool

Introducing our new utility to get you from YAML to AWS parameter store.

Why

One of the drawbacks with Spring Cloud Configuration Server is that the server needs to be running before applications can be spun up. As we have become more cloud native on AWS we’ve wanted to move to AWS centric configuration systems, but to do that we needed a path from the existing git version control system (VCS) based config server.

So what we were missing was an easy conversion to AWS Parameter Store from Spring Cloud Config.

How

We liked Spring Cloud Config Server for many years, as it provided the following benefits:

  • git version control with encryption-at-rest for application config.
  • a single point of control for all applications, as we could set global configurations that affected every deployed application.
  • a very simple bootstrap.yml file for startup without having to specify a lot of configuration.

We use Spring Cloud AWS (now awspring.io) libraries in a lot of our applications, and support for both AWS Parameter Store and Secrets Manager is now baked into a Spring Boot starter.

A quick experiment showed some benefits in going to AWS Parameter Store based config:

  • configuration is always available without a remote hosted config server.
  • SecureString parameters could replace our encryption-at-rest with config server.
  • bootstrap is even simpler, with just the application name required.
  • still supports “global” spring application configuration, which we use a lot with Jackson.

We like having our application config in git, as this gives us a simple code-on-branch, review and merge process using Bitbucket. This was the only drawback of going to AWS Parameter Store, but one that could surely be solved with some code.

We’re in a slow move to serverless, so any chance to remove the need for a low utilisation server gets us a step closer to no clusters.

Result

Our code and how to use it: https://bitbucket.org/limemojito/yaml-to-param-store.

So we are pleased to announce a small Open Source Java jar that allows you to convert a single YAML spring configuration file, or a directory of them, to AWS Parameter Store following the path and naming convention for Spring Cloud AWS. It includes support for Spring profile conversion, AWS tagging of the parameters, and updating changed or new values on repeated runs. The command line tool does NOT delete parameters, though the code has support for removing an application by name, including all of its profiles.
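
As an illustration of the convention the tool follows (using a hypothetical application my-service with a prod profile, and assuming the awspring defaults of a /config prefix and an underscore profile separator), a YAML entry converts as below.

# YAML source for application "my-service", profile "prod":
spring:
  jackson:
    serialization:
      indent-output: true

# Resulting name and value in AWS Parameter Store:
/config/my-service_prod/spring/jackson/serialization/indent-output = true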

We have configured our own build server to check out the configuration server repo and run our tool over the YAML files to keep them in sync with Parameter Store.

Details on usage are available on Bitbucket at https://bitbucket.org/limemojito/yaml-to-param-store.

For more information on using parameter store with a boot application, please see the configuration steps using Spring Cloud AWS in your Spring Boot application.