

In this guide we're going to use this repo's example app in the java directory to capture, transform and replay traffic with mocks.


Prerequisites

  1. Make sure speedctl is installed
  2. Clone the demo repository
  3. Make sure Java is installed
  4. Make sure JAVA_HOME is set correctly (on macOS you can run /usr/libexec/java_home to find the correct JAVA_HOME)
  5. Install jq and make if they are not already on your machine (brew install jq and brew install make on macOS, for example)
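The checks above can be scripted. Here is a minimal POSIX shell sketch (the tool names come from the list above; adapt it to your setup):

```shell
# Check that each required tool is on the PATH, and that JAVA_HOME is set.
status=""
for tool in speedctl java jq make; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="$status ok:$tool"
  else
    status="$status missing:$tool"
  fi
done
echo "$status"
[ -n "$JAVA_HOME" ] && echo "JAVA_HOME=$JAVA_HOME" || echo "JAVA_HOME is not set"
```

Any `missing:` entry in the output points at a prerequisite you still need to install.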

The App

The app is a Java Spring Boot web server with authenticated endpoints that makes requests out to a few external services. The project README (at java/) shows how to run the app locally, in Docker, and in Kubernetes. Speedscale can be configured to work with whichever way you choose to deploy the app. Note that for this test run we'll disable DLP, but you can find instructions on how to enable it near the end.


Make sure you navigate to the java subdirectory within the demo repository.

  1. Install the operator
  2. Run:
make kube-capture

This should start generating traffic that you can see in the Speedscale UI within a couple of minutes.


After a few minutes you should be able to see your traffic in the Speedscale dashboard. Make sure to select the same service name that you entered during speedctl install from the traffic dropdown. You should be able to see the inbound and outbound calls for this app as shown below.


You can also drill down into specific request-response pairs (RRPairs). For example, here we see the request we make to our app to get the maximum interest rate for treasuries.


And we can also see the outbound request our app makes to the Treasury API to fulfill this request.


You can inspect it further, or skip ahead to running a replay, which will also create a snapshot as a side effect.


We're now going to run a replay for this captured traffic.

Click the Replay as Tests/Mocks button on the top right and walk through the wizard. All of the default settings are fine here. You can also do this outside of the UI.

Making sense of a Replay

If you are already viewing the Snapshot you recorded, you can see your replay appear in the Replay tab. Alternatively, you can find a report for your Replay in the dashboard. It should look something like this.


We can see that all the requests our app makes to third-party APIs were mocked out with a 100% success rate. For example, the request our app made to the Treasury API was actually served by a mock. This is great for isolation during tests, and the same mocks can be used in other parts of the development cycle.


You can also see that the assertion success rate isn't 100%. We can drill down into a specific assertion to see why.



The JWT we get from the login request is different, which is expected. If we were replaying in a different environment, we might need entirely different credentials for the login request. Another pattern might be a set of traffic with no login request at all, where we instead need to re-sign the JWT from our captured traffic. This is where Transforms come into play: we can edit our captured data to parameterize parts of it. As an example, we can transform our snapshot to edit the password.
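Why does a recorded JWT never match a replayed one? Tokens carry per-login claims such as an issued-at timestamp, so two logins by the same user yield different tokens. A toy illustration (the claims and plain base64 encoding here are illustrative, not the demo app's actual token format):

```shell
# Same user, two logins at different times: the issued-at (iat) claim alone
# makes the encoded tokens differ, so a byte-for-byte assertion fails.
claims_1='{"sub":"user1","iat":1700000000}'
claims_2='{"sub":"user1","iat":1700000500}'
tok1=$(printf '%s' "$claims_1" | base64)
tok2=$(printf '%s' "$claims_2" | base64)
[ "$tok1" = "$tok2" ] && echo "tokens match" || echo "tokens differ"
```

This is why the fix is a transform (replace or re-sign the token) rather than a stricter assertion.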


When you find the request in the transform editor, you can click the pencil icon next to the field you want to edit which in our case is the password.


For now we'll just replace it with a constant but there are all sorts of options that can be chained together.


We just did a transform for the traffic coming into our app during a replay. We can also use transforms for mocks in case we have to parameterize fields like session ids, dates, etc. Check out this guide for a deep dive.


Another assertion that failed in our report is for the /spacex/ship endpoint.

Ship Endpoint

Our app returns a different ship ID every time we make a request so this is an expected failure. We can edit the test config for our Report to account for this.

Edit Test Config

Click the pencil next to the HTTP Response Body assertion and add ship_id as a JSON path to ignore.
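The effect of ignoring a field can be sketched in shell. The payloads below are made up for illustration; only the shape (a ship_id that changes on every request) mirrors the demo:

```shell
# Two responses that differ only in ship_id. Masking that one field before
# comparing is what the "ignore this JSON path" assertion setting does.
recorded='{"ship_id":"aaa111","name":"GO Ms Tree"}'
replayed='{"ship_id":"bbb222","name":"GO Ms Tree"}'
mask() { printf '%s' "$1" | sed 's/"ship_id":"[^"]*"/"ship_id":"*"/'; }
a=$(mask "$recorded")
b=$(mask "$replayed")
[ "$a" = "$b" ] && echo "match after ignoring ship_id" || echo "still different"
```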


After saving, we can reanalyze the report and we see much better results!



In this demo we:

  1. Captured some traffic
  2. Analyzed it in the Traffic Viewer
  3. Optionally configured DLP
  4. Ran a replay and created a snapshot
  5. Used auto generated mocks during the replay
  6. Transformed some snapshot traffic
  7. Edited the assertions
  8. Reanalyzed the report for a higher success rate

Next Steps

This is just a small subset of what you can do with Speedscale. Other things to try out:

  1. Capture traffic from one of your own apps
  2. Replay traffic from one cluster into another
  3. Run a load test
  4. Integrate with CI/CD


If you'd like to remove the demo from your environment, run:

make kube-clean

(Optional) Data Loss Protection

If we drill down into a request, we can also see data we may not want to leave our environment.


You can enable DLP to redact certain fields from an RRPair at capture time. Note that this will cause your replays to have low success rates because necessary information will be masked. Check out the DLP section for more information on DLP configuration. As a starter, you can follow the instructions below.


speedctl infra dlp enable

Now we see the authorization header is redacted and never makes it to Speedscale.
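Conceptually, redaction rewrites the RRPair before it leaves your cluster. A toy sed-based sketch (the JSON shape and the REDACTED marker are illustrative, not Speedscale's actual wire format):

```shell
# Mask the Authorization header value in a captured request before export,
# so the secret never leaves the environment.
rrpair='{"method":"GET","headers":{"Authorization":"Bearer abc123"}}'
redacted=$(printf '%s' "$rrpair" | sed 's/"Authorization":"[^"]*"/"Authorization":"REDACTED"/')
echo "$redacted"
```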


For more complex DLP configuration you can use this guide.