Getting Real-World Test Data from Production

Why?

To responsibly test changes before deploying them to production, we need a test environment that behaves, in as many aspects as possible, the same way as our production environment does.

Integration tests should be running in an environment that is as close to production as possible.

We want the ability to reproduce production issues in our test environment for further analysis.

If our test environment only contains engineered test data, we will potentially miss, or have trouble reproducing, many issues related to input data or program state.

Program State Issues

An example of a program state issue would be inconsistent data in our database. This would likely lead to undesired application behavior (bugs).

If our test environment only consists of manually engineered database records, it’s possible this bug would be missed in test, making it only detectable in production.

Input Data Issues

An example of an input data issue would be a program that reads CSV files into a database but mishandles null fields, when some of its input data contains null fields.

If we only use manually engineered test input data, with no null fields, we will not encounter this issue in test.
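As a minimal sketch of how this plays out in Java (a hypothetical illustration, not code from a real system): String.split() silently drops trailing empty fields unless you pass a negative limit, so engineered rows without empty fields never exercise the failure.

```java
public class CsvNullDemo {
    // Engineered test rows rarely end with empty fields, so the default
    // split() behavior looks fine in test...
    public static String[] naiveParse(String line) {
        return line.split(",");
    }

    // ...but production rows with trailing empty fields need a negative
    // limit, otherwise those fields are silently dropped.
    public static String[] safeParse(String line) {
        return line.split(",", -1);
    }
}
```

With a production-style row like "Johnson,12345,", naiveParse returns two fields while safeParse returns three — exactly the kind of discrepancy engineered data hides.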

Detecting/Reproducing Program State Issues: Mirroring Existing Production State in Test

Copy Down Production Data Stores

In deterministic applications, all program state is driven by input data. However, it’s often not possible to replay all input events from all time, or to replicate historical one-off manual data migrations performed in production. This is why we should consider copying data stores to replicate application state.

The general idea here is that we want to copy down, from production to test, the data stores our application will interact with (DynamoDB tables, MySQL databases, etc.).

Preferably, you’d do a one-time migration from your production data store to your test data store, and then, as an ongoing synchronization method, copy all future state-changing events from production to test (as we will discuss in the next section).

Figure 1: Production and test environments without a method of copying production data to test
Figure 2: Depiction of how to implement production mirroring into test environment

If it is not possible to duplicate all state-changing input data from production into your test environment, the only feasible solution may be to set up scheduled jobs that periodically copy data from production to test. Many tools support running jobs on a timer, including Jenkins and Rundeck.

Figure 3: Depiction of a cron-based data copy down

Remember, our goal here is to create a test environment that replicates our production environment as closely as possible.

Stateless Applications

If your application is stateless (does not store data), there is no need to implement a data copy down process to mirror existing application state.

Addressing Input Data Related Issues

We want to copy input events from our production environment to our test environment, both to detect input data issues and to maintain consistency with our production application state (in stateful applications).

Essentially, we need two things:

  1. A mechanism to copy input data (HTTP requests, events, etc) from production to test. This can be automatic or on-demand with some qualifier.
  2. Integration tests that incorporate as much real-world data as possible.

Often a combination of existing program state and specific input data are required to reproduce a bug, so it’s important to address both aspects of copying down production data.
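A minimal sketch of an on-demand copy mechanism for HTTP input, assuming the test environment exposes the same paths under a different host (the host names here are hypothetical):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class RequestMirror {
    // Rewrites a production request URI so the same request (path, query,
    // port) can be replayed against the test environment.
    public static URI toTestUri(URI prodUri, String testHost) {
        try {
            return new URI(prodUri.getScheme(), prodUri.getUserInfo(),
                    testHost, prodUri.getPort(), prodUri.getPath(),
                    prodUri.getQuery(), prodUri.getFragment());
        } catch (URISyntaxException e) {
            throw new IllegalArgumentException(e);
        }
    }
}
```

A real implementation would sit in a proxy or traffic-shadowing layer; this only shows the URI rewrite at its core.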

Let it Bake

Once developers have a test environment comparable to their production environment (the infrastructure is mirrored, the application state is mirrored, and the same data flows through both environments), they can let their releases “bake” in test before releasing to production.

Developers can deploy new releases to the test environment, wait a specified period of time, and then compare the output of their test deployment with the output of their production deployment.

A delayed release strategy will lower operational risk by allowing developers to see how their code behaves in a production-like environment before actually deploying to production.

Potential Pitfalls

Data is Confidential

If you are working with confidential data, or other limitations prevent you from copying production data to test as-is, you’ll need to implement a data anonymization process.

Figure 4: Data anonymization applied to input data copy down

If the fields considered confidential are not processed by the program under test, it may be sufficient to have your copy down process replace those irrelevant values with dummy values without any negative consequence (e.g. if SSN is irrelevant for our app, we can just zero-fill the field when copying down to test).
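A minimal sketch of this kind of dummy-value replacement (the SSN format is just an example; any fields irrelevant to the program under test could be scrubbed the same way):

```java
public class FieldScrubber {
    // If a confidential field (e.g. SSN) is not used by the program under
    // test, zero-filling it during copy down is sufficient.
    public static String zeroFill(String ssn) {
        return ssn.replaceAll("[0-9]", "0");
    }
}
```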

However, if the fields considered confidential are processed by the program under test, we will need to do consistent replacement.

Consistent Replacement

If you have a field that is considered sensitive and cannot be copied to the test environment, but the field is used by the application, you must do consistent replacement when anonymizing the data. Consistent replacement means replacing sensitive fields with a consistent dummy value, using a persisted mapping, during the anonymization process.

Figure 5: Data anonymization applied to input data copy down with persisted mappings for consistent replacement

Take, for example, an application that reads in banking records and indexes transactions in a database by the customer’s last name. If the last name field is considered confidential information, we cannot just randomly replace the field when anonymizing; we must replace it with the same replacement value every time.

If a customer’s last name is Johnson and we replace it with Phillips, we must store that mapping (Johnson->Phillips) to ensure every future instance of Johnson is replaced with Phillips when anonymizing future data.

The replacement mappings should be treated as sensitive production data.

Whatever data store is used to hold the mappings for consistent replacement should be very high throughput to avoid bottlenecking the system. I’d consider Redis or AWS DynamoDB for this.
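The consistent replacement logic can be sketched as follows, using an in-memory map as a stand-in for a persisted, access-controlled store such as Redis or DynamoDB:

```java
import java.util.HashMap;
import java.util.Map;

public class ConsistentReplacer {
    // In production, this mapping would live in a high-throughput store
    // (Redis, DynamoDB) and be protected like sensitive production data;
    // a HashMap stands in here for illustration.
    private final Map<String, String> realToDummy = new HashMap<>();
    private int counter = 0;

    // Every occurrence of the same real value maps to the same dummy value.
    public String replace(String realValue) {
        return realToDummy.computeIfAbsent(realValue, v -> "DUMMY-" + counter++);
    }
}
```

Clearing realToDummy daily, per input file, or never corresponds to the replacement scopes listed below.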

Don’t forget: Your consistent replacement method should also be used when doing the initial data migration.

Example Scopes of Consistent Replacement
  • Per input file
  • Daily (clear real->dummy cache daily)
  • Permanent

Not Feasible Due to Cost or Performance

Here are some things to consider if you are running into performance barriers:

  • Limit consistent replacement scope (don’t store replacement mappings permanently if you only need them daily)
  • Multi-tenant systems: Only process a subset of tenants
  • For stateless applications: Only copy down a subset of production data

Extract Method and Inline Variable in IntelliJ

Introduction

This blog post will show, using videos, how to perform basic refactorings in IntelliJ. I’m using IntelliJ IDEA 2020 Community Edition and OBS Studio to create these videos.

The philosophy contained in these articles will largely be drawing from my favorite book on refactoring: Refactoring: Improving the Design of Existing Code by Martin Fowler.

Prerequisite: Test Coverage

There should exist (or you should write) comprehensive test coverage of all code within the scope of the refactor. This helps ensure that nothing changes functionally during refactoring.

If you need help getting test coverage on your legacy code, check out my other blog articles such as Suppressing Static Initializers with Mockito + Powermock.

Identifying the Need for Extract Method

A common pattern for junior developers is to place a lot of code in a single method, but logically break it up with comments and log statements. This tells us that one method is responsible for multiple operations (not ideal) and signals that the method should probably be refactored.

Here we have a simplified example, Demo.java

import java.util.Map;

public class Demo {
    public String getFirstName(String domain, String username){

        // step 1, generate connection string
        System.out.println("Generating connection string");
        String connectionString = Constants.BASE_DB_URL + domain;

        // step 2, get mapping of usernames to first names
        DbQueryDummy dbQueryDummy = new DbQueryDummy(connectionString);
        Map<String, String> dbUsernameToFirstName = dbQueryDummy.getDbUsernameToFirstNameMap();

        return dbUsernameToFirstName.get(username);
    }
}

Refactored Version

With two extract method operations and three inline variable operations, we can refactor this code to the following:

public class Demo {

    public String getFirstName(String domain, String username){
        return getFirstNameFromDB(username, getConnectionString(domain));
    }

    private String getFirstNameFromDB(String username, String connectionString) {
        return new DbQueryDummy(connectionString).getDbUsernameToFirstNameMap().get(username);
    }

    private String getConnectionString(String domain) {
        System.out.println("Generating connection string");
        return Constants.BASE_DB_URL + domain;
    }
}

In my opinion, this refactored code is better because we’ve:

  • Eliminated the temporary variables
  • Split the getFirstName method into two logically discrete parts
  • Alleviated the need for explanatory comments; we have nicely named methods instead

If this were real-world code, I’d love to refactor DbQueryDummy to remove the chained method call in our getFirstNameFromDB method, but that’s not important for the purposes of this article.

Behind the Magic: IntelliJ Refactoring Tools

To perform similar refactorings, I highly recommend becoming familiar with IntelliJ’s code refactoring tools.

Refactoring in your IDE does help eliminate silly mistakes, but it isn’t perfect. Using an IDE doesn’t negate the need for comprehensive testing around the code being refactored.

https://www.youtube.com/watch?v=QsPdRJnWlV0&feature=youtu.be

Suppressing Static Initializers with Mockito + Powermock

Often as developers, we need to add unit testing to legacy code while being unable to make non-trivial changes to the code under test.

In legacy code, logging and other resources are often set up in a static initializer. That code won’t execute cleanly in a unit testing environment (it could be using JNDI or something else that doesn’t work in unit tests without mocking).

We can’t just add mock behavior to our class instance in the unit test, because the static initializer is run as soon as the class is accessed.

If we’re using Mockito with PowerMock, the solution to this problem is simple: we need to suppress the static initializer from executing.

Let’s take a look at a simple example of suppressing a static initializer. We have a class ClassWithStaticInit, whose static initializer assigns the string field the value “abc”.

public class ClassWithStaticInit {
    static final String field;
    
    static {
        //this block could contain undesirable logging setup that doesn't work in a test environment
        field = "abc";
    }

    public String getField(){
        return field;
    }
}

In the following test, we suppress the static initializer for ClassWithStaticInit using the @SuppressStaticInitializationFor annotation. You can see the return value of the method unit.getField() is null, because we have suppressed the static initializer of ClassWithStaticInit, preventing the field from being set.

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.core.classloader.annotations.SuppressStaticInitializationFor;
import org.powermock.modules.junit4.PowerMockRunner;

import static org.junit.Assert.assertEquals;

@RunWith(PowerMockRunner.class)
@SuppressStaticInitializationFor({"ClassWithStaticInit"})
@PrepareForTest({ClassWithStaticInit.class})
public class SuppressStaticInitTest {

    @InjectMocks
    ClassWithStaticInit unit = new ClassWithStaticInit();

    @Test
    public void testSuppressingStaticInitializer(){
        assertEquals(null, unit.getField());
    }
}

Now, if we don’t suppress the static initializer, we can see that the static block has executed, assigning the value “abc” to field.

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import static org.junit.Assert.assertEquals;

@RunWith(PowerMockRunner.class)
@PrepareForTest({ClassWithStaticInit.class})
public class DoNotSuppressStaticInitTest {

    @InjectMocks
    ClassWithStaticInit unit = new ClassWithStaticInit();

    @Test
    public void testNotSuppressingStaticInitializer(){
        assertEquals("abc", unit.getField());
    }
}

pom.xml file

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.french</groupId>
    <artifactId>MockStaticInitilizerMockito</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <maven.surefire.version>2.22.2</maven.surefire.version>
        <junit.version>4.13</junit.version>
        <mockito.version>2.28.2</mockito.version>
        <powermock.version>2.0.7</powermock.version>
    </properties>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>${maven.surefire.version}</version>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-core</artifactId>
            <version>${mockito.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-core</artifactId>
            <version>${powermock.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-module-junit4</artifactId>
            <version>${powermock.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-api-mockito2</artifactId>
            <version>${powermock.version}</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Note: From my readings, it appears PowerMock only supports JUnit 4, not JUnit 5 as a reasonable person would expect.

Full Code on GitHub

Testing vs. Alerting Part I

If you want to evaluate a testing plan, you must also consider your alerting plan.

Alerting and testing are complementary; both serve to identify defects. Testing typically identifies defects before code is deployed to production, while alerting typically notifies developers of an issue with a running system.

You need to consider both testing and alerting to create an effective defect mitigation plan.

Start by asking yourself a few important questions:

  1. What is the business’s tolerance for production defects in this system?
  2. Can we easily rectify production issues with this system postmortem (after being alerted), or will it cause non-trivial damage to business operations and reputation?
  3. Are developers capable and willing to do on-call fixes to production systems? How much ongoing cost is there in training?

Once you’ve identified your tolerance for defects in production (and ability to fix them), you can better evaluate what preventative measures should live as real-time alerting, and what measures should live as pre-deployment tests.

Often, I find that using alerting to catch errors is an order of magnitude smaller time investment than developing comprehensive integration testing to catch the same defects. The downside is that resolving the alerts still requires developer time and manual effort.

In my opinion, alerting on production systems is more fundamental than automated testing, but both should play some role in designing a defect mitigation plan.

The point is: You can’t design a good testing plan without having an alerting plan.

dependencyManagement tag in Maven

One peculiar issue I’ve run into during development involves a parent pom.xml file and a child pom.xml file, where the child pom.xml imports the parent artifact as a dependency.

Now, when we make changes to dependencies in the parent pom.xml file, we don’t see them reflected when building the child pom.xml file; the child is still pulling in an unwanted dependency version from the parent.

What’s the issue?

Is the parent artifact being published to the local Maven repository correctly? Yes, it is; that’s not the issue.

The problem was that our child pom.xml had a <dependencyManagement> tag that was overriding the changes in the parent pom. Check out this Stack Overflow post to learn more.
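As a hypothetical illustration (the artifact and version here are invented for the example), a block like this in the child pom.xml pins a version for the whole dependency tree, silently winning over whatever the parent declares:

```xml
<!-- child pom.xml -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <!-- this version takes precedence over the parent's,
                 even after the parent is updated and re-published -->
            <version>3.9</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```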

Update 9/30/21: Better explanation can be found here.

The moral of the story: Be aware of what the <dependencyManagement> tag does when debugging Maven issues.

Mocking Static Singletons in Java (Mockito 3.4+, EasyMock, JMockit)

A common problem for Java developers who wish to get comprehensive unit test coverage is mocking singletons that are implemented using static method calls.

Let’s look at how we can mock singleton behavior using three common Java testing libraries: Mockito, EasyMock, and JMockit.

Part 1: Write Code to Test

To start with, we need a simple contrived example. In this example, we have a class ClassUnderTest that accesses a singleton Singleton and then, based on the return value, calls a method in DummyApiClient.

The issue is, we need to be able to mock the behavior of the static instance of Singleton in order to test ClassUnderTest.

public class ClassUnderTest {

    private DummyApiClient dummyApiClient;

    public ClassUnderTest(DummyApiClient dummyApiClient){
        this.dummyApiClient = dummyApiClient;
    }

    public void methodUnderTest(){
        if(Singleton.getSingleton().getBool()){
            dummyApiClient.doPositive();
        } else {
            dummyApiClient.doNegative();
        }
    }
}
import java.util.Random;

public class Singleton {
    static Singleton singleton;
    static {
        singleton = new Singleton();
    }

    public static Singleton getSingleton(){
        return singleton;
    }

    public Boolean getBool(){
        return new Random().nextBoolean();
    }
}
public class DummyApiClient {
    public void doPositive(){ System.out.println("positive"); }
    public void doNegative(){ System.out.println("negative"); }
}

Full Part 1 Code on Github

Part 2: Mockito (mockito-inline) 3.4+

Wow! It’s now possible to mock static methods with Mockito, without the additional dependency of PowerMock! Since version 3.4 of Mockito (PR), we can mock static methods using the mockStatic method. (examples)

First, let’s add the required dependencies to our pom.xml file. We need to use JUnit and mockito-inline (regular mockito-core will not work).

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>SingletonMockDemo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <maven.surefire.version>2.22.2</maven.surefire.version>
        <junit.version>5.6.2</junit.version>
        <mockito.version>3.4.0</mockito.version>
    </properties>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>${maven.surefire.version}</version>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-api</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-inline</artifactId>
            <version>${mockito.version}</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

</project>

Next, let’s write our test class, testing the behavior of both true and false from our singleton Singleton.

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.mockito.MockedStatic;

import static org.mockito.Mockito.*;

public class ClassUnderTestTest {

    DummyApiClient dummyApiClient;
    Singleton singletonMock;

    ClassUnderTest unit;

    @BeforeEach
    public void beforeEach(){
        dummyApiClient = mock(DummyApiClient.class);
        singletonMock = mock(Singleton.class);

        unit = new ClassUnderTest(dummyApiClient);
    }

    @Test
    public void testMethodUnderTestPositive(){
        when(singletonMock.getBool()).thenReturn(true);

        try (MockedStatic<Singleton> staticSingleton = mockStatic(Singleton.class)) {
            staticSingleton.when(Singleton::getSingleton).thenReturn(singletonMock);

            unit.methodUnderTest();

            staticSingleton.verify(Singleton::getSingleton);
            verify(singletonMock).getBool();
            verify(dummyApiClient).doPositive();
        }
    }

    @Test
    public void testMethodUnderTestNegative(){
        when(singletonMock.getBool()).thenReturn(false);

        try (MockedStatic<Singleton> staticSingleton = mockStatic(Singleton.class)) {
            staticSingleton.when(Singleton::getSingleton).thenReturn(singletonMock);

            unit.methodUnderTest();

            staticSingleton.verify(Singleton::getSingleton);
            verify(singletonMock).getBool();
            verify(dummyApiClient).doNegative();
        }
    }
}

Full Part 2 Code on GitHub

Part 3: EasyMock

As of the time of writing, EasyMock does not support mocking static methods without an additional library; in our case, we will use PowerMock to support mocking static methods.

First, let’s update our pom.xml to reflect the libraries we are using:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>SingletonMockDemo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <maven.surefire.version>2.22.2</maven.surefire.version>
        <powermock.version>2.0.2</powermock.version>
        <junit.version>4.13</junit.version>
        <easymock.version>4.2</easymock.version>
    </properties>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>${maven.surefire.version}</version>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.easymock</groupId>
            <artifactId>easymock</artifactId>
            <version>${easymock.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-module-junit4</artifactId>
            <version>${powermock.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.powermock</groupId>
            <artifactId>powermock-api-easymock</artifactId>
            <version>${powermock.version}</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

</project>

Next, let’s write our test class, ClassUnderTestTest.java

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.easymock.PowerMock;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import static org.easymock.EasyMock.expect;
import static org.powermock.api.easymock.PowerMock.mockStatic;

@RunWith(PowerMockRunner.class)
@PrepareForTest(Singleton.class)
public class ClassUnderTestTest {

    DummyApiClient dummyApiClient;
    Singleton singletonMock;


    ClassUnderTest unit;

    @Before
    public void beforeEach(){
        dummyApiClient = PowerMock.createMock(DummyApiClient.class);
        singletonMock = PowerMock.createMock(Singleton.class);
        mockStatic(Singleton.class);
        expect(Singleton.getSingleton()).andReturn(singletonMock);

        unit = new ClassUnderTest(dummyApiClient);
    }

    @Test
    public void testMethodUnderTestPositive(){
        expect(singletonMock.getBool()).andReturn(true);
        dummyApiClient.doPositive();
        PowerMock.expectLastCall();
        PowerMock.replayAll();

        unit.methodUnderTest();
    }

    @Test
    public void testMethodUnderTestNegative(){
        expect(singletonMock.getBool()).andReturn(false);
        dummyApiClient.doNegative();
        PowerMock.expectLastCall();
        PowerMock.replayAll();

        unit.methodUnderTest();
    }
}

Resetting Mocks: Note how mockStatic(Singleton.class); doesn’t have a corresponding reset call. This is because PowerMock resets mocks via the @PrepareForTest annotation. (Stack Overflow post)

Full Part 3 Code on GitHub

Part 4: JMockit

Again, we must first update our pom.xml file to pull in the required dependencies. You also need to update the Surefire configuration to pass the -javaagent initialization parameter.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>SingletonMockDemo</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
        <maven.surefire.version>2.22.2</maven.surefire.version>
        <junit.version>5.6.2</junit.version>
        <jmockit.version>1.49</jmockit.version>
    </properties>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>${maven.surefire.version}</version>
                <configuration>
                    <argLine>
                        -javaagent:${settings.localRepository}/org/jmockit/jmockit/${jmockit.version}/jmockit-${jmockit.version}.jar
                    </argLine>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-api</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.jmockit</groupId>
            <artifactId>jmockit</artifactId>
            <version>${jmockit.version}</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

</project>

Then, we need to write our test class, ClassUnderTestTest, this time using JMockit.

import mockit.Expectations;
import mockit.Mocked;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

public class ClassUnderTestTest {
    
    @Mocked
    DummyApiClient dummyApiClient;

    @Mocked
    Singleton singletonMock;

    ClassUnderTest unit;

    @BeforeEach
    public void beforeEach(){
        unit = new ClassUnderTest(dummyApiClient);
    }

    @Test
    public void testMethodUnderTestPositive(){
        new Expectations(){{
            singletonMock.getBool(); result = true;
            Singleton.getSingleton(); result = singletonMock;
            dummyApiClient.doPositive();
        }};

        unit.methodUnderTest();
    }


    @Test
    public void testMethodUnderTestNegative(){
        new Expectations(){{
            singletonMock.getBool(); result = false;
            Singleton.getSingleton(); result = singletonMock;
            dummyApiClient.doNegative();
        }};

        unit.methodUnderTest();
    }
}

Full Part 4 Code on GitHub

package.json scripts

When using npm (commonly with Node.js), we can pre-define command-line commands in the package.json file using the scripts option.

Use Case: Pre-Standardized Commands Per Organization

Description: Allow teams to standardize commands, enabling easy cross-project development and collaboration.

In our example, we will standardize a command extended-test that should be run by all developers before issuing a pull request.

Example package.json for mocha project
{
  "scripts": {
    "extended-test": "echo \"Doing Custom Stuff First\" && mocha"
  },
  "devDependencies": {
    "mocha": "^8.0.1"
  }
}
Example package.json for jest project
{
  "scripts": {
    "extended-test": "echo \"Doing Custom Stuff First\" && jest"
  },
  "devDependencies": {
    "jest": "^26.1.0"
  }
}
Example Invocation (common)
npm run extended-test

Benefit: Developers can abstract away specifics of processes that are common between multiple projects.

Note: In a more mature CI/CD environment, developers would not need to run these commands manually; they would be part of an automated process.

Use Case: Pre-Define Command Line Arguments for Each Environment

{
  "scripts": {
    "report-local": "node tps-report.js --db=\"jdbc:mysql://local-address\" --dry_run=false",
    "report-test": "node tps-report.js --db=\"jdbc:mysql://test-address\" --dry_run=false",
    "report-prod": "node tps-report.js --db=\"jdbc:mysql://prod-address\" --dry_run=true"
  },
  "dependencies": {
    "yargs": "^15.4.1"
  }
}
npm run report-local

This example package.json demonstrates use of the yargs package to pre-define command-line arguments to a script.

This helps prevent user error and allows invocation parameters to be reviewed as part of a pull request, rather than passed as tribal knowledge.