Todor Kolev

Setting testing standards and why it's important

Updated: Dec 5, 2022



As a software engineer, you most likely spend a fair amount of time writing tests (or at least you should).

In this article, I'll discuss some fundamental principles that I've found useful when developing and testing server-side code, but the general principles should be widely applicable.

Adding a single feature to your codebase might require multiple tests to ensure the use case is fully covered, thus giving us confidence in the newly added code. It's only logical, then, to treat your test codebase with the same level of care and diligence as your production codebase. What does that mean? Well, we often have certain coding and design standards when implementing features: for example, coding to an interface rather than to concrete implementations, delegating specific responsibilities to certain types of classes, and using certain naming conventions, e.g. we might name a class AccountRepository, where the "Repository" part tells us that this class is used for writing accounts to and retrieving them from a database.
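To make the naming idea concrete, here's a minimal sketch of what such a class might look like. The Account type and the method names are purely illustrative assumptions, not taken from a real project:

import java.util.Optional;

// Hypothetical domain class, reused in the examples further down.
record Account(String id, String ownerName) {}

// The "Repository" suffix signals that this class writes accounts to,
// and reads them from, a database.
interface AccountRepository {
    Account save(Account account);
    Optional<Account> findById(String accountId);
}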

Whatever your coding standards are, they add predictability to your projects and help you locate things quickly and easily. And since we often end up with more test code than production code, it's all the more important to optimise our tests for readability.

While nothing beats communication and pairing with people who have worked more extensively on a given project, I think it's still worth dropping a few high-level notes on what to expect from people who are about to contribute to our codebase (which includes adding tests). The first place I'd look for that type of guidance, as a contributor, is the project README file.

In general, the things I've found to work well are:


Name the test class after the production class it is testing and place it under the same directory too, e.g. if the class under test is src/accounts/AccountService, then I'd call the test class test/accounts/AccountServiceTest. Though this should be obvious, I've worked on many projects where it simply wasn't clear where a test should live. This is likely because TDD wasn't followed in the first place when the code was developed; if it had been, the devs would almost have been forced to name the test after the actual production class.


Don't strive too hard for reusability, i.e. don't hide your tests behind lots of high-level abstractions or over-generalise the setup. Some level of repetition should be tolerable, as your test code should be descriptive and spell out exactly what the use case under test is. If I'm reading a test and constantly have to jump back and forth between the test and wherever its setup is defined, I find that test hard to read. I shouldn't have to reconstruct the current state of the test in my head; it should be made obvious in the test itself.
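As a rough illustration of what I mean (JUnit 5 is assumed here, and AccountRequest, AccountRequestValidator and ValidationResult are hypothetical names), the setup lives inside the test rather than behind a generic helper:

import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.Test;

class AccountRequestValidatorTest {

    @Test
    void rejectsRequestWithMissingEmail() {
        // The setup is spelled out in the test itself rather than hidden
        // behind a shared "aDefaultRequest()" helper, so the reader can see
        // exactly which attribute drives the outcome.
        AccountRequest request = new AccountRequest("Jane Doe", null, "GBP");

        ValidationResult result = AccountRequestValidator.validate(request);

        assertFalse(result.isValid());
    }
}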


Prefer lower-level tests for testing edge cases and different permutations of input attributes.

Say we have the following components invoked when a request is made to an API endpoint to retrieve an account:

HttpRequest->AccountController->AccountService->AccountRepository->Database

Then, assuming the AccountService accepts a strongly-typed domain object, e.g. AccountRequest, we can create our AccountServiceTest to test the business logic of our service. This will most likely include tests around the validation of the various attributes in the AccountRequest and their values, as well as any other business domain logic that needs to be applied when the request is received. The goal is to narrow down the scope of the test and use mocks for any external components, in this case the AccountRepository. This avoids having to stand up a database and store the required accounts before we can test our AccountService, which makes the test faster, easier to read and easier to debug.
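A narrowly scoped test of that kind might look roughly like the sketch below, assuming JUnit 5 and Mockito, and hypothetical AccountService and AccountRequest signatures:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class AccountServiceTest {

    // The repository is mocked, so no database needs to be stood up.
    private final AccountRepository repository = mock(AccountRepository.class);
    private final AccountService service = new AccountService(repository);

    @Test
    void savesAccountForValidRequest() {
        when(repository.save(any(Account.class)))
                .thenReturn(new Account("acc-123", "Jane Doe"));

        Account created = service.createAccount(
                new AccountRequest("Jane Doe", "jane@example.com", "GBP"));

        assertEquals("acc-123", created.id());
        verify(repository).save(any(Account.class));
    }
}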


Use higher-level Integration Tests to ensure the correct concrete implementations are wired together.

Now that we have our lower-level tests in place, we need to ensure that, when our application is spun up in production, the AccountService is actually talking to the real database and not some sort of mock or in-memory version of the AccountRepository. As the only goal of this test is to prove the integration is in place, a single test exercising a simple round trip to the database and back should suffice. This type of test can also be implemented at the E2E level to prove that network routes and database access are correctly configured in an actual physical environment (not just locally). Again, the point is to keep these to a minimum and not use Integration Tests for testing use case scenarios and edge cases, which would make them slow and flaky. Rather, we use Integration Tests strategically to ensure that the various components of our service can and do talk to each other.
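If, for example, the service happened to be a Spring Boot application, that single round-trip check might look something like this minimal sketch (assuming @SpringBootTest wires the real AccountRepository against a test database; the details will vary by stack):

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class AccountRepositoryIntegrationTest {

    // The real implementation, wired by the framework and backed by a real
    // (test) database, with no mocks involved.
    @Autowired
    private AccountRepository repository;

    @Test
    void savesAndReadsBackAnAccount() {
        Account saved = repository.save(new Account(null, "Jane Doe"));

        // One simple round trip proves the wiring and configuration work.
        assertTrue(repository.findById(saved.id()).isPresent());
    }
}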


There's an added benefit to preferring narrowly scoped, low-level unit tests and only tactically leveraging Integration Tests: our domain logic tests are not tied to any concrete framework or implementation of either the upstream or downstream components. For example, we might decide to turn this into an event-driven service and, instead of processing REST API requests, start ingesting events from a Kafka topic. Ideally, we shouldn't need to touch any of the tests in the AccountServiceTest suite, as they are completely separate from the AccountControllerTest suite; the former is only concerned with the core domain logic, not with any framework-specific implementation details.
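To make that concrete, a hypothetical Kafka entry point (Spring Kafka is assumed here purely for illustration) would simply delegate to the same AccountService, leaving the AccountServiceTest suite untouched:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
class AccountEventListener {

    private final AccountService accountService;

    AccountEventListener(AccountService accountService) {
        this.accountService = accountService;
    }

    // Only the transport changes: events arrive from a topic instead of a
    // REST controller, but the domain logic (and its tests) stay the same.
    @KafkaListener(topics = "account-requests")
    void onAccountRequested(AccountRequest request) {
        accountService.createAccount(request);
    }
}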
