The Modernization Roadmap

This roadmap is intended as a handy guide to help you start modernizing your applications and processes. The roadmap has a linear flow for ease of navigation; however, many of the tools and techniques on the roadmap can be adopted in any order. The one exception is source control: I have generally found that you can’t go about modernizing much in your application without first getting your code under control.

Intended Audience

Do you find yourself thinking “It feels like my organization/application is lagging behind, how can I modernize?” Well good news—you’re not alone. In 2018, a DigitalOcean report found that fewer than 60% of developers work in organizations using Continuous Integration and fewer than half work in organizations that use Continuous Delivery. These are tools that can be critical to modernizing your applications and workflows, yet a large swath of organizations don’t use them.

This roadmap is aimed at engineering managers or leads, but it can also be used at the individual contributor level to surface strategy shifts in your organization.

The Roadmap

The following represents an idealized roadmap for modernizing.

[Figure: the modernization roadmap]

With the individual components being:

  1. Source control
  2. Automated testing
  3. Continuous integration
  4. Continuous delivery & deployment
  5. Telemetry / monitoring
  6. The cloud
  7. Microservices
  8. Containerization
  9. Feature flagging
  10. Failing faster

Navigating the Roadmap

You may be doing some of the items on the roadmap or maybe you’re doing none of the items—and that’s okay. The point is to walk sequentially through the components of the roadmap and, for each component, answer the following questions: (a) what is this tool or technique? (b) am I already doing this? (c) if not, how can I implement this tool or technique in my organization?

In this roadmap document, I will go over what each component in the roadmap is, give some insight into its benefits, and then link to some more detailed implementation information and considerations. This is a living document and will be revised when new information becomes available.

Updates

To receive updates, come back here periodically or, better yet, subscribe to the Modernizing.dev mailing list!

The Components

Let’s dive in!

It’s very possible you already do some of the items on this list, and that’s awesome. If so, simply move on to the next section and see if there are practices that you haven’t implemented yet.

1. Source Control

Source control is one of the major enablers to modernization. As you will see in future sections, other modernization technologies tend to hook into your source control system in order to function.

What is source control?

Simply put, source control is how you track your codebase and manage changes to it. Source control is also referred to as version control, and can be thought of as a versioning system for your code. Source control offers many benefits. It facilitates collaboration between developers, helps resolve conflicting code changes between developers, and allows for quickly reverting code to a previous version.

The most ubiquitous source control solution these days is Git. With Git, you generally have one main, stable branch. When developing features (or bug fixes), you create a new branch, work on that branch for a certain amount of time, and then, when the feature is complete, merge the branch back into the main branch.

[Figure: Git branching]

Generally, you use a hosting service like GitHub to host your Git project. This allows you to push your code to the remote repository on the host, have multiple developers pull down the code from that repository, and work together in near real time. A cloud Git host will also enable other technologies (e.g., continuous integration) by allowing that tech to detect changes in your codebase and take certain actions—but more on that later!

What does a Git workflow look like?

There are many different possible workflows with Git, but what can be helpful is to envision what a day in the life of a Git-enabled developer might look like.

  • 9:00am - Arrive at work, pull down the latest version of the main branch from GitHub.
  • 9:01am - Grab a new task to add a “New Deal” alert to the webpage. Create a new branch called new-deal-alert.
  • 10:00am - Complete the “New Deal” alert feature. Push the new-deal-alert branch up to GitHub.
  • 10:05am - Through the GitHub interface, request to merge the new-deal-alert code into the main branch. GitHub and other Git hosts facilitate developer reviews of other developers’ code!
  • 12:00pm - Finish code review, merge the new-deal-alert into the main branch and delete the new-deal-alert branch.
  • 12:01pm - Head to lunch.
  • 1:00pm - Grab a new task and repeat!

Note that each developer on the team can follow the same general workflow and generally have the most up-to-date code. If any two developers end up working on the same piece of code at the same time, Git will help facilitate the adjudication of conflicts.
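The day-in-the-life workflow above maps onto a handful of Git commands. Here’s a self-contained sketch that plays out the same flow in a throwaway local repository (the local repository stands in for the GitHub remote, and the file names are illustrative):

```shell
# A local stand-in for the workflow above: everything happens on disk,
# so the push/pull steps against GitHub are omitted.
set -e
rm -rf /tmp/demo-repo && mkdir /tmp/demo-repo && cd /tmp/demo-repo
git init -q -b main
git config user.email "dev@example.com"
git config user.name "Dev"

echo "home page" > index.html
git add . && git commit -qm "Initial commit"

# 9:01am - create a feature branch for the new task
git checkout -qb new-deal-alert
echo "new deal alert" > alert.js
git add . && git commit -qm "Add New Deal alert"

# 12:00pm - after code review, merge the feature back into main
git checkout -q main
git merge -q new-deal-alert
git branch -d new-deal-alert
git log --oneline
```

In a real team setting, the merge step would typically happen through a pull request on your Git host rather than a local `git merge`.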

How do I implement Git?

Git itself is open source and can be installed immediately for free! More challenging is deciding where to host your code (GitHub and Bitbucket are both great choices) and, even more challenging than that, convincing any source control-averse personnel to buy into using source control.

Learn more about source control

There’s a ton more to learn about source control! Check out some of these posts to learn more.

2. Automated Testing

Automated testing is critical to moving fast with confidence. If you’re not using automated testing, you’re likely doing formal (or informal) testing sessions that involve users, or dedicated testing personnel, running through scripts to prove that new (and existing) functionality works. This tends to cause you to batch a lot of functionality together before holding testing sessions, which in turn results in deploying your application infrequently.

What is automated testing?

Automated testing is generally a script or series of scripts you can run that will execute various tests on your application on demand. As you add new features to the application, you write new tests to make sure that feature works. You continuously run your application through these automated tests to verify new and existing functionality works. If you have good test coverage, you can be fairly confident you haven’t broken anything.

What does automated testing generally look like?

I generally think of three categories of automated testing: unit tests, integration tests, and end-to-end tests.

Unit tests

Unit tests exercise the functionality of an individual module or function. These are the lowest level of testing. Unit tests allow you to specify the exact output for a function given a certain input. If you have to refactor that function in the future, you can be confident that it still works correctly because its associated unit tests are still passing.

Let’s say our application has a “Sign Up” feature. It allows users to sign up for an account on our application using an email address and a password. Within this feature, we have a function called validateSignup that will basically make sure that the provided email address is valid and that the password meets our password strength criteria.

[Figure: unit test diagram]

Our unit test would basically import this validateSignup function and make sure that it returns the correct result for a number of different scenarios. Here is some pseudocode that shows what some automated tests surrounding this function might look like:

import validateSignup

describe "validateSignup"
  it "returns invalid if the email address is missing"
    expect validateSignup("", "password123") to equal false
  it "returns invalid if the password is missing"
    expect validateSignup("me@example.com", "") to equal false
  it "returns invalid if the email address is a bad format"
    expect validateSignup("notanemail.com", "password123") to equal false
  it "returns true if the email and password meet requirements"
    expect validateSignup("me@example.com", "password123") to equal true

There are certainly more tests we could write! At any rate, once we feel the function is well tested and all our tests are passing, we can be fairly confident that this function will give the right output for various inputs. Furthermore, if we ever refactor this function in the future, it will still be tested with these same automated tests, so we can be confident that the function will continue to work correctly. Our automated test runner will let us know in short order if we mess it up.

Integration tests

Integration tests will test how well two or more units of code work together. For example, if you pass some new user information to a registration handler, is the user successfully registered? This type of testing makes sure that, not only do the individual units work well on their own, but they also work well together.

Extending our user registration example, perhaps we have several different functions in the user registration process: validateSignup, checkDuplicates, and registerUser. Our integration test might provide an email address and password to the application and verify that, at the end of all of these functions, when integrated together, we either end up with a registered user or an error explaining what happened.

[Figure: integration test diagram]

We can write some pseudocode to see what this might look like. In our example, the UserController has a register method that contains all our registration logic that we’re testing.

import UserController

describe "UserController"
  it "returns an error if the email address is bad"
    userController = new UserController
    response = userController.register("bademail", "password123")
    expect response to raise EmailError
  it "returns an error if the password is bad"
    userController = new UserController
    response = userController.register("me@example.com", "")
    expect response to raise PasswordError
  it "returns an error if the email address is a duplicate"
    userController = new UserController
    response = userController.register("duplicate@example.com", "password123")
    expect response to raise DuplicateError
  it "returns success if user information is okay"
    userController = new UserController
    response = userController.register("me@example.com", "password123")
    expect response to be Created

In this code, we confirm that all our units are playing well together: an email address/password combination is passed through the different functions and, in the end, we either have failures or successes depending on what we expect. As with unit tests, this gives us the assurance to refactor the logic in any of these functions and still be confident that they all play nicely together.

Integration tests will additionally prevent us from unknowingly breaking the interface of one of our functions. If we refactor a function and change its interface, we could end up with passing unit tests; but if other code that interfaces with that function isn’t updated, we’ll see automated integration tests failing, which is exactly what we want!

End-to-end tests

End-to-end tests exercise the entire flow of an application from one end to the other. This is the most similar to how a user might interact with your system, but they can be challenging to write since you can’t really do so until the entire thread you’re testing is in working order.

Extending our example, you could simulate a user visiting your application, navigating to the Sign Up page, and entering their information, then confirm that their information is saved in the database and a confirmation email is triggered.

[Figure: end-to-end test diagram]

Our pseudocode might look a bit different this time, as we could be starting the full application and having our test software actually click through the app.

start application

describe "user registration"
  it "registers a new user"
    navigate to application
    click on sign up link
    enter "me@example.com" in email address box
    enter "password123" in password box
    click submit button
    expect success page to display
    expect confirmation email to be sent
    expect user to be registered in database

We can see that we’re actually testing a path through our application with this code! This is basically the next level of integration test—by testing a line through the app, we test various functionality of the pieces that make up this workflow. If we get enough of these end-to-end tests running, we can feel confident that we can refactor the underlying code without messing up the users’ experience.

How do I implement automated testing?

This is where things can get tricky. If you’re starting a greenfield project (i.e., starting anew), it can be relatively simple to choose test frameworks for unit, integration, and end-to-end testing up front and to establish norms around how much automated testing is expected. You can use code coverage tools that report what percentage of the codebase is covered by tests. Later on, when we get to Continuous Integration, you can actually prevent code from being included in your production app unless the app meets a certain code coverage threshold (e.g., “at least 95% of the code is covered by tests”).
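For example, if your project happens to use Jest, a coverage gate like the 95% example above can be expressed directly in the test configuration (the threshold value here is illustrative):

```javascript
// jest.config.js - the test run fails if line coverage drops below 95%
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 95,
    },
  },
};
```

Other test frameworks and coverage tools (e.g., coverage gates in your CI service) offer similar threshold settings.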

But what if you have a legacy code base and there is no automated testing of which to speak? Furthermore, what if the code base is confusing and poorly documented, and it’s not even clear what the individual “units” are?

My recommendation in this case is to start with end-to-end testing. Do your best to document as many of the major paths/user stories through your application as you can. Then, choose an end-to-end testing tool that aligns with your software stack (e.g., Selenium might be a good fit for a Java Spring Boot project whereas Cypress might be a good fit for a JavaScript project). Finally, start creating end-to-end tests that capture those paths. The more paths you capture, the more confident you’ll be in future refactors. When you do actually go to refactor a smaller part of your application, sufficient end-to-end coverage will keep you confident that you haven’t broken the user’s experience. As you do those refactors, go ahead and start adding in unit and integration tests.

Test-Driven Development (TDD)

TDD is a great practice in which you write tests first, asserting what your software should do before you write the software itself. These tests start out failing. As you write your software, your tests start passing one by one. Once your tests are passing, you can refactor your software as needed and continue to confirm that your tests are passing.

Learn more about automated testing

Having solid automated testing practices is a significant reason high-performing dev shops can move fast. Check out some of these posts to learn more about automated testing!

3. Continuous integration

Under development - check back later!

4. Continuous delivery & deployment

Under development - check back later!

5. Telemetry / monitoring

Under development - check back later!

6. The cloud

Under development - check back later!

7. Microservices

Under development - check back later!

8. Containerization

Under development - check back later!

9. Feature flagging

Under development - check back later!

10. Failing faster

Under development - check back later!