3 Keys to Enhancing Brownfield Applications
Reengineering an existing brownfield application can save effort, time, and money over rewriting from scratch. However, updating legacy code requires a different approach than new development. The developers are not free to create any structure they want; they must morph the existing code into the required shape in order to keep the system functional throughout the process.
There are 3 key things that must be done differently when reengineering brownfield applications.
Improve the Current Code Base without Logic Changes
The road to better code can be bumpy, but there are things you can do to the current application to help smooth things out. These are changes that make reengineering easier, but have no effect on the current logic of the application. Since they don’t change logic they can be pushed into production with a minimum of QA effort, and at any time (thus fitting nicely into any release schedule).
Change Variables to Properties
Legacy applications typically used variables for storing references to the objects that were needed to implement the business logic. This pattern was fine to use at the time, but current design patterns require more power than a variable can provide. To access this power we must convert the variables to properties.
Properties have the same characteristics as variables and can be used in the same way. The difference is that the compiler implicitly adds “getter” and “setter” methods to properties for reading and writing the values, and these methods can be overridden in normal code to add additional functionality as needed. These methods are used extensively in modern design patterns, so converting to properties is an important step. Fortunately, the syntactical difference between the two is trivial, making this conversion easy to do.
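The conversion can be sketched in Python, where the `@property` decorator plays the role of the compiler-generated getter and setter. The class and attribute names below are hypothetical, chosen only for illustration; the point is that calling code reads and writes the property exactly as it did the old variable.

```python
class Order:
    def __init__(self):
        # Before: a plain variable, read and written directly.
        self.total = 0.0


class AuditedOrder:
    def __init__(self):
        self._total = 0.0  # private backing field for the property

    @property
    def total(self):
        # Getter: same read syntax as the old variable.
        return self._total

    @total.setter
    def total(self, value):
        # Setter: extra behavior (validation, logging, lazy creation)
        # can be added here later without touching any calling code.
        self._total = value


order = AuditedOrder()
order.total = 19.99  # calling code is unchanged by the conversion
```

Because the access syntax is identical, the swap from `Order` to `AuditedOrder` is invisible to every caller, which is what makes this change safe to ship at any time.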
This change should have absolutely no impact on the business logic of the application, so it can be implemented with minimal QA on the affected code. It is also a change that can be made just in the area being updated, not throughout the application. If a delivery date approaches while the effort to convert variables to properties is still in progress, the application can be pushed live without a problem.
Convert to Interfaces
In applications built before abstraction patterns became common practice, business logic typically created the objects it needed with a hard reference to a specific concrete class. We now know that this creates fragile code that is difficult to test.
The way to reengineer for abstraction is to introduce interfaces to the existing objects. By definition, an interface has no logic embedded within – it is strictly a list of properties and method signatures that the underlying object must provide in order to implement the interface. The power of an interface is that it allows the use of any object that has the appropriate methods, not just the production code. We will later see how we can take advantage of interfaces to begin building automated tests for the legacy code.
We reengineer to interfaces by first creating the new interface for the object. The interface should contain all the public methods and properties in the object you are working with, thus making it a perfect substitute for the existing code that is using the object.
Once the interface is in place, all references to the original object can be replaced with references to the interface. This change, like changing variables to properties, has zero effect on the existing code and will require very little testing from QA. However, it is a critical and significant step towards making your application testable and more easily updated.
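As a minimal sketch, Python's `abc` module can stand in for a formal interface. The names (`IEmailSender`, `SmtpEmailSender`, `notify`) are hypothetical; the essential point is that the client function refers only to the interface, never to the concrete class.

```python
from abc import ABC, abstractmethod


class IEmailSender(ABC):
    """Interface: method signatures only, no embedded logic."""

    @abstractmethod
    def send(self, to: str, body: str) -> bool: ...


class SmtpEmailSender(IEmailSender):
    """Existing production class, now declared to implement the interface."""

    def send(self, to: str, body: str) -> bool:
        # The real SMTP work would happen here.
        return True


def notify(sender: IEmailSender, user: str) -> bool:
    # Client code depends only on the interface, so any conforming
    # object (production or test double) can be passed in.
    return sender.send(user, "Your order has shipped.")
```

Since `SmtpEmailSender` already had these methods, declaring the interface changes no behavior; it only opens the door to substituting other implementations later.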
Though you can convert one or many objects to interfaces, it is best to do only one at a time. This will further localize the changes and further diminish the QA effort required. It also speeds the process of introducing unit testing by isolating a single object.
Make Incremental, Low Impact Changes
Once you begin making changes to business logic, it is best to enhance the application incrementally to minimize the potential for introducing defects. By focusing on small areas (preferably areas not involved in any new feature development) the reengineering effort can progress without any effect on other development.
The changes in this section are designed to require less QA than is needed for new development. By designing our updates the proper way, we can make it more likely that defects will appear at build time or application load time instead of runtime. This minimizes the subtle defects – the ones that are difficult to detect and fix. The goal is to make changes so that if the change introduces a bug, that bug is as obvious as possible so it is found and addressed in early stage testing.
Introduce Inversion of Control Container
With the introduction of interfaces, the code is using abstractions to refer to objects. However, the code is still tightly coupled because it still specifically creates the modules it needs. This tight coupling can be fixed by introducing an Inversion of Control Container (IoC).
An IoC container is a class that centralizes the creation of objects, much like the factory pattern. The container also provides injection services, though, so it goes above and beyond the requirements for a factory pattern. Though easy to use, IoC containers are complicated to build, so it is best to adopt one of the many quality open source packages available.
Implementing an IoC container requires the developer to create a mapping between the interface (that we have recently introduced) and the concrete class. This mapping allows client code to ask the container for an object implementing a given interface, and the container creates and returns the mapped concrete class. Though this does not remove our tight coupling to the concrete class, it does localize it to a specific class – the container.
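The mapping mechanism can be sketched in a few lines of Python. This is a toy container built only to show the idea, not a substitute for a real open source package; the class names are hypothetical.

```python
class Container:
    """Minimal IoC container: maps an interface to a concrete class."""

    def __init__(self):
        self._mappings = {}

    def register(self, interface, concrete):
        # Registration stores the class itself, not an instance,
        # so nothing is constructed at startup.
        self._mappings[interface] = concrete

    def resolve(self, interface):
        # The concrete class is instantiated only when asked for.
        return self._mappings[interface]()


class IRepository: ...
class SqlRepository(IRepository): ...


container = Container()
container.register(IRepository, SqlRepository)
repo = container.resolve(IRepository)  # an SqlRepository instance
```

The tight coupling to `SqlRepository` still exists, but it now lives in exactly one place: the registration call.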
IoC containers are typically created at application start, again resulting in an easier task for QA since any catastrophic errors in creating the container will be found as soon as the application is run.
A common concern is that introducing the container will add significant time to application startup. It will not, because container initialization does not create any objects; it only records the mapping between each interface and its concrete class. The target classes are not created until they are first used in normal business logic, and during startup the container is not yet being called by any code in the system, so there is no opportunity to add to startup time.
Service Locator, not Dependency Injection
We now come to a strategy that diverges from common practice. In order to use the new IoC container there are two patterns that can be implemented: Dependency Injection or Service Location. Many developers will argue for Dependency Injection since it can completely remove all references to a concrete class. When reengineering brownfield applications, though, implementing pure dependency injection is very difficult.
The important difference between these two approaches is that Service Location requires a single object (the Service Locator) to be globally accessible so all business logic can use it to resolve the objects it needs. Dependency Injection, on the other hand, does not need a globally accessible class.
The limiting factor with Dependency Injection is the requirement that all objects in the application are created via the IoC container. Our incremental change approach makes this sweeping change very undesirable since it results in high volumes of unfocused testing to ensure no defects have been introduced. In short, a large effort is required upfront to implement Dependency Injection, then a large effort by QA to test the entire application.
Service Location allows a much more focused approach to the conversion, and does not rule out a conversion to Dependency Injection in the future when the new infrastructure is fully implemented.
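A Service Locator can be sketched as a globally accessible wrapper over the same mapping idea. Again the names (`ServiceLocator`, `ILogger`, `FileLogger`) are hypothetical illustrations, assuming Python:

```python
class ServiceLocator:
    """Globally accessible registry; legacy business logic anywhere
    in the application can ask it for an object by interface."""

    _mappings = {}

    @classmethod
    def register(cls, interface, concrete):
        cls._mappings[interface] = concrete

    @classmethod
    def get(cls, interface):
        return cls._mappings[interface]()


class ILogger: ...


class FileLogger(ILogger):
    def write(self, msg):
        return f"logged: {msg}"


# At application start:
ServiceLocator.register(ILogger, FileLogger)

# Deep inside legacy business logic, with no constructor changes:
logger = ServiceLocator.get(ILogger)
```

This is why Service Location suits incremental conversion: each call site can be switched to `ServiceLocator.get(...)` one at a time, while Dependency Injection would require changing every constructor in the object graph at once.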
Refactor Object Creation using OnDemand Properties
Once the variables have been converted to properties and the Service Locator is in place, the process of cleaning up the object-creation code that grew out of using plain variables can begin.
Any objects that are used in the application must be created at some point. In brownfield applications this was often done at the point of use, resulting in references to objects being found throughout the code. This design creates difficulty in refactoring if the constructor of the object ever changes, makes debugging difficult when a variable is mistakenly used before it is initialized, and can result in diminished performance if the same object is unnecessarily created multiple times.
We can address these problems by using the getter methods on the recently introduced properties. By modifying the getter method to create the required object when necessary, we can localize object creation resulting in easier refactoring of the constructor. This also provides a single place where the object is created, allowing easier debugging should creation go awry. Furthermore, the business logic that uses this property no longer needs to concern itself with whether the object has already been created, again resulting in fewer chances for defects. Finally, the object is guaranteed to be created a single time, thus potentially reducing the execution time of the application.
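An OnDemand property can be sketched as a getter that creates its object lazily, on first access. The names below are hypothetical, assuming Python:

```python
class PdfExporter:
    def export(self, data):
        return f"pdf:{data}"


class ReportService:
    def __init__(self):
        self._exporter = None  # nothing created yet

    @property
    def exporter(self):
        # OnDemand: the object is created on first access,
        # in exactly one place, and exactly once.
        if self._exporter is None:
            self._exporter = PdfExporter()
        return self._exporter
```

Business logic simply reads `self.exporter`; it never needs to check whether the object exists, and a breakpoint in the getter catches every creation.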
Introducing OnDemand properties is not as easy as the previous changes. This pattern opens the door for some defects in the way the object is created and used. As a rule of thumb, however, the QA effort does not need to be quite as time consuming as with newly created code. If something goes wrong with the introduction of OnDemand properties, the application will fail in a very noticeable way. In other words, the chances are slim of introducing a subtle defect that is difficult to find.
The previous efforts have updated the infrastructure to make the application more loosely coupled, but no business logic has changed. Though the application is more structurally sound, we have not been able to take advantage of this new infrastructure yet. By introducing automated testing, we will take a step towards an application of much higher quality.
Add Automated Testing
Implementing interfaces allows the developer to replace one class with another, as long as the new class has the appropriate methods and signatures. Adding the IoC container and Service Locator adds a single mechanism for creating objects in the application. Combining these two ideas results in the ability to add automated testing to the legacy application.
Imagine you have a class that needs to send an email to your users. The class will use the Service Locator to create an instance of the object responsible for sending emails (usually encapsulated in another class). By removing the tight coupling of these objects, we can now create a special purpose IoC container meant only for testing that creates a dummy email sender object instead of the real object. This dummy email sender can be coded so that it returns either a success code or any kind of error code we want to test. In this way we can test our main object without needing to have a real email server.
This is where the power of our new infrastructure comes into play. We can now create tests for our legacy application that enforce our business rules, and have those tests run automatically as often as we feel necessary (preferably on every check-in).
Add Data Access Layer
One of the more difficult and time-consuming pieces to add is the Data Access Layer. Many legacy applications were written with data access being done within the classes as necessary. Each method that needed data would query the database and process the results itself. Unfortunately for our testing effort, this means that testing any method with data access buried inside it requires an active database, probably already populated with test data. This is difficult to maintain and makes the application very fragile.
A Data Access Layer is implemented as a separate class whose only responsibility is querying the database and returning the results. Converting to a DAL can often be done by simply cutting and pasting the query code into the new class, and replacing that code with a call to the newly created method in the DAL class. However, this task can vary dramatically between applications, so it is difficult to give general recommendations on how to go about the conversion.
When converting to a DAL, the same rules should be applied. Keep the conversion very localized, convert only small parts at a time, and always use an interface for access.
Once the DAL is in place, testing of the application can be broadly expanded because the ability to return any kind of data for testing purposes is enabled. There is no longer a need to keep various databases with test data specifically tailored for certain conditions. Special purpose DALs can be created that return no rows, a single row, a thousand rows, or throw an exception, all without requiring any server support.
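Those special purpose DALs can be sketched as follows, assuming Python; the interface and class names are hypothetical. Each fake returns a scripted result, so every data condition can be tested without a database.

```python
class IUserDal:
    def fetch_users(self):
        raise NotImplementedError


class EmptyDal(IUserDal):
    def fetch_users(self):
        return []  # simulates a query with no rows


class SingleRowDal(IUserDal):
    def fetch_users(self):
        return [{"id": 1, "name": "Ada"}]  # simulates one row


class FailingDal(IUserDal):
    def fetch_users(self):
        raise ConnectionError("db down")  # simulates a server failure


def count_users(dal):
    # Business logic under test; it only sees the DAL interface.
    try:
        return len(dal.fetch_users())
    except ConnectionError:
        return -1
```

Swapping in `EmptyDal`, `SingleRowDal`, or `FailingDal` via the testing container exercises the no-data, normal, and error paths of `count_users` without any server support.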
When updating an existing legacy application, reengineering the existing application instead of rewriting from scratch can save time and money. The necessary changes can be implemented, and the application can remain available, if developers follow a few rules and plan to improve the code incrementally. Trying to change large pieces of code at the same time can result in missed deadlines and difficulties in integration with other systems.