The key pillars of legacy migration and digital transformation
04 January 2022 • 6 min read
Migrating legacy applications can feel like a daunting challenge to many organisations. Alternatively, it may simply seem unnecessary or indulgent - if things are working okay as they are, why change things?
The reality is that fear or plain indifference could be a huge barrier to business growth and agility. Those organisations that have successfully moved from a monolithic architecture towards a modern cloud-based composable microservices one feel the benefits in a huge number of ways.
Part of the problem with any conversation regarding migration is that it can feel so vague and difficult to articulate and plan. It can also seem overwhelming and impossible to break down into logical components and services. Let’s first, then, outline what good looks like from a migration perspective. Of course, I should preface everything I’m about to say by making the point that migrating legacy systems will look different for every organisation - each one has a unique set of challenges, ways of working, and legacy processes that will dictate where they’re trying to get to.
This list below, then, is a more generic articulation of what good modernisation and best practice looks like, based on industry changes over the last five years or so.
Cloud as the foundation
Cloud concepts are not particularly new - but they have evolved, and continue to evolve, at an ever-increasing pace. The cloud's importance in modern architecture can't be overstated: it acts as the foundation for a new way of designing, building and maintaining applications and services. While there's some debate about just how cost-effective it is, when managed efficiently it will not only save you money, but will ultimately help you to power growth in ways you didn't think possible before.
There are some fantastic public cloud offerings, and this is not the right forum to discuss the merits of AWS, Azure, GCP et al, or your platform strategy. What I would say is that it is important to have a clear strategy: select a single platform, adopt a hybrid approach, or opt for a multi-cloud architecture.
There is no right or wrong answer, but the architecture, tools and methods need to be consistent with the approach, whether that's hybrid (cloud and on-premise / managed data centre), multi-cloud or a single consolidated cloud platform.
DevOps pipelines for rapid, reliable software delivery
Although DevOps doesn’t need cloud, the two are inextricably linked. This is because developing on cloud inevitably makes it easier to move from the process of development to deployment. Indeed, most of the leading cloud vendors enable DevOps working with a range of features such as continuous integration and continuous delivery, as well as infrastructure automation. Rigorous and robust DevOps (or DevSecOps) engineering principles and execution are the key foundations for the speed and quality associated with modern application development.
Finding the right balance between releasing new features and making sure that they are reliable for users is critical. As systems move into live operation, an SRE (Site Reliability Engineering) approach can be adopted to automate operational tasks and resolve problems.
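The pipeline idea at the heart of DevOps can be sketched in a few lines: ordered, fail-fast stages from build through to deployment, with a security check folded in (the DevSecOps angle mentioned above). The stage names and checks here are illustrative stand-ins, not a real project's pipeline.

```python
"""Minimal sketch of a CI/CD pipeline as sequential, fail-fast stages."""


def build() -> bool:
    # Compile / package the application; return False on failure.
    print("building artefact...")
    return True


def run_tests() -> bool:
    # Run the automated test suite against the built artefact.
    print("running tests...")
    return True


def security_scan() -> bool:
    # DevSecOps: scan dependencies and images before release.
    print("scanning for vulnerabilities...")
    return True


def deploy() -> bool:
    # Roll the artefact out to the target environment.
    print("deploying...")
    return True


def run_pipeline(stages) -> bool:
    """Execute stages in order; stop at the first failure."""
    for stage in stages:
        if not stage():
            print(f"pipeline failed at: {stage.__name__}")
            return False
    return True


if __name__ == "__main__":
    ok = run_pipeline([build, run_tests, security_scan, deploy])
    print("pipeline succeeded" if ok else "pipeline failed")
```

Real pipelines are, of course, declared in a CI tool rather than hand-rolled, but the principle is the same: every change flows through the same automated gates before it can reach production.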
Containers and microservices
Containers have been one of the most important trends in software development of the last decade. They naturally align with DevOps processes insofar as they allow software to be developed and deployed in a way that is modular and more loosely coupled. In turn, this has helped make microservices a dominant architectural model; where an application is composed of many specific, self-contained and separate ‘services’, rather than being one huge intersecting monolith.
This has profound implications for how we think about business more broadly, as it means teams can concentrate on things like product management, features and optimisations in a more targeted way - something that is very difficult in large, heavyweight legacy systems.
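To make the microservices idea concrete, here is a self-contained sketch of one small service owning a single capability (product lookup), using only the Python standard library. The data, route and port are arbitrary illustrative choices; a real service would sit behind proper routing, persistence and deployment tooling.

```python
"""A minimal 'microservice' sketch: one small HTTP service, one capability."""
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# The service owns its own data - no shared monolithic database.
PRODUCTS = {"1": {"name": "widget", "price": 9.99}}


class ProductHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route shape: /products/<id>
        product_id = self.path.rsplit("/", 1)[-1]
        product = PRODUCTS.get(product_id)
        body = json.dumps(product or {"error": "not found"}).encode()
        self.send_response(200 if product else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo


def start_service(port=8901):
    """Start the service on a background thread and return the server."""
    server = HTTPServer(("127.0.0.1", port), ProductHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


if __name__ == "__main__":
    server = start_service()
    with urllib.request.urlopen("http://127.0.0.1:8901/products/1") as resp:
        print(json.loads(resp.read()))  # {'name': 'widget', 'price': 9.99}
    server.shutdown()
```

The point is the boundary: the service exposes a narrow HTTP contract and hides everything else, so it can be developed, deployed and scaled independently of its consumers.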
Event-driven architecture
Event-driven architecture hasn’t become a headline in the way that, say, cloud or DevOps has. However, as a way of thinking about your architecture in an age of microservices, it’s crucial. Where monolithic legacy architectures were typically built as procedural code or request-driven (i.e. someone requests a transaction or a piece of information and the system responds accordingly), an event-driven model treats interactions with the software architecture as things that happen in real time and asynchronously.
Typical request / response patterns imply a wait between the request being made and the response being received in a synchronous (blocking) pattern, whilst events are sent without waiting for a response, and are consumed by receiving applications when and how required. This naturally aligns with microservices and the loosely coupled nature of the components and services that make up this architectural approach.
It allows components to be designed, developed, deployed and operated independently, which includes changes and scalability.
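The contrast with request/response can be shown with a toy in-process event bus: publishers emit events without waiting for, or even knowing about, their consumers. The topic names and payloads are invented for illustration; in production this role is played by a broker such as Kafka or a cloud messaging service, which also makes delivery genuinely asynchronous.

```python
"""Sketch of publish/subscribe: producers and consumers are decoupled."""
from collections import defaultdict


class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # A component registers interest in a topic; it knows nothing
        # about who publishes to it (loose coupling).
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The publisher fires and forgets - no response is awaited.
        for handler in self._subscribers[topic]:
            handler(event)


audit_log = []
bus = EventBus()

# Two independent consumers of the same event, added without the
# publisher changing at all.
bus.subscribe("order.placed", lambda e: audit_log.append(e))
bus.subscribe("order.placed",
              lambda e: print(f"emailing confirmation for order {e['order_id']}"))

bus.publish("order.placed", {"order_id": 42, "total": 19.99})
```

Adding a third consumer - say, a stock-level service - is just another `subscribe` call; nothing upstream needs to change, which is exactly the independence the paragraph above describes.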
Automation and infrastructure-as-code (IaC)
Moving towards an event-based microservices architecture can add some additional complexity, even if it removes it in other domains.
This is where infrastructure automation comes in. By treating your entire architecture and infrastructure as code (Everything as Code) - something that can be developed, changed and automated - it frees you from the limitations and restrictions of monolithic legacy systems. For example, one of the most significant constraints associated with legacy / monolithic applications is test environments. In a cloud environment where everything has been developed as code, new environments can be created, together with the application, as required - made available immediately and removed once testing is complete.
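The ephemeral-test-environment idea can be sketched as a declarative spec plus a create/destroy lifecycle. The resource names and "provisioning" here are purely illustrative prints; real IaC tools such as Terraform, CloudFormation or Pulumi apply the same pattern against actual cloud APIs.

```python
"""Sketch of 'everything as code': an environment as a disposable artefact."""
import contextlib

# Declarative description of a test environment, kept in version control
# alongside the application code. All values are illustrative.
TEST_ENV_SPEC = {
    "network": {"cidr": "10.0.0.0/24"},
    "database": {"engine": "postgres", "size": "small"},
    "app": {"replicas": 2},
}


@contextlib.contextmanager
def environment(spec):
    """Provision every resource in the spec, yield, then tear it all down."""
    provisioned = []
    for name, config in spec.items():
        print(f"creating {name}: {config}")  # stand-in for a real API call
        provisioned.append(name)
    try:
        yield provisioned
    finally:
        # The environment only exists for as long as it is needed.
        for name in reversed(provisioned):
            print(f"destroying {name}")


with environment(TEST_ENV_SPEC) as resources:
    print(f"running test suite against {len(resources)} resources")
```

Because the environment is just code, it is reproducible on demand and costs nothing when it isn't running - the opposite of the long-lived, contended test environments typical of monoliths.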
APIs
Finally, at the heart of this modern approach to system infrastructure and architecture are APIs, offering a standard mechanism for exposing and consuming services from an ever-expanding ecosystem. APIs provide a mechanism for connecting and integrating services and data within an organisation, but also facilitate greater integration with external systems and sources of data.
By using APIs you may be able to power your products and services by orchestrating services and accessing richer data, ultimately delivering more value - whether to your customers or internally.
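Orchestration in this sense is a thin composition layer over independent APIs. In the sketch below the two service functions are stubs standing in for real HTTP calls (e.g. to a customer service and an order service); the names and data are invented for illustration.

```python
"""Sketch of API orchestration: combine two services into one richer view."""


def fetch_customer(customer_id):
    # Stand-in for e.g. GET /customers/{id} on a customer service.
    return {"id": customer_id, "name": "Ada"}


def fetch_orders(customer_id):
    # Stand-in for e.g. GET /orders?customer={id} on an order service.
    return [{"order_id": 1, "total": 25.0}, {"order_id": 2, "total": 40.0}]


def customer_summary(customer_id):
    """Orchestrate both APIs into a single, more valuable response."""
    customer = fetch_customer(customer_id)
    orders = fetch_orders(customer_id)
    return {
        "customer": customer["name"],
        "order_count": len(orders),
        "lifetime_value": sum(order["total"] for order in orders),
    }


print(customer_summary(7))
```

Neither underlying service knows about the other; the value comes from the composition, which is exactly what an API-centric architecture makes cheap to do.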
The pitfalls: don’t confuse lift and shift with architectural transformation
There will always be different imperatives and priorities driving legacy migration. Sometimes short-term tactical approaches may be necessary and sensible, but don’t confuse ‘lift and shift’ to a cloud platform with a broader transformation and modernisation project.
There are often advantages to migrating from physical data centres and servers to the cloud, but without re-architecting the infrastructure and applications to optimise the capabilities that cloud, container, DevOps automation and API-centric applications bring, there’s a limit to how far that will take you.
Many quick win solutions to legacy transformation are simply ‘lift and shift’ and do not unlock the real value, reduce the constraints and power agility and speed to market. Some examples of projects that may provide a stepping stone towards cloud adoption, but are not truly transformational, and do not deliver the benefits of native cloud applications and services, include:
- Virtualising physical servers and deploying to a cloud platform
- Transforming the code base, e.g. COBOL to Java, C# or .Net
- Adding a modern UI/UX veneer to the underlying legacy application
- Adding API interfaces to the existing legacy application
Even when applications are refactored for the cloud, if they are written and architected in the same way as they were previously, they are likely to become tightly coupled and a future monolith. Beware creating tomorrow's legacy applications today!
What should be done
Here’s how I think it should be approached:
- Go back to the requirements and capabilities needed
- Don’t start from the existing code base
- Re-design and architect from scratch as cloud native applications
Gary Ellwood is Technology Consulting Practice Lead at AND Digital.
Talk to us about how we can help your digital transformation in 2022.