[Editor’s note: Tom Grant’s 5-part video series on ALM is a great intro to the subject. Start with part 1 – “ALM – A Strategy for Successful Software Innovation.”]
Application lifecycle management (ALM) has had a troubled history. Here’s the story so far:
- Under the banner of ALM, software professionals have too often focused on the “M” part of that term, emphasizing centralized management (a means to an end) over the actual end goal: a greater capability to deliver software value.
- ALM has tilted far too much in the direction of tools strategy, as if the only ALM strategy worth talking about is solely focused on tools. (The ALM tools vendors have some culpability for that outcome, but they’re not the only malefactors.)
- No real software development and delivery strategy can be assembled from a set of disconnected tactical and operational choices; it has to encompass the entire software value stream. Agile principles and practices, for example, are a great catalyst for change, but ultimately they focus on the behavior of the team. The current enthusiasm for “scaled Agile” is one expression of the search for a bigger strategy, beyond partial changes that don’t add up to a grand one.
While ALM has had its disappointments, it has also had its successes. The first wave of enthusiasm for ALM focused on developer productivity. Given ALM’s over-emphasis on tools, the strongest expressions of these ALM 1.0 productivity improvements were, at first, commercial tools for core functions like source control and IDEs. Later, open source tools emerged to address the same needs.
The unintended consequence of ALM 1.0 was a great deal of tool fragmentation. The bigger the organization, the greater the fragmentation. While some IT organizations have fretted too much about this problem (the tools that a development team uses are the team’s concern, up to a point), the costs of hyper-fragmentation are real. The poster child for this problem is a software development company I know, which shall remain nameless for obvious reasons, that maintains an expensive data warehouse project just to build a consistent picture of team progress from dozens of different project management, SCM, defect tracking, and other systems. Tools make it easier or harder to implement particular processes, so tools fragmentation has accelerated process fragmentation. (Which, again, isn’t always bad, as long as we’re talking about what happens strictly within a team. But not all processes are so neatly compartmentalized.)
The focus of ALM 2.0, therefore, is integration. Rather than butt heads with people over their tool choices, many organizations have relaxed their heavy-handed demands for tools standardization, choosing instead to allow for some level of fragmentation if ALM tools and processes could remain integrated, at the key points between people and tools. A variety of different project management tools is fine, as long as you can see burndown charts or other progress indicators across teams. Teams share an interest with the larger organization in pursuing integration, for reasons as basic as getting a bug automatically filed in the defect tracking system when the build management system fails.
ALM tools vendors are still hacking their way through the dense jungle of integration technologies. Whether the work is done by the tools vendors themselves, building integration APIs or connectors to other tools, or by third-party integration specialists like Tasktop, the story of ALM 2.0 is still being written.
However, ALM 3.0 is rapidly approaching. The need to become a learning organization is universal. At the end of the day, none of these investments, in tactical ALM changes or in integrations among those tactical solutions, matters if the organization isn’t demonstrably improving. If the problem in delivering value to internal and external customers lies in a lack of understanding of what the customer really values, then a new requirements tool might profoundly improve that situation — but only if it’s the right tool, implemented the right way. Cycle time for the development team might improve with a continuous integration server — or the new tool might become a costly distraction. The PMO might have a lot of hypotheses about what’s gumming up the works in an IT organization — but it might have no empirical basis for those claims, weakening its ability to identify issues and help make demonstrable improvements.
All these expensive investments in new processes and tools, at both the ALM 1.0 and 2.0 levels of concern, are hard to justify in the first place, and then sustain, without an all-too-frequently missing piece: data. If the face of ALM 1.0 was the developer’s IDE, the face of ALM 3.0 will be the report.