agile vs. Agile
Hmm...yes.
Since I've just finished my disparagement of meetings, many of which referenced the agile software process directly, I thought it'd be a good time to point out the benefits of agile (a movement created by a group of software practitioners in 2001) versus the harms of Agile (the various business incarnations capitalizing on the agile movement and distorting the core principles it stands for).
The movement follows the agile manifesto and its corresponding principles. I won't enumerate everything, but I would encourage anyone to read the core concepts - it takes about 2-3 minutes of your time. For the sake of this post, I'll distill the principles down to:
- Follow the simplest path possible.
- Engage your users and other members of your team on a continuous basis.
- Change course when new information indicates your current path is heading in the wrong direction.
The movement itself was an answer to heavy-handed planning techniques that assumed all aspects of a software engineering project could be planned out prematurely and prescriptively, leading to projects that were over budget, past deadline, and still incomplete.
As the movement took hold in tech circles, consulting firms saw the opportunity to sell their vision of Agile to companies, resulting in heavy-handed planning techniques that are over budget, past deadline, and incomplete, but look much more dynamic in their inefficiencies.
Below are a few examples where Agile (or Big Agile, as it's occasionally called) misses the mark in direct contradiction of agile principles. Many of these sins are the result of a sub-discipline of agile known as Scrum. Scrum, like anything else, can be used well if it's not taken to its extreme and strip-mined for profit, but it has, unfortunately, turned into the poster child for Agile bureaucracy. Big Agile likely latched on to Scrum because it's the most opinionated of the agile sub-disciplines and thus the easiest to sell as a prescriptive cure for all planning woes.
Big Agile latches onto the idea that planning and estimating can be partitioned into small, discrete packets that can be reassembled into a complete project estimate. Often the planning phase is broken down into defined estimate values following the Fibonacci sequence (each number is the sum of the previous two - 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...).
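For the curious, generating that estimate scale is trivial (a minimal sketch in Python - the function name is mine, and the cutoff at 34 just mirrors the series above):

```python
# A minimal sketch (my own toy code): generate the Fibonacci-style
# story point scale that Scrum-flavored planning typically offers.
def fibonacci_points(limit=34):
    points, a, b = [], 0, 1
    while a <= limit:
        points.append(a)
        a, b = b, a + b
    return points

print(fibonacci_points())  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```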
I'll give kudos to the fact that there's more variability in the estimates as they grow larger - the longer something's duration, the harder it is to estimate correctly. You can make fairly strong estimates about your life tomorrow and a month from now, but a year out is much cloudier and a decade even more so.
Though there's no formal statement equating the two, the numerical estimates are often assumed to be days. Those estimates are then packed into a sprint period, usually two weeks, with the expectation that the workload for those two weeks is now set.
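To make that assumption concrete, here's a sketch of the model being sold (my own toy code, not any real Scrum tool - it assumes one point equals one day, a ten-working-day sprint, and made-up ticket names):

```python
# A sketch of the implicit Big Agile planning model: treat each point
# as a day and greedily pack ticket estimates into a two-week
# (ten working day) sprint.
def pack_sprint(estimates, capacity_days=10):
    planned, used = [], 0
    for ticket, points in estimates:
        if used + points <= capacity_days:
            planned.append(ticket)
            used += points
    return planned, used

backlog = [("login bug", 2), ("new report", 5), ("db migration", 8), ("typo fix", 1)]
planned, used = pack_sprint(backlog)
print(planned, f"- {used}/10 days planned")
# ['login bug', 'new report', 'typo fix'] - 8/10 days planned
```

The packing is the easy part. The estimates it consumes are the problem, as the next few paragraphs explain.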
There's one massive wrinkle here - software engineering (or development, which is a more appropriate term here, because a lot of software work is research and development on a continual basis) is highly variable from project to project.
It's not like choosing one of four house types that will reside on a flat, Midwestern prairie-scape. It's like choosing a castle, mobile home, or submarine that may be on a mountain, upside down, or on fire. Sometimes it will involve the genetic engineering of a unicorn, and you won't realize you're working toward a mythical goal until you've been doing it for 3 months or longer.
There's significant academic research into moving software engineering toward more traditional engineering (i.e. making it obey some structure - in traditional engineering that's called physics - and standardized processes), but that's not the current state of the art and isn't likely to be anytime soon.
Occasionally, something you estimate at 8 days will collapse into a day or less because the work you planned out is handled easily by the software you're using and boils down to one simple function call to the system.
Much more frequently, you'll find that something you expected to take a day requires 8 days (or, if you're extremely unlucky, explodes into months - not common, but not improbable). This happens because you assumed certain parts of the system were already in place and it turns out they're not, so you need to build them. Or you've got a legacy system that only handles and returns certain information, and you need to modify that system - a rat's nest of old code - to extract the information you need. Or you're building something to scale, and the system as it exists today can't accommodate the new functionality, so making it work will require additional capital for hardware or a completely new underlying system that doesn't integrate with your current model.
These are the more extreme examples, but milder versions exist everywhere, and they make planning a sprint exactly much more difficult. Planning isn't like Tetris, where you just need to fit the complementary pieces together - with an occasional little space that doesn't quite fill in - and still pack everything away into a tidy geometric container.
Planning is like a box of rice. Dry, you can pour the rice out of the box and back in, no problem. But pour the rice out and try to pour it back in after a particularly humid week - you'll notice something's amiss. Then try cooking the rice and pouring it back into the box. Good luck!
The astute among you will say - plan for the cooked rice, then.
I agree! But that means you need a container that's sufficiently large (much larger than the original box) to store the rice. You'll have a good idea what size the container should be, but it's not going to be as precise as the rice box that contains 12.6 oz or some other pre-selected measurement.
And if you're mixing different types of rice (yes, the metaphor's getting wonky now, but it's not that crazy - think of it as a pilaf), then the grains will absorb different amounts of water. This is similar to what happens when you mix legacy systems with new systems. All work outside of start-ups deals with this scenario on a daily basis. Even start-ups often have to code to someone else's specifications to access their services, so they can't escape the legacy black hole.
Then there are the variables outside of the work itself, like sick days, vacations, meetings, lunch, or production incidents that no one ever accounts for even though they happen every sprint. Now your pretty predictability is just shot to hell.
That's just the literal planning part of the work. You then add in the standard meetings and see where things sag even more:
- Daily stand-ups
- Backlog grooming
- Sprint planning
- Envisioning
- Sprint retrospectives
Assuming the standard 30-minute stand-up every day and one hour for each of the others every two weeks, that's 9 hours of ceremony in an 80-hour, two-week sprint - a bit more than 10% consumed by those meetings alone. It's not uncommon to have follow-up sessions for some of those meetings if people get sidetracked during the original meeting. And, since they're meetings, of course people get sidetracked.
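If you want to check that arithmetic (same assumptions as above):

```python
# Back-of-the-envelope ceremony overhead for one two-week sprint,
# using the meeting lengths assumed above.
standups = 10 * 0.5        # ten working days, 30 minutes each
others = 4 * 1.0           # grooming, planning, envisioning, retro
sprint_hours = 2 * 40      # two 40-hour weeks

overhead = standups + others
print(f"{overhead:.0f} hours, {overhead / sprint_hours:.2%} of the sprint")
# 9 hours, 11.25% of the sprint
```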
I've talked about how daily stand-ups can go awry, so let's focus on the other four.
Backlog Grooming
This is typically a meeting where everyone looks at what's in the backlog of feature and bug tickets and argues about whether or not each one needs to remain there. Even if a ticket is at the lowest priority, was created 3 years ago, and hasn't received a comment in 2 years, there will be an argument about whether or not it should remain open.
The argument should really be about who can hit the button to close it the fastest. I understand that in some cases, especially higher-priority, customer-facing bugs, people are afraid to close the tickets because the problem is still ongoing. But if the ticket's sitting there rotting, no amount of guilt or wishful thinking is going to make it move on its own. The backlog is for work you're gonna do, not work you wanna do.
I also understand that in certain B2B cases, closing the ticket will require account managers to initiate difficult direct conversations with frustrated customers, but it seems to be an odd convention that the ticket remains open simply because the company cannot be honest about its priorities or its resource constraints. Close it.
These are the types of conversations that arise in grooming meetings all the time. The team makes minor concessions, but the backlog continues to grow and stops reflecting the work that will actually get done. Until the team decides to throw out the entire backlog and start anew. This happens about once every two years.
Sprint Planning
This, outside of status meetings, may be the most unpleasant meeting held on a regular basis. It wasn't on my hit list because I stopped holding these in 2014. Others still hold them to this very day:
- What tickets are we going to pull into the next sprint?
- Well, we can't pull anything in because all of the existing tickets for the sprint are still open.
- But we need to work on these 3 tickets next sprint.
- But we don't have the bandwidth for them.
- How close are we to being done on these other tickets so we can pull them in?
- <Non-committal, vague murmurings>
- What if we close out those tickets and re-write them with the remaining time left from the original estimates?
- Yes, because that's how the system is expected to work. Game it rather than let it show a snapshot of your progress and where the struggles are.
- Good, so we're agreed!
And yet another sprint passes where no one can agree on what was accomplished or why it was or wasn't accomplished because the metric has subsumed the actual work.
Envisioning
There's a reason I said that envisioning for projects is a good thing when done infrequently. When held weekly or bi-weekly, envisioning becomes the poster child for over-engineering and analysis paralysis.
The idea in the more frequent sessions is to pick a ticket or 7 and walk through a detailed plan of what needs to be done to close each one in a prescribed time frame (see box of rice, not gonna happen, above). Even when there's a detailed plan, the estimates around that plan spark intense debate. Often you're stuck on ticket 1 or 2 with 5 minutes left in the meeting, leading to yet more envisioning hell as an ad-hoc follow-up meeting.
To add insult to injury, the last ticket discussed from the previous meeting is usually the first one on the docket for the next meeting and, somehow, inexplicably still eats up 45 minutes of the follow-up meeting. Yay.
Sprint Retrospectives
I was very explicit in my previous statement that this meeting should only discuss the highs and lows of the sprint and everyone's faces of pain. In addition to those components - or sometimes in lieu of them - there's a lot of pseudo-stat wrangling and hand-wringing at a retro.
As with sprint planning, there's a lot of back and forth about whether or not certain tickets can be closed to ensure the team has a "clean" sprint (i.e. all of the planned tickets have been completed and cleared off the sprint board). In Scrum, the idea that you can plan your sprints exactly and complete all of the work is extremely important, even though, as I mentioned above, it's impossible to achieve without setting your bar extremely low or gaming the system as outlined in the sprint planning section.
Humans are funny - we create arbitrary markers of progress and fret about meeting them, even when our ability to meet them is beyond our control. It's good to set markers - even artificial ones - but if you're stressing out yourself, your team, and everyone else around you because something will be completed on the following Tuesday rather than the last Friday of the sprint, it's wise to widen your perspective.
After dealing with the ticket racket, the meeting moves on to the most dubious of stats - sprint velocity. Velocity is a measure of estimated points completed per sprint. As a concept, it's fairly practical: if your team of 5 developers consistently completes 30 points per sprint, then you know you should only plan 30 points for the upcoming sprint.
Or if, as a team, you decide you want to increase your velocity, it gives you data to examine where your choke points are.
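Here's the benign version as a sketch (my own toy code, with made-up sprint numbers - not anything a real tracker spits out):

```python
# A minimal sketch of velocity used the way it's meant to be used:
# a per-team average of completed points that caps the next plan.
def planning_cap(completed_points, window=3):
    """Average the last few sprints' completed points."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

team_history = [28, 33, 29, 31, 27]  # points completed per sprint (made up)
print(f"Plan no more than ~{planning_cap(team_history):.0f} points")
# Plan no more than ~29 points
```

Note that the number only means something relative to the same team's own past estimates - a self-reference that matters in a moment.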
In practice, that's not what happens. Once something's measured, it's inevitably managed. There are cases where that's useful, but a self-referential statistic like velocity is not one of them.
Managers, of course, choose the worst possible option and decide that they want to compare velocity among teams. Even though it's generally promised (at least at healthy companies) that velocity won't be used for performance assessments, once the numbers are out there, it's hard to look away.
This comparison is bad for several reasons:
- Everyone defines their estimates differently. Not every team. Every. One. This can be averaged out on a team that consistently works together, but is much more difficult once other teams are brought into the fold. One team's 2-point estimate may be another's 5.
- Aha, you say - that's exactly why this needs to be done: to smooth out the averages. Nope. Are you happy with the team's output? If so, who cares if they estimate things in turtles per inch or whales per cloud? Tinkering with the internals of a team's behavior - especially a high-functioning team's - has a specific term: micromanagement.
- One team may have to deal with older systems that have a tendency to slow everything down (especially when the company is in the midst of a migration, and the company is always in the midst of a migration), which alters their velocity.
- Teams with different responsibilities move at different paces. I don't think development teams do shoddy work, but they can afford to take more risks with some of their development tasks (and thus close some tickets faster) than an infrastructure team can. If an infrastructure team botches a deployment, it risks taking the entire site down, so their pace is slower by design. In addition, infrastructure teams tend to handle more production support and more legacy systems, which alters their velocity.
- Team composition matters. A team of 1 senior engineer and 4 junior engineers is going to have a different productivity profile from a team with 4 senior engineers and 1 junior engineer.
It's possible that you can correct for all of this, but after doing so, you'll recognize that there is no general rule - just a lot of clustered observations based on similar cohorts. Unless you're trying to perform some analytical inference for reporting purposes, you're not going to derive One Velocity To Rule Them All.
For now, I'll leave my discussion on measuring human behavior as a function of employee performance to a separate blog post, because it's tangential to the agile discussion and merits its own consideration.
Hopefully, you can now see how an overly prescriptive solution begins to take on a life and meaning of its own. A too-formalized structure around a concept can, paradoxically, wind up contradicting the very concept it's meant to support.
As with most frameworks, it's best to pick a few core metrics to measure (or observe thoroughly and pick a metric once you're comfortable with the data) and let the team run itself. You don't really have other options; at best, you have the illusion of control. As the agile manifesto states, it's better to let the team self-organize and to be there as a guide when needed. That's the strength of a good manager - keeping an eye on things while standing at arm's length from the team's daily interactions.
Until next time, my human and robot friends.