Even with Agile and Scrum waterfall will sneak in (amazingcto.com)
197 points by KingOfCoders on Dec 6, 2021 | 308 comments


To this day I still don’t understand how one can read the agile manifesto and somehow get to scrum.

The whole thing is about processes and tools (Jira)… the complete opposite.

The reason everyone ends up doing waterfall is that it’s not the devs you need to convince when selling agility, it’s the chain of command. Being agile means having no long term plan. If the C-suite can’t handle that, you’ll never be agile.


Indeed - the principles of the manifesto are very easy to read:

https://agilemanifesto.org/

and

https://agilemanifesto.org/principles.html

The problem is people take the main points and use them to justify their own wants and needs.

These are the main points:

* Individuals and interactions over processes and tools

This does not say you should not have any processes and tools.

* Working software over comprehensive documentation

The problem with this is you now have people who don't like writing documentation claiming that agile says "You shouldn't write any documentation" or "I don't need to comment my code as the agile manifesto says you shouldn't"...

* Customer collaboration over contract negotiation

This does not say "you should not have a contract".

* Responding to change over following a plan

People interpret this as "We shouldn't plan anything. If you are planning then you aren't doing agile".


It's the next paragraph after those bullets:

> That is, while there is value in the items on the right, we value the items on the left more.

Also keep in mind the manifesto came as a response to a "document-first" software design method, where basically the complete product was documented, planned and contracted before the first code was written.


* Individuals and interactions over processes and tools

* Working software over comprehensive documentation

* Customer collaboration over contract negotiation

* Responding to change over following a plan

And then you grab any Scrum guide and you’ll see:

* roles and processes: (Scrum master, product owner,…)

* tools: Scrum now is almost synonymous with Jira.

* Documentation mandates: “how to write proper user stories”, “how you should name, split and classify tasks”,…

I know a lot of people don’t consider it this way, but the backlog IS documentation.

The second most popular tool after Jira in Scrum teams is, drumroll, Confluence.

* Contract negotiation: Scrum defines a specific role (PO) with the main task of handling contract negotiations.

* Responding to change: Even the tools will complain if you change things mid sprint (because you’ll ruin the all important metrics!).


> And then you grab any Scrum guide and you’ll see:

> * roles and processes: (Scrum master, product owner,…)

> * tools: Scrum now is almost synonymous with Jira.

> * Documentation mandates: “how to write proper user stories”, “how you should name, split and classify tasks”,…

This is all perfectly in line with the Agile manifesto, as long as teams are empowered to customize all of them as they see fit.

For example, if the team decides that for them long daily scrums that are also an opportunity for code review is better, and so they spend 1h every day doing mostly code reviews during the daily scrum and are happy and productive with this, perfect: we valued people and communication over processes and tools. If, conversely, they get told that Daily Scrum must be a stand-up meeting taking 15 minutes or less, then they're not actually being Agile (someone is valuing the Scrum process more than people and communication).

Any team of people will follow a process and use some tools to do so. Scrum is a half decent starting point for a team moving to Agile, as are Kanban or XP or others. Ultimately you have to start with some process, then keep tweaking until the team, management, customers etc are all happy.

Having no process at all (how do we decide if we should work on X?) is a recipe for disaster. Having a very rigid process (don't talk to Jim directly, wait till Daily Scrum), or following tools slavishly (I can't work on this until you log a ticket and pull it into the sprint), is as well.


https://martinfowler.com/articles/agile-aus-2018.html

Our challenge at the moment isn't making agile a thing that people want to do, it's dealing with what I call faux-agile: agile that's just the name, but none of the practices and values in place. Ron Jeffries often refers to it as "Dark Agile", or specifically "Dark Scrum". This is actually even worse than just pretending to do agile, it's actively using the name "agile" against the basic principles of what we were trying to do, when we talked about doing this kind of work in the late 90s and at Snowbird.



Now I am curious about the background image. And a little disappointed that there is no principles page.


I love that!

Thanks for sharing it!

- A refugee from the Corporate Machine


>>> "Being agile means having no long term plan." I believe the angst against agile is that it doesn't have a definitive viewpoint but is a reaction against what it sees as the evils of waterfall. Without definite processes, we get scrum.

From my viewpoint, the issue is that scrum has a flawed understanding of the Toyota Kaizen it based itself on. Yes, in a Toyota factory, team members are expected to be able to perform any task required on the assembly line. But the architecture Toyota uses to enable this agility is limiting. For example, Toyota's TNGA architecture says there will only be five modular platforms to choose from, instead of the 100+ choices of the past [1]. Furthermore, Toyota is known for "boring" but reliable cars. It avoids all bleeding-edge technology and prefers to make incremental improvements to its existing tech stack. To enable interchangeable workers, Toyota limits its car platforms and standardizes "cross-cutting concerns".

Toyota Kaizen is the diametric opposite of agile in IT. Toyota standardizes, simplifies, and makes incremental changes to its tech stack. Agile in IT means ditching today's platforms/architecture/frameworks/toolsets and jumping on the latest fad du jour.

[1] https://en.wikipedia.org/wiki/Toyota_New_Global_Architecture


The conservative Toyota you're describing is the Toyota that exists today, not the one that developed the Toyota Production System in the first place. Maybe they weren't bleeding edge, but their engines had fuel injection, variable valve timing, and were computer controlled. Meanwhile, the American automakers' idea of "futuristic" was putting LCD screens in the dash and making the cars talk when you left the door open. Through the 1980s and 1990s Toyota was putting out cars like the original LS 400, which embarrassed the German luxury automakers, the supercharged MR2, the turbocharged Supra, and the high-revving Celica, all while making cheap but great econoboxes. It's really in the 2000s that Toyota switches to an "only conservative cars" business model, when the Toyota-developed sports cars disappear, engines start staying mostly the same, and generations get longer.


> Its really in the 2000s where Toyota switches to an "only conservative cars" business model when the Toyota developed sports cars disappear, engines start staying mostly the same, and generations get longer

And then 2020s Toyota came out with the GR Yaris, as if to prove that every rule has an exception :)


Yeah it seems like they moved all the good stuff over to TRD and switched to "boring cars only" mode in the '00s


I feel this is true, but more to the philosophical point. I remember reading an article where the Toyota factory employees were giggling because the American companies that tried to study them were so obsessed with the actual manufacturing process instead of the central idea. To me, the central idea in Kaizen revolves around trust and flexibility instead of efficiency and "metrics" tracking.

This creates what seem to be opposing practices that are in actuality consistent. They abstract platforms, but have SWAT-like teams that provide custom tooling for the assembly workers for incremental improvements (directly on the floor, with a <1 day feedback loop[0], not supplier/consulting tenders with >1y timelines). Every worker is encouraged to improve the process at least 2 times a month.

Another example of flexibility > efficiency is the Toyota practice of having flexible factories that are responsive to demand (to cover for peaks of certain models) instead of making every factory a hyper-efficient factory for select models[1]. Having buffer and flexibility (spare capacity) > fragile efficiency.

Software development can't map to an assembly line, of course. Agile (the manifesto) maps very closely to the initial principle of trusting the developer and customer/user/domain expert and being flexible (deliver incrementally and iterate many times). Derivatives like Scrum seem to miss the point, to me.

If you're interested to see how it works day to day in a modern manufacturing company, take a look at FastCap and their YouTube channel[2]. It seems obsessive about incremental improvements, but I think the software industry needs to take note and develop our own way to make the SDLC less painful.

[0]: https://www.youtube.com/watch?v=o5GzHHE3udU

[1]: https://www.thedrive.com/tech/26955/inside-toyotas-takaoka-2...

[2]: https://www.youtube.com/watch?v=EqtKKkastWk


Dammit! So all this Kaizen training my employer paid for me to take 20 years ago is actually Scrum?


Scrum is the bastardization of Kaizen and Agile.


> Being agile means having no long term plan.

Bingo, and there lies its power — but only in the correct domain.

For consulting, where the customer doesn’t quite understand what they need and you need to work your way forward in an exploratory fashion (and for a limited period) it’s great. And that was the environment it was designed for!

You have a multi year engineering project? Really a waterfall, or a waterfall/spiral approach will work.

Subtasks handled by a small group can be agile, and that can make the whole thing more efficient, but that need not be the case.

A fetish for One True System is simply a cargo cult.


> You have a multi year engineering project? Really a waterfall, or a waterfall/spiral approach will work.

I'm not sure a long time frame is a necessary condition for a successful waterfall project - but I am certain it is not sufficient.

What you need - long term project or not - is a rock-solid understanding of business requirements. That means no vague client demands or startup-style search for product/market fit, but a well-established and documented set of business processes that needs to be mapped to code.

I also believe such projects require a leader who has this understanding, but also a talent for product design such that they can recognise and articulate clearly how the product will meet requirements. They need to be able to convince stakeholders.

I do think such circumstances are rare for many organisations. But not so rare that the prospect should be dismissed out of hand.

I agree with your final statement, though I think the challenge for many orgs would be in having the self-awareness to recognise which approach is best for them.


Fully agree, agile is fine.

The real problem is that Scrum is not agile at all: Scrum is a fat "process" that enforces "following a plan" (regular, rigid meeting structure), "creating comprehensive documentation" (user stories, specs, mocks, task board) and "contract negotiation" (estimation meetings, planning poker). In that way, it's the exact opposite of the original agile Manifesto: http://agilemanifesto.org/

The only thing Scrum has ever been great at (and why it's continually chosen despite being proven harmful to development productivity) is simulating continuous progress and control for management. Other than that, it simply fosters architectural mess and a political, dogmatic attitude from and towards everyone.

It all starts with a clear product vision from the top and defining what succeeding at it looks like.

If you don't know where to go, Scrum will get you there faster and in even more directions at once.

Or, to cite one of my all time favorites:

"Building intuition on how to make good decisions and cultivating a great relationship with your team will get you 95% of the way there. The plethora of conceptual frameworks for organizing engineering teams won’t make much difference. They make good managers slightly better and bad managers slightly worse."

https://www.defmacro.org/2014/10/03/engman.html


> The real problem is that Scrum is not agile at all: Scrum is a fat "process" that enforces "following a plan" (regular, rigid meeting structure), "creating comprehensive documentation" (user stories, specs, mocks, task board) and "contract negotiation" (estimation meetings, planning poker). In that way, it's the exact opposite of the original agile Manifesto:

None of those things are Scrum.

Scrum talks about defining bits of work to do on your product, but doesn’t mention User Stories. (But really, what’s the beef with User Stories? They’re short descriptions of what you hope to achieve! You gotta make some kind of plan sometime or you have no cohesion. If you don’t like the formality of User Stories, drop ‘em).

The meeting structures are not defined by Scrum. The intent and topics are defined, but not the structure (apart from vague suggestions for rhythm and timing). They only exist on a cadence so you can predict some parts of the system (like ensuring the client turns up to see the work done). You can change most of the details.

Scrum doesn’t define any planning or estimation practice; it simply recommends you use some system so that the right people are in sync. You have to know if part A takes an order of magnitude longer to make than part B, so that they can be hooked up properly.

Specs? Mocks? Task board? None of that is Scrum.

Scrum has two lists: the backlog, and the sprint backlog. The former grows as you learn what to make; the latter comes into being at the start of a sprint and disappears at the end. If you must put it in swim-lanes, that’s your loss. Don’t blame Scrum.


What you're describing sounds almost agile... joke aside, as I tried to point at, the original idea was pretty good, but it's certainly not what the snake oil people made of it and will sell you: https://www.scrumalliance.org/

The original Scrum also talks about servant leadership, when in reality in 99% of wannabe agile organizations, everything is just about enabling some manager's power trips.

I've found that most software development methodologies don't consistently work, mirroring https://news.ycombinator.com/item?id=15885171


The C-suite only hears "fast delivery" and ignores the other parts, while also demanding the waterfall parts that suit the ways they need to work, such as committing devs to deadlines one financial year out when funding bids have to be submitted.

There is something ironic about being ordered to estimate a 12-month work order while also being told to make sure "it's agile" and delivered on time and on budget by ++currentYear.


yes, non-agile organizations try to bolt on agile at the 'bottom' of the hierarchy, and that's a sure recipe for failure because of the significant impedance mismatch that causes. the c-suite wants to hand down directives, and the agile teams want to listen to the customer first, leading to conflicting interests.

an organization has to be receptive to turning the marketing function upside down (product being one of the 4 P's of marketing) to be really successful at agile. that's why consulting firms tend to be more successful than ordinary product/service firms at it, because they're already inverting the marketing function (to lead with what the customer wants and then trying to execute on that).

executives don't tend to want successfully implemented agile, as it undermines their control of (customer/market) information, and the organizational power derived from that.


In my mind, this sorta proves that most companies should not have in-house engineering divisions; instead most of us should be working for software engineering firms, much like actual architects. This would likely also solve the much-lamented career development/engineering pedagogy problem, since firms would have better control over time internally.


So true, and some even have budget boards that decide on allocation of money to the teams according to the central strategy, so the POs need to follow those priorities and justify what they do based on that instead of 'increasing value of the product'.


Faster delivery should always be corrected to "faster to fail".


I worked a couple of teams that were using Extreme Programming back in the day (early 2000s) and took a lot of lessons from that. Then in the intervening years I worked on teams that didn't have much of a process at all, and when I came back onto teams that were attempting agile, it was in the form of Scrum, and I just... to me it just doesn't click. It's not the agile I was taught, it's just a formula applied without much thought. Usually just a Pivotal Tracker type system + daily standups, going through the motions. I see no point in it.


In Scrum, the team should be self-managing. The Product Owner and Scrum Master are not managers or bosses, they're a customer representative and a secretary. I can see that working with "people over process".

But then came certification, existing managers et cetera and that was lost.


Many (most?) of the early Agile promoters were owners or partners in a consultancy doing work for clients. These clients often couldn't really specify in enough detail what they needed from the software that was to be built. In that scenario, the structures of Agile make a lot more sense. These days, when a company decides to start using some form of Agile methodology (usually scrum) they need to do something with the people who were previously managing the team and so they often assign them the Product Owner role or something similar.

Obviously, the power balance between such a manager (who often retains power over raises and promotions as well) and a line employee team member is completely different than the power balance between a Product Owner as representative of a company and a team of consultants who together own their own consultancy business. Doubly so if the consultants have many more clients and don't exclusively depend on this one Product Owner to pay their bills.


> Many (most?) of the early Agile promoters were owners or partners in a consultancy doing work for clients.

This is a huge point that people who weren't in the industry back then completely miss: The people behind the Agile manifesto weren't trying to figure out how to improve software development as a whole...they were looking for ways to improve their consulting business. What works for them as consultants selling consulting services doesn't make any sense for a company dev team trying to develop software.


Also, virtually all of them had primarily worked in enterprise software. What works for consultants on internally developed software, with a captive user base and key stakeholders who write all the checks, is not necessarily what's going to work on customer-facing products, embedded, etc.


> In Scrum, the team should be self-managing. The Product Owner and Scrum Master are not managers or bosses, they're a customer representative and a secretary. I can see that working with "people over process".

Except those are roles and processes, not people.

Putting people before processes and tools means treating people like adults and letting them work how they’re more comfortable.


"The team" is not a living entity that can manage things. The team self-managing means the most dominant person becomes dictator, until there is a revolution or a power struggle with another would-be dominator. And it's occasionally mixed with "no one wants to dominate", leading to incoherent direction.


> The reason everyone ends up doing waterfall is that it’s not the devs who you need to convince when selling agility

Nah, agile is pushed from top to bottom, whether developers want it or not.

> Being agile means having no long term plan. If the C-suite can’t handle that, you’ll never be agile.

While the devs are mostly OK with no plan, they do tend to want some planning once it is completely absent.

What I see now is developers asking for damn analysis and documentation and the C-suite not wanting to pay for them. But it is really missing, and we are wasting a lot of time due to not knowing the system.


this. I built a prototype, put it in front of the relevant users, gathered the next set of changes, and... have to wait for the next sprint to implement them (even though they will take about a day, tops) because we can't change a sprint halfway. How is this Agile?


Fine if you are a one-person team working on a small project, but when you have a team of several people doing uncoordinated frontend and backend changes simultaneously, chaos can often ensue. Also, a lot of 'urgently needed' changes don't survive the few days until the start of the new sprint, so you would actually waste a lot of time chasing wild geese if you just jumped straight in every time. A two-week maximum wait (with a week or less very likely) seems like a reasonable tradeoff to me.


There's nothing preventing you from throwing it into the sprint. But then you gotta exchange it for something else you promised the customer at the start of your sprint.

And agile doesn't just mean Scrum and sprints. If your primary way of working is what you described and you are able to get this feedback within hours then Kanban (another agile way of doing things) might suit you better.


> If the C-suite can’t handle that, you’ll never be agile.

My experience is that the problems are typically:

1. Finance. Market targets, multi-year budget projections, "you promised business benefits from project delivery in Q3." Finance teams do not love Agile.

2. Companies that are collections of warring fiefdoms. You need an organisation which is mature enough to have reasonable conversations about changing priorities, delivery timelines, and funding in a reasonably dynamic way.


> The whole thing is about processes, and tools (jira)… the complete opposite.

I don’t know how you can read the Scrum guide and get to that comment.

JIRA is not Scrum and vice versa.

The Scrum system is barely a system: 2 lists; 3 roles; 4 meetings; an exhortation to continually adjust (aiming at improvement); a recommendation you measure; guidance to include the “client” fairly often. That’s about it.

This JIRA thing is just one tool among dozens of options. Use scraps of paper if it suits you.


> To this day I still don’t understand how one can read the agile manifesto and somehow get to scrum.

You don't. I mean, literally, the relationship is inverted: Scrum predates Agile; the creators of the former later went on to contribute to the Agile Manifesto.

> Being agile means having no long term plan

No, it doesn't.


> No, it doesn't.

More like ignoring long term plans when you see something good is in front of you that wasn't planned. Think of the Slack team working on a video game and noticing their chat app is great...


> More like ignoring long term plans when you see something good is in front of you that wasn't planned.

I would say more like “long term plans are not high specificity and all plans are subject to rapid adaptation with new information” (though I admit lots of “agile” workflows have no long term plans but extremely rigid short-term plans.)


> To this day I still don’t understand how one can read the agile manifesto and somehow get to scrum.

It's almost like scrum pre-dates the agile manifesto. (Which it does)


But the manifesto was written by many of the same people. The expectation that they would be compatible seems reasonable. And, the "Scrum Alliance" happened very shortly after the manifesto.


I know, but it’s not how it’s sold.

The typical presentation is “we need agility => Scrum”. One would assume that if Scrum doesn’t fit the bill, one would discard it.


But, unfortunately, scrum has become synonymous with "Agile" in the corporate mind. Everything, including SRE and ops work, gets force-fit into Jira "Agile" (really FrAgile) methods and meeting-heavy rituals.

But scrum easily devolves to rapid cycle waterfall death marches. This has been going on for at least nine years that I know of. It sucks.

When a company says "We do Agile with Scrum" I wince and am very, very skeptical.


And add DevOps to it, because then one can get rid of the "run" side == cost savings.


Scrum tries to sell a set of tools, while the originals (the Toyota manufacturing system) were methodologies: sets of processes to build those tools. Scrum is a different abstraction layer.


I've never quite understood, over 30 years of experience in software development, what problems Waterfall presented that warranted a complete refactor of software development processes such that things are now very much in cult-/cargo-cult territory.

There is a natural process that every issue goes through: Analysis, Specifications, Requirements, Design .. then the Developer takes in the materials of each of these steps, does an Implementation and Testing phase .. then the issue has to be validated/verified with a Qualification step, and then it gets released to the end user/product owner, who signs off on it.

This is a natural flow of steps, and it always appears to me that the new-fangled developer cults are always trying to 'prove' that these aren't natural steps, by either cramming every step into one phase (we only do "Analysis" and call it Scrum) or by throwing away steps (Qualification? What's that?) and then wondering why things are crap.

Too many times, projects I've seen fail could've been saved if they'd just gotten rid of the cult-think.

It's nice, therefore, to see someone else making the observation that Waterfall is a natural process and all the other -isms are just sugar-/crap-coating what is a natural consequence of organizing computing around the OSI model - something that has been effective for decades now and doesn't need improvement-from-the-youth antics.


The problem with waterfall, in my experience, is not immediately visible to developers or of direct concern to developers, but rather to the stakeholders and managers. Almost always, your customer doesn't know what they want - they think they do, but they don't, and will only realize this after you've finished development and have a visible system demo/field test.

Additionally, it is really hard for project management to get metrics/early warnings if something is not going properly (no project - waterfall or agile - is without problems during its lifetime). How would you assess the quality/progress of a project based on a couple of specification or requirements documents 1-2 years into the project, with no (immediately usable) code being available? How would you assess whether the approach taken will be accepted by your end users 3 years into a project?

Waterfall is just a big black box to management and stakeholders, with the hope that after 5 years something usable will come out of it.


Additionally, as a start-up, we rarely know what the product should be. We have a first vague idea of the product but don't know exactly what it is. Commonly, the ideal product is different from the initial design, and we only know that after the actual design, investigation and development. If that's the nature of software development, the development framework should be capable of design change on short timescales. Agile development styles try to solve this. Unfortunately, many projects fail to adopt it because they don't understand the purpose of agile software development.


"Agile" just means "analyse the situation until you have properly fleshed out specifications, great requirements that will solve the problem as described, and a proper plan for development".

Still, "Analyze until Completion" doesn't need to be couched in fancy new-age terms that someone will get paid to teach everyone ..


In startup land, it's often quicker to ship software than analyze, and this tends to win in the market.

The "analyze" worldview assumes that the easiest and most accurate format for the output of an "analyze" process is some natural language text and maybe diagrams and a powerpoint or two. What if the friction of authoring software was reduced to the point that the easiest output of "analyze" was .. working software?

The absolute extreme example of this is of course SpaceX: classical control systems theory can't quite deliver a hoverslam landing that works first time, so they had to iterate by crashing rockets.


>What if the friction of authoring software was reduced to the point that the easiest output of "analyze" was .. working software?

Well, the software world is certainly working hard - on one side - to reasonably attain this goal, while another big portion of it actively resists this advancement in human social services, such that it represents.

Perhaps that is the key thing to all of this: "Until all of us are using waterfall, many of us will have a hard time using waterfall." I'm not sure I like where this leads, so I'll just agree with you that sometimes you just have to crash rockets.


> actively resists this advancement in human social services,

> "Until all of us are using waterfall, many of us will have a hard time using waterfall.

Now you've lost me, I've no idea what you're talking about with the first statement and the latter just sounds like cultism? People have valid reasons for iterative processes!


It goes like this: for as long as we are incompletely applying waterfall, someone else will attempt to refactor the natural process and call it something else, instead of just recognizing and then completing the waterfall process. This will be the state of things until either a) everyone abandons waterfall and/or calls it something else or b) waterfall becomes so well understood that it's treated as a natural law of the industry.

So its really more of a self-fulfilling prophecy situation, and yes in that regard, we are in a cult.


This is just "communism can never fail, it can only be failed" but for Waterfall.


Inasmuch as software development and communism are both social services, yes.

Bad software fails to deliver on the social promise. As does any particularly bad political philosophy.


Over time all political philosophy has failed to deliver on promises, because people. Same is probably true of software because, people.


Wait, are you saying that you started a company without a clear idea of what your product is, and if a market exists for it?

Is that common in the software start-up world?

If so, I can see how you'd need Agile processes, just to keep your VCs from heading to your offices with torches and pitchforks.


This is the basic idea behind the "lean startup" methodology. Seed VCs are totally on board with this strategy.


>The problem with waterfall to my experience is not immediately visible to developers or directly concerns developers, but rather the stakeholders and managers. Almost always, your customer doesn't know what they want - they think they do, but they don't and will only realize this after you finished development and have a visible system demo/field test.

This just indicates a failure to perform a proper Analysis/Specification/Requirements phase, with relevant qualifications steps. It doesn't matter if you're a Manager or a lowly Developer - if you can't adequately qualify the requirements and specifications, the analysis is simply not complete.

Pushing the problem to or away from Managers is just one way of saying "I'm too lazy/incompetent to adequately complete the Analysis phase, resulting in poor requirements, lacking or contradictory specifications, and no planning". It's the Analysis that needs attention, i.e. maybe nobody actually knows how to wear the Analyst's hat any more ..


>This just indicates a failure to perform a proper Analysis/Specification/Requirements phase, with relevant qualifications steps. It doesn't matter if you're a Manager or a lowly Developer - if you can't adequately qualify the requirements and specifications, the analysis is simply not complete.

But that view of requirements is not borne out by the reality for most projects. You're presuming that it's possible to gather the "ideal requirements" when the project first starts, with enough due diligence - and also, that those requirements are fixed.

In my current job, we're producing a service that has to court a handful of very large clients. Even if there was a well-defined idea of what the service should eventually look like in 5 years, a lot of feedback is required to discover how it should look now. Which client needs more attention? Where is the biggest opportunity for additional value? How are users actually using the service, ignoring what they said they were going to do with it?

That last part is the most important - requirements are in reality a feedback process for which the existing product is one input. You cannot analyse how users will empirically interact with a product that does not yet exist. Abstract analysis is no substitute for data.


If you can't formulate actionable requirements, you're either not the domain expert, or not communicating properly with the domain expert.

What your service looks like now and what it looks like in five years are obviously two different questions, but a proper analysis will divide the issue between now and 5 years from now and come up with requirements that fill the gaps. This doesn't mean things get set in stone and aren't adaptable to changing needs - when this condition is identified, the manager/developer need only apply the workflow again and revise the specifications with the updated data, and a new development plan can be formulated. Maybe this is 'agile', but again - it speaks to the fact that waterfall is a naturally occurring phenomenon in engineering/technical matters, and thus should be applied consistently and completely in order to produce fruitful results.

> You cannot analyse how users will empirically interact with a product that does not yet exist.

I don't agree with this, as I believe it is very, very glib. You can of course interact empirically, by wearing the user hat. Too often the developer/manager/user hats are considered adversarial - but being flexible about which of these hats one wears during analysis makes all the difference between whether your product is successful or not. Software is a social service - rigid developers who cannot put on the user hat are not providing the most intrinsic aspect of that service in their field.

>You're presuming that it's possible to gather the "ideal requirements" when the project first starts, with enough due diligence - and also, that those requirements are fixed.

I make the claim that this ideal can be attained, by ensuring that the early steps in the waterfall process are actually applied. You are correct in noting that "when you don't do things right, right things don't happen", however ..


>but a proper analysis will divide the issue between now and 5 years from now and come up with requirements that fill the gaps

The issue with the Waterfall model is that this approach doesn't work. Aiming for "now" means you come out with an outdated product in the future. Aiming for "5 years from now" means speculating on what users will eventually want, which is very error-prone. Trying to adjust course midway through is a nightmare - and completely defeats the point of trying to get requirements "right" the first time.

>You can of course empirically interact, by wearing the user hat.

This is not empirical: it does not involve observation or measurement of the real world. Speculating about user behaviour is no substitute for concrete data about how users actually behave.


Still, you need requirements for what to build today. Often that is brushed aside as "details" in 2-week sprints - just start coding!


"Software is a social service - rigid developers who cannot put on the user hat, are not providing the most intrinsic aspect of that service in their field."

All the more reason not to rely on the proper wearing of a user hat (how are you going to know, by the way?), but actually work with the users instead and spend time creating the necessary artefacts to capture their perspective as soon and as well as possible.


> If you can't formulate actionable requirements, you're (...) not the domain expert

Yes, fine. So there is no (true Scotsman) domain expert. Grandparent's point is also just that actually building the application and seeing how users interact with it is a valid method of analysis either in a prototype phase or actual evolving production software.


Is it valid tho? Deploying an app to get feedback is totally backwards. Make an interactive demo with no-code tools, record it and iterate on that based on stakeholder feedback.


You're paying your most expensive employees to build something twice, and initially in a lower-fidelity tool that won't quite mimic the functionality or the look of the final product. So you end up with stakeholders who don't understand it's just an interactive demo and start wondering why things don't quite work. Or they buy into the demo, you build the real solution, and then they question why it doesn't quite work the same, or looks different - because when does an interactive demo in a mockup tool ever really get it 100% right?


> If you can't formulate actionable requirements, you're either not the domain expert, or not communicating properly with the domain expert.

I want to highlight this, because it's almost universally true on any given project that there are no complete experts. But I'll consider the domain not as a business domain, and instead ask whether you can find a domain expert in the technology stack a project is planned in.

When requirements are written up, there is never a domain expert so experienced in all the technologies to be used that they will not get things wrong. Software development is so intertwined and so fast-paced that we are at the mercy of the tech stack we use, too.

Imagine you are well-versed in Postgres, but your new project requires you to use Postgres FTS extension. Without learning the ins-and-outs of Postgres FTS extension, you'll make grave estimation and "planning" mistakes. And then you are supposed to deploy on RDS instead, where there is another set of considerations. And this being a new project, you are asked to use asyncio psycopg for the first time, so there will be more gotchas as well.

Basically, the number of "tools" we shift between is so vast that nobody can be an expert in them all. The rate of change is huge as well. Just count all the Kubernetes tools today, and imagine having to use any one of a dozen mini-k8s set-ups for local development. Just exploring the tech stack is a full-time job for a dozen people, and you wouldn't be getting anything done - while others in the market would.

So, if you are in a world where you've got your tech stack pre-set, you've got plenty of experience with it, and you are a domain expert on the business problem you are looking to solve, sure: waterfall-like approach will not put you at a disadvantage. But introduce any one of the newer components (cloud scale-out), and an approach to gain expertise first will have you run-down by the competition.


Do you work in software development or IT operations? Or if you did but no longer do, how long ago was it?

Having been in the field for some time, and reading HN and talking with people in the field for as long, I believe your general attitude is one of someone who has an idea for how things should work in a stable, ideal world with unchanging toolsets and unchanging technologies. This is not the world of technology.


The challenge I've always seen with putting the user hat on, so to speak, is the difficulty of behaving as though you don't know how it works. I've seen enough user research where things that were obvious to the team were not at all obvious to the user.


> > You cannot analyse how users will empirically interact with a product that does not yet exist.

> I don't agree with this, as I believe it is very, very glib

This implies that all A/B testing is worthless, which is .. surprising.


> > You cannot analyse how users will empirically interact with a product that does not yet exist.

You can certainly refine the software over time (A/B testing, if you will) to more closely attain the ideal, which you may not have well defined at the beginning of the project if you don't perform an adequate review of the needs of the user.

But you can certainly also complete a user analysis that produces requirements, which when fulfilled, solve the problem in its entirety for the user. These two extremes are not absolute - sometimes, if the analysis is incomplete, A/B testing can rescue the project regardless of how poorly the first analysis was performed - but A/B testing, it could be argued, is applying waterfall properly: iteratively, as intended... but by all means, call it 'agile' if that makes the difference to the team involved.


Wait a minute, you're a proponent of iterative approaches? In modern parlance most people put <non-iterative approaches> under the heading "waterfall".

Can you specify which book/process/document(s) you are using in your day-to-day? This could be very interesting in future discussions with clients, to say the least!


You might be interested in [1]. Supposedly it has the diagram that became known as the "waterfall" diagram. I don't think the author is advocating for what we normally associate with waterfall, though, as there are some additional steps they advocate for. They start with a simplified model, criticize it, and then add some more complexity to the model.

Also to note, the paper is titled "Managing the Development of Large Software Systems."

The author outlines 5 steps they believe need to be used to improve the success of a project. One step is called "do it twice" in which the author advocates the development of a prototype well in advance of delivering the final product. I think this is iteration. It's not indefinite iteration, but it's definitely not showing the big bang single release waterfall approach either.

It also advocates to involve the customer throughout the process instead of only at the end.

The summary says to refer to Figure 10 as the process that incorporates the author's steps. This final diagram is pretty far from the simplified and flawed waterfall picture that has been pulled from the front of the document.

[1] http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970...


Thank you!


Waterfall is indeed a decidedly non-iterative approach. Anyone calling an iterative approach "Waterfall" is, well, confused about the terms. As soon as you add iteration, you're doing something else. Something intelligent. Often some variation of Iterative & Incremental (of which Scrum is an extreme form), Evolutionary, or perhaps the V-Model.


When I studied the SDLC formally, many years ago, “iterative waterfall” was called the spiral model.

Waterfall is not iterative. That’s the whole point. You go over the waterfall once. If you’re iterating then, by definition, it’s not waterfall.

> you can certainly also complete a user analysis that produces requirements, which when fulfilled, solve the problem in its entirety for the user

This is almost always false, for any non-trivial project, within the usual commercial constraints of money and time.


>This just indicates a failure to perform a proper Analysis/Specification/Requirements phase, with relevant qualifications steps. It doesn't matter if you're a Manager or a lowly Developer - if you can't adequately qualify the requirements and specifications, the analysis is simply not complete.

Not really. It indicates a general inability to predict the future.

That's really the core of the issue. Requirements analysis can be done badly, but even when done well it doesn't make you good at predicting the future.

The longer the feedback loop the more the ground changes under your feet.

>Pushing the problem to or away- from Managers is just one way of saying "I'm too lazy/incompetent to adequately complete the Analysis phase

I used to believe precisely this in my early 20s.

My "aha" moment came when I built software solely for myself, and even then I realized that this didn't stop the requirements from changing, because of surprises that kept being thrown my way and problems that I uncovered only in retrospect, from actually using the software.

For sure not all problem spaces are this chaotic but most software problem spaces are.


When I was a kid, I remember we topped over this hill on I-15. Way down there in the distance, there was an overpass across the road. I remember wondering how my dad could aim the car accurately enough to go under that overpass from that far away. As an adult, I know that he didn't even try to do that. Instead, he steered the car as a continual process, not just once.

That's the same kind of problem with waterfall. You want them to do the analysis perfectly, and then you want the world to not move while the implementation happens. Neither of those is possible. And then you blame the analysts for failing to do the impossible.

The analysis phase is never complete enough. The conditions are never the same by the time the implementation is complete. That's just reality. What are you going to do about it? You need to steer during implementation, not just once at the beginning.


Any analysis of a problem requires understanding WHAT problem needs solving. The issue in the real world is that businesses and markets (as well as technology) change very fast, and any in-depth analysis done two years ago might not have the same outcome if done again today. This is a fundamental realization of agile. If you have a problem domain that is not subject to change, waterfall might be the right choice. But you almost never have that. I know it is hard to accept for us devs, often with backgrounds in Math/Physics/CS etc., because the problems we were trained to tackle there are always the same, and the laws of math and physics tend to never change (or change way more slowly than modern markets).

Relevant clip perfectly highlighting the problems you have during requirement analysis: https://www.youtube.com/watch?v=BKorP55Aqvg


> This just indicates a failure to perform a proper Analysis/Specification/Requirements phase

No, we're suggesting the analysis/specification phase should consist of what are, effectively, many really small waterfalls. And that any effort to obtain specs for an entire system without first building some minimal version of the system is doomed to failure.

For a small enough feature, yes, you end up doing waterfall. The problem is breaking up a massive system into those small enough pieces.


You don’t understand. The analysis *can’t* be completed fast enough.


I consider fleshing out customer requirements part of the engineering process. You cannot expect the customer to properly communicate what they want/need. That doesn't mean Scrum, you have many other tools at your disposal, presenting the customer with scenarios to challenge their requirements, use interactive prototypes or even paper ones. All of them far cheaper than implementing the wrong thing.

I agree about the pitfalls of not seeing something usable until very late, and the difficulty of measuring progress. The opposite is also true, however: you make some easy, non-scalable PoC and it looks like huge progress, when it actually can't or shouldn't be used as a foundation for anything.


I'm inclined to think a PoC (or whatever you want to call it) would be useful for some things:

- a tangible and cheap way of showing the customer how you think you can solve their problem

- getting concrete feedback on that solution (you're both talking about the same, tangible thing)

- using it as a foundation for the "big" project. Not its code, but the ideas behind its UX, flow, treatment of data - whatever is the main crux of the solution can be made tangible in a PoC and used as a reference for the next step, i.e. making a production-worthy application

- the PoC can also show that what the customer wants is in fact a bad idea.


This problem is not a problem of project management methodologies, but rather one of business strategy and scope management. If you want to fly to the moon in one shot, that is exactly it. Breaking the scope down into smaller, better-understood objectives is what will make things work, and that should happen before a project starts. Afterwards, each objective can be executed either as one big waterfall run or in some agile way, with much higher chances of success and lower costs of cancellation.


Perfect example of an unclearly formulated project goal. You say "fly to the moon with one shot" and think it is completely defined. But the whole, year-long project would change in scope depending on whether you mean "Land on the moon unmanned", "Land a man on the moon", "Land a man on the moon AND return him safely to earth", or "Fly by the moon, get close to the surface, and take some pictures".

And if you now accept that any of those goals may change into another during the project's runtime, you understand why waterfall is often not the best idea.


>Perfect example of unclear formulated goal of the project

That was exactly my point.

>And if you now accept that any of those above goals may be changed into another during project runtime, you understood why waterfall is not often the best idea.

I think you missed the entire idea of my comment. There is no sense in accepting that those goals will change and trying to start a project that aims to achieve one of them. They are too ambitious and broad in scope and need to be broken into smaller pieces (see Apollo program or SpaceX roadmap - both are focusing first on smaller objectives, that will clarify the next steps).


> Almost always, your customer doesn't know what they want - they think they do, but they don't

I honestly don't understand this statement. Could you provide an example to elaborate on this point? I have read it in so many places, but it sounds more like the fashionable thing to say.

The "need" has a starting point - maybe a very high-level problem statement - and then through analysis and back-and-forth question answering, you discover the need in more concrete terms instead of the abstract terms you started with.


It is very easy, actually. Often a business uses Excel for its processes and flows - that is, they have Excel sheets that they edit, copy, send around, and merge back. Now, we all know the problems that exist with this approach, and even the customer knows that. But that doesn't mean they can perfectly describe what any application that gets rid of those Excel files should look like.


But isn't the problem statement itself what you need from the customer? My idea is that the customer approaches you with the problem statement, and their expectation is that you would provide possible solutions to it, instead of the customer defining the solution exactly and just asking you to "code" it.


You describe the requirements analysis step, which itself is a big issue with traditional waterfall projects. You bring someone (or a team) in, from either outside or inside the company. At that point you are already falling victim to the fallacy that you can fully and exhaustively analyse the whole problem domain. And even if you think you can do a 100% complete requirements analysis, who says the solution/software you deliver 3 years from now will exactly fulfil these requirements - and moreover, what makes you assume the problems of today are the same problems the business will have in 3 years?


Well, the moment you talk about 'possible solutions', you've already accepted the original claim. A problem described in vague terms has many possible solutions. Some of them will actually solve the exact problem, some of them won't. Deciding which is which is not trivial.

How do you go about it? Do you build and deliver all possible solutions, and then the customer gets to choose one? Do you prototype many possible solutions, agree with the customer which prototype is most promising, and turn that into the final deliverable, hoping you had captured the relevant details?

Or do you start working on a basic solution, let the customer use that and provide detailed feedback on what it's doing well or not, rinse and repeat until the customer says 'good enough, thanks'?


Let’s say the high level problem statement is that a company needs to add a Search feature to their Store Inventory product.

Here are some questions that can arise during this course of this project:

1. What fraction of customers are asking for it, and how badly? How do you know their judgment is correct? What if people who aren’t asking for it may also go on to love it?

2. How deep and fast do you want the Search functionality to be?

3. How much time and money are you willing to invest in it? What if you find out after a month of work that Search is much harder than it looks?

4. Let’s say you discover there’s a bug in the indexing system which leaks personally identifiable information even though it’s supposed to hide it. Will you postpone the project till the bug is fixed, or work around it somehow?

It’s nearly impossible to answer these kinds of questions right at the start of the project.

That doesn’t mean there aren’t projects where all the requirements are known upfront in precise detail. Usually that correlates with the associated technologies being very mature and their capabilities well understood by the people involved.


It's an old and widely accepted truism. I don't know how much formal research has been done into it.


Formal research into software development techniques tends to be junk. It's very very hard to conduct a meaningful study of professional-scale development. The costs are too high, the confounding variables too many, and the industry demand too low.

Typically you'll get software researchers with no industry experience conducting studies on "subjects of convenience"—their undergraduate programming classes.


There is a natural process that every issue goes through: Analysis, Specifications, Requirements, Design .. then the Developer takes in the materials of each of these steps, does an Implementation and Testing phase .. then the issue has to be validated/verified with a Qualification step, and then it gets released to the end user/product owner, who signs off on it.

Doing that per issue is still basically agile. The problem is when you do it per project, and try to get everything right the first time, and then it's months or years between analysis and qualification.

Agile really boils down to starting small and keeping your stake holders involved. It is an explicit acknowledgement that programmers are bad at estimating time, and that users don't know how to ask for what they want.

All the stuff people complain about is just companies buying snakeoil, and overzealous organizers justifying their own jobs.


That might be its stated goal; I still haven't seen a single shop that actually implements it. The first thing that happens when implementing agile is estimation, and the justification for that is "predictability".


To me, the estimates themselves don't really matter. Everybody already knows whether something is going to be worked on or not (at least kind of). The value I find in estimates is that they encourage the team to discuss how something is going to be built.

The best moments are when everybody estimates that something is "easy" except for exactly one person who says "this is super hard". Then there is a discussion about what the "easy" camp might have missed or perhaps the shortcut the "hard" person didn't know about.

At the end, everybody walks away understanding a little more about what the rest of the team is working on.


Exactly!

The issue is when organizations, or bad POs or SMs, subvert this and make it all about estimating everything exactly in story points, where everybody has to agree that 2 story points mean 2 person-days of work. And that is the only thing estimations are used for at these places: every team has the same scale and gets compared on how many points it delivers each sprint. And they press the teams to estimate ever smaller.

That is the kind of environment that most of the devs that hate Agile or Scrum are living in and it's no wonder they hate it. It's completely against the spirit of Scrum and agile so we can't blame them. Unfortunately they blame agile which was a try to make things better for them.


I do it properly, but I also usually work alone, or in very small teams. I have the luxury of setting the goals, and moving the goalposts, when needed (often).

Waterfall is almost required, when you are crossing the streams. Merging separate workflows is a fraught process. You need to know what will be delivered, when. I spent most of my career, working for hardware companies, and saw this every day.

Truly agile processes don't really lend themselves to that - which is actually kind of the point.

But the real deal, is that high-level managers and non-tecchies can't deal with Agile.

Managers have a lot of power. If we don't help them to understand the process, they will make decisions, based on what they know.


> You need to know what will be delivered, when.

It is valuable to have engineers who know how to decouple individual pieces to eliminate (reduce) these dependencies. These dependencies are almost guaranteed points of conflict and risk for the project.


Tell that to a hardware manufacturer that is developing a camera, with firmware, three different processors, drivers, APIs, SDKs, host software, lenses, speedlights, marketing materials, packaging, distribution, documentation, etc., while coordinating manufacturing lines in three different countries, and engineering efforts in two.

It can get ... intense.


> First thing that happens when implementing agile are estimations

I'm usually the first to chime in that all estimates are lies, but small estimates (i.e., per-issue, not per-project) are easier than big ones.


It really helps to use vague terms like S/M/L/XL or bike/car/plane or whatever. If you say an epic will take 6 months, it is very likely that in 3 months someone will ask for proof that you're halfway done.


But once you have abstract S/M/L/XL estimates for some tasks, what do you do with that information? How many S tasks are equivalent to 1 M task? If the team completed 3 M tasks and 3 S tasks last sprint, how does that help you plan for your next sprint? While story points have their own issues, at least tasks' relative size differences are clear and different tasks of different sizes can be scheduled.


Story points are relative measures (at least when you do it 'right').

What is so different from estimating things as 1/2/5/8 or S/M/L/XL?

S (or 1) is roughly half as hard as M (or 2).

M (or 2) is roughly half as hard as L (or 5). Notice how 2 is not exactly half of 5. Just roughly.

And so forth.

And the point is that you can't say how many Ss are exactly equivalent to how many Ls. Estimates tend to get less precise the larger they are. There's usually more uncertainty built into large estimates, which means that once you do the work it might go much faster or much slower because of things you didn't look at very closely when estimating, while small things are easier to get a complete overview of, so their estimates are more accurate.

A recent example from my work. There was something the team unanimously estimated as L. I knew it was an S - I knew the code, though, and they didn't. I let them put the L estimate on it. When the sprint started and I had some time, I did the task myself, and it really was an S in the end. But that's fine. They 'priced in' the uncertainty. If one of them had done it, it might very well have taken them longer because of not knowing the code as well.


You don't; you ease your executive/sales/etc partners into the new world where they get better software, faster features, and fewer outages, but they give up release dates and micro-managing the roadmap. Calling an epic "Large" instead of "237 story points" is a way of forcing yourself to accept that you only have a rough idea of how long it will take.
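One hedged way to make vague S/M/L/XL sizes actionable without inventing a numeric conversion factor is to plan against historical throughput per size class: how many tasks of each size the team has typically finished per sprint. A minimal sketch of that idea (the function name, data shape, and numbers are illustrative, not from any real tool):

```python
from collections import Counter

def sprint_forecast(history, candidate_sizes):
    """Forecast whether a candidate sprint fits, using only
    historical throughput per size class (S/M/L/XL), never a
    numeric 'how many S equal one M' conversion."""
    # Average number of tasks of each size completed per past sprint.
    totals = Counter()
    for sprint in history:
        totals.update(sprint)
    capacity = {size: count / len(history) for size, count in totals.items()}

    demand = Counter(candidate_sizes)
    # The sprint "fits" if, for every size, demand does not exceed
    # the historical average throughput for that size.
    return all(demand[s] <= capacity.get(s, 0) for s in demand)

# Three past sprints, each a list of completed task sizes.
history = [["S", "S", "M", "L"],
           ["S", "M", "M", "L"],
           ["S", "S", "M", "L"]]

print(sprint_forecast(history, ["S", "M", "L"]))  # within past throughput
print(sprint_forecast(history, ["L", "L", "L"]))  # well above past L throughput
```

The point is that the sizes are never added together or converted; each class is compared only to itself, which is all a relative scale licenses you to do.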


The thing I don't understand about agile is that many projects require a year+ investment before anyone will use it. e.g. say you're building a competitor to Google Docs. How do you get any real user feedback after < a year of work by a substantial team of engineers? No one is going to want to use your prototype that barely does anything. Or worse yet, your backend that doesn't even have a UI yet, and also barely does anything.


In that case you don't have a customer, you have to dog-food it. Most of the time when agile references a customer, customer feedback, and customer interaction it's actually the customer contracting the work. That is a different context than a startup or video game developer or others because there is a clear customer stakeholder who can provide feedback and validation from the start.

This gets to part of the problem in all these discussions. The context that the Agile Manifesto was written in was not in creating Facebook or Google, but in creating software for a defined customer who held a stake in the outcome (had invested capital, along with wanting/needing the results). Once written, something like your Google Docs replacement can get a customer stakeholder, but they won't have it at the start unless you line up early users (and even then, they probably aren't invested).


In that case, I have never worked in the context for which Agile was intended, and neither, I suspect, have a significant proportion of all working software developers. That may explain many of the disconnects in these conversations.


Well, that's not the only case it was intended for. But the language around customers is colored by that context.


Indeed. The way I understand it, Agile/Scrum in enterprises is basically a dogmatic process around implementing a bog-standard CRUD app for the Nth time. Minor details keep changing from customer to customer.

It should be clear to folks that complex technology products of the past were not developed by the caricature of the Waterfall method that Agile peddlers show on slides. Nor are interesting engineering products today developed by following some "methodology industry".


In this case, you do user research with mock-ups and other extremely low effort prototypes. But the work done in that regard is the same for agile and waterfall. If you truly do have a project where the MVP will take a year, and the prototypes would be useless until that year is up, then it probably doesn’t matter which methodology you use in that first year.


I was once taught the value of quick iteration cycles (the main alternative, IMO, to a waterfall model) in a simple way:

The teacher drew a very simple chart: y = t. "This is, in a given project lasting from t=0 to t=1, the amount of practical information that you have about how to design this project. At t=1.0, you have 100% of the information. When you start, you have about zero information about it, just guesses."

Then another line: y = 1 - t. "And this is the amount of design freedom you have during the project. At the beginning, everything is possible, but at the end, big changes cannot be made."

"This is what we call the curse of project management."

It was really enlightening. Make prototypes, be courageous enough to scrap entire designs, and when you have the resources for it, do a total reset for version 2.0 and redo v1.0 correctly.

Of course, this is highly subject dependent. You don't build a bridge the way you build an experimental robot, but this explained well the interest of non-waterfall models.
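The teacher's two lines can be written down directly. A minimal sketch, assuming time t is normalized to [0, 1] as in the chart:

```python
def information(t):
    # y = t: fraction of practical design knowledge available at time t
    return t

def design_freedom(t):
    # y = 1 - t: fraction of the design still open to change at time t
    return 1.0 - t

# At the start you know nothing but can change anything;
# at the end you know everything but can change nothing.
assert information(0.0) == 0.0 and design_freedom(0.0) == 1.0
assert information(1.0) == 1.0 and design_freedom(1.0) == 0.0

# The curves cross at t = 0.5; quick iterations try to pull
# knowledge earlier so big decisions land left of this crossover.
assert information(0.5) == design_freedom(0.5)
```

The "curse" is just that these two curves move in opposite directions; short cycles are a way of cheating the trade-off by resetting t for each small increment.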


Waterfall can be fast. In fact, done properly, it is the fastest technique of them all.


> done properly

And herein lies the rub. If a project is more than 6 months to a year in duration, there is very little chance that what the customer now wants is what they wanted before. Requirements aren't created instantly on day 1, ready for dev; you could easily have several years of requirements, during which time people change, the world changes, the law changes, priorities change - and then what?

Even systems that are relatively unchanging, like the air traffic control system they built in the UK, still had a whole raft of issues that needed addressing, and at that point the documentation becomes out of date and changes are vastly more expensive to resolve.


> If a project is more than 6 months to a year in duration, there is very little chance that what the customer now wants is what they wanted before.

Depends largely on the customer. Just do anything defence-related, aircraft, or industrial control systems: more often than not, requirements are set in stone, and modifications require approval and consequent fees.

Now, those industries throw a lot of money at actual R&D to know what requirements can be requested, which is very different from your usual software shop, where "R&D" is some brief talk during refinement or at most a dedicated user story to look into the subject.


For "small" projects having requirements set in stone (probably?) works fine.

The customer might claim something different, but for a couple of large and failing projects in industrial automation I've had to fight hard to be able to iterate and rescue them from the jaws of defeat.

You've had different experience?


This. If you properly understand the problem. If your team is compact enough to be in the same room. And your customer input during concept/requirement writing is high.

If you stray outside that, both waterfall and agile struggle… but waterfall has the potential fatal flaw of delivering a product no one wants (or doesn’t work).

This is why agile was introduced. There were software products being made that ended up just not working… so people saw that problem and tried to fix it with agile. But agile just fixes the “don’t deliver a product that doesn’t work” problem.


It's more subtle than that. What "agile" does (both Scrum and XP, historically) is _protect the delivery team_. With waterfall-style project management, early errors balloon but often don't surface as something that needs addressing until implementation and testing, so the delivery team gets all the stress for blowing out the project schedule.

Agile techniques both surface those errors early, so they're course-corrected quickly, and provide a set of clear rules for the rest of the organisation to abide by which should mean that the delivery team can't get overloaded into a deathmarch.


> done properly

"You're doing it wrong" is not a compelling argument.

A competent, motivated, invested, and empowered team don't need any particular methodology to succeed. Methodologies are adopted precisely because many people don't do things properly much of the time. A methodology which doesn't account for this and doesn't self-correct when done improperly isn't worth a whole lot.

Waterfall done properly produces great results. Agile done properly produces great results. Both of them produce bad results when done poorly. Proponents of one tend to compare their methodology done properly to alternative methodologies done poorly. That's a pointless conversation to have.


At t=0 you rarely have 0% of the information, and usually you can get more information before jumping into the water. The problem is that stakeholders often pressure for early results, which prevents sufficient planning and information gathering.

Of course you can also waste too much time on planning and information gathering, but that's not something I have ever witnessed. Usually time is wasted by starting to develop without making a proper plan.


> make a total reset for version 2.0 and redo v 1.0 correctly.

That's how I usually work (in personal projects). I wonder what it would take to convince management to do the same. In theory it should "just" double the budget of any given project (which doesn't sound totally wrong).

It sometimes happens to me, though, that I need to reach v3 in order to go back to v1. I just feel this way of doing software to be the most natural way.


> I wonder what it would take to convince management to do the same.

Build the redo time into your estimates.

Non-technical management has no concept of what's involved in building software. They don't need or particularly want to know. They care about things being "done." You don't need to convince them of anything, you just need to consistently and reliably show progress in a way that they can comprehend.

It's very irritating to see no progress for a month, then see a demo of something that appears to do everything you asked for, then hear that it won't be "ready" (whatever that means) for another month because the whole thing needs to be redone.

It's very reassuring to see demos every two weeks, each one showing an obviously incomplete product, but with clear progress between each demo culminating in a completed and delightful product after two months.

The actual development process for the two scenarios above can be exactly the same, it's just how they're messaged.

This is "managing up."


"Agile" processes still have all these steps, they just repeat them in a loop on a much smaller timescale. This allows for a lot more flexibility, and for feedback from the qualification of the first iteration to go in as input to the analysis stage of the next iteration.
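As a toy sketch of that point (phase names and structure are illustrative, not taken from any specific methodology document), the difference is just the nesting of the loop:

```python
PHASES = ["analysis", "specification", "design",
          "implementation", "testing", "qualification"]

def waterfall(project):
    # one pass through every phase for the whole project;
    # feedback only arrives after the final phase
    return [(phase, project) for phase in PHASES]

def iterative(features):
    # the same phases, repeated per small increment; the
    # qualification of one iteration feeds the next analysis
    plan = []
    for feature in features:
        plan.extend((phase, feature) for phase in PHASES)
    return plan
```

Both run the same steps; the iterative version just runs them on a much smaller batch, so a wrong assumption costs one increment's worth of work instead of the whole project's.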


I agree with that.

But it's worth noting that a shorter development cycle means spending more time on task management. In waterfall or scrum or whatever, one development cycle typically involves listing tasks, prioritizing, planning, developing, and retrospection, or something similar. If the cycle is long, we can spend more time on each process. But if the cycle is short, we rush to complete each process and can't spend enough time understanding what each process means. So people end up focusing on consuming as many stories as possible and never think about the product's goal and how they can contribute to it.

So we have to spend more time teaching the purpose of each process. Or we end up as a failed agile development.


Sure, but that's kind of tangential isn't it? You can just churn through tickets thoughtlessly in both methods, no?


I've been doing this a long time. As far as I can tell, it comes down to senior leaders in software development wanting to make some sort of success story to further their own careers. It's much easier to buy into somebody else's canned success story plan. Hence the popularity of stuff like "Scaled Agile". Which is funny to me, because in practice, it's heavier than old school MS Project and Waterfall development. Even the diagram they use to summarize the concept is laughably undecipherable and complex[1]. Same reason companies latch onto 6-sigma, Clifton strengths, TOGAF, Total-Quality-Management, and so on.

I do recognize there is some actual "good stuff" in there. I'm a fan of the textual agile manifesto.

[1] https://19yko92jjsfl3euk0g1esoja-wpengine.netdna-ssl.com/wp-...


> what problems Waterfall presented

These are presented in the article.

> Analysis, Specifications, Requirement

These tend to be inadequate, incomplete, or ..

> Qualification

.. at this point the customer discovers that what they asked for does not actually solve their problem.

You cannot do any kind of exploratory or innovative work in waterfall, because you can't specify that upfront. You can't easily co-evolve solutions by discovering what works and what doesn't. You can't even do A/B testing really, because every time you do your GANTT chart bifurcates.

There is a history of big, spectacular IT project failures, often commissioned by the public sector, but not always. The evolution of non-waterfall systems has been driven by trying to prevent or mitigate these failures.


>> Analysis, Specifications, Requirement

> These tend to be inadequate, incomplete, or ..

>> Qualification

> .. at this point the customer discovers that what they asked for does not actually solve their problem.

I disagree. If you are properly isolating your work into an "Analysis" phase, you will properly flesh out the issues and the subsequent specifications and requirements will be complete - sufficient to the task of formulating a development/implementation plan. But if you don't take care to complete a proper analysis - then no, you won't formulate specs or req's properly, either.

The important thing is that the individuals involved know the value of a proper analysis - whether they are developers or managers or just paying the bills. Too often, software developers don't realize that they are really providing a social service, and thus they don't deliver proper analyses that can result in action. Alas, that is something other than a tooling/methodology issue and lays more in the motivations of the developer.

It's a lot easier to say you can't be responsible for the mess as a developer if you don't allow the mess to be adequately described in the first place .. so the Agile vs. Waterfall debate is really more of an ethics issue than anything else.

Good developers do Waterfall Complete. Mediocre developers do it, and call it Agile. Terrible developers find excuses for not doing the key things, and end up with a failed project on their hands.


> If you are properly isolating your work into an "Analysis" phase, you will properly flesh out the issues and the subsequent specifications and requirements will be complete - sufficient to the task of formulating a development/implementation plan. But if you don't take care to complete a proper analysis - then no, you won't formulate specs or req's properly, either.

But what if that doesn't happen?

(This seems to parallel the "C is safe if you don't write bugs" argument. Processes that require infallibility are bad processes.)

The level of dysfunctionality that can be achieved is baffling. How can someone fail to deliver a payroll system? https://www.smh.com.au/technology/worst-failure-of-public-ad... - it's one of the oldest applications of business computing, and yet somehow it wastes a billion Australian dollars?


>Australian dollars

Perhaps Waterfall (and its cousins) needs an addendum: Check for and Remove Corruption.


Honestly it's mostly incompetence that is the problem and waterfall magnifies incompetence.


Incompetent waterfall magnifies incompetence.

Competent waterfall results in finished software projects.


Of course no one is doing the Analysis properly. It's impossible. You plan how to implement something using a library method. But only in the qualification phase do you find out that this method actually has a bug on your target platform or is incompatible with another part of your software.


I dunno, I've seen many, many examples of a properly done Analysis phase. But it does require a great deal of competence, a lack of hubris, and plenty of self-doubt/verification/validation on the part of technical contributors to make sure they do actually know what they're talking about .. a Qualifications step usually reveals the nature of this impact on the project - or lets just say, omitting this step is usually the first part of project failure.


If the Analysis has been done properly and we are sure that it has left no room for error, and if we demand similar quality from the Specification and other phases, then why do we need a Qualification phase at the end at all?

Conversely, if we accept that each phase is fallible so Qualification is crucial for any chance at a good working product, what is the recourse for errors in the Analysis phase? Basically if there was an error in the Analysis phase, we will, by definition, only find it in the Qualification phase, requiring us to scrap all of our work and start over from a new Analysis phase.


Your comment is the most accurate assessment of the history. It’s good for people to know.


Royce argued that only in small software systems can you do a little bit of analysis then a little bit of coding then be done with the project. He proposed waterfall as a way to build larger software systems.

He argued that these 2 steps were the only direct value steps in developing software - figure out what the problem is then write a little code to solve it - he acknowledged this was the perfect ideal since the two steps directly benefit the output product but he also argued this can’t scale to larger systems.

So various analysis and testing steps were added (none of which directly contributed to output, all were drags or costs on delivery - with the idea that they catch more problems earlier thus paying for their overhead), with feedback between them to catch earlier errors, and thus waterfall.

The agile mindset revolves around the idea that the other steps added are BS. The agile mindset solves for the impossibility of a 2 step analysis then code process at scale by reducing scale. It does this by using very small iterations and removing handovers between different parties.


Agile has failed the industry precisely because the other steps are NOT BS.

Unfortunately, de-programming Agile-adherents is more work than actually just doing those natural, proper steps.


The problem is even more fundamental than that. The point of these extra steps in Royce's eyes is to ensure the developers deliver exactly what the customer ordered. Page 335 - "Step 5: Involve the Customer" - Royce wants to push design risk to the customer by committing the customer to sign off specific requirements along the way.

This leaves the project open to delivering the wrong thing. It doesn't matter that the customer specified the wrong thing and everyone else implemented exactly what was asked for, because in the end you all failed to meet the original goal - to solve a specific business problem through the introduction of a software system.

Agile recognises that no one knows exactly what's needed ahead of time - Royce kind of recognises this but only for the software developers with his idea to build then throw away the first system then deliver the second ground up re-write to the customer.

Agile says we'll all learn together as we build, we'll minimise risk at every step by only taking such small steps each iteration that we're completely happy to throw the iteration in the bin if we got it wrong.


Agile is just 20-30 small waterfalls instead of one big one. How small does a waterfall need to be to fail?


Honest opinion: it's not a matter of size. It's a matter of completeness. Agile or waterfall processes fail if you omit the Qualification step, or if you don't ensure that the analysis produces actionable requirements, or clearly formulated specifications that lead to requirements - user, technical, or otherwise.


The whole point of agile is qualification through working software and getting feedback. There is no amount of analysis or requirements gathering that can compete with delivering an understanding of the requirements to serve as a model for how to move forward.


That Qualification step was the challenge I had when working on scrum projects in the past. I might finish implementing a feature in the current sprint #1, but QA won't get to start testing until sprint #2. When QA inevitably finds bugs, when do I fix the bugs? Should I have reserved an unknown amount time in sprint #2 for bug fixing? Or should I schedule time in sprint #3 to fix those bugs? If so, then the separate implementation/QA/bug fixing stages are each adding extra latency.


> I've never quite understood, over 30 years of experience in software development, what problems Waterfall presented that warranted a complete refactor of software development processes such that things are now very much in cult-/cargo-cult territory.

It's refreshing to see someone else make this point.


I hate to admit it but, In my experience, agile has brought productivity gains.

When practiced correctly, what agile seeks to do is not skip the steps you mentioned from the traditional waterfall model, but apply them iteratively to individual feature sets in sprints rather than across the entire project.

I have seen many products fail to meet requirements and dramatically slip schedules under the waterfall approach. The project becomes too large and unwieldy, and too many assumptions are made that create dependencies for future features.

The reason I hate to admit agile's success is because it comes at a cost--namely there are frequently externalities such as lack of documentation and developer burnout.

Many agile implementations treat developers as cogs on a production line, responsible for essentially participating in automated CI/CD processes as a piece of software might be expected to do. And the relentless unending sprints can also easily take a toll.


I joined the industry only 20 years ago so missed the heyday of waterfall. I did witness the rise of the RUP though. And I must say that to jr developer me it was very educational despite its rather intimidating volume.

There's one angle I don't see discussed in the last 10 years or so. There was a time when a lot of attention and literature was dedicated to software design. The whole OOP movement, design patterns, the RUP, entire shelves of software architecture books. I'm inclined to call them systematic approaches to building software. I believe feature-driven development was the last such methodology aimed at developers.

Ever since Agile became popular around 2008 I have not seen anything comparable. The OOP school of thought lost its appeal without really being replaced with another well defined body of knowledge. Anything FP-related is extremely community-specific and nobody is really talking casually about, say, monad transformers. Any Agile methodology has very little guidance for developers in comparison with RUP/FDD. It's all about managerial concerns and time tracking.

But there's no question that software development is drastically different now. Using version control (even local one) was not a given in 1999. Unit testing with jUnit in 2001 was a major revelation. And shipping features constantly in SaaS got popular after 2007 or so.


I was never a fan of RUP. But it had the idea of adapting the process to the project. It gave you lots of choices.

With Scrum, every project is approached the same way, and no one thinks about whether it makes sense or not. McConnell's "Rapid Development" also gives you alternatives to choose from. Nowadays, it is Scrum or sometimes Kanban.


Traditional waterfall model doesn't allow for iteration between the main steps, and this was followed historically, and caused absolutely massive delays and cost overruns in projects through the 70s, 80, and 90s. Popping iteration onto Waterfall model and still calling it Waterfall model is being disingenuous.


And cost overruns and delays in the 00s and 10s! People here on HN keep telling me I'm imagining things, that Waterfall either isn't bad or isn't real. But a 9-figure project I was (fortunately briefly, though not brief enough, around 1 year) on was several years late and delivered only half the features the users needed. Why? Because they made the plan in the mid-00s, and the circumstances changed but the plan didn't. They did what Waterfall describes, they committed to their initial analysis and plan and never looked back. Fortunately they learned after that, but it was a costly exercise.


> .. then the Developer takes in the materials of each of these steps

IME this is the main problem, the "Developer" must be heavily involved from step 1 (and work in a close feedback loop with QA). Everything else follows automatically in ever smaller iteration steps. Software development is first and foremost experimental research work, not a factory line.

E.g. if a specific software development task feels like boring/repeating factory work, it should have been automated already.


Yes, developers and managers have to be involved in the full workflow. Why is this so hard?


It is hard because more people means more opinions.

* Involving the developers in everything that the product team does as well as all of their own work is too inefficient.

* Involving the wrong developer means you might not ask the right questions up-front.

* Some developers are too negative and block things early on.

* Some developers are too positive and will say yes to everything even if it is unreasonable.

* There is not always a clear authority between product owners and developers.

* A lot of decisions are based on company priorities that might need someone outside of the product/development team to argue for.


> more people means more opinions.

I'm sure 'manager people' have their own processes for this problem.


You can't

- Get Certified in Waterfall.

- Claim to be a Waterfall Master.

- Waterfall Standup sounds like a song.


You absolutely can get certified in various waterfall processes: https://www.amazon.co.uk/Rational-Unified-Process-Reference-...

(I can't believe they chose the "when all you have is a hammer everything looks like a nail" cover!)


RUP was never about Waterfall. Instead, it was the first effort to progress from there.

"(Rational) Unified Process vs Waterfall Model"

https://stackoverflow.com/questions/20560514/rational-unifie...


Automotive SPICE certifications would like a word...


Well, considering that Scrum Master certification can be done in a day, I'm not sure I'd be crowing about it.


> Waterfall Standup sounds like a song

TLC actually did record a (correct) song about it:

https://www.youtube.com/watch?v=8WEtxJ4-sh4


The problem is that the more unknowable the future is, the quicker your plans about the future will be invalidated.

Thus a quick iterative feedback loop where there is a tight lead time between user feedback and users using the software tends to work better because it lets you adapt much more quickly to changing circumstances.

This is why I always aim to minimize that feedback loop as much as is feasible.

Unfortunately you are right that it got bound up in a pseudo cult. Worse, the cultish practitioners tend to do as all cults do and put their emphasis on ritual (e.g. standup) and completely miss everything else.

I kind of hate the term agile now precisely because the movement almost set out to create a cult out of this.


For what it's worth, I've never been on an agile team that did not have some waterfall decision making that was not open to correction, or a waterfall-based process that was totally closed off to testing and making changing along the way. I think we argue about platonic archetypes that don't exist in reality. Most processes and teams I've worked with have resembled each other more than not, if you ignore the terminology they use. I think success or failure of a project comes down to a lot of factors that are hard to quantify, so in some sense our focus on process methodology is a bit ritualistic and cargo culty.


So I'll start by saying I detest the snake-oil salesmen and professional scam artists that are "Agile consultants", as they have now begun to leech onto DevOps, which honestly in large part has the same end goal as agile. These people are largely talentless hacks and parasites that extract value from corporations that are making money and then move on to the next victim while the actual engineers deal with the infection their ideas have left.

That being said the issue I've seen and read with waterfall is that it is very slow and cannot adapt very well. Which honestly is a super great thing in certain fields, like avionics, NASA moon landers, those kinds of things, because with this lack of speed and rigour you also get a certain level of assurance and quality and predictability, assuming things retain a certain level of consistency.

However, in the world of newly emerging markets, the .com bubble, and the new SaaS bubble, being slow is a death sentence, because at this point it is better to spray and pray: there are plenty of VCs willing to hand you fistfuls of cash to shoot at the wall because they are hoping to get in on the next big Facebook or Google (in reality now they are just hoping you will get bought by FB or Google, which is another rant for another time).

That being said, agile offers flexibility and adaptability in return for less predictability. This causes a huge rift, however, when you have in-house tech resources that are part of a larger organization that needs all the assurances and predictability promised by waterfall, but the in-house resources have to compete against SaaS opponents that management is all too happy to purchase with a credit card.

The end result is tech teams saying they are agile, by which they mean they respond to whoever is yelling the loudest; success is determined not by value but by politics; and in the meantime no one is taking care of the customers, so you end up with everyone's data getting resold a dozen times, more security holes than an Alabama "don't shoot at signs" sign, and companies spending several million dollars a year to keep some rusting old COBOL application running because it is the only thing they know for sure works right now.

Sorry I might've been unloading some baggage in this post.


"Analysis, Specifications, Requirement, Design".

When people hear "waterfall" they imagine doing this for the entire project end to end. It's super risky to do this because nobody knows what they want until some of it is built. And if you just spent a year of a teams time building something and "tada" you show the user... odds are not in your favor that it will do what everybody actually wanted. Plus odds are good it might not work because there was no iteration to shake out the rough spots.

To me, the key for "agile" is rapid iteration. It's a bunch of mini waterfalls where every step delivers something of value (even if that value isn't directly visible to the end user yet). Each iteration forms a feedback cycle that helps make sure a team is delivering something the end user actually wants.

In practice I don't think anybody is actually doing "waterfall classic". It's more of a story we tell to remind us how important it is to get in the habit of rapid iteration.

(Sidenote: there are plenty of other reasons to rapidly iterate. The end project is built on a moving target. Business processes change over time, the competition changes over time, etc. If you have your specs 100% locked in at the beginning, you'll find that half of them no longer apply after a year because so much has changed in your environment. There are also inventory costs to keeping so much code out of production for so long.)


It's the symbolism: returning to an earlier stage of planning. Take building a house: the moment you have to redraw the plans, the previous work is usually a tear-down.

Often enough, that is the case in the real world. Berlin Airport was a waterfall project where, several times, additions were made late in construction.

Now that image carries only halfway into the software world, as a lot of the other abstractions (classes, modules, interfaces) will survive even a replanning and rewrite in waterfall.


I am about as old as you, unless you started at 13. The problem was that projects spent months doing analysis and design and never shipped anything. Also, agile recognizes that you don't have all the answers at the beginning; you need to experiment and see how things should work. There was also a tendency to treat people as factory workers who could be assigned any work and would just convert specs into code. We still do that with Scrum. :(


> There is a natural process that every issue goes through: Analysis, Specifications, Requirement, Design .. then the Developer takes in the materials of each of these steps, does an Implementation and Testing phase .. then the issue has to be validated/verified with a Qualification step, and then it gets released to the end user/product owner, who sign off on it.

If you do this feature by feature, then you have described an iterative/incremental development process. Scrum is one example of such a process. Yes, Scrum is many little waterfalls.

But with Waterfall you do the Analysis of the whole project/product, check; Specification of the whole, check; Design, check; Implementation, check; internal testing, check; UAT - and this is the first time the customer sees the product. It's too late.

If you think it's stupid to do it this way you are right, that's the point. If you haven't seen this in real world then you probably didn't work on a government project or you were very lucky.


I've worked on all kinds of projects, and every single time the project failed, the tooling and methodology were blamed, not the individuals. This is the #1 cause of failed software projects - the inability for individuals to take responsibility for parts of the workflow, for which they are incompetent, and not subsequently working on improving that competence at an individual level.

But true, breaking larger problems down into smaller units (Agile/Scrum) is an effective way for a "manager" to help their developers - but I argue this is non-optimal for sociological reasons, not technical: Adherence to a cargo-cult is a function of laziness, not competence.


So when waterfall doesn't work, it's the fault of the developers, who are doing it badly? Well, I've seen agile work, too, with really good developers. And plenty of people have seen both fail, if you don't have good enough developers. Hmm, maybe it's not the methodology?


I always saw agile as essentially breaking waterfall into mini waterfalls.


I call our process "extreme waterfall": we do the bad bits of both sides of the fence and forget the good bits.


> I've never quite understood, over 30 years of experience in software development, what problems Waterfall presented that warranted a complete refactor of software development processes such that things are now very much in cult-/cargo-cult territory.

I'm going to tell you up front that I'm a CSPO (certified scrum product owner), and that I don't exactly know. I don't mean this sarcastically - it's more that I'm not sure what agile/scrum fix, versus where would _any_ process change naturally cause an org to address its weak points. Example -

For a while I was pretty into scrum - the company I was at transitioned from waterfall and it did give the appearance of faster delivery. In hindsight, I think what it really did was provide explicit authority to a decision-maker (me, or the PO in general), and build a culture of keeping docs updated regularly, and therefore status visible/known. Waterfall didn't break these things, but in the past we were slow to unblock things and nobody really knew where the work was unless they went and asked (and had someone do an adhoc update).

I'm now at an org where a team that isn't mine is trying to be pretty strict about scrum, doing all of the requisite incantations and such. The issue is, they have more management than they do actual developers on the team. Adding this process on top makes the management happy, but it hasn't done a thing to boost anything related to development. It's exactly the cargo cult behavior you describe, when IMHO agile is best thought of as a toolkit that you borrow from selectively to fix the things that need fixing. I think going all-in isolates engineers from the people and processes they're trying to help and reduces them to basically short order cooks pulling tickets off the rack. I get that it's meant to make them more efficient, but I think that isolating your most expensive people whose job description says "solve problems" instead of having them engage directly is the wrong move.

Mind you, I don't think I'm necessarily right about everything but I've seen enough broken shit to know I'm not totally wrong either. Now as a fairly senior leader, I discourage my teams from going all-in on agile and push them toward looking at their process, identifying what's broken, and fixing that (with agile principles or not). It's rare that layering on more process has a positive effect and I like folks to be thoughtful about those changes.


> ...There is a natural process that every issue goes through: Analysis, Specifications, Requirement, Design .. then the Developer takes in the materials of each of these steps...

What do Developers do before it's a green light for them to start the dig? What do Designers do before it's "their step"? What do they do afterwards? What about those in preceding stages?

Well, they might as well pick up whatever lands on their desks next, though of course everyone still cares to see the "final output", that is, the developed and tested product. Yet it's inevitable that they'll switch context to something else in the meantime.

And this detachment is what makes the Waterfall process often protracted and resistant to change. Not to mention there's also a Customer somewhere in that pipeline, hoping to derive some value and provide some form of input.

Some industries/companies/teams/products may be just fine with waterfall or no methodology at all. Other situations may want shorter cycles or even parallel efforts to stay in line with demand/constraints/risks.

Also, there's a "working software" factor too. Call it a PoC/proto/early demo... It not only tests the ideas, it also motivates to keep going or offers assurances that it will be done "right".

If there's anything natural to any process, it's that most professionals want to do their jobs well and be proud of their effort, preferably rewarded for success and not singled out for blame when things fail.

The question is then how to create such an environment to make that possible.

Agile or not, what turns it into a cargo cult is a general lack of internal vitality in some industries/companies/teams/products. And as such it affects professional values at all levels, eroding the collective "care" about the result, or even the process itself.


Yes, that is the natural process. However, Waterfall (big-W) is a Big-Bang Big Design Up Front approach: each phase is months long, after the analysis you do little or no revalidation (emphasis on re), and the release is planned 1-5 years out. That's where the problem comes in. If you revalidate the plan and change the plan, you're no longer doing Waterfall. You're doing something else, you're using your brain.

If you break that natural process down and apply it to subcomponents and have releases every few months, you're doing something like either the classical Iterative & Incremental or Evolutionary models which are longer cycle versions of what Scrum aims for (which as a target tries to get you under one month, maybe down to 1-2 week cycles). I&I and Evolutionary tend to operate with cycles from 3-6 months.


???

It's predictability.

You can plan the construction of a building, because the elements are known. If someone fails to deliver a bag of nails on time, it won't matter.

If you add labour to a project, it will, very roughly, accelerate linearly.

1 team, 10 houses = 10 weeks, 2 teams 10 houses, = 5 weeks. Ballpark.

More developers do not mean the software will be finished more quickly.

There are economies of focus in software, generally not scale.

Which is why you want good developers.

The other factor is that 'Requirements Change' in Software more than they do in building, and that's the other #1 issue that causes problems.

The entire ethos of 'small releases, often' is built around that.

Software is evolved a bit organically for this reason.

Waterfall is obviously counterproductive.

If there is a 'default' methodology in software it's 'Iterative Waterfall' whereby you break the project down into the smallest reasonable phases. There can be an overarching plan but it has to be nimble.


> 1 team, 10 houses = 10 weeks, 2 teams 10 houses, = 5 weeks. Ballpark.

Concrete needs 4 weeks to cure.


https://www.hunker.com/12002648/how-to-paint-an-exterior-hou... suggests building doesn't need foundations to be 100% cured, and you can start after a week or two

Here's exactly the same article but translated for a UK perspective, it's hilarious

https://www.ehow.co.uk/about_4674117_were-concrete-blocks-fi...

> Do not pour concrete in temperatures below -6.67 degrees C.

> Temperatures of 21.1 degrees C compared to 10 degrees C, for example, reduce the 50 per cent strength curing time from 6 to 4 days


The issue is not cement or how long it takes to cure, rather the crude and easy determinism of it all.

Home builders don't generally face an existential struggle over timing issues.

You don't go 5x over budget and have it take 5x longer because of this.

You can reliably schedule around cement.


Coders just want to code. Waterfall means you have to wait before you code, and plan, and organize. Agile means you just start coding, and code and code and code until you leave and someone else has to clean up the mess.

This is no different than most other activities in life. See the Stanford marshmallow experiment.


So you are saying that agile happened because of coders and not because there were fundamental problems with using waterfall on large projects?

No part of agile says that you get rid of planning and organizing, it is just done in smaller slices in shorter cycles.

Having used both over 25 years, I wouldn't look back at waterfall for any project except the very smallest. No-one allows you to lock some requirements in for 5 years any more, something that was accepted back in the day. I have plenty of examples of waterfall projects that delivered a number of things that simply weren't required any more and that was a failure of a long-tail project plan.

Agile also allows you to work iteratively on a project that is never finished i.e. SaaS, which is not possible with waterfall.


I think this is half of it. From my experience, the other half is that "agile" is a wonderful excuse for managers and product managers not to have to plan anything or commit to any priorities, and for software companies not to hire project managers. I don't think this is particularly agile's fault; no process can fix broken organizations, and most tech organizations are broken. Luckily the magic money machine papers over grotesque inefficiencies so we can all be gainfully employed without having to become real professionals!


The best reference I have so far is from 1970[0].

Waterfall is page 2. Reading past pg 2 you will see attributes of agile and various techniques described. The one thing that comes to mind is the blind men and the elephant[1].

Quoting from the wikipedia link (which quotes another):

O how they cling and wrangle, some who claim

For preacher and monk the honored name!

For, quarreling, each to his view they cling.

Such folk see only one side of a thing.

[0] http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970... [1] https://en.wikipedia.org/wiki/Blind_men_and_an_elephant


If it helps you can think of it like this: Scrum is just Waterfall, but with faster cycles.

Traditional Waterfall(tm) is when you spend a year in analysis, a year in specifications, a year in requirement, a year in design...

And then you put out a piece of software that's already outdated and doesn't do what the customer wanted or needed. But the company got paid anyway, so off to the next project.

With Scrum you go through the whole waterfall loop every 2-4 weeks and the customer and the team have a chance to amend any of the phases after every iteration.

This way there's less of a chance the product is completely outdated and unnecessary after the whole project (comprising multiple waterfall loops) is done.


>...And then you put out a piece of software that's already outdated and doesn't do what the customer wanted or needed.

Or software that was built on outdated foundations, like frameworks and providers which may have been "in-vogue" at the product conception, but deflated or advanced into maintenance-only stage by the product release.

Let's not forget that the competition is not sleeping meanwhile, so Waterfall is equally capable of pushing out half-cooked products just to be ahead or to meet obligations/expectations.


Exactly. I've seen this in military contracts where the rules are REALLY strict (and ancient).

Everyone on the project knows at one point that the resulting product will be outdated and useless, but neither side can call it quits. The one making the product needs to finish or they won't get paid, the one ordering said product can't cancel the project or they'll get sanctions according to the contract.

So they just go through the motions and produce something that'll never get used and is put straight into the bin.


On paper. In reality, Scrum is waterfall with a daily standup for management to keep track of progress.


> There is a natural process that every issue goes through: Analysis, Specifications, Requirement, Design .. then the Developer takes in the materials of each of these steps, does an Implementation and Testing phase .. then the issue has to be validated/verified with a Qualification step, and then it gets released to the end user/product owner, who sign off on it.

I've never worked at a place where all these things actually happened. Everything was responsibility of the developer, who would complain about the absurdity of that.

Then management eventually introduced Scrum, as a way to excuse their behaviour. "We don't need an analyst, we use Scrum now."


Developers who fail to take responsibility for the full workflow, fail.

It's not "Agile's" fault, although this is often used to justify the failure.

Developers have got to realize that they are responsible for the full workflow from beginning to end, and only poor/low-quality developers will work to change that natural law - with negative effect.


Imo analysis needs domain knowledge; the developer can do it on their own, but it won't be as good as when a domain expert does it. A developer doing their own QA will have the same blind spots as during development. And of course, if they also did the analysis, they'll have the same blind spots as during analysis.


Yes, waterfall is a team activity, as all software is necessarily a social service. Developers that fail to understand this - or indeed, resist it as part of their cultural identity - usually get taught this lesson hard in the form of failed projects.


> Developers have got to realize that they are responsible for the full workflow from beginning to end

That is not what the org chart says.


In my 30 years of software development, everything has been a combination of waterfall and agile, and projects that attempt to adhere strictly to one or the other end up failing.


Here's one way to explain it:

The generalization is that waterfall is a form of organization-level open loop control and scrum is a form of organization-level closed loop control of your process. In one case you set a goal and then move towards it (and "damn the torpedoes"). In the other case you stop and look where you're going and adjust course at regular intervals, iteratively (but "are we there yet?").

In real-time applications, open loop control only works well in narrowly defined situations, but nevertheless can be very useful to get somewhere in a definite amount of time. Closed loop control tends to be more robust in that it is rather more certain to get you to the target in a wider range of situations and in the presence of noise. The downside is that it can be harder to determine when you'll get there.

This way of looking at it, you immediately see the balancing act going on.

To connect this to what you're saying specifically: if you do the Analysis, Specifications, Requirement, Design, Implementation, Testing, Qualification, Signoff just once, your product-objective fit will only be so good. (This is where waterfall stops and calls it a day. It's just a very long "day".)

Once a change is signed off; review it in production for a while, take lessons learned, and reapply them to a new Analysis step (and move forward through the steps again).

After the second sign off, in real world practice you're likely to have a much better product market fit than the first iteration.

Wash rinse repeat, after the third sign off, you'll find an even better fit.

This much should stand to reason. You combine your original plan with everything you've learned. Then take the new plan and add in everything you've learned, etc. How could your 5th, 6th or 7th, Nth version NOT be better than the first one?

This is closed loop control.

If I present it this way, it would seem like open loop is always going to be faster than closed loop. Well, in biology this is actually exploited as such: fast processes are often open loop.

Fortunately it turns out that if you're doing closed-loop anyway, the self-correcting properties of having a closed loop at all mean that you can get away with cruder/faster steps in the loop. As long as you keep the loop closed. This stands to reason, right? (And if it doesn't seem that way, it holds up to empirical testing.)

If you're in a hurry, it might be the case that it's ok to merely hit home close to the target, rather than bang on 100%. Despite some theoretical slowness, you might be able to get away with using a closed loop process with fewer/cruder iterations to get to your target more quickly in practice.

Now what a lot of cargo-cultists do in the case of open loop (and waterfall) is apply it to very long projects, but forget that the world might change while they're still working. Open loop works best for short, well-defined projects.

In the case of closed loop cargo cultists: They might end up taking cruder faster steps, but then don't actually close the loop by going back to the start and iterating. Now you have a very bad product.

In practice you'll need to tune your process to your own needs.
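The open/closed loop contrast can be condensed into a toy sketch (entirely my own illustration, with made-up numbers): the open-loop process executes its original plan blindly, while the closed-loop process re-measures a possibly moving target each iteration - the analogue of requirements changing mid-project.

```python
def open_loop(start, planned_target, steps):
    """Commit to a plan up front and never re-measure (waterfall)."""
    step = (planned_target - start) / steps
    pos = start
    for _ in range(steps):
        pos += step  # follow the original plan blindly
    return pos

def closed_loop(start, target_fn, steps):
    """Re-measure the (possibly moving) target every iteration (scrum-ish)."""
    pos = start
    for i in range(steps):
        error = target_fn(i) - pos   # find out where you actually are
        pos += 0.5 * error           # take a crude step toward the goal
    return pos

# The target moves halfway through: requirements changed mid-project.
moving_target = lambda i: 100 if i < 5 else 150

print(open_loop(0, 100, 10))              # 100.0 - hits the original, now-stale target
print(closed_loop(0, moving_target, 10))  # ends near 150, tracking the change
```

Note the trade-off the parent describes: each closed-loop step is cruder (a 50% correction, not a perfect one), yet the process still converges on the right answer because the loop keeps getting closed.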


This. So much this.

Especially this bit:

> In the case of closed loop cargo cultists: They might end up taking cruder faster steps, but then don't actually close the loop by going back to the start and iterating.

Agile isn't about having standups. It's about having fast feedback, so that you can react to what you have learned. If you don't have that, you don't have agile, no matter what process you have in place. (For that matter, any time you are trying to do agile by a rigidly defined process, that's an oxymoron.)


Maybe it is natural, but instinct isn't always best.

All you need to see is that it's obvious: if you don't have continuous design/stakeholder feedback, you can spend a lot of resources building the wrong thing. A lot of work is put into preventing this.

The cargo culting and snake oil sales exist, but that's just a fact of life, not exclusive to agile methods.


Planning beyond 90 days out and pretending the plan has any hope of accuracy, that new information won’t change the plans (or design) and that the plan you started with will still even be beneficial when you finish. Brittleness.

Dependency planning has to happen of course. That’s not unique to waterfall, it’s just planning.


I agree with this.

But why not waterfall out 90 days then? Or 75 or something


"Waterfall out 90 days" is not what is derided as Waterfall. The derided form of Waterfall is for large applications and commitments, 90 days is small in context. That's the prototype phase of a project that Waterfall may be applied to.

The basic model of Waterfall is, indeed, fine if you iterate, which then isn't Waterfall. The basic model + iteration + shorter timeframes (probably under 90 days, certainly under 180) is basically just the same plan-do-check-act cycle from Deming (and others). You want it to be shorter so you can respond to new information (including the results of the partially developed system). Waterfall, the derided form, doesn't give you feedback until final delivery, or maybe the phase called "FQT" (final qualification testing). Importantly, until the customer is re-involved (which may happen in FQT or in delivery) you don't get revalidation of the system.

System engineers learned and applied this as far back as the 1950s, at least. No serious large scale system uses Waterfall and reliably succeeds.


You’re basically describing Scaled Agile. They have big planning meetings every 8-12 weeks (called PI Planning) where devs make a plan based on priorities, work out dependencies and get agreement with the business side of the house about the plan itself.

In my opinion, it’s the ideal balance of concerns.


> I've never quite understood, over 30 years of experience in software development, what problems Waterfall presented that warranted a complete refactor...

Because layers of management who need to continually justify their existence can never, ever keep themselves from putting their big fat thumb in the pie and fucking everything up.


I have about one third of the experience you have, but still I ended up in a place that was doing something very close to the mythical waterfall, and it was a horrible system.

We started with a one year long release plan, with a defined set of features that were supposed to make it in. A few weeks were spent defining functional specs for the features, negotiating them with PMs, and finally handing them over to the QA department (which was entirely separate). Then, dev work would start on all these features, while QA would come up with a series of Acceptance tests, Feature Tests, System Tests, Final Tests, Automation test plans etc. - all based on the FS.

Timelines and Gantt charts would be produced, deciding how to fit the features and required testing into the 1 year timeline.

After many months of dev work, the first builds would start making their way to the QA department, who would start tentatively running the Acceptance tests for the features that were supposed to be delivered. Some rapid iteration on any bugs affecting the Acceptance tests would happen here. Once the Acceptance test pass rate was high enough for the feature, QA would start working on Feature Testing, logging Bugs, while the dev teams would move on to other features. Occasionally QA or other devs would complain about blocking bugs that would have to be addressed ASAP, but otherwise bugs would pile up.

This would continue for all features, with dev teams keeping an eye on the total bug counts, trying to make sure they would stay below limits that had been set in a Bug plan (typically this meant keeping below 200-300 bugs for a team of ~5-6 people).

Finally, all features would be delivered to QA and passed acceptance, reaching the Feature Complete milestone. At this time, the dev team would start work in earnest on bugs, while the QA teams would continue with Feature Testing, logging new bugs.

This would continue for 1-3 months typically, until the Feature Testing would reach a decent pass rate, reaching the Feature Test Complete milestone. QA would now start on Performance testing and Early Final Testing. Often this would be the first time multiple features would truly start being used together. Dev teams would still be on full bug fixing mode.

When bug counts were low enough and Early Final Testing had a good pass rate, the product would reach the Code Chill stage - only important and safe bugs would be allowed to be fixed anymore. At this time, Final Testing would start in earnest, with the most complex cross-functional tests. The dev team would be finalizing the remaining bugs, until Code Freeze was reached - the vast majority of testing done, and only critical issues being allowed to be considered for fixing.

Of course, during this while, deviations from the schedule devised initially would be monitored constantly, with pressure coming not only from upper management, but also between QA and dev, as delays on the dev side would easily translate to delays on the QA side.

Customer feedback on previously released versions of the product, or new opportunities, would be very hard to squeeze in, causing panic and chaos in all the well laid plans. More often than not, it would be left to next year's release, meaning the average time between an opportunity and a release containing a feature to address that opportunity would be ~1.5 years.

Dev and QA would often be at odds, blaming each other for schedule slips, complaining about silly bugs, trying to pass the buck.

Of course, overall the whole thing worked, but slowly moving to a more Agile working model was a massive relief to everyone and created much better team cohesion, much less pressure on deadlines and deferred features (in the past, a feature getting kicked out of the current release meant it would only come one entire year later at best), and a much better product. Huge issues in design would no longer be treated as bugs, discovered weeks after the initial delivery of a feature. Tests would be designed by dev and QA together, often being run iteratively days after the code is written, leading to much better quality and corner cases getting caught early, not late. Process adjustments are now local team decisions, whereas before they required VP level involvement and agreement and coordination between all teams.


I noticed this as well. No matter how dedicated the team is to not doing waterfall, waterfall nearly always creeps its way back when anything that even smells like a deadline rears its head.

Waterfall planning tends to get ramped up for five really common reasons:

* Managers who simply cannot conceive of non-waterfall planning.

* As a response to failure (ironic coz it tends to increase failure rate)

* Because it's basically the job of most middle management to do waterfall planning.

* Because if your managers manager wants deadlines, tough shit it's deadlines all the way down.

* Because ultimately, upper management tends to prioritize control over profit, even if that control is somewhat illusory.

Waterfall planning is kind of infectious, too - if the team does it in one area it spreads to others.

I would say that 90% of teams experience this kind of "waterfall whiplash".

I worked on a couple of teams where upper management didn't set deadlines or create roadmaps or anything. These rare occasions were the only times when I was entirely free of waterfall, and we were way, way more productive because of it. Like night and day.

However, 90% of companies can't even envision this, let alone do it. Most people don't even believe it's possible - developers included.

I suspect this is one component of the secret sauce of high performing startups that defeat wealthier, more established incumbents.


Waterfall creeps in because it's the most rational and linear way to do anything and we do it in all activities is our life without even thinking about it.

When you build a deck, you 'Waterfall' it.

Iterative development is slightly counter intuitive, until you write software for long enough, then it's definitely the intuitive approach.

I suggest the simplest, easiest, best way to combat the urge to overplan is to break the waterfall down into iterations, do those iterations, assign risk to different iterations given the unknowns, and expect change.


I actually started to notice, after being a developer for a decade, that a lot of people run face first into a metaphorical wall applying waterfall to their personal life, where it didn't really work as well.

It's definitely "natural" though. I wonder if it's a cultural hangover from our agricultural past where lack of waterfall planning = dead.


It's not our 'past', it's our 'present'. 97% of our projects in real life are amenable to Waterfall.


I started working prior to Agile becoming a thing but wasn't around for the entirety of the 1990s. I have seen both waterfall and agile products fail or succeed. I don't think the process is ever to blame when they failed or deserves much credit when they succeeded.

My take on it is both processes have their strengths and weaknesses and perhaps a product needs to start Agile and slowly morph towards Waterfall as it matures.

Agile is great for the pre-1.0 phase when no one is sure what exactly the market needs yet and you're working on a software product with low cost to acquire customers. If you're iterating over different ideas trying to triangulate what is going to sell, and you don't need to scale yet before you figure out if you can sell it, then Agile has a great role to play. This is kind of risky startup time for sure though.

But when your product has become huge and mature, and you're signing $1M+ annual contracts with enterprise customers and your new features are getting very complex, Agile doesn't work as well: the tech debt it creates becomes more and more punishing as the project grows, and the complex features become more and more difficult to fit into sprints without causing disasters. If a feature takes 6 months, it can be a disaster to force it into 2-week sprints that just build tech debt in the name of being able to demo.

I've been on the current product I work on for about 8 years and have watched it morph from a pre-1.0 Agile product to a $500M+/yr complex product. We've been lucky to have good management and the process has gradually changed as the needs of the product have changed.

My career has mostly been enterprise B2B software... the best process for B2C, B2B, aerospace, military, etc.. is not necessarily going to be the same.

One thing for sure though is Agile has largely become a religion and less so of a pragmatic process as time has gone on.


Fully agree with this despite a much shorter career. Waterfalling a young project still in its exploratory phase leads to wasted work. Forcing agile onto a very mature project leads to tech debt and shoddy work.

A big part of the problem is when people have been indoctrinated into one of the various denominations of the Agile cult and try to force it upon every organization, product, project, or team they encounter. And of course it goes without saying that almost nobody follows “real agile”, since in most companies it’s just a way for management to micromanage and randomize the tasks of individual engineers so middle management can look at pretty burn-down charts.


What I've found while working in defense is that when your customer issues a statement of work with clear high level requirements then you're going to need waterfall (even if just at a high level).


Yeah, I think this is uncontroversial. If your problem domain generates strict, well-defined requirements, agile has less to offer.

https://www.fastcompany.com/28121/they-write-right-stuff

There might still be some useful agile practices, and you could have the less defined work (eg internal tooling?) be more agile.


I'm just wrapping up an Agile project, truly agile, and it was actually quite fun. At the start they tried to get us to use Scrum and 2 week sprints and I pushed back. We had basically no fixed meetings except a weekly sync and a couple of weekly meetings with stakeholders.

In the first 2 weeks we were producing analysis of whether we should do the project at all, and my week has been pretty much 20% meetings with product since then, and it has been fantastic. We used JIRA but only as a log of tasks; sometimes we were asked to report up some more detailed estimates, so we made a rough Gantt-chart-type plan, but otherwise we've been left alone. We've done the project in a way that truly matches the agile manifesto. And we're only a week or so past the original estimate on a 3-month project, so pretty on-track.

It just reinforces for me that Business Agile is a load of crap[0].

[0]: https://d22qefpbdavdoz.cloudfront.net/#agile-is-hell


I find that waterfall comes with technical certainty.

Waterfall is disastrous with erroneous technical certainty: if the customer is convinced the solution is technically clear and yet there is still uncertainty, what will happen is that a number of good components will be developed and delivered, all of which are useless.

If you can communicate (you must) the uncertainty and the potential branch points in your project then one of two things happens:

- it stops (often) because the client does not have any appetite for risk. This is painful, but it is better than the car crash of a failed waterfall (which is what is going to happen). The opportunity cost for the team and you is very high here - kill it now and get something viable.

- you go to agile and make it stick.

As soon as you are certain about the solution the design is frozen and you can plan effectively for delivery. This is now waterfall and everyone is happy.


A version of this is Scaled Agile (SAFe) which somehow manages to combine the worst of both worlds. If you see your organization preparing to go SAFe, run!


A major French bank where I'm currently working has implemented this method for its development teams... While we do have deadlines, there is so little pressure compared to the other teams I've worked with before that we actually have the luxury of building our features and being genuinely satisfied with what we're delivering.


Last place I worked tried to do SAFe (I think? They kept repeating it and had all the branding on the presentations). To me it seemed like the idea was to replace a dedicated project manager (or team of them) with a week-long meetingfest every 8 weeks where all the different teams tried to self-organize based on what the business' objectives were.

Apart from the colossal waste of time of those week-long "PI-planning" meetings, it was OK I guess...


SAFe seems to be a pyramid scheme to get you to pay for the latest SAFe badge. It's overly complex and seems to exist to make management feel better about itself.


I misread your comment and thought you wrote "best of both worlds" and embarked on a rebuttal! One of the places I am working at is now ditching SAFe, because they have recognised the issues. The process as implemented there both burns vast amounts of time and is widely ignored. OUTSTANDING!


For a taste of SAFe, here's a 2018 GDC talk about Bungie using SAFe and "big room planning" to develop Destiny 2: https://www.youtube.com/watch?v=ndPyhgorOKY


SAFe works just fine when Agile is perfected, i.e. about 5 years after every day practicing of Agile. If you have no Agile to begin with, then you cannot scale it.


Even the name is hilarious. It's like one of those cryptocurrencies that try to be sellable with a pump&dumpy name, like SafeMoon, haha.


There are even companies selling scrumfall consultancy under the SAFe name.

In my experience, there are a few problems with agile.

- scrumfall, which is mostly caused by stakeholders wanting yearly roadmaps and KPIs tied to the delivery of increments. Plenty of organizations have separate engineering and product orgs that create this disconnect.

- groomorgy, when teams spend more time on analysis than actual work. Driven by middle managers thinking that doing less is better than not finishing the sprint on time - middle managers who do not understand tech or business but can track the progress of tasks.

- cost-amnesia. When a feature is built over many increments, it's hard to estimate its total cost and long-term maintenance cost. You might have teams re-inventing the wheel and a massive amount of slowly rotting code.

- conway-architecture. Because people think that agile is all about small independent teams, it naturally leads to service-oriented architecture. In many places, SOA is a bad fit for the problem or the organization. A small company with limited resources doing micro-somethings will create a death trap.

- teflon-promotions. You will have teams and projects doing terribly over a longer period, even though in the initial phase they delivered value. You will see people promoted in the organization based on early wins, even if they left dead bodies behind. It often encourages tech debt over long-term maintenance.

I still feel that agile is great. Most of these problems are people problems, not agile itself.


In 2015 Dave Thomas, whose name is on the Agile Manifesto, gave a talk named “Agile is dead - long live agile”, addressing the intent behind the manifesto and the Agile Industrial Complex / cult-like movements which spawned from that.

https://youtu.be/vqz8ND-N1hc


I've watched this video a few times, and I recommend the part at 25:43 which describes agility. To repeat it here:

1) Find out where you are

2) Take a small step toward your goal

3) Adjust your understanding based on what you've learned

4) Repeat

I think this is useful as a high-level guide, and it's both easily memorizable and generally memorable. I use a silly mnemonic "FI-TA-AD-RE", and I come back to it whenever I'm working and get to a point where I have to ask myself "What the hell am I doing?"


The extremes were bad, not the essence of the idea. Stay lightweight and you can have the best of everything.

When anything is non-trivial to implement, i.e. it has dependencies, then it benefits from _some_ waterfall planning. Meaning to think before the work begins about the approach being taken, all of those dependencies, and who you need to engage and when... tada, you've made a GANTT chart in your head and are now waterfall because you put some thought into the dependencies and the order of execution.

Within that, each epic / task can be Scrum or agile or whatever... all you really need here is an idea of a "definition of done", and to prioritise the work so that you hyper-focus on just doing what needs to be done to get you to done. You probably want to continuously remind yourself of those priorities... tada, you've created an agile process.

The names are irrelevant; understand the essence of project / program management and do the least you can do to execute on that.

It's when the processes become formal, tooled up, etc... this is when the worst creeps in. The status meetings for middle management, the reporting of time to project management tools. But no-one has to do this, lightweight project management can work without any of this stuff.


The "extremes" are good in their appropriate extreme niches: spacecraft flight control software is definitely engineering and has to be perfect the first time (no exploration or iteration), while the only prudent and efficient way to publish a website on a day's notice is a rushed MVP followed by as many small edits and additions as fit in the available time.

In the middle of the spectrum there are varied tasks. For these, an excess of waterfall can force the creation of low-value documents (e.g. hundreds of pages of up-to-date screenshots of complex forms, instead of describing in detail only the interesting data and workflows and honestly stating that the UI layout will be arbitrarily reorganized according to stakeholder feedback after every demo until they are happy), while an excess of agility can be an obstacle to correct design (e.g. publishing a mock, incomplete version of some service without taking a couple of days to determine whether the data needed to actually implement it is going to be available).


> then it benefits from _some_ waterfall planning. Meaning to think before the work begins about the approach being taken, all of those dependencies, and who you need to engage and when... tada, you've made a GANTT chart in your head and are now waterfall because you put some thought into the dependencies and the order of execution.

Agile does not mean that you should forego all of this and code first, think later. What you describe does not oppose Agile and Waterfall.


I know :)

But people did take agile to an extreme and they did forego the essence of other processes.

I feel that the line "everything in moderation" applies well to project management. Moderation being the key part.


> We had elaborated waterfall processes like Rational Unified Process (RUP) in the 90s

This is factually wrong. RUP is an example of iterative process, not waterfall at all.


I see as the key point of this post the following point:

> I believe we urgently need a new process for working remote.

I do believe the same. Working remotely forces agile processes to be adjusted. Information radiators are no longer the same. I can no longer notice that 3 guys have been arguing in their corner for an hour. I can no longer notice that 1 guy is struggling alone with his compiler. It's more difficult to agree on a design at the whiteboard. The pace of our days has changed too, with stricter alternation between meetings and individual activities.

We need other channels, more written communication, and a bit more planning. In no case a whole up-front plan. Definitely not. But the minimum of planning that helps to cope with the distance and the changed pace.


For a business stakeholder to understand, and clearly state, business requirements and roadmaps to IT is a big ask in many organizations - especially where the business is evolving in response to customers, the economy, or random changes (a pandemic, for example), and where the average business stakeholder is probably just trying to get through some crisis. As I type this, it makes me realize what a shit-show it is, and justifiably so - i.e. welcome to real life. It's messy in its natural state, because you don't control your customer; you can't dictate to your customer, or the economy, or anyone outside your company for that matter - that's almost an invariant, and is legitimately the dog that should wag the tail.

One thought that comes up is that messy is ideally suited for agile - maybe if everyone pulls together, i.e. sees things the same way (without things being rammed down). I had an experience recently when I emerged from the mess, switched jobs, and went into a much more 'orderly' (read: large/glacial) organization, and I realized how effective messy agile is: no scrum masters, almost no managers, your Kanban and sprint board as the only work record (plus Confluence), no Word documents, absolutely no PPTs. It just worked. I did hate some parts when I was in it, but those were social aspects. Taken by itself: we shipped, as a flat team, and everyone knew everything; we were agile. This was what it took to ship. In those uncomfortable 18 months, I'd made the transition from waterfall to agile.

Work items have to be brutally meaningful and direct, and self-organized, without any "let's do an AI project or a neat dashboard". So you need capable business leadership and capable tech leadership - both of whom should be able to cut through jargon, frameworks, bleeding-edge tech toys, etc. - and just set priorities. The engineering team executes on those in a self-organized way.

As for scrum and SAFe agile - if it has a certification, that's self-serving with some vested interest, causing much overhead. You either need technical engineering people or product owners; otherwise it just leads to org bloat.


In my mind, agile is about avoiding committing to a particular direction before verifying that it's the right one, and in a lot of cases (particularly in the startup world), verification requires shipping to potential customers.

Agility doesn't promise fastest time to delivery. It's acknowledging that you'll probably build the wrong thing, so it's about making it easier to course-correct.

The failure state of agile (waterfall hidden behind agile ceremony) is usually triggered by leadership/management teams not willing to _truly_ delegate. Once process becomes dominated by accountability and visibility (which I've never seen not happen with Scrum), rather than supporting the people doing the work, all bets are off.


Most project models are sort of useless in the modern office environment.

Waterfall is useless because nobody ever follows it. I agree that it’s sort of the “default” mode, but show me a project that didn’t go back and change something from a previous step.

Agile stops working the moment you need to sign any form of contract with anyone, because nobody is going to sign a contract that doesn’t tell them what they are going to get for X amount of money. Its processes are sort of fine on the smaller scale because it breaks project tasks into neat to-do-lists, but it’s utterly useless for actual project management because estimates are a lie and delivering on time is what is required to drive a project successfully.

Then there is everything in between, from the various stage-gate models to UP and what not, and they are all equally useless, because nobody ever really follows them and because they can't really apply to every type of project, which is what organisations tend to want.

I haven't worked in a FAANG or whatever you call the big American tech companies, but I have worked in the European public sector and its affiliated private-sector companies for years, and nobody has a magic project model in my experience. In many ways the most successful companies tend to be the ones who simply use some sort of Kanban board and refrain from giving estimates in intervals lower than weeks.

I fully understand why organisations waste resources on this area, of course; I've been in the trap myself many, many times, and I'll likely end up in it again. But the simple truth is that every project manager or business-process person you have is one less developer, and these helper functions don't really add anything to your value chain the moment they start making up work for the sake of having the "right" processes in place. The best way to handle project management is to hire good developers who know how to deliver projects that are built to be safe and maintainable. That's not easy, but no amount of project management is going to make up for the lack of them.

And I'm not saying you don't need project managers or an organisational strategy, but if you're hiring more support staff for your developers than you would for the tradesmen you'd hire to build your house, then I think you're doing management wrong.

We built an entire city hall, a process that involved almost a thousand different employees from very, very different lines of work, and we did it with 1 project manager, 1 architect and 10 lead engineers. We did it on time and under budget, and that is how we build our IT projects as well. It's sort of waterfall, because it's what we default to, like the author gets into, but maybe that's because it works?


> Waterfall is useless because nobody ever follows it. I agree that it’s sort of the “default” mode, but show me a project that didn’t go back and change something from a previous step.

The issue is not whether they change something from a previous step, but when and how often this "checkpoint" happens. You want an example of waterfall? The SLS: https://en.wikipedia.org/wiki/Space_Launch_System - in development since 2011, and it will most probably get scrapped once it's finished, because the market situation has changed completely since then.


> The best way to handle project management is to hire good developers who know how to deliver projects that are build safely and maintainable.

The best way to do that is to train your senior devs to act as part time PMs.

In an environmental consulting environment (not software) all projects except for really big, multiyear government contracts were managed by engineers that were the project leads. That is literally what "lead" meant - the person who led (managed) the project. We only had one actual project manager, and he was as much sales/contract development as anything else.

I was a junior engineer, and I still managed small projects. That included costing, tracking hours/labor, and making sure that deliverables were complete.

Then when I started in software development/operations, I had to deal with non-technical PMs and wasted tons of time explaining why what they wanted was either not possible as specified, or otherwise a waste of time.

IMO, if your PM is not a technical person with expertise in the area of your project, you are wasting money by paying their salary. Get your lead devs to manage your projects as part of their professional progression. Non-technical PMs are pretty useless.


> Agile stops working the moment you need to sign any form of contract with anyone, because nobody is going to sign a contract that doesn’t tell them what they are going to get for X amount of money.

Plenty of companies sign contracts like that, and it's not particularly controversial. Pay $X, get Y developers to work for T amount of time, with no guarantees what those developers are going to deliver.


Which companies? I’ve never seen anyone willing to do that in decades of contracting.

Maybe it’s just different here in Denmark, but people never buy things here without knowing what they buy.


I'm not at liberty to disclose company names, but I live in Finland and I can tell you it's normal here. It's basically equivalent to "renting" workers. I would be willing to bet that "rent-a-worker" industry exists in Denmark as well, and that it also includes IT work.


But rent-a-worker is small scale. It is not signing a multi-million-euro contract.

Consultants are a thing here, and they may be the closest companies get to not knowing what they buy.

I mean, I’m not saying that the other way works. All our major IT systems are delayed and over budget, but there are detailed and outlined requirements that aren’t delivered.


I've seen deals where a company sells a pack of consultants to a multi-year project with a price tag of over 1M euros. I don't know if this happens when the scale is 10M+, but it does happen at 1M scale.


> Agile stops working the moment you need to sign any form of contract with anyone, because nobody is going to sign a contract that doesn’t tell them what they are going to get for X amount of money. Its processes are sort of fine on the smaller scale because it breaks project tasks into neat to-do-lists, but it’s utterly useless for actual project management because estimates are a lie and delivering on time is what is required to drive a project successfully.

It's totally normal to do software development contracts with agile processes, and it's well understood how to structure them. Estimates also don't suddenly get better just because they're fixed at the start, and at the same time agile doesn't mean "there is no plan where this is going at all".


They have different purposes and should both be used, but not on top of each other.

Agile should always be toward application development and the life cycle of rinse repeat improvements on code.

Waterfall should always be used to track initiatives goals with completed deadlines. Things that can be seen to a completed state.

Agile is about micro managing. Waterfall is about progress tracking.

It is about using the tool for the right job not trying to do everything with one tool. You can open a bottle with a screwdriver but should you?


Agile, MBA for, "I would like it immediately regardless of consequences for others."


I am ok doing waterfall - what I can't stand is doing waterfall dressed as agile&scrum.

Agile & scrum are internally inconsistent methodologies, which means that a bad-faith actor (be it a smart-but-asshole engineer or a clueless boss) can always point out that you are doing something wrong (against the theory), regardless of the flavour/school you are using.


Waterfall is just a mapping of the dependency graph back into work-packages.

As all things are hierarchical in some way, large changes and additions always lead to larger changes all over the graph, making it necessary to go all the steps back.

Change something large enough, and the process forces you back to the architecture drawing board.


People or teams managing software development workflows usually silently tailor them to fit the reality of their constraints, often outside of Scrum and Agile (despite claiming otherwise in job descriptions or developer posts)! And that's OK! There is nothing wrong with taking the best from waterfall or Scrum, or you name it, and adding a pinch of personal/novel ideas when a precept from an existing framework doesn't apply out of the box. These concepts are decades old; new tech and new constraints have arisen. People have changed. New answers are needed, or at least worth a shot. It's also OK to re-appropriate out-of-fashion concepts when those are a better fit.

The key, in my view, is to remain open to new or old ideas and to interpret and adapt dogma rather than blindly follow it.


Title describes my current job. Basically a thin agile wrapper around a broken waterfall process with gargantuan releases plagued with heisenbugs and shoddy QA. It's fun but I'm still not sure if it's a viable project or a death march.


No matter what process we adopt, we'll always have waterfall as long as the sales side of the house needs predictable features and release dates. There isn't really any way around that - if we can say with confidence that we'll release features X, Y, and Z in a release 6 months from now, then we don't have room for more than localised iteration.

Agile-ish systems work great when you don't have pre-determined delivery dates, but unless you are an internal-facing team with only a backlog of tech-debt to burn down... who is actually in this position?


Another conclusion from the same input is that product owners in the Scrum world are just waste. In a lot of teams I worked with POs are just mediators between stakeholders and the developers.


Well, if you look closely, Scrum is a waterfall system. "Classical" waterfall just works on larger chunks - releases or even whole projects. Scrum focuses on "sprints" - small chunks of functionality - but still applies a tiered, waterfall approach: you start a sprint with planning, move to development, run quality controls and deploy.

You should see waterfalls in your Scrum. The only question is whether the chunk size you apply the waterfall on is appropriate


Yup, when people (semi-blindly?) adopt a process but not the principles, this happens. There are tons of fallacies in product development on multiple sides... and sometimes the trenches between disciplines are too deep. These are hard to fix.

I've written a short one about some typical fallacies: https://leadership.garden/product-development-fallacies/


>> Waterfall kills iterations because everything needs to be finished the first time for waterfall to work.

Well, if you look at some presentations on waterfall, you see the big V, and there are loops where testing feedback goes back to the left side. What happens is that A) some slides don't show those loops, and B) even where the loops are present, the V gets flattened vertically into a timeline by some manager and the loops are lost.


Agile tries to create dynamic output with a typically static input (developer salaries), so there will always be drift towards a natural state.


When I read in this article "but we build things that no one wanted in the first place or no longer wanted", then I know that this guy has not a clue what he is talking about, but just wants to parrot buzzwords.

Otherwise, I agree that waterfall and scrum both suck!


> Efficiency is the key driver for waterfall. Why work on things together as a team?

It is often forgotten that the efficiency of the individual is not the same thing as the efficiency of the team. Understand this, and the case for waterfall is weakened, perhaps fatally.


In any line of work, things happen because a bunch of people who know what they are doing - and get along reasonably well together - are for some reason motivated to make those things happen. The rest is noise.


It strikes me as impossible that no one realized that projects needed to adjust to unforeseen events before a bunch of consultants decided to write a manifesto while skiing at Snowbird.


If you read the original waterfall methodology descriptions, even waterfall isn’t supposed to be what waterfall turns into in organizations.


"Scrumfall" sounds like a rejected James Bond movie script.


We call it "Scrumerfall"


In my few years working as a dev, I am absolutely convinced that scrum and agile are the worst possible development practices you can have from the perspective of a developer, even though they are sold as 'developer centric' or that seems to be the impression a lot of people have at least.

It makes no sense at all to me to constantly have work interrupted with customer meetings that produce NO concrete specifications because everything is expected to be done just-in-time, on a whim essentially. It seems to nearly universally lead to dev whiplash, poor product architecture and large amounts of technical debt.

Many companies also seem to end up employing tens of people who don't actually have any role at the company - they just show up and 'moderate' these 'ritual' meetings. I'm sorry, what exactly is the value-add of having a person with no understanding of either development or product run meetings that center around product development? It just seems like a scam to create jobs for people with otherwise useless business degrees.

I'll take however many weeks of blistering specification, followed by diligently developing against hard deadlines, over the insanity of rolling technical debt and borderline-abusive micromanagement that agile methods all but enforce. /rant


Ahhhh, I couldn’t have put this better myself. Agile is forcing devs to change how they work to solve a problem caused by poor product and business management and all the problems it creates are swept under the rug as “technical debt”


Scrum and Agile (note: capitalised) can be very painful. Badly moderated "scrum days" are a time sink. Two rounds of planning in order to start a sprint are common. Hours spent grooming infinite backlogs. An entire class of bullshit jobs is created. And all of your work is modelled on a five-year Gantt chart, effectively rendering it waterfall with more meetings.

But it is not inherently so. It can be quite simple. It can be bloated. It's usually somewhere in the middle. At the very least, since Scrum became a thing, continuous integration, integrated teams, and simplified version-control flows became a thing too, since every sprint is "a releasable increment".

While every waterfall project I ever worked on (granted, at the beginning of my career) felt inherently shite. A cascade of dependencies. This means that every delay has a cascading effect on the duration of the entire project. This means that early decisions are next to immutable later on. You're pulling requirements out of your arse because you have nothing concrete to validate them against. You only know if it works towards the very end, and until then the product is unreleasable. And I guarantee you, requirements will change. Corona happens. A CTO quits and the new one wants something else. .NET Core is released and now you're looking at that upgrade. You learn something new but you can't adapt.

Ultimately, bad orgs fostering bad methods build bad products. But please don't pretend to know everything upfront, because you don't.


> In my few years working as a dev, I am absolutely convinced that scrum and agile are the worst possible development practices you can have from the perspective of a developer, even though they are sold as 'developer centric' or that seems to be the impression a lot of people have at least.

33 years for me, and Agile is by far the worst thing to happen to software development (and developers) that I've seen. That, and open-plan offices.


It makes a lot of sense, which is why a lot of people follow it. Your comment is implying that somehow 1000s of people around the world don't realise they are doing something stupid.

You are portraying a poorly run implementation of agile as the only way and then destroying it. We call it a straw-man.

* My development team is rarely interrupted with customer meetings

* The meetings we have will always have outcomes, even if those are high-level "would this be possible?"

* Everything is not expected to be done "just-in-time" or "on a whim"; it is simply a case of allowing the highest priorities to be looked into in much shorter cycles.

* "Dev whiplash". Nope, not here. Poor architecture and tech debt? I haven't seen any more in the agile projects I have worked on compared to old waterfall projects.

* Tens of people who don't have any role. Nope

* Non-dev/product running meetings? Nope. Shouldn't happen and if it does, it's nothing to do with agile (or waterfall!)

* Borderline abusive?


> Your comment is implying that somehow 1000s of people around the world don't realise they are doing something stupid.

Agile and scrum have now been around long enough that large numbers of devs have never known anything else. So yes, they don't realize they're doing something stupid.


> You are portraying a poorly run implementation of agile as the only way and then destroying it. We call it a straw-man.

Every time there's any criticism of agile, people inevitably show up and say "well, you're just doing it wrong". Maybe there's an issue with the methodology if it's so easy to get wrong, and happens so frequently?

> Your comment is implying that somehow 1000s of people around the world don't realise they are doing something stupid.

If you read any thread on agile, I think plenty of people know they're doing something stupid...


I've seen it done well, so it can be done.

Plenty of people have seen waterfall done badly.

Maybe it's so easy to get software development wrong. It's not just agile.


I'm sure "It makes a lot of sense, which is why a lot of people follow it. Your comment is implying that somehow 1000s of people around the world don't realise they are doing something stupid." is exactly the comment that agile proponents heard when suggesting that waterfall may not be the right thing.

(not arguing for one or the other, just pointing out that this argument does not mean anything).


being agile does not mean "not having a plan", it means having a flexible plan and adapting the plan as things change.


The sooner people realize that agile and scrum are sales pitches with little substance the better off everyone will be.


I don't know if I agree with that. Rather than have every team on earth invent their own process from first principles, they provide adequate starting points on which to grow. Agile, Scrum and Kanban aren't perfect but they can get a team started. Good teams will take them as templates and tweak as needed.


If you look at Kanban specifically, you can tell if it's been implemented well because it's not just affecting the team. It's a tool to improve the entire value stream.


Even that is a bastardized version compared to what Toyota developed. The reality is that software engineers are not factory production-line workers. Also, kanban-for-software ignores the fact that cars are completely spec'd out (aka waterfall) by the time a car is ready to be built on a production line.


Yes, and a vast amount of the design detail that goes into car design is to make building it easier.

You're right that software engineers aren't factory production-line workers, but I'm not sure it actually matters. The only detail I can see as relevant to whether kanban can work is the distribution of how long each piece of work takes, and while you'd expect the distributions to differ between "bolt lump A onto tab B" and "add feature X to website Y", I don't think they'd be different enough to completely break the queueing model. I'm not sure it matters that each piece of work through the team is unique, versus the completely repeatable part-assembly stereotype of factory work.

The main issue I see with kanban as a software development idiom is that in every place I've worked, supply cannot be effectively constrained, except artificially, at the gate into the team. There is always more work to do than time to do it in. Now part of that is a cultural problem to do with politics and the place of the CIO (or equivalent) in a typical company, but it definitely feels like kanban needs to be driven from outside the tech org to work well.
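To make the queueing-model point concrete, here's a toy simulation (entirely my own sketch, with an assumed exponential distribution of task sizes - not anything from Toyota or a kanban tool). A team splits one day of total effort evenly across everything in progress; per Little's law, throughput stays the same regardless of the WIP limit, but lead time per task balloons as WIP grows:

```python
import random

def avg_lead_time(wip_limit, n_tasks=500, seed=1):
    """Average days from start to finish per task, for a team that
    splits one day of total effort evenly across items in progress."""
    random.seed(seed)
    sizes = [random.expovariate(1 / 3.0) for _ in range(n_tasks)]  # ~3 days of work each
    backlog = list(range(n_tasks))
    in_progress = {}  # task id -> [start_day, remaining_work]
    lead_times = []
    day = 0
    while backlog or in_progress:
        while backlog and len(in_progress) < wip_limit:  # pull up to the WIP limit
            t = backlog.pop(0)
            in_progress[t] = [day, sizes[t]]
        effort = 1.0 / len(in_progress)  # one day of team effort, shared
        for t in list(in_progress):
            in_progress[t][1] -= effort
            if in_progress[t][1] <= 0:
                lead_times.append(day + 1 - in_progress[t][0])
                del in_progress[t]
        day += 1
    return sum(lead_times) / len(lead_times)

# Total elapsed time is the same either way, but avg_lead_time(2)
# comes out far smaller than avg_lead_time(50).
```

It deliberately ignores blocked work, unplanned arrivals, and varying team size - the things the unconstrained-supply problem above is really about - but it shows why limiting WIP is the whole point of the board.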


"You can't produce a baby in one month by getting nine women pregnant” --Warren Buffett (b. 1930)


Waterfall was never that bad anyway. Agile doesn't magically make people more productive. It only solves the problem of overrunning deadlines by pretending they're not important. If anything it puts needless pressure on devs who ideally should be free to think of solutions at depth. Sprints only make sense in that a team should be hustling in the early days. As the software matures, I don't think it's healthy to strive for squeezing your devs for every last drop of productivity.


> As the software matures, I don't think it's healthy to strive for squeezing your devs for every last drop of productivity.

This is why I hate agile, it's incredibly stressful. Daily stand ups to justify your last 24 hours, affirmations in said stand ups which must begin with "Yesterday I committed to... and did/did not achieve this because..." followed by "By this time tomorrow I commit to delivering....".

JIRA burn down charts dropped in our group chat to "remind us" to keep following the planned burn down line, looking for stories we can close out to try and sneak the chart to match the line and somehow try and catch up after they're closed. It's horrible.


All you describe is agile not implemented correctly (which is very common). I am a contractor and switch projects and development environments very often. I've seen agile working really well when it is properly implemented, and really poorly when it is wrongly implemented. The biggest problem I see is that people never really get educated in agile. I see PMs/POs/Developers etc. that never had any formal training in agile/Scrumm (books, courses, experience) but rather learn-by-observation, i.e. "do what others do". And I have seen scrum masters with zero experience as developers, just blindly trying to follow written guidelines in the books (scrum manual etc.) without being able to adapt those to a real development process.

Take for example your mention of standups: they are really valuable, and are not meant to "justify" your last 24h - but some managers think that this is the case, i.e. they use them to make sure developers are not slacking and surfing Hacker News all day. Instead, one of the purposes is to detect blockages of any kind. If a dev takes very long on a task without justification (i.e. no other developer would understand why it takes so long), it might be due to a lack of skill, and the dev might need more training or senior assistance. It could also be that the task is more difficult than originally anticipated, in which case the team should act on it and change the scope etc. Sometimes it could also be a very eager junior dev who just doesn't want to admit to struggling, out of pride or even fear. As a developer I can totally say it takes quite some guts to come out in a group of devs and freely admit: "Hey, I struggle with this task, I don't know what I am supposed to do, or I'm not familiar enough to debug this code - can somebody help me?" Admitting that you are not perfect is something a lot of people struggle with, especially juniors, as they think they must be capable of doing everything.


> I see PMs/POs/Developers etc. that never had any formal training in agile/Scrumm (books, courses, experience) but rather learn-by-observation, i.e. "do what others do".

Ahh, good old cargo cult Scrum.

I've seen a bunch of this when consulting. People use all the right terms, but nothing else really makes sense. There is a "daily standup" done sitting down and it lasts up to 45 minutes. There are "sprints" that last from 2 to 12 weeks with no deliverables. There is grooming, which is just chatting and maybe looking at Jira a bit.

Scrum can be done right, but it requires the WHOLE organisation to be trained to do it properly. Preferably by a very very very expensive Scrum Consultant so that even the C-staff put some weight in their words. (expensive == good)


> "Yesterday I committed to... and did/did not achieve this because..." "By this time tomorrow I commit to delivering....".

99% of the time this is an underhanded way to implement micromanagement.


None of that is a requirement for agile processes. (Indeed, I would argue that unless you have decided that's the best tool for your team, you're probably not being very agile. And no, when the management of a big co dictates how you work, they can call it "agile" as often as they want; it isn't. Of course it's still extremely common, sadly.)


That is not agile, it is Scrum.


> Agile doesn't magically make people more productive

AFAIK not only was that never a claim of agile, but rather the opposite is deliberately accepted: you accept quite some overhead during development and plan for constant refactors, shifts of scope etc. The justification is that while waterfall is in theory the most efficient and productive, in practice you very often end up with a product that doesn't fit your customers'/users' needs. So working 2 years on a product that is thrown away in the end and started over is less efficient and more expensive than working 3 years on it in an agile way and getting (mostly) what you want in the end.

Note that this is one of the rationales for agile; I am not claiming that these statements and premises have actually been proven true in the real world.


Agile (if the culture incentivises honesty) does have the benefit of feedback. Rather than a black box which "could" be done in a month, you can instead see that the team on average under-estimates by 10 days and has x stories left, so it'll likely be done in 2-3 months at this rate.
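That kind of projection is just back-of-envelope arithmetic over observed velocity. A minimal sketch (all the numbers and names here are illustrative, not from any real tracker):

```python
import math

def projected_sprints(stories_left: int, avg_done_per_sprint: float) -> int:
    """Round up: a partially used sprint still costs a whole sprint."""
    return math.ceil(stories_left / avg_done_per_sprint)

# Suppose the last three sprints closed 4, 6 and 5 stories.
history = [4, 6, 5]
velocity = sum(history) / len(history)  # 5.0 stories per sprint

remaining = 23
sprints = projected_sprints(remaining, velocity)
weeks = sprints * 2  # assuming two-week sprints

print(f"~{sprints} sprints (~{weeks} weeks) at current velocity")
```

Of course the output is only as honest as the inputs: if the culture punishes slow sprints, the velocity numbers get gamed and the projection is worthless.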

The downsides are that it opens up the team to feature creep, introduces a pile of weird buzzwords, and can be massively wasteful if the team doesn't want it and it's forced on them.


> Agile (if the culture incentivises honesty) does have the benefit of feedback.

Doesn't Waterfall incorporate feedback? In my memory and experience, it does.

My memory of learning Waterfall back in the early 90s is a bit hazy, but I distinctly remember that one of the advantages touted was that mistakes earlier in the process of developing software are cheaper to fix than mistakes later in the process (no matter what the process is).

Finding an error when drawing up the requirements is orders of magnitudes cheaper to fix than finding an error after piloting at the client.

As I remember it, Waterfall was taught (to me) as a way to avoid developing the wrong product, or a product that does not meet the requirements of the end-user (sound familiar?)

My SDLC textbooks from back then were filled with different ways to elicit requirements, because fixing broken requirements after the product was developed was (and probably still is) bloody expensive.

In the same way, fixing bugs during a test phase is a lot cheaper than after a deployment phase (hence various types of testing were introduced by the textbook).

Fixing bugs during a pilot phase is a lot cheaper than after a full deployment, hence piloting was in the textbook as well.

Agile aims to deliver features one sprint at a time. Waterfall aims to deliver the specified system as a whole or not at all. Which one works better is probably contextual, but I cannot see either one being unconditionally better than the other.


Yes, though as originally documented the Waterfall process didn’t have much to say about iteration. It was simply left to the reader to realize that they would be releasing version 2.0 of their software a year or two later, and that they could incorporate feedback from the users of version 1.0.

Most of the data about costs is quite limited too. A lot of the numbers that people quote come from the 60s, specifically a project to develop software for a ground–to–air missile. Certainly in that project fixing a bug after deployment would be very expensive, since it would probably require you to visit all the military bases where the missiles were deployed, disassemble them to some degree, and swap out a ROM chip. These days we can deploy a product with one command. If you find a bug tomorrow, you can fix it and run the command again. These days the only cost to fixing a bug after deployment is the revenue you lost due to the bug, and that might be minimal too.


> Most of the data about costs is quite limited too. A lot of the numbers that people quote come from the 60s, specifically a project to develop software for a ground–to–air missile.

Well that makes sense with my belief that WF aims to "deliver the system as a whole or not at all". There's no point in delivering an MVP G2A missile system that is not complete.

> Certainly in that project fixing a bug after deployment would be very expensive, since it would probably require you to visit all the military bases where the missiles were deployed, disassemble them to some degree, and swap out a ROM chip. These days we can deploy a product with one command. If you find a bug tomorrow, you can fix it and run the command again. These days the only cost to fixing a bug after deployment is the revenue you lost due to the bug, and that might be minimal too.

To be sure, CI/CD pipelines make the fixing of bugs in the code cheap enough to simply deploy when you can. However, bugs in the specification aren't going to be cheaply fixed after deployment, and these are much more common[1] and hard to get correct than any other type of bug.

[1] I.e. the code does exactly what the programmer intended it to, but what the programmer intended it to do is different to what was needed.


>> Most of the data about costs is quite limited too. A lot of the numbers that people quote come from the 60s, specifically a project to develop software for a ground–to–air missile.

>Well that makes sense with my belief that WF aims to "deliver the system as a whole or not at all". There's no point in delivering an MVP G2A missile system that is not complete.

True! :)

Plus, the customer knew pretty much what they wanted from the start. Not much chance they watch the demo and then ask if it can be mounted to an airplane…

> To be sure, CI/CD pipelines make the fixing of bugs in the code cheap enough to simply deploy when you can. However, bugs in the specification aren't going to be cheaply fixed after deployment, and these are much more common[1] and hard to get correct than any other type of bug.

Yes, though in the agile model the idea is that the spec is just what the customer asked for two weeks ago (or whatever your sprint length is), after seeing how the program worked at that time. If there’s a misunderstanding and you correctly implemented the wrong thing, then the cost is at most the two weeks you spent on it.


There is no such quick feedback.

You spend months dreaming up a design spec with plenty of timing diagrams, UML diagrams, classes, etc.

Then it's reviewed, which means people read it and try to make comments on it, then everyone pats themselves on the back because it's been 'signed off'.

Then you start actually implementing it and you quickly find out that, actually... And that's not even accounting for any changes in requirements that might have been requested in the meantime.

Feedback means actual, hard feedback, be it from the customer or reality.

Errors get more and more expensive to fix the further down the line they are found and that's exactly why hard feedback (which usually is either actual tests or customer feedback) should be obtained ASAP, which can be achieved through iterations.


> You spend months dreaming up a design spec with plenty of timing diagrams, UML diagrams, classes, etc.

>

> Then it's reviewed, which means people read it and try to make comments on it, then everyone pats themselves on the back because it's been 'signed off'

I'm sure that happens, but I'm not commenting on what happens, I'm commenting on the Waterfall process as I remember it being taught in the 90s. What you say above is definitely not what was taught.

> Feedback means actual, hard feedback, be it from the customer or reality.

Yeah, which is what the Waterfall process that I remember advocated: that the end-user be involved at all times, that the design be refined iteratively and quickly, and that the requirements be validated with the end-user.

I'm not sure where this misconception of what Waterfall advocates came from, but you're not the only one with it.


It's not a misconception, it's the way it is, including in the paper that has been quoted in this thread.

For instance, "Step 2: document the design" advocates extensive documentation upfront.

Of course there is feedback, but the loop is very large and slow, and the paper emphasises that feedback ought "to be confined to successive steps", which, in addition, is problematic when requirements change mid-way.

Agile is trying to solve a real issue with the waterfall model in general and, especially, when requirements are fluid.


> Doesn't Waterfall incorporate feedback? In my memory and experience, it does.

No, by definition. Do you see a stream of water back to the beginning of waterfall in any waterfall on the planet? Nope.


Read the original paper [1]. By definition, treating the steps as iterative is a recommended practice.

[1] http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970...


If I read the paper correctly, the iterative approach is considered wasteful, and a two-stage waterfall is recommended instead: documentation and planning in the first stage, to find weak points, then execution of the plan. IMHO, it was designed for times when computer time was expensive.

I did a few projects in waterfall style about 20+ years ago, but my memories are faint. Our PM used a UML modeling tool to model classes and relationships, then exported it into Java. We wrote a lot of useless documentation. I developed a good habit to write documentation immediately, when memory is fresh, to avoid the pain of writing it later.


I don't get the point of Figure 4 in the .pdf; I see that and all I hear is process, process, process - don't skip a step even when it makes no sense.

You can actually tell that the methodology was developed for a different time period, when you would have to schedule time in order to run your program on the mainframe, instead of just hitting build on your machine.


The point of figure 4 is to admit that the neat step-to-step iterations in fig. 3 don't always happen. Sometimes testing reveals flaws that are bigger than the preceding coding step, and you have to back up to design or even requirements. But once you back up, you still have to go through the remaining steps without skipping them.

Ex: Testing reveals a corner-case that was never accounted for in the design. You can't just re-code, you have to go back up and redesign, then code the new design, then retest.

Royce's recommendations to minimize these problems are a) more and better design, and b) more and better documentation

In contrast, the Agile approach is to try to slice the work into finer and finer tasks, so these same activities can span a few sprints. It works in the small, but loses the forest for the trees.


> No, by definition. Do you see a stream of water back to the beginning of waterfall in any waterfall on the planet? Nope.

I'm not referring to real waterfalls[1], I'm referring to the Waterfall Software Development Process, which is an iterative process that requires almost constant feedback.

[1] Real waterfalls certainly do have the water returning to the top of the fall, only it's not as a stream and it's not immediate :-)


Exactly, each waterfall phase has its own iteration loop.


> Waterfall was never that bad anyway.

If you’ve never seen a project that wasted its entire time in the scoping and planning phase (and not produce anything), then you’ve never seen how bad waterfall can get.


Just to make a counter point, if you never spent your days in planning poker meetings, endlessly discussing hundreds of tiny scraps of yellow paper stuck to a white board (and never produce anything), you've never seen how bad Scrum can get.


I don't think it's any issue of individual raw productivity. It's about not wasting that work by focussing on building exactly what the customer wants as fast as possible.

With pure waterfall you spend a lot of time writing and reviewing specs, then you spend a lot of time coding, then tests, then deliver. This means a longer lead time to working software and to any sort of feedback, and usually a waste of time in the docs/specs phase. Even trying to write a whole perfect spec before doing any coding prevents you from benefiting from the practical feedback that comes from actually doing.

For some types of software this works mostly fine, but for others not so much. Hence why most methodologies since then have feedback loops and try to make them small.

IMHO sprints are not to pressure (though the name may not be well chosen in that respect) but to enforce small-ish iterations rather than spending 6 months developing something in isolation.


> With pure waterfall you spend a lot of time writing and reviewing specs, then you spend a lot of time coding, then tests, then deliver.

(see my reply above, for some context and why I might be wrong :-).

My recollection of Waterfall is that there is always an "acceptance" phase at the end of every "fall"[1] - IOW, there is a sign-off by the stakeholders that what's in the phase is correct.

So your requirements are engraved in stone, your test protocol is engraved in stone, and your UAT is engraved in stone, and your "pilot performance acceptance" is engraved in stone.

If it turns out there is an error in any one of the phases above then that error, and that error alone, is signed off on and goes through all the phases (or "falls"?) again. Since this is more expensive the further in the falls the error is, there is pressure to get as much sign-off as possible for any issue before that issue proceeds to the next phase.

[1] I like the word "fall" to describe each phase of waterfall :-) I'm going to start using it more often and see if it catches on.


Waterfall is a good, predictable process. It's just that the result is bad. If the result is not important, then use waterfall.


"Waterfall is a good, predictable process"

No, Waterfall is bad precisely because of the inherent unpredictability of a given project.

Never in all of history has a large software project been specced out and built on time.

Software lives in a dynamic environment, so the process has to be mutable.

Maybe the answer is not 'Agile', but it's definitely 'agility'.

If we set longer-term, rounded goals, but with smaller iterations, then there's a possibility - but not a big Waterfall.


Scrum is just a bunch of small waterfalls. In Scrum, we have 2-week sprints. It's not important when one of the sprints fails, because we will adjust the following one. Moreover, we plan for the first 2 sprints to fail anyway.

Why should we use something else, more complex, instead of Waterfall for a single sprint? Waterfall is good enough and predictable enough for a 2-week sprint.

Yes, the result of a single waterfall will be bad, we plan for that, but accumulation of knowledge will lead us to success.



