Balancing Waterfall Hardware Roadmaps with Agile Software Development
28 April, 2020
I joined a driverless robo-taxi company in my first major product role. When you're building a startup from the ground up, you don't start by picking a project methodology. You're focused on building something, not on how you're building it. We set organization-level goals and milestones based on upcoming demos. In other words, the near-term goal (successful demos) and the long-term goal (building a driverless car) were very clear. It was everything in between that wasn't well-defined.
We had more of an agile approach early on - we held sprint meetings at the team level and then we would have combined meetings to sync across teams. However, we ran into two big problems.
First, when we started, we were a software-only company. When we began working on hardware, it was very much hacking things together, and that team functioned somewhat separately. We knew this wasn't going to be scalable or get us to the required level of functional safety, but that was a problem "for later." The software team would just try to work with whatever hardware was available during their sprint, so things were not as coordinated as they should have been.
Second, we got acquired by a large automotive supplier. They initially said that we would be an independent subsidiary and didn’t want to affect the company’s innovative spirit, but you could see we were being pushed in a different direction after a while.
Our team had a bunch of people who came from the technology world, not the automotive world. Bringing these two worlds together took a while. Combined with our shift toward a more methodical approach to hardware, it started to become clear that the agile development process by itself wasn't going to work anymore.
Finally, while we had an agile approach to development, we did not have an agile approach to collecting user feedback. Initially, we believed that because we were in such early days, feedback from our own team and engineers would be sufficient to reach some baseline level of performance. When you're building, getting to the point where internal engineers can't point out major flaws is a big deal. It's not a true MVP, but it functions like one in the sense that you can start getting external user feedback on the product. Demos were great for providing continual proof points, but our pilots' feedback didn't flow directly into the development process. A few of us simply took insights and fed them in indirectly.
After much effort on my part and a few others', we officially formed the product organization in 2018, about a year and a half after we were acquired. Within this product organization, we decided to systematize our hardware development using a mostly waterfall methodology, though iteration loops were built into each release version as we learned more about what users wanted. This way, everyone on the software side could code against the same hardware release versions and figure out what they wanted to learn from each update.
It is not possible to be as agile on the hardware side, so we scheduled the required hardware iterations before actually doing them. Waterfall requires much more planning up front. You may not know a priori what each iteration will focus on, but you can buffer in the time and start thinking about it, which is something we hadn't done before. On the software side, development could be more agile within the hardware timeframes set out in the waterfall.
When you're talking about autonomous vehicle hardware, a lot of it is driven by safety-critical systems and their certifications. For non-safety-critical features, we just had to work within those timeframes. However, anytime one of those critical milestones was set, you wanted to get your code in for testing, because the final test of any development is always how it works in a real-world environment. You can't replicate everything in your test systems. Once a version is functionally certified, you can't change the release, so these were hard deadlines.
It wasn't until we created the product organization that we were able to collect user feedback more directly and methodically. We did this through the creation of a product development process. The company that had acquired us was running a public pilot with Lyft, but the channels of feedback were indirect. We knew a lot about the final product, but because we were all in different functions, some of us, like myself, thought about it much more than others. We needed to start from the top in terms of what we wanted to build, find the open gaps in our current product understanding, and translate those into specific tests in public pilot testing and focus group testing.
Before, most of our tests were reactive and driven more by engineers than by users; the product team flipped this on its head. The development process became clear once we framed it in terms of the final product we were aiming for. It became clear to everyone that everything flows from our users, so we needed to put their needs first (for us, this took the form of a product requirements document) and derive our engineering focuses and priorities from there.
We as an organization decided that we needed to move in this user-driven waterfall direction, but not everyone liked it. Some developers who had had their way for most of the life of the company didn't like no longer getting to dictate development. This ultimately wasn't a huge issue, but the new method needed to be introduced gradually. We picked a direction for the company to move in, but we didn't snap our fingers and get there overnight. It took a lot of buy-in across the company, and it was a painful year. We also had to integrate with the company that acquired us, but it was clear afterward that we were in a much better place. Most people stayed and ended up bought into the new way of doing things.