Getting Machine Learning Projects Unstuck

Impact
Productivity

27 April, 2021

Philippe Girolami

VP of Engineering at Upflow

Philippe Girolami, VP of Engineering for Data Engineering and Machine Learning at Dailymotion, discusses the non-engineering factors that determine whether machine learning projects succeed.

Problem

One of my teams does machine learning, and we have had our share of stuck projects despite having amazing ML experts on the team and company-wide alignment on the need for ML. Many of the factors that determine the success rate of ML projects are not engineering problems at all.

Actions taken

The first way one of our ML projects got stuck was through a lack of discussion with Product. If a few key questions aren't addressed upfront, people weave a net of assumptions and never dissect the consequences of choosing one metric or solution over another.

So the first action to take is to talk to the PM for whom we would be delivering features and start asking questions. Here are some that arose from our cases; there are surely others that apply to yours:

  • What constraints apply to the product the ML project is for? There are always constraints you can't wiggle out of, whether they are legal, business, or product constraints. If you throw data at a computer, it will find the shortest path, which is not always acceptable.
  • What is the success metric? Everyone should be crystal clear about what we are trying to improve or optimize.
  • What mistakes and error rates are acceptable, given that an ML model will always make mistakes, and what should we do about them?

Once everyone agrees on success metrics, engineers can either use them directly, to drive both how they train the model and how they measure its performance, or they have to find a proxy for them. That opens a number of other discussions, but at least nothing is taken for granted.

The answers about acceptable errors shape the kind of solution we come up with. They give engineers an understanding of the tradeoffs and of possible alternative solutions. In fact, if something is exceedingly important and errors cannot be tolerated, machine learning may not be the right solution in the first place; a human should do the job instead. In some cases, we can accept false positives (the system says something is wrong, but it isn't) but not false negatives (the system says everything is fine, but it isn't). Obviously, no one accepts the same kind of errors from a chest X-ray classifier as from a recommendation system.
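The asymmetry between false positives and false negatives becomes concrete when you tune a decision threshold. A minimal sketch, with purely illustrative scores and labels (none of this is from the actual project): a strict threshold misses real problems, while a lenient one trades those misses for false alarms, which the product may decide it can live with.

```python
# Hypothetical model scores (probability that something is wrong) and
# ground-truth labels (True = actually wrong). Illustrative data only.
scores = [0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [True, True, False, True, True, False, False, False]

def confusion(threshold):
    """Count false positives (alarm raised, nothing wrong) and
    false negatives (no alarm, but something was wrong)."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    return fp, fn

# A strict threshold produces few false alarms but misses real problems...
print(confusion(0.60))  # -> (1, 2): 1 false positive, 2 false negatives
# ...while a lenient one eliminates the misses at the cost of more alarms.
print(confusion(0.25))  # -> (2, 0): 2 false positives, 0 false negatives
```

Which of those two operating points is acceptable is exactly the kind of question only Product can answer, not the model.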

The next step is to understand how to remediate the mistakes that are not acceptable. The three most common approaches are to reduce the errors enough to live with them, to add a human in the loop, or to build a second model. These discussions are often missing, and that is the main cause of projects getting stuck.
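The human-in-the-loop option can be as simple as routing low-confidence predictions to a review queue instead of acting on them automatically. A minimal sketch, where the function name and threshold are hypothetical illustrations, not the project's actual design:

```python
def route(prediction, confidence, auto_threshold=0.9):
    """Act on the model's output only when it is confident enough;
    otherwise hand the case to a human reviewer."""
    if confidence >= auto_threshold:
        return ("auto", prediction)          # model decides on its own
    return ("human_review", prediction)      # queued for a person

print(route("flag", 0.97))  # -> ('auto', 'flag')
print(route("flag", 0.62))  # -> ('human_review', 'flag')
```

The threshold itself then becomes a product decision: it sets how much review workload the team accepts in exchange for fewer unassisted mistakes.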

The second example of a stuck ML project simply required clarifying the roles of the people involved. Machine learning is a new field, and there can be a lot of confusion about roles and responsibilities. In this particular case, the confusion was between the ML engineer building the model, the product analyst, the data engineer, and the PM.

Lessons learned

  • There is nothing obvious about delivering an ML capability to production. It takes learning from everyone involved. You will most likely trip over something, but you have to learn how to get up and adapt. Discussing constraints is part of that learning and differs for every single project.
  • Be very clear about what you expect different people on the ML team to do. Demarcate clearly the responsibilities of the ML engineer and the data scientist/analyst. These two roles are somewhat fuzzy, and what they encompass largely depends on the company.
