Highlights

  • Building AI products & services is not a walk in the park. Projects with apparently good business alignment, a brilliant technical team, and a Product Owner experienced in developing software end up dying on the shore with nothing to show after months and months of work. In the best-case scenario, the project is canceled and its components reassigned to a new task. In the worst case (nobody wants to admit defeat), the project remains in a zombie state: alive in theory, but starved of resources and budget, and plagued by low morale. Even the projects that get to the finish line do so after battling for many months. If you’ve been there, you know it is not fun. (View Highlight)
  • “Building software has never been easier!”. But the data is stubborn: according to a 2022 Forbes article, somewhere between 60% and 80% of AI projects fail. “What is your project success rate?” is my favorite question when I meet colleagues from other companies at meetups & conferences. (View Highlight)
  • “But failure is not intrinsically bad! Aren’t we supposed to fail in order to learn?” you might say. You’re not wrong, but in this case, you kind of are. What I’m talking about is the bad kind of failure, the one you never want to suffer. Nothing is learned as your pretty model sits unused. (View Highlight)
  • As Jonathan Smart describes in “Sooner Safer Happier: Antipatterns and Patterns for Business Agility” (a great book, by the way), every time you are building a “new” thing, there are no best practices. Just “good practices” (things you should strive to do) and “bad practices” (things you should stay away from as if your life depended on it). (View Highlight)
  • Let me be clear: thinking about AI in terms of software is a very sane approach. The first generation of Data Scientists were mathematicians, physicists, etc.: people who knew how to program but struggled with the realities of pushing code in a corporate setting. (View Highlight)
  • However, no Data Scientist worth her salt is going to completely accept this software-only view: “Yes, we build software, but it is a very particular type of software! We are dealing with intrinsically higher levels of uncertainty! This is more complicated!”. (View Highlight)
  • The argument goes like this: at the beginning of a project, you just don’t know if you have the data to build the model you want to build. There is an additional layer of experimentation, as the data may not have enough signal. Or maybe it does, but you just cannot find the right model. On top of that, even when you succeed in calibrating a model, the resulting artifact is a fragile piece of code. Software and maintenance have gone together since the beginning of time, but AI models require a lot more: the world might literally change while you are sleeping, leaving your model no longer performant. (View Highlight)
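The “world changes while you sleep” point is what practitioners call drift, and the extra maintenance it demands can be sketched in a few lines. This is my own minimal illustration, not code from the article; the function names and the threshold value are invented for the example:

```python
# Minimal drift check (illustrative sketch): compare a live feature's
# distribution against the training baseline, and flag the model for
# retraining when the live mean has moved too far from the baseline.

def drift_score(baseline, live):
    """Shift of the live mean from the baseline mean, in baseline std units."""
    mean_b = sum(baseline) / len(baseline)
    mean_l = sum(live) / len(live)
    var_b = sum((x - mean_b) ** 2 for x in baseline) / len(baseline)
    std_b = var_b ** 0.5 or 1.0  # guard against a zero-variance baseline
    return abs(mean_l - mean_b) / std_b

def needs_retraining(baseline, live, threshold=0.5):
    """True when the feature has drifted beyond the (arbitrary) threshold."""
    return drift_score(baseline, live) > threshold

# Example: the live inputs have shifted well away from what the model saw
# during training, so the check fires.
training_ages = [30, 32, 35, 31, 33, 34]
live_ages = [48, 50, 52, 49, 51, 53]
print(needs_retraining(training_ages, live_ages))  # True
```

Real systems use richer statistics (population stability index, KS tests) over many features, but the shape is the same: ordinary software breaks when its code changes, whereas a model can break when only the world does.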
  • The reality is that projects fail for a variety of reasons. Some fail because the data is just not there. Or the data is there, but you don’t really understand it. Some (many, in fact) fail because you are building the wrong thing. In some cases you have built the right thing, but integrations took forever and your clients lost faith. And yes, some fail because what you’re trying to model is simply too complex. Just know that this doesn’t happen as often as you think it does. (View Highlight)