The Role of Experience in the AI Era
LLMs tend to be overconfident when judging whether a task is feasible. Because they don’t really reason (they mostly infer likely context from patterns in their training data), most of what they “know” is how things are supposed to work. But the ability to estimate what is actually possible comes from experience, not from examples. Knowing the underlying science is essential, but rarer still is the ability to take a problem, estimate the best way to solve it, build a plan to tackle it with ML, and confidently execute that plan. That skill usually comes only after several overly ambitious projects and missed deadlines.
How do you decide what to focus on in an ML project? You need to find the impact bottleneck: the part of the pipeline that would deliver the most value if improved. When working with companies, I often find that they’re not working on the right problem, or that they’re not yet at the growth stage where that problem matters. There are often issues around the model, but the best way to find them is to temporarily replace the model with something simple and debug the entire pipeline end to end. Very often the real issue isn’t model accuracy at all; in many cases the product is dead even if the model works perfectly.
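To make the “replace the model with something simple” step concrete, here is a minimal sketch, assuming a small text-classification task. The names (MajorityClassBaseline, run_pipeline, the example fields) are illustrative placeholders, not a real library API. The point is that everything downstream of the model keeps running, so any failure you hit is a pipeline problem, not an accuracy problem.

```python
from collections import Counter


class MajorityClassBaseline:
    """Stand-in model: always predicts the most common training label."""

    def fit(self, inputs, labels):
        self.majority_label = Counter(labels).most_common(1)[0][0]
        return self

    def predict(self, inputs):
        return [self.majority_label for _ in inputs]


def run_pipeline(model, examples):
    """Run the full pipeline (data loading, model, evaluation) end to end."""
    inputs = [ex["input"] for ex in examples]
    labels = [ex["label"] for ex in examples]
    model.fit(inputs, labels)
    predictions = model.predict(inputs)
    # Any crash or nonsense output here is a pipeline bug, not a model bug.
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
    return predictions, accuracy


if __name__ == "__main__":
    examples = [
        {"input": "order arrived late", "label": "negative"},
        {"input": "great support team", "label": "positive"},
        {"input": "love the new update", "label": "positive"},
    ]
    _, accuracy = run_pipeline(MajorityClassBaseline(), examples)
    print(f"Baseline accuracy: {accuracy:.2f}")
```

Once the baseline runs cleanly through the whole product, swapping a real model back in becomes a one-line change, and you know any remaining gap is genuinely about the model.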
Once you have the whole pipeline, how do you identify the impact bottleneck? For each candidate, imagine it is fully solved and ask yourself: would the improvement be worth the effort it takes to get there? It’s also incredibly valuable to manually inspect your model’s inputs and outputs. Scroll through a bunch of examples and see if anything looks strange. My department head at IBM had a mantra: do something manually for an hour before doing any real engineering work.

Curious whether your project is achievable with ML? Or whether ML is even needed in the first place? Contact me; I’m happy to help you find the right direction.
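As a concrete starting point for that hour of manual work, here is a minimal sketch of inspecting inputs and outputs side by side. It assumes the same illustrative example format as above; inspect_sample and the predict callable are hypothetical names, not part of any particular framework.

```python
import random


def inspect_sample(examples, predict, n=20, seed=0):
    """Print n random (input, prediction, label) triples for manual review."""
    rng = random.Random(seed)
    sample = rng.sample(examples, min(n, len(examples)))
    for i, ex in enumerate(sample, start=1):
        prediction = predict(ex["input"])
        # Flag disagreements so your eye goes to the suspicious cases first.
        flag = "" if prediction == ex["label"] else "  <-- look closer"
        print(f"[{i}] input:      {ex['input']}")
        print(f"    prediction: {prediction}")
        print(f"    label:      {ex['label']}{flag}")
        print()


if __name__ == "__main__":
    examples = [
        {"input": "order arrived late", "label": "negative"},
        {"input": "great support team", "label": "positive"},
    ]
    # A trivial stand-in predictor; replace with your model's predict function.
    inspect_sample(examples, predict=lambda text: "positive", n=2)
```

Reading a few dozen of these triples usually surfaces label noise, broken preprocessing, or mismatched expectations long before any metric would.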