On AI Product Adoption
How do you know if your AI feature isn’t useful or if users just haven’t discovered that it’s useful?
If you’re building an AI app or feature and it’s not getting much usage, it’s very hard to know why. It could be that your product genuinely isn't useful (option A) or users just haven't discovered that it's useful yet (option B). You can't really know it's option A until you've extensively tried to rule out option B.
There's a lot that goes into ruling out option B. Users need to know the product or feature exists, and they need to be reminded it exists. Furthermore, when they have a task or problem, they need to think of your feature as a potential solution and go use it. And when they do, it needs to work well. If all that happens, they'll have their "oh wow" moment and come back to it.
If you're building a new AI feature into an existing product, remember that people are used to using the product in a specific way and probably have been for a long time. So low usage up front doesn't tell you much on its own. It could be that the feature isn't that useful, or that users have forgotten about it, or that it's outside their normal flow, or that they tried it once and it didn't work for what they imagined.
Even looking at a product as successful as Cursor, it was around for a long time and was fundamentally the same product for much of that. It took a lot of bottom-up marketing via people having great experiences, sharing with others, and pushing others to try it before it caught on. Even when I first tried it, I wasn't sure I saw that much value. But over time, I'd remember to give it a try for things that seemed like they might be a good fit, and usually it did really well. As I did that more and more, I began to use it more and more. I'm now in a similar early, low-usage pattern with long-running agentic coding tools. Yet I'm confident it won't be long until I use them almost daily.
And that's for me, someone at the forefront of using this stuff. Your users, unless you're selling to engineers, probably aren't that. You have to remember that this technology and its capabilities are entirely new. Users may not even be aware of what it can do or when to use it, and it certainly may not be front of mind. So you really need to be on top of reminding them, making features visible in marketing, and educating them on what these features are capable of. Only once you've done all that and still see persistently low usage can you truly determine that the product just isn't useful.
There's risk in both giving up too early and not giving up early enough. Your feature might actually be useful and you just haven't made it discoverable enough. Or you might be beating a dead horse, applying technology because you saw something was possible and looking for a problem where there really wasn't one. You should try to be as embedded in your users' workflows as possible to understand their pain points; that's the best way to increase the odds of utility. But AI feature development is also sometimes a case of, "if you do everything, you will win." That's really what you have to do with AI products: try everything on the discoverability side, but try to do it fast.
It might also be the case that you're really just one slight change away from the product taking off. NotebookLM was a great example of this. It was around for a year, effectively the same product, and it wasn't until they added audio overviews that it really took off. Was that because audio overviews were particularly useful, or because they provided discoverability? Hard to say, but the point is that even small tweaks to existing products or features can totally change the equation. If they had given up too early, they never would have experienced this breakthrough.
Building successful AI products requires patience and persistence in equal measure, but also brutal honesty about what's actually working. The technology is so new that user adoption often lags behind actual utility. Assume low usage means a discoverability problem before assuming a lack of utility, and invest heavily in user education, feature visibility, and workflow integration. Your job is to bridge that gap through relentless focus on discoverability, while staying honest about when it's time to move on.