ai-implementation · b2b · chemical-industry · biotech · digital-transformation

AI Works in Demos. Here's Why It Fails in Your Business.

AI demos look amazing. Then implementation hits reality - compliance, legacy systems, resistant teams. Here's what actually works across chemicals, biotech, and B2B industries.

March 23, 2026 · 8 min read

Key Takeaways

  • The gap between "AI can do this" and "AI creates business value" is where most implementations die - across every industry
  • The hard problems aren't technical. They're about accountability, integration with existing systems, and getting people to actually use the tools
  • Companies that win at AI don't move fastest. They sequence adoption by organizational readiness
  • Implementation needs to put humans in the driving seat first, then gradually hand over control as trust builds

A few weeks ago, Harry Zumaque, who leads AI strategy at LANXESS, published a sharp piece about what Silicon Valley misses about the chemical industry. Around the same time, Dennis Hublitz at Azelis wrote about the digital transformation barriers he mapped back in 2018 and what's changed since.

Both made the same point: the hard part of AI was never the technology.

I've been thinking about this a lot because I see it from a different angle. I don't work inside one company. I work across multiple industries at once. As a fractional CCO for biotech and chemical companies, I handle the commercial side. Through opencream.ai, I build and run customized AI solutions for small to mid-sized B2B companies - including raw material startups, biotech ventures, and right now a damage assessment company. For larger enterprises in the chemical industry, I partner with Kimia.ai, who offer a purpose-built solution for that scale.

Same gap everywhere. The AI works. The implementation doesn't.

The people aren't the problem. The problem is that the way Silicon Valley talks about AI has almost nothing to do with how AI actually creates value in B2B.

The Demo Problem

Every week I see a new AI demo that looks incredible. Build an app in four hours. Generate a full marketing strategy in ten minutes. Analyze a thousand documents in seconds.

And every week I sit with B2B companies where none of that translates.

Not because the AI can't do the work. It can. But because in their world, the output doesn't go into a browser. It goes into a regulatory filing, through a customer qualification process, past a procurement team that's been doing things the same way for 15 years, and into a relationship that took a decade to build.

Kimia.ai, who I partner with for larger chemical companies, shared a story recently that stuck with me. A European chemical distributor - 400 million euros in revenue - spent six figures on an AI chatbot for their sales team. The technology worked perfectly. Nobody used it. Because the sales reps don't ask structured queries. They ask "What should I recommend?" And answering that requires combining chemistry, regulations, customer history, and commercial judgment all at once.

A biotech ingredient company I serve as fractional CCO had a similar story. They bought an AI tool for market analysis. Impressive outputs. But nobody trusted the data because it couldn't account for the relationship dynamics that drive 80% of their sales decisions.

A friend of mine, a senior executive at a very large German corporation, told me they're now tracking how often employees use their internal AI tool. Essentially forcing adoption through dashboard metrics. But measuring "how many times did you open the tool" tells you nothing about whether anyone got real value from it. It's like measuring sales growth by counting how many visit reports your reps enter into the CRM. Activity metrics without outcome metrics. The classic causality trap.

Different industries, different specifics. But the same pattern: the AI works, but people either don't use it or don't get real value from it.

"Just Start Using AI" Is Dangerous Advice

The most common advice in the AI world right now is: experiment. Spend an hour a day. Just start.

That's fine for individuals learning on their own time. It's dangerous for companies.

When a tech startup adopts AI badly, they ship a buggy feature and roll it back. When a chemical company adopts AI badly, they get a compliance violation. When a biotech company gets it wrong, they damage a customer relationship that took years to build.

The stakes are different. The approach needs to be different too.

I've watched companies panic after reading viral AI posts and throw money at "AI initiatives" without answering the basic questions first. Who is accountable when the AI makes a wrong recommendation? How does this fit into systems that were set up ten years ago? Does this use case actually improve the bottom line, or does it just look good in a board presentation?

Every failed AI project makes the next one harder. The team gets skeptical. Management pulls back. And the window to actually capture value keeps getting smaller.

What Actually Works (Across Every Industry I've Seen)

After implementing AI across chemicals, biotech, raw materials, and damage assessment, I keep seeing the same patterns that separate success from failure.

Start with the decision, not the technology. Which specific decision costs you the most when it's wrong? For a raw material supplier, it might be product recommendations. For a damage assessor, it's claim classification. For a biotech company, it's meeting preparation for long-cycle sales. Build AI around that one decision first. Not around everything.

Map how decisions actually flow before you build anything. Most companies skip this step. They buy the AI tool before they understand how their people actually make decisions. Then they wonder why nobody uses it. The AI needs to fit into the existing workflow, not replace it on day one.

Let people verify before you automate. This is the one that changed everything in our implementations. We're doing this right now with a damage assessment company at opencream.ai. The AI analyzes damage photos and generates reports. It could run fully automated from day one - the technology is there.

But we didn't do that. By design, we started with every main step requiring manual confirmation. The assessor sees what the AI decided, checks it against their expert knowledge, and hits confirm. They stay in the driving seat.

After 10 to 15 correct confirmations on the same type of task, that step becomes automatic. There's a visual counter so they can see the trust building in real time. If the AI gets something wrong during those first rounds, we correct it together. The assessor teaches the system.

Could we skip this and go faster? Absolutely. But the result is completely different. The team doesn't feel like AI was done to them. They feel like they trained it themselves. They remain the expert. The AI just gets faster at doing what they already approved.

That emotional piece - feeling in the driving seat - is what separates the 5% of AI implementations that stick from the 95% that get abandoned.
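The trust-building flow described above can be sketched as a small state machine: each task type keeps a streak of human-confirmed AI decisions, a correction resets the streak, and the step flips to automatic once the streak clears a threshold. This is a minimal illustration, not opencream.ai's actual code; the threshold value, the `record`/`is_automatic` names, and the reset-on-correction rule are all assumptions for the sake of the example.

```python
from dataclasses import dataclass, field

# Illustrative threshold; the article cites "10 to 15 correct confirmations".
AUTO_THRESHOLD = 12

@dataclass
class TrustCounter:
    """Tracks consecutive human confirmations per task type.

    Drives the visual counter the assessor sees: once a task type
    accumulates enough confirmed-correct AI decisions, that step no
    longer requires manual confirmation.
    """
    streak: dict = field(default_factory=dict)

    def record(self, task_type: str, ai_was_correct: bool) -> None:
        if ai_was_correct:
            self.streak[task_type] = self.streak.get(task_type, 0) + 1
        else:
            # A correction resets trust: the assessor re-teaches the system
            # before that step can become automatic.
            self.streak[task_type] = 0

    def is_automatic(self, task_type: str) -> bool:
        return self.streak.get(task_type, 0) >= AUTO_THRESHOLD
```

The design choice worth noting: trust is tracked per task type, not globally, so automating one step never silently automates another the team hasn't verified yet.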

Distinguish between what needs a copilot and what's ready for autopilot. Not every process is ready for full automation. Some tasks need a human reviewing AI suggestions (copilot). Others can run on their own after the trust-building phase (autopilot). The maturity of your data, the risk level of the decision, and the comfort level of your team all determine which phase a task belongs in. Getting this wrong in either direction kills adoption.
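The three factors above can be sketched as a simple routing rule. The 0-to-1 scores and thresholds here are illustrative assumptions, stand-ins for whatever assessment your organization actually runs; the point is only that risk alone can veto autopilot, regardless of how good the data is.

```python
def assign_phase(data_maturity: float, decision_risk: float,
                 team_comfort: float) -> str:
    """Route a task to 'copilot' (human reviews AI suggestions) or
    'autopilot' (runs on its own after the trust-building phase).

    All inputs are 0.0-1.0 scores; the cutoffs are illustrative.
    """
    if decision_risk > 0.5:
        return "copilot"  # high-stakes decisions keep a human in the loop
    if data_maturity < 0.7 or team_comfort < 0.7:
        return "copilot"  # immature data or low trust: keep human review
    return "autopilot"
```

So a low-risk task with mature data and a comfortable team qualifies for autopilot, while raising any one risk factor sends it back to copilot.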

The Expertise Window Is Closing

There's an urgency here that most companies underestimate.

The best people in these organizations - the ones with 20 or 30 years of domain knowledge - are getting closer to retirement. In chemicals, in biotech, in specialty manufacturing. The sales director who knows every customer's quirks. The formulator who can troubleshoot by instinct. The claims expert who spots fraud patterns nobody else sees.

That knowledge walks out the door a little more each year.

Companies that figure out how to capture this expertise and build it into their AI systems will pull ahead in ways that are hard to reverse. Companies that wait will lose institutional knowledge they can never rebuild.

This is why I built an agentic CRM for startup B2B raw material suppliers. Small teams of 5 to 15 people, selling specialty ingredients into sales cycles of 12 to 24 months. They need a system that captures what their best people know and makes it available to everyone. Not through a chatbot that searches documents, but through AI that actually remembers and reasons.

And this is why our partnership with Kimia.ai matters for larger chemical distributors and manufacturers. Same core problem, different scale. The expertise gap doesn't care about company size.

The Path Forward

Silicon Valley's vision of AI isn't wrong. Every business probably will have AI agents handling parts of their operations eventually. Zuckerberg talks about a world with more AI agents than people. That future is coming.

But the path from here to there doesn't run through "just experiment." It runs through your specific organizational reality.

Your compliance requirements, your legacy systems, the comfort level of your team, and the customer relationships you've built over decades.

The companies that get AI right won't be the fastest adopters. They'll be the ones who sequence adoption intelligently - starting with one high-value decision, building trust through human verification, and gradually expanding as the organization absorbs each step.

That's less exciting than "walk away for four hours and come back to finished software." But it's what actually works in the industries I operate in.

If you're a small to mid-sized B2B company trying to figure out where AI actually fits, that's what we do at opencream.ai - develop, implement, and execute customized AI solutions for your specific workflows. For larger chemical distributors and manufacturers, I partner with Kimia.ai, who build purpose-built chemical intelligence at that scale.

The acceleration isn't slowing down. But the implementation is where the value lives or dies.

Start with one decision. Build trust. Let the intelligence come.

FAQ

Why do most AI implementations fail?

Most failures aren't technical. The AI works fine. Implementations fail because companies skip the organizational groundwork - they don't map existing decision flows, they don't define accountability for AI outputs, and they don't give their teams enough time to build trust with the new tools. The result: expensive technology that nobody uses.

Is this only a chemical industry problem?

No. I see the identical pattern across every B2B industry I work in - biotech, raw materials, damage assessment. Any industry with long customer relationships, complex decisions, regulatory requirements, or institutional knowledge faces these same challenges. The specifics differ, but the gap between AI capability and AI adoption is universal.

How long does an AI implementation take?

It depends on complexity, but expect 8 to 16 weeks for a single workflow implementation, including the trust-building phase. Some companies want faster. They can go faster. But rushing past the human verification steps usually means the tool gets abandoned within three months.

What's the difference between opencream.ai and Kimia.ai?

opencream.ai develops, implements, and executes customized AI solutions for small to mid-sized B2B companies - startup raw material suppliers, niche champions, and specialized businesses. We build the solution around your specific workflows. Kimia.ai is a purpose-built chemical intelligence platform for larger enterprises - manufacturers and distributors with 100 million or more in revenue. I partner with them for clients at that scale because their solution is simply better for that segment. Different scale, same core belief: generic AI doesn't work for specialized industries.

How do I know if my company is ready to start?

Start with one question: can you clearly describe who is accountable when the AI gives a wrong recommendation? If you can answer that, you're ready to start. If you can't, work on governance first. The model capability isn't the bottleneck - organizational readiness is.
