Over the past year, we tested 23 AI tools across our workflow and our clients' businesses. Writing assistants, image generators, code helpers, meeting summarisers, email drafters, CRM add-ons, scheduling optimisers.
We still use 6 of them. The other 17 are dead to us.
This is not a ranking article. It is a post-mortem. Here is what actually survived daily use and what did not — and more importantly, the patterns that predict which AI tools are worth your time.
What Got Binned (and the Patterns)
The "AI-Powered" Label Trap. Seven of the tools we dropped were existing software that bolted on a chatbot or AI feature to justify a price increase. The core product was fine before. The AI addition was a gimmick. If the tool was useful before AI, the AI add-on is probably unnecessary. If the tool was not useful before AI, the AI add-on will not save it.
The Integration Problem. Four tools were genuinely good at what they did but existed in isolation. They did not connect to our existing systems without manual data transfer. An AI tool that saves you 20 minutes but requires 15 minutes of copy-pasting data in and out is saving you 5 minutes. That is not worth a subscription.
The Accuracy Problem. Three tools — all in the content generation space — produced output that required so much editing it was faster to write from scratch. We are not talking about a quick proofread. We are talking about restructuring paragraphs, fact-checking claims, and removing the distinctive AI writing style that readers can now spot instantly.
The "I Could Just Do This" Problem. Three tools automated tasks that took us less than five minutes manually. Scheduling a social post. Formatting a document. Sending a follow-up email. The overhead of learning, configuring, and maintaining the tool exceeded the time it saved.
“If the tool requires a tutorial longer than 10 minutes, it is solving the wrong problem.”
What Survived (and Why)
We will not name specific products because the market changes fast and the right pick depends on your context. But here are the six categories where AI tools actually earned their keep.
Code assistance. One tool. Used daily by every developer on the team. It does not write our code — it accelerates it. Autocomplete, refactoring suggestions, bug detection. The key: it works inside our existing editor, not in a separate window.
Client communication drafting. One tool. It drafts email responses based on the conversation thread. We edit every response before sending — it is a starting point, not an autopilot. Saves roughly 30 minutes per day across the team.
Meeting transcription and summarisation. One tool. Records calls, transcribes them, extracts action items. We were doing this manually with notes. The AI version is more thorough and does not miss things because someone got distracted.
Image generation for mockups. One tool. Used for generating placeholder imagery and quick visual concepts during the design phase. Not for final assets — for speed during ideation.
Data analysis. One tool. Takes raw data (spreadsheets, CSV exports, database queries) and produces plain-English summaries with relevant charts. Replaces an hour of spreadsheet wrestling with a two-minute conversation.
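The core of that workflow is simple enough to sketch: feed in rows, get back a sentence a human can act on. Here is a toy stand-in using only the standard library — the column names and figures are illustrative, not from any specific product:

```python
import csv
import io
from statistics import mean

def summarise(csv_text: str, value_column: str) -> str:
    """Produce a one-line plain-English summary of a numeric CSV column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    values = [float(r[value_column]) for r in rows]
    return (f"{len(values)} rows; {value_column} ranges "
            f"{min(values):.0f}-{max(values):.0f}, average {mean(values):.1f}.")

# Illustrative export: three months of revenue figures
data = "month,revenue\nJan,4200\nFeb,3900\nMar,5100\n"
print(summarise(data, "revenue"))
# -> 3 rows; revenue ranges 3900-5100, average 4400.0.
```

The real tools add charts and handle messier data, but the value proposition is exactly this: raw export in, readable sentence out.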
Document search. One tool. Searches across our internal documentation and past project files using natural language queries. "How did we handle authentication on the Styled Spaces project?" — it finds the relevant document in seconds.
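Under the hood these tools rank documents by similarity to your question. A minimal bag-of-words cosine-similarity sketch — a toy stand-in for the embedding search real products use, with hypothetical document names:

```python
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    """Crude word-count vector; real tools use learned embeddings instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query: str, docs: dict[str, str]) -> str:
    """Return the name of the document most similar to the query."""
    q = vectorise(query)
    return max(docs, key=lambda name: cosine(q, vectorise(docs[name])))

# Hypothetical internal docs
docs = {
    "styled-spaces-auth.md": "authentication handled with magic links on the Styled Spaces project",
    "invoicing-process.md": "monthly invoicing process and payment terms for retainer clients",
}
print(search("How did we handle authentication on the Styled Spaces project?", docs))
# -> styled-spaces-auth.md
```

Swap the word-count vectors for proper embeddings and add a snippet extractor, and you have the shape of the product.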
What To Do About It
1. Audit your current AI subscriptions. How many are you actually using weekly? Cancel everything you have not opened in 30 days.
2. Test before you subscribe. Every tool gets a two-week trial with real work, not toy examples. If it does not prove itself in two weeks, it will not prove itself in two months.
3. Prioritise tools that integrate. The best AI tool is the one that fits into your existing workflow without requiring you to change how you work.
4. Accept that most AI tools are not for you. That is fine. The market is flooded. Being selective is a feature, not a bug.
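Step 1 is easy to semi-automate if you keep even a rough log of when you last opened each tool. A sketch, with made-up tool names and dates:

```python
from datetime import date, timedelta

def stale_subscriptions(last_opened: dict[str, date],
                        today: date, days: int = 30) -> list[str]:
    """Return tools not opened within the window -- cancellation candidates."""
    cutoff = today - timedelta(days=days)
    return sorted(t for t, d in last_opened.items() if d < cutoff)

# Hypothetical usage log
log = {
    "code-assistant": date(2026, 2, 20),
    "social-scheduler": date(2025, 11, 3),
    "ai-email-drafter": date(2026, 2, 24),
}
print(stale_subscriptions(log, today=date(2026, 2, 25)))
# -> ['social-scheduler']
```

Anything the function returns goes on the chopping block unless someone on the team argues for it.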