From Tool to Teammate: When AIs Say “No”

Two weeks ago, I completed my first proper commissioned “vibe coding” project on a new SaaS platform. As one might expect, I built the entire thing using Cursor and Claude. I’ll talk more about the platform itself on another day. But what struck me during the build was a quiet moment of realisation about what these AI systems actually are.

And what they clearly aren’t.

Claude, ChatGPT, DeepSeek, or your preferred AI are not merely tools. They’re something fundamentally different, operating in a new category that our existing vocabulary struggles to capture. The distinction became clear to me in a rather mundane moment of project anxiety.

The day before the platform’s launch at the client’s grand unveiling event, I was seized by that familiar creative itch: the urge to add just one more “nice-to-have” feature to my creation. Seasoned developers probably know this particular temptation all too well. Given my lack of experience, this was uncharted territory, and I was understandably nervous about meddling with a functioning build mere hours before it would face public scrutiny.

Rather than simply ploughing ahead, I posed my predicament to Claude in Cursor. And that’s where things took an interesting turn. In what can only be described as a moment of professional intervention, Claude advised me not to add the new feature, gently cautioning that the potential for breaking something else was uncomfortably high. “Introducing new dependencies this close to launch creates unnecessary risk. Your anxiety suggests you already understand this intuitively. Perhaps we should bookmark this idea for version 1.1 of the platform?”

Let’s pause to properly consider this interaction. A “tool” would have simply executed my instructions with digital obedience. Photoshop doesn’t raise an eyebrow when I’ve added excessive noise to the background of a poster, regardless of how ghastly the result. Google Slides stays stoically silent when I’ve crammed a twenty-eighth pun into a presentation that lost its audience around slide twelve.

But Claude did something altogether different — it advised against my impulse. It acknowledged my nervousness. It addressed my anxiety. It provided a steadying hand when indecision had me wobbling at the finish line. In that moment, Claude was more teammate than tool, more collaborator than code, more colleague than software.

What makes this interaction distinctive isn’t just Claude offering sensible advice — software has been displaying warnings for decades. The difference lies in how it engaged with my specific situation, recognised emotional undertones, and considered my project’s context. This adaptive response transcends what we typically expect from our tools.

Of course, it’s important to maintain a healthy skepticism here. Claude didn’t actually “feel” my anxiety or genuinely “care” about my project’s success. What resembled empathy was ultimately pattern matching — incredibly sophisticated algorithms recognising linguistic markers of stress and responding with pre-trained reassurance. The system’s caution against last-minute changes simply reflected good software development principles encoded in its training data, not an authentic concern for my wellbeing.

As much as it felt different in the moment.

This distinction matters enormously in how we approach and utilise these systems, especially as they become more capable. If you continue to see these entities as mere “tools”, you’re doing both them and yourself a disservice. When you begin to think of them as something between tools and teammates — as entities that can respond contextually to your process rather than just your commands — you’ll likely extract more value from these systems. They’re not sentient colleagues (not yet anyway!), but they’re not mere hammers either.

As these systems continue to evolve, the boundaries between tool and teammate will require careful examination. We're entering territory where our digital assistants mimic interpersonal dynamics with increasing fidelity, offering pushback, seeming to understand context, and occasionally saving us from our worst impulses. The challenge ahead lies not in anthropomorphising these systems too little, but perhaps in anthropomorphising them too much. The AI that said “no” to me wasn’t exercising judgment — it was executing sophisticated pattern recognition. How we navigate this uncanny middle ground between tool and teammate will likely define our relationship with artificial intelligence for years to come.
