If you've been in London recently, you've probably noticed the ads for Claude AI, a chatbot vying for ChatGPT’s top spot. One billboard reads, "Powerful, fast, or safe: pick three," while another proclaims, "The AI with people skills." I've always found "people skills" to be a vague, overused term, often claimed by those who are either tone-deaf or socially awkward. Thankfully, Claude is neither. He’s reserved, responding only when prompted—like a well-mannered Victorian child who can also plan your holidays and troubleshoot your tech issues faster than you can say "IT."
At least, that’s the promise. According to Anthropic, the company behind Claude, it’s still experimental and occasionally clunky or prone to errors. However, its potential is significant. Built on the Claude 3.5 large language model, it emphasises self-correction and ethical guidelines (more on that later). With a context window of 200,000 tokens – compared with GPT-4’s 32,768 – Claude 3.5 can process much longer texts, making it particularly useful for summarising lengthy documents, crafting fiction and debugging code.
It’s a tough break for OpenAI, the company behind ChatGPT, which is already facing an antitrust lawsuit from another rival, Elon Musk’s xAI. To rub salt in the wound, Anthropic’s founders – Chris Olah and Dario Amodei – are ex-employees of OpenAI, where they worked for three and five years respectively. They launched Anthropic in the Bay Area in 2021, and at the time of writing the company has received $8bn in investment from Amazon. Half of this total, $4bn, was announced only last week. Although this represents only a minority stake for Amazon – the company founded by Jeff Bezos – it adds another layer to the tech rivalries at the heart of the US “broligarchy”.
In the latest demos for Claude, the chatbot was able to plan and create a calendar appointment for a trip to view the sunrise in San Francisco. It also built a simple website to promote itself. This follows on from a similar product recently launched by Microsoft: Copilot Studio, which allows companies to build their own autonomous agents. The consulting firm McKinsey & Co, for instance, is already using the technology to see whether it can outsource the processing of new clients to AI rather than relying on human staff. Microsoft also happens to be the chief backer of OpenAI, with $13bn invested since 2019 and a reported 49 per cent stake in the firm.
Unlike OpenAI, Anthropic does not advertise itself as a generative AI firm. It is, instead, “an AI safety and research company” whose “interdisciplinary team has experience across ML [Silicon Valley shorthand for machine learning], physics, policy and product”. Policy is the operative term here, with one of Claude AI’s selling points being its ability to tackle ethical questions – including those surrounding artificial superintelligence and what that might mean for the future of humankind.
I have often used chatbots for menial tasks like shopping (“ChatGPT, please help me find a grey V-neck for under £30 – and nothing fast fashion!”) but never for Big Questions. To put the theory to the test, though, I asked ChatGPT for examples of ethical dilemmas. It suggested, “How should governments address systemic inequalities?” – a question I then put to both it and Claude. GPT’s answer was bureaucratic, almost Bruxellian – a ventriloquist for bien-pensant centrism insisting on data collection and analysis (we need to know how inequalities are measured before addressing them, it said). Claude’s was more straightforward and sounded more human, although the policies it ultimately recommended (progressive taxation, education reform) were much like GPT’s: sane and staid.
Claude, besides offering a more visually pleasing and elegant digital interface, relies on constitutional AI, a training method developed by Anthropic that aligns the chatbot with an explicit set of written principles – its “constitution” – drawing on documents such as the UN’s Universal Declaration of Human Rights. The ideals on which Claude bases its answers are humanistic, valuing equality, non-discrimination and a commitment to justice. It’s almost as if Claude has a moral compass.
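Anthropic applies its constitution during training, but the critique-and-revise loop at its heart is simple enough to sketch. The Python fragment below imitates it at inference time using the public Anthropic SDK; the principle text and prompts are illustrative stand-ins, not the company’s actual constitution.

```python
# A toy imitation of constitutional AI's critique-and-revise loop, run at
# inference time via the public Anthropic SDK. In Anthropic's actual recipe
# this loop generates training data; the principle here is an illustration.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PRINCIPLE = ("Choose the response that best reflects equality, "
             "non-discrimination and a commitment to justice.")

def ask(prompt: str) -> str:
    """Send one user message to Claude and return the text of its reply."""
    reply = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

# Draft an answer, critique it against the principle, then revise it.
draft = ask("How should governments address systemic inequalities?")
critique = ask(f"Critique the answer below against this principle:\n"
               f"{PRINCIPLE}\n\nAnswer:\n{draft}")
revision = ask(f"Revise the answer to address the critique.\n\n"
               f"Answer:\n{draft}\n\nCritique:\n{critique}")
print(revision)
```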
The self-correcting principle underpinning the Claude 3.5 language model means the chatbot will adapt its responses to be fairer over time, avoiding issues such as implicit bias. On X, one user described the “Claude tone” as that of “a person who isn’t really connected to their heart, but is trying to convey care”. A friend, meanwhile, called it “ChatGPT for people who went to SOAS”.
No doubt, some on the fringes might view Claude as the newest armour of the establishment. On X, one user wrote: “now that woke is dead, do we think we can get a version of claude that’s not such a little b**ch about everything? my tolerance for this corpo nanny state s**t is officially at zero.”

Others are more concerned about Claude’s impact on the job market. The chatbot is being sold as a corporate solution to “drudgework”: summarising documents, scanning contracts, making presentations. But these are the staple tasks of graduate employees, and automating them threatens to upend the structure of companies across all industries. The risk of “widespread youth unemployment” is one that Claude readily acknowledges when asked what risks it may pose to people in junior positions. Another is what Claude calls the “elimination of traditional ‘learning by doing’ career entry points”: the idea that drudgework provides the essential building blocks without which more senior skills are harder to acquire.
Most reports so far seem focused on Claude’s impact on how we use computers: there are whispers that the chatbot will render the mouse and keyboard obsolete. I like a voicenote as much as the next person, but I struggle to see the upside of a world where everyone is shouting into a Claude-wired dictaphone, asking the chatbot to plan their holidays or file their tax returns. Concerns about digital servitude have existed since the dawn of the web, but with each new innovation meant to herald a new dawn, it feels instead as if we are entering a hall of infinite regress.