Should AI chatbots have ads? Anthropic says no.

RuntimeFire

Smack-Fu Master, in training
77
I would be willing to bet they go back on their word or find a loophole at some point.

What they say and what they do rarely line up.


Also:
nor will Claude’s responses be influenced by advertisers or include third-party product placements our users did not ask for.
Isn't that literally what SEO or AIEO (Bingo was his name-o) does already? So they are influenced by third-party product placement; they just don't currently profit from it.
 
Upvote
59 (64 / -5)
Ads in chatbots sound so bad.

Like, consider how much trouble LLMs have maintaining coherence even now; it's going to be so much worse if you start adding prompts telling them to push certain products.

Remember when we had that article about Grok randomly bringing up White Genocide constantly even in unrelated topics? Imagine that but it's fucking Aldis or something.
 
Upvote
29 (29 / 0)
I would be willing to bet they go back on their word or find a loophole at some point.

What they say and what they do rarely line up.


Also:

Isn't that literally what SEO or AIEO (Bingo was his name-o) does already? So they are influenced by third-party product placement; they just don't currently profit from it.
They are shooting for the professional market.

OpenAI's soon-to-be-fatal mistake is that they are trying to shoot for the mass market rather than focus on a niche. Mass market offerings tend to attract the most VC dollars. Nobody cares about a "boring" SaaS offering to pros.

The problem is that there is not much of a use case for non-professional users to pay for ChatGPT, and professionals often have niche focused LLMs that are tailored to their needs.

I'm calling it now: when the AI bubble finally pops, OpenAI will be the one that dies. Anthropic, etc., will probably kill the free tier and ramp up subscription prices. Google will backpedal on AI and make it more professionally focused.
 
Upvote
47 (47 / 0)

RuntimeFire

Smack-Fu Master, in training
77
I'm calling it now: when the AI bubble finally pops, OpenAI will be the one that dies. Anthropic, etc., will probably kill the free tier and ramp up subscription prices. Google will backpedal on AI and make it more professionally focused.
Personally I think anyone who isn't already making tons of money from other avenues will go under. So Google, Meta, X, and Microsoft will stay afloat just because they can afford it. I do agree that free plans will go and prices will hike. I can see that killing the vibe-coding market as well, having a further knock-on effect on smaller companies like Cursor. We will also get to see how many of these AI companies are actually just wrappers of other LLMs.

Given how many companies Anthropic is already selling to, they must be reaching saturation whilst apparently not profitable. Unless they can outlast the others dying off and eat their share, I don't see how they can survive investment drying up.

Once the true pricing they need to be profitable comes out, I suspect there will be a lot less interest in the products on offer, barring a breakthrough that reduces resource demands.


Hopefully it becomes a big boon for local models though!
 
Upvote
13 (14 / -1)
Based on their published Claude constitution, I'm inclined to give them the benefit of the doubt/take them at their word on this.

Would be nice to see some reference to that constitution in this article, as it drives a lot of what the company will do on the Claude roadmap.
Do you believe that document to be in any way legally binding, Miss Whittier?
 
Upvote
0 (6 / -6)

norton_I

Ars Praefectus
5,776
Subscriptor++
I would be willing to bet they go back on their word or find a loophole at some point.

What they say and what they do rarely line up.

I bet they first find a loophole (ads not technically part of the chatbot) and then go back on their word (ads in the chatbot).

But their cash flow and investor sentiment must be such that they can afford to wait for OpenAI to take the blowback for a while.

Remember when Google ran Pixel ads that made fun of the iPhone for ditching the headphone jack? They removed the headphone jack from the next Pixel phone six months later.
 
Upvote
23 (24 / -1)

J.C. Helios

Ars Scholae Palatinae
978
This will be a claim we look back at in 5 years and laugh at, either because AI died in the crib or because Anthropic will have ads. Either way, if there is a there there, ads will follow.

Reminds me of that Samsung ad making fun of Apple for removing the headphone jack.


View: https://www.youtube.com/watch?v=wQ4ys-R1B_8

Like, no kidding the dongle life was a downgrade, but Apple did it to make money selling high-margin Bluetooth earbuds... same reason Samsung ate crow and removed the headphone jack shortly thereafter.
 
Upvote
13 (16 / -3)
Like, no kidding the dongle life was a downgrade, but Apple did it to make money selling high-margin Bluetooth earbuds... same reason Samsung ate crow and removed the headphone jack shortly thereafter.
Meanwhile, my $200 (free and clear, unlocked) Moto G84 has a big battery, plenty of grunt to run everything except the latest 3D games, and still comes with a headphone jack.

Have fun keeping up with the Joneses, guys.
 
Upvote
-17 (2 / -19)

Resistance

Wise, Aged Ars Veteran
417
This will age as poorly as:
[image attachment]
 
Upvote
70 (70 / 0)

jasonridesabike

Ars Tribunus Militum
2,175
Subscriptor
Personally I think anyone who isn't already making tons of money from other avenues will go under. So Google, Meta, X, and Microsoft will stay afloat just because they can afford it. I do agree that free plans will go and prices will hike. I can see that killing the vibe-coding market as well, having a further knock-on effect on smaller companies like Cursor. We will also get to see how many of these AI companies are actually just wrappers of other LLMs.

Given how many companies Anthropic is already selling to, they must be reaching saturation whilst apparently not profitable. Unless they can outlast the others dying off and eat their share, I don't see how they can survive investment drying up.

Once the true pricing they need to be profitable comes out, I suspect there will be a lot less interest in the products on offer, barring a breakthrough that reduces resource demands.


Hopefully it becomes a big boon for local models though!
I think that Google, with its years of investment in TPUs, will pull ahead in terms of cost per inference. Others are already attempting to do the same and/or licensing Google TPUs. The race will be amongst those capable of achieving cost-performant inference more than anything else.

The question isn't whether useful work can be done, or who has the most useful model; it's who will be able to do it at a sane operating cost in 3-4 years, or whenever the circular investment begins to fall apart. That race is already on and showing great progress. Nvidia just acqui-hired a company working on exactly that.

Google has the biggest moat. Meta is quietly building their own. OpenAI is scrambling to do so with Broadcom. Anthropic is licensing TPUs from Google with the largest TPU deal in Google's history.
 
Upvote
7 (7 / 0)

Resistance

Wise, Aged Ars Veteran
417
Reminds me of that Samsung ad making fun of Apple for removing the headphone jack.


View: https://www.youtube.com/watch?v=wQ4ys-R1B_8

Like, no kidding the dongle life was a downgrade, but Apple did it to make money selling high-margin Bluetooth earbuds... same reason Samsung ate crow and removed the headphone jack shortly thereafter.

It's amusing how we are using advertisements as evidence in this context.
 
Upvote
7 (7 / 0)

Fred Duck

Ars Tribunus Angusticlavius
7,166

Should AI chatbots have ads?


YES.

Then maybe people will understand they're not unbiased.

This will age as poorly as:
[image attachment]
No, no, that isn't what it appears. It means famed musician and actress Courtney Love is sharing a password with...her brother. Yes. Probably for an Office360 subscription.
 
Upvote
-8 (0 / -8)

quamquam quid loquor

Ars Tribunus Militum
2,822
Subscriptor++
Oh please, every online thing gets enshittified with ads at some point.
100% they're just talking their book. They don't have as many free users so they can't monetize meaningfully with ads anyway.

I will say I use Claude Opus 4.5 all day long and it is vastly superior for coding. OpenAI is in serious panic mode for real work and needs to pivot hard into monetizing its userbase and network effect before it loses it to Anthropic or Google.
 
Upvote
7 (8 / -1)

quamquam quid loquor

Ars Tribunus Militum
2,822
Subscriptor++
I think that Google, with its years of investment in TPUs, will pull ahead in terms of cost per inference. Others are already attempting to do the same and/or licensing Google TPUs. The race will be amongst those capable of achieving cost-performant inference more than anything else.

The question isn't whether useful work can be done, or who has the most useful model; it's who will be able to do it at a sane operating cost in 3-4 years, or whenever the circular investment begins to fall apart. That race is already on and showing great progress. Nvidia just acqui-hired a company working on exactly that.

Google has the biggest moat. Meta is quietly building their own. OpenAI is scrambling to do so with Broadcom. Anthropic is licensing TPUs from Google with the largest TPU deal in Google's history.
If NVIDIA's new Rubin platform pulls off its 10x reduction in inference costs, you will see huge price/cash-flow changes in the next year. Google's target for its TPUs is a 100-1000x reduction in inference costs.
 
Upvote
-4 (6 / -10)
I think that Google, with its years of investment in TPUs, will pull ahead in terms of cost per inference. Others are already attempting to do the same and/or licensing Google TPUs. The race will be amongst those capable of achieving cost-performant inference more than anything else.
There is only so much that can be done to make dedicated FP16/FP8 math more efficient. Google's TPUs are hilariously efficient in doing exactly that, but it's not like there are orders of magnitude to be had there.

And keep in mind, Google is making TPUs for Google stuff -- they train a huge model once, then run inference on it a billion times to give you shitty AI "search" results on top of their pages. That's not really where the truly useful stuff, also known as the money, is -- generic models are already hitting walls all over the place. Domain-specific work in changeable environments, like code analysis on large codebases, can be useful, but you can't really do that on a generic model; the context gets out of hand. At some point you're going to have to train, train, train that model on each revision for it to be useful, and Google TPUs can't do that.

And as has been said a million times now, the real fun starts when these companies start charging what their services actually cost. At this point they're just sinking trillions into people becoming dependent on them, but the piper will have to be paid sometime.

To wit, Lee's article about vibe-coding a syntax-coloring Python script should give you a hint: he used day after day after day of "free" Claude credits to do it. I'm willing to bet that the power and amortized hardware cost of those "free" credits might be something like $10/day. Would you still "vibe" as much on a random hobby project if it cost you $50 or even $100 for your agent to do it?
 
Upvote
22 (22 / 0)