Amazon employees are “tokenmaxxing” due to pressure to use AI tools

It's f**king nuts. We're getting similar pressure here and people are checking code in with minimal to zero review, then they can't explain why it's broken or not meeting requirements. I'm having a real hard time understanding why ... that's YOUR name in the git blame and guess who gets pulled into the 3am war room.

Nothing gets pushed with my name on it unless I understand every line of it! Yes, I make mistakes, but once the defect is visible I can quickly understand where it is, why it's there, and how to fix it.
 
Upvote
419 (420 / -1)
I realize that my faith is weak or something; but what possesses these people to mandate consumption rather than results?

It's well known that Amazon has the delivery guys under the lash; but does it march them in and show them metrics for how much fuel each person burned and threaten to fire whoever used the least by the end of the week? No, you just give them a package quota that has them peeing in bottles.

If Glorious Agentic is so good for programming; couldn't you just, y'know, demand more programming per programmer and let them respond to that incentive by botting up?

I can see demanding particular methods if it's someone's job to build internal tooling; or Joe must follow the internal style even if he prefers the elegance of lisp; but this seems like the more expensive version of demanding that at least x keyboards a year show up dead from overuse to prove your value to the organization.
 
Upvote
259 (259 / 0)

DeeplyUnconcerned

Ars Scholae Palatinae
1,125
Subscriptor++
"MeshClaw, make a plan to use enough tokens to keep me in the top 20-30% of users each month. Vary usage over the day and between days, weeks and months in a way that's broadly consistent with the usage patterns of other users with similar token use counts. Do not create patterns that might reasonably be used as indicators of fraud or fabricated data. Use tokens in a way that minimizes the negative impact this will create on other users or the company, while still meeting the other requirements. Execute this plan."
 
Upvote
320 (320 / 0)

DeeplyUnconcerned

Ars Scholae Palatinae
1,125
Subscriptor++
I realize that my faith is weak or something; but what possesses these people to mandate consumption rather than results?
Nobody knows how to measure programmer productivity, whereas token use is a metric that is already available. That's pretty much the whole answer.
 
Upvote
325 (325 / 0)

Dragonmaster Lou

Ars Scholae Palatinae
673
Subscriptor
Nobody knows how to measure programmer productivity, whereas token use is a metric that is already available. That's pretty much the whole answer.
Token use has replaced lines of code as a useless measure of programmer productivity.
 
Upvote
273 (273 / 0)

JohnDeL

Ars Tribunus Angusticlavius
8,837
Subscriptor
I wonder how long it will be before an employee figures out a way to have AI automate their token usage, just so they can get on with their work uninterrupted?

On the bright side, if enough news articles come out about the idiocy of tokenmaxxing, then maybe managers will learn to gauge based on productivity and not on ridiculous metrics. And maybe monkeys will fly out of their butts, too...
 
Upvote
85 (85 / 0)

KrookedRooster

Ars Praetorian
506
Subscriptor
That or "How many lines of code did you write?"

"2. But they are very good. It took all day to figure it out, apply the correct references, and download the modules.

I could have done it in 10, but that wouldn't be as robust and would be prone to errors."


Edit: Missed Dragonmaster Lou's post. Eh. The point still stands. Don't more comments boost the metric as well? "How much engagement did your post generate?"
 
Last edited:
Upvote
19 (19 / 0)
If Glorious Agentic is so good for programming; couldn't you just, y'know, demand more programming per programmer and let them respond to that incentive by botting up?

I can see demanding particular methods if it's someone's job to build internal tooling; or Joe must follow the internal style even if he prefers the elegance of lisp; but this seems like the more expensive version of demanding that at least x keyboards a year show up dead from overuse to prove your value to the organization.
I can imagine Amazon higher-ups deciding they want the data of professionals making use of AI (in a bid to have better AI training material) more than they want these people to actually do their jobs well. In that case, even if the task is useless, the "result" they care about is large-scale data about the process.

Still super dumb, but I can see people thinking it. Like thinking return-to-work mandates are a good way to shed employees.
 
Upvote
63 (63 / 0)

Qyygle

Ars Praetorian
500
Subscriptor
It's f**king nuts. We're getting similar pressure here and people are checking code in with minimal to zero review, then they can't explain why it's broken or not meeting requirements. I'm having a real hard time understanding why ... that's YOUR name in the git blame and guess who gets pulled into the 3am war room.

Nothing gets pushed with my name on it unless I understand every line of it! Yes, I make mistakes, but once the defect is visible I can quickly understand where it is, why it's there, and how to fix it.
In other engineering fields, a PE seal on a final drawing set means the buck stops with that name. If the bridge collapses, the dam fails, the wastewater overflows, that engineer who sealed the set is on the hook.

Maybe it's time programmers get the same treatment. Maybe it's also long past time these AI companies are treated the same.
 
Upvote
155 (156 / -1)

DeeplyUnconcerned

Ars Scholae Palatinae
1,125
Subscriptor++
Why do they need to apply so much pressure? Can't be only inertia, right?
A large proportion of stock market investment capital is being driven by "how correlated is this company's output with the success of AI".

Corporate leaders have most of their pay tied up in stock options, so they have a strong incentive to tell investors that they are heavily exposed to AI outcomes, and a fairly strong incentive not to lie about it (because they'll get sued for securities fraud).

Corporate managers' pay is decided by corporate leaders, so they have a strong incentive to enable those leaders to tell investors that they're heavily exposed to AI without actually lying, which they achieve by reporting objective metrics that show heavy AI use by their interchangeable code drones.
 
Upvote
69 (69 / 0)

d3x7r0

Smack-Fu Master, in training
84
Subscriptor
In my large org, managers are definitely being encouraged to look at token usage and there is a positive association with being a power-user (though I see no evidence yet that people are trying to game it).
My own observation, looking at similar stats for my org, is that I see two types of developers at the top of the leaderboard: the ones who are completely lost and relying on AI tools to do their work for them, and the ones who are being extremely productive because they're juggling multiple things at once.

What does that tell me about the metric? That it's meaningless to measure as a way to correlate to actual productivity.
 
Upvote
39 (39 / 0)

uncle tupelov

Smack-Fu Master, in training
57
Same thing at my work. In the last year all talk of customer success has quietly gone away, replaced by absolutely relentless pressure to use Claude for Everything All The Time (tm).

Management says they want AI used "efficiently", but when the only shout-outs from VPs are for the devs using $30k+ of tokens a month, the message is clear.
 
Upvote
98 (98 / 0)

motytrah

Ars Tribunus Militum
2,972
Subscriptor++
Amazon has Bedrock, so the cost to them of this kind of behavior is much different than for most other corporate customers. GitHub Copilot, which A LOT of large corporate customers use, is moving to billing that takes token use into account. There are going to be a lot of changes in how AI gets used as the real costs start getting pushed to companies.
 
Upvote
42 (42 / 0)
Do they not remember the KLoC?

Steve Ballmer famously described the frustrations of working on OS/2 with IBM: Microsoft had the small-company attitude of getting things done, while IBM focused on KLoCs, thousands of lines of code, as a measure of programmer productivity.

It has happened before and it will happen again.

edit:
https://www.youtube.com/watch?v=kHI7RTKhlz0
 
Upvote
89 (89 / 0)

JohnDeL

Ars Tribunus Angusticlavius
8,837
Subscriptor
Management says they want AI used "efficiently", but when the only shout-outs from VPs are for the devs using $30k+ of tokens a month, the message is clear.
As one of my psychology professors used to say, "Behavior that is rewarded is repeated."
 
Upvote
57 (57 / 0)
In other engineering fields, a PE seal on a final drawing set means the buck stops with that name. If the bridge collapses, the dam fails, the wastewater overflows, that engineer who sealed the set is on the hook.

Maybe it's time programmers get the same treatment. Maybe it's also long past time these AI companies are treated the same.
That's really only civil engineering, and that's only because the government requires it. The vast majority of engineers are unlicensed.
 
Upvote
-3 (10 / -13)

PhilipStorry

Ars Scholae Palatinae
1,197
Subscriptor++
Look at it like the transition from machine language to later-generation languages like C or C++:

yes, you are less efficient in terms of lines of code, and yes, the maturity of the "compiler" probably needs some work until it reaches "not really worth knowing". But the difference in speed of development and level of abstraction is just too great not to work that way. How many developers today could write machine language in anger, compared to C, Java, Python, or similar?

It is the next step in software development, it makes sense to embrace it instead of trying to fight it.

This is not analogous. It's not even close to analogous.

Writing a program in a high-level language vs a low-level one is simply a tool choice. It's a trade-off that a person makes. Usually (but not always) they're trading off time spent developing against control over the final output code.

AI is not the same choice. It's a non-deterministic pattern machine. It doesn't care which language you're writing in. If you give it a pattern (describing the code you want and giving it the code you have), then it will take that pattern and output the next phase of pattern (the code you want).

Whether you write it in C++, C#, or the correct assembler for your platform, the act of writing it will hopefully lead you to understand what you have produced.

Merely handing an LLM a pattern and then accepting a returned pattern does not mean you understand the code in that pattern.

Worse, the non-deterministic behaviour means that the LLM will give different patterns each time, which actually increases the need for proper checking. (You can make an LLM deterministic, but the vast majority of them are not.)
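The determinism point can be illustrated with a toy decoder; this is a made-up next-token distribution standing in for an LLM's softmax output, not any real model or API. Greedy decoding (temperature effectively zero) always picks the same token, while sampling only reproduces across runs when the RNG is seeded identically:

```python
import random

# Toy next-token distribution (token -> probability), a stand-in for an
# LLM's softmax output; the values here are made up for illustration.
NEXT_TOKEN_PROBS = {"foo": 0.5, "bar": 0.3, "baz": 0.2}

def greedy_pick(probs):
    """Temperature -> 0: always take the most likely token. Deterministic."""
    return max(probs, key=probs.get)

def sampled_pick(probs, rng):
    """Default LLM decoding: sample from the distribution. Reproducible
    only if the RNG is seeded identically every run."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Greedy decoding gives the same answer on every call.
assert all(greedy_pick(NEXT_TOKEN_PROBS) == "foo" for _ in range(100))

# Two sampling runs match only because both RNGs use the same seed;
# with different (or no) seeds, the outputs generally diverge.
rng_a = random.Random(42)
rng_b = random.Random(42)
run1 = [sampled_pick(NEXT_TOKEN_PROBS, rng_a) for _ in range(5)]
run2 = [sampled_pick(NEXT_TOKEN_PROBS, rng_b) for _ in range(5)]
assert run1 == run2
```

Production LLM services rarely pin the seed for you, which is why the same prompt yields different code on each submission.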

The idea that this should be embraced is at best negligence. You're quite literally saying "Well I don't care how it works, only that it works". And that's a fine position for the boardroom folks to take, but in other engineering realms it would be unacceptable.

It only makes sense to embrace this if you're either looking at changing your career, or if you simply hate software development and want to see it all torn down.
 
Upvote
146 (147 / -1)