Consulting firm quietly admitted to GPT-4o use after fake citations were found in August.
If an individual did this, they'd be in jail. I don't believe Australia has the concept of corporate personhood, but here in the US, if corporations are people, why can't we throw corporations in jail? Because there are many, many American companies that deserve decades in prison at this point (I'd give Deloitte maybe 4 years for this; they can be out in 2 years 3 months due to corporate prison overcrowding).
but now we can do it with even cheaper, less experienced 'prompt engineers' who didn't go to college and increase the profit margin!

Don't these management consulting companies run on fresh-out-of-college "consultants" who have zero or close to zero real world experience?
ETA ninja'ed
Too many people don't want to understand that what you describe is foundational to modern civilization. The focus should be less on corporations as legal entities with "rights", and more on drawing a distinction between a corporation shielding an investor from legal liability beyond his or her investment, and the misbelief that the financial shield somehow magically protects its officers, agents, and trustees from any legal consequences for the impact of law-breaking they knew or should have known was being perpetrated. The "should have known" there is important: if the law is broken on your watch and you should have known, then by definition you are not capable of being given that type of responsibility and should be barred from it.

All the common law legal systems, and many civil law systems, have corporate personhood. It's because only a person can own property, enter contracts, owe debts, and so on.
So in order for a corporate entity to function and own property, enter contracts, owe debts, and so on, we gave it personhood and treat it in part like a person. Associated with that, we give corporations rights similar to a person's, the idea being that rights that can be exercised individually should be able to be exercised collectively.
The issue is when the rights extended to corporations go too far through wonky interpretations of constitutional rights, most prevalent in the USA but by no means restricted to that jurisdiction. I won't get into the problems because they are many, but it's a sign of a broken legal system that corporations have as many if not more rights than a regular person. Corporations should have only a subset of regular personhood rights, not in some cases more.
The equivalent to jail for corporations would be a consent order or various court orders to restrict their liberty and compel actions. Jail at its core is separation from society and a restriction on liberty. So the equivalent for a non-physical entity is banning them from operating in certain jurisdictions for a period of time.
Probation or parole would be the equivalent of hiring independent monitors or being banned from certain types of operations or contracts. There is also the corporate death penalty of dissolving the corporation and devolving assets back to creditors and shareholders.
I absolutely can understand why a consultancy would do this.

This. I absolutely cannot understand why a consultancy, whose entire business model is "pay us large sums of money for our experts' advice", would rely on an LLM for even as much as grammar advice.
If your expert is ChatGPT, why do I pay you? I can write prompts myself. This is an incredibly fast way to sink your entire business model- if I were McKinsey or one of the others I'd be out there advertising "We know what we're doing, we don't need AI to do it poorly"
Well, Apple tried this with news headlines. We know how well it worked out.

ChatGPT openly acknowledges, if you ask it, that it is best at summarizing short, clear content. Since that's the content you least need summarized, it seems to be a use-case in search of a use.
That's just golden. Best. Gift. Ever.

The astute reader will guess that yes, they did proceed with reckless abandon, shortly after I departed that company. One of my coworkers kept a copy of the presentation and, as they ran into each issue, annotated it with the date it happened, plus some little screenshots from Slack and emails of people freaking out. When it was all done, he realized it was 12 slides long, printed it as a calendar, and sent it to me!
My question is why would Deloitte use an off-the-shelf LLM implementation, and not a fine-tuned, custom-trained model for their own consultancy needs? Too hard to do, takes too long, or don't they get the need for a Retrieval-Augmented Generation (RAG) database with the citation docs? Not sure how basic their workers' implementation was, but damn, they sound dumber than they already are.

Your process of 'baselining' exposes a fundamental flaw in many people's use of GenAI - potentially the same attitude that led the authors of Deloitte's report to insert a load of random citations.
The fact that you've checked a few summaries and they looked good should give you absolutely no confidence about the likelihood of future summaries being entirely accurate. That's simply not how current GenAI works. The risk of hallucination is inherent and continues to exist even in situations where the output is often correct.
While you can put in the effort to double-check that output is a representation of the text, how can you possibly know that nothing important has been missed without reading the whole thing?
If you need an accurate summary you can't trust GenAI. And if you're happy to accept the risks that the summary isn't accurate, then you don't need the summary.
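For what it's worth, the RAG-with-citation-docs idea a few comments up can be sketched in miniature: retrieve from a fixed corpus first, then force the model to cite only the retrieved document ids. Everything below is illustrative, not anything Deloitte actually ran; the toy corpus, function names, and naive keyword scoring are assumptions, and a real system would use embedding search over the actual source documents.

```python
# Minimal sketch of retrieval-augmented generation for grounded citations.
# The corpus and scoring are toys; real systems use embeddings + a vector store.

CORPUS = {
    "smith2021": "Smith (2021) studied welfare compliance frameworks.",
    "jones2019": "Jones (2019) analysed automated decision-making in government.",
}

def search(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Naive keyword retrieval: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    return sorted(corpus, key=lambda d: -len(q & set(corpus[d].lower().split())))[:k]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Constrain the model to cite only retrieved documents, by id."""
    context = "\n".join(f"[{d}] {corpus[d]}" for d in search(query, corpus))
    return ("Answer using ONLY these sources, citing them by id:\n"
            f"{context}\n\nQuestion: {query}")
```

The point of the pattern is that citation ids come from the retrieval step, not from the model's imagination, so every citation in a draft can be mechanically traced back to a real document - which is exactly the property the report lacked.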
Earlier this year, Deloitte declared it would start using generative AI for its reports as a way of enhancing the value provided to its clients. I don't remember if they said it in a specific report or not, but I recall seeing it.
The citation issue continues to trip people up across the spectrum, from lawyers to business analysts. It's striking how many supposedly smart people do not understand the limits of the tools they insist will deliver such amazing value.
I sometimes think the only people hiring Deloitte are former employees who owe favors. Look at the John Oliver episode: all they crank out is shit.
Guess who are former Deloitte consultants? Satya Nadella and Sundar Pichai. Explains SO MUCH.
While I am in favor of sticking it to people who use AI like this in any way possible, I think you'd probably find it hard to make the case that it's "defaming" her.(snip)
Edit: Due to the number of downvotes, I have to wonder if I just put my point across poorly or if there's just a bunch of AI defenders downvoting. (Snip)
Consultants are most heavily used by managers who are out of their depth, in companies where almost everyone is out of their depth, where most of upper management is clinging onto their jobs for dear life. In such companies, the executives know that's the situation, because they're in the same boat, so they put no credence in the actual judgement of their team and want to see external advice. It's a sign that the organisation is rich on head-count, low on talent, and any trust in staff to make correct decisions has evaporated.
Due to the number of downvotes, I have to wonder if I just put my point across poorly or if there's just a bunch of AI defenders downvoting. My point is that the law as it is written today in most countries requires defaming to somehow damage someone's reputation. That's what I'm saying would be hard to prove in this case. I'm all for the AI companies being charged with fraud for knowing that their software is riddled with bad output.
I know that people cite "Good, fast, cheap. Choose two." as a rule of thumb for project management. But this clearly supports my modification: "Good, fast, cheap. Choose at most two." They chose zero (or maybe one; I bet they 'wrote up' that report relatively quickly.)

Deloitte business model: not faster, not cheaper, not better.
Your poem is shit.

Claude Haiku 3.5 | Here's a playful rhyme about Deloitte:
There once was a firm called Deloitte,
Where consultants would work day and night,
With spreadsheets galore,
And reports to the core,
Their PowerPoints always just right!
The rhyme pokes gentle fun at the consulting world, highlighting Deloitte's reputation for extensive reporting, long work hours, and meticulous presentation skills. It's a light-hearted take on the professional services giant, capturing the essence of corporate consulting with a touch of humor.
Note: The bold text is the actual humor to me.
Why would [subject of article] do [a thing that the article doesn't say they did]?

My question is why would Deloitte use an off-the-shelf LLM implementation, and not a fine-tuned, custom-trained model for their own consultancy needs? Too hard to do, takes too long, or don't they get the need for a Retrieval-Augmented Generation (RAG) database with the citation docs? Not sure how basic their workers' implementation was, but damn, they sound dumber than they already are.
"A large, very well known, consulting organisation examined our situation and recommended..."The consulting industry is best understood as a way for the executive class to divert large sums of company resources to their friends and younger members of their class in exchange for applying the veneer of rigor to executive decisions. They not only don’t care how much of other people’s money they spend, it’s actually seen as better to spend a lot because when the business decision turns out to be flawed they have essentially prepaid for unlimited BS on demand to prevent accountability for the executives. In addition to the inexperienced recent grads, these companies maintain a stable of respectable senior guys of the right background who’ll show up in very nice suits and swear up and down nobody could reasonably have expected that pivot to blockchain to be anything less than a goldmine and that it’d be a major strategic error to factor the actual negative returns into someone’s bonus calculations. The overhead is paying for that service, too.
Yep, that's fair. I don't know what exact setup they used, so that's up for debate. Maybe more details will come to light thanks to the reporting.

Why would [subject of article] do [a thing that the article doesn't say they did]?
I think you're making two wrong and unsupported assumptions: first, about Deloitte's work practices; and second, that a properly trained system wouldn't make these errors.
Yeah, it's the seemingly total lack of review by these organizations, rather than their use of an AI tool, that worries me the most.

It does worry me too that apparently nobody is sanity-checking these reports before their general publication? Like if 10+ citations outright did not exist, then Deloitte's analysis must have just been taken at face value with no serious review, and we are presumably using this to justify government policy?!
Even ignoring the generative AI aspect, that strikes me as extremely concerning.
Apple said "Hmm. This is not ready for prime time," and turned it off.Well, Apple tried this with news headlines. We know how well it worked out.
I've done it once, but only because it was a real doctor's office and not like a random drop shipper or recruiter or crypto company or whatever. Called them up and left them a message that whoever they hired to do their marketing was spamming the forums of national publications for an office serving clientele in a single city. Which was both a waste of their money and just going to annoy people instead of being useful advertisement.

I almost want to contact the scam service from the scam poster above just to see if I can rustle someone's jimmies, but man, it's a Monday and I'm not ready for that yet
One of the hardest things to teach in business is appropriate delegation. Matching the demands of the task to the skills of the delegatee is not optional. This remains the core of my disgust for the people selling and shilling for these tools: they are not capable of reliably performing the tasks they are being sold to do.

Yeah, it's the seemingly total lack of review by these organizations, rather than their use of an AI tool, that worries me the most.
LLMs have uses, but simply cannot be used for ANYTHING important without being thoroughly checked.
This is not necessarily a damning flaw; after all, whenever it costs less time and energy to check and correct something than it would to do it ourselves, even an error-prone tool could be useful.
But you have to check it.
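Checking the easiest failure mode, citations pointing at nothing, can even be partially automated. A hedged sketch, assuming a simple "(Author, Year)" citation style and a known bibliography; the regex and the sample data are illustrative only, and this catches unknown references, not real references misquoted for claims they don't support.

```python
import re

def extract_citations(text: str) -> set[tuple[str, str]]:
    """Pull '(Author, Year)'-style citations out of report text."""
    return set(re.findall(r"\(([A-Z][a-z]+),\s*(\d{4})\)", text))

def unverified(text: str, bibliography: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Citations in the text with no matching bibliography entry."""
    return extract_citations(text) - bibliography

report = "As shown by (Smith, 2021) and (Fakename, 2099), outcomes improved."
bib = {("Smith", "2021")}
print(unverified(report, bib))  # flags the (Fakename, 2099) citation
```

A filter like this would have flagged nonexistent references before publication, but it can never confirm that a real source actually supports the claim attached to it, which is why the human review stays mandatory.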
Well, the complication is that the company is telling its employees to use the CEO's idiot nephew for important work, and then turning around and blaming the employees for the mistakes of the idiot nephew while telling its customers how awesome the nephew is and how lucky the customers are for paying to have access to it.

Society needs to start assigning responsibility for these things correctly.
If a human created a report by hand with a bunch of made up citations, there would be some serious repercussions. When that human chooses to use an LLM and gets that result, there should be no difference.
Instead we keep seeing LLM users issuing weak apologies, like there was nothing they could do – "silly LLMs, what ya gonna do, amiright?"
The LLM is a tool, and the user of the tool bears responsibility for the result.
As we've learned from the Brittany Higgins case, the government (or at least the previous LNP government) was run on fresh-out-of-college "senior advisors" who have zero or close to zero real world experience.

Don't these management consulting companies run on fresh-out-of-college "consultants" who have zero or close to zero real world experience?
ETA ninja'ed
Do they get their mumbo jumbo from old Dilbert comics mocking them?

used as part of the technical workstream to help "[assess] whether system code state can be mapped to business requirements and compliance needs."
But did they have big balls?

As we've learned from the Brittany Higgins case, the government (or at least the previous LNP government) was run on fresh-out-of-college "senior advisors" who have zero or close to zero real world experience.
That would undermine the main point of asking those consultancies for a report.

So, Deloitte are going to publish the system prompts used for this "tool chain" so people can check for biases that might have made it into the final "independent" report, right?
RIGHT?
McKinsey have been caught recycling reports and failing to swap out the names of the places in their recommendations.

if I were McKinsey or one of the others I'd be out there advertising "We know what we're doing, we don't need AI to do it poorly"
Interviewer: This report that Deloitte produced for Australia's Department of Employment and Workplace Relations this week...
Deloitte executive: The one with the false citations?
Interviewer: Yeah.
Deloitte executive: Yeah, that’s not very typical, I’d like to make that point.
Interviewer: Well, how was it un-typical?
Deloitte executive: Well there are a lot of these reports going around the world all the
time, and very seldom does anything like this happen. I just don’t want people
thinking that Deloitte's reports have false citations.
Interviewer: Did this report have false citations?
Deloitte executive: Well, I was thinking more about the other ones.
...
Strictly speaking, Deloitte wasn't wrong; it delivered the final installment of value back to its client. LOL.

Earlier this year, Deloitte declared it would start using generative AI for its reports as a way of enhancing the value provided to its clients. I don't remember if they said it in a specific report or not, but I recall seeing it.
The citation issue continues to trip people up across the spectrum, from lawyers to business analysts. It's striking how many supposedly smart people do not understand the limits of the tools they insist will deliver such amazing value.