Consulting firm quietly admitted to GPT-4o use after fake citations were found in August.
"If you can't be part of the solution, there's good money to be made by prolonging the problem."Sorry for venting guys, but I have had it with these consultants. Once talked to a few Deloitte guys at a job fair. We found out in a few minutes we were not a match. My God what a bunch of presumptuous assholes. As we say here in Dutch, they were dropped upwards. Corporate speak on steroids to package ordinary ideas a local farmer can come up with. Lots of wrapping, little substance. Why am I not surprised they used an Ai LLM tool to help them wrap things up. It is perfectly suited for that. Maybe they need to buy more expensive suits to hide their not that unordinary intelligence even more.
Yeah, I think there should be an all-new law based on it. Fraud could work, but only for the people who were paying AI for their services. It wouldn't do anything for the people that the AI cited in its output.

Perhaps there's a claim under right of publicity or fraud.
It's unfortunate that the law is relatively tolerant about falsehoods, even when they are deliberate or (as here) clearly negligent. That's left us very unprepared for the modern age.
Deloitte and the other "Big 4" consulting firms aren't listed on any stock exchange. As auditors for a vast number of publicly traded companies, being publicly traded themselves would create massive conflicts of interest.

Anybody thinking about shorting Deloitte? For a professional services organization, using AI to do professional work, or the notorious work of its former CEO Cathy Engelbert, sure isn't helping.
If an individual did this, they'd be in jail. I don't believe Australia has the concept of corporate personhood, but here in the US, if corporations are people, why can't we throw corporations in jail? Because there are many, many American companies that deserve decades in prison at this point (I'd give Deloitte maybe 4 years for this; they can be out in 2 years 3 months due to corporate prison overcrowding).
The AI does enhance the report. I mean, the report content is probably as garbage as the citations, but at least it cost the recipient nothing now lol, that is the enhancement.

Earlier this year, Deloitte declared it would start using generative AI for its reports as a way of enhancing the value provided to its clients. I don't remember if they said it in a specific report or not, but I recall seeing it.
The citation issue continues to trip people up across the spectrum, from lawyers to business analysts. It's striking how many supposedly smart people do not understand the limits of the tools they insist will deliver such amazing value.
Try asking one to provide a recipe for word salad.

If you want paragraphs of nonsense, LLMs are a great tool. If you want understanding, they are no substitute for some reading.
Any non-AI report would've been littered with just as many errors.

Earlier this year, Deloitte declared it would start using generative AI for its reports as a way of enhancing the value provided to its clients. I don't remember if they said it in a specific report or not, but I recall seeing it.
The citation issue continues to trip people up across the spectrum, from lawyers to business analysts. It's striking how many supposedly smart people do not understand the limits of the tools they insist will deliver such amazing value.
Almost half a million for a report? Wow... even if it was crafted by top-of-the-field experts and took a couple of months, that's a ridiculous amount of money to pay for a blame deflector.
There is a scene in my head that I'm 95% sure is from the Simpsons where someone commits insurance fraud and the insurance agent sent to pay out the money is so pure and trusting that he doesn't even think to question the payout despite increasingly obvious signs.

I do believe there is a word for what Deloitte did there.
Fraud.
Usually, academic papers include an abstract written by the author. So, why would you need AI to supply a summary?

I want to be fair about this, because I generally think ChatGPT is a useful tool for lit searches and summaries of papers (as with any summary, some nuance is lost). However, once I asked it for sources on a certain topic and it responded with hallucinated papers. My first clue that something wasn't quite right was when one of the papers (of which I was not previously aware) listed me as the first author...
The idea seems to be they get the "expert advice" for less than they'd pay an actual expert. The problem, of course, is that if you're not an expert in the field, you can't always recognize when your supposed expert, be it so-called AI or just some asshat defrauding you, is spewing garbage. The managers and executives all think they're much more intelligent than the vast majority of them actually are, though, so good luck getting them to grasp this basic fact.

This. I absolutely cannot understand why a consultancy, whose entire business model is "pay us large sums of money for our experts' advice," would rely on an LLM for even as much as grammar advice.
Not slander, libel. It was published.

Perhaps Lisa Crawford has a case for defamation or slander for having these false papers attributed to her. One way to stop the nonsense is to make it hurt. As it is, they are partially refunding the money, but clearly all they did was engineer a few AI prompts to get the report. Make them refund it all, make them pay for defamation, and send a message that this crap isn't okay.
Same for the lawyers who submit briefs to the court with fake legal citations.
And that's the easiest mistake to spot. Earlier this year a friend of mine stated that ChatGPT has gotten "really good" at doing literature reviews with references and citations. I tried it, and not only were some references completely non-existent, most of the real references did not support the statements they were attached to.

[...] if 10+ citations outright did not exist then Deloitte's analysis must have just been taken at face value with no serious review [...]
Fraud, as in common law fraud in the US, could already be at play, because the knowledge requirement (which can differ state by state) is generally knowledge or reckless disregard: "False representations made recklessly and without regard for their truth in order to induce action by another are the equivalent of misrepresentations knowingly and intentionally uttered." Engalla v. Permanente Med. Grp., Inc. Which is likely why they paid a refund: to attempt to make Australia whole and to be able to point to it in a fraud suit as evidence that they were not trying to be reckless.

Yeah, I think there should be an all-new law based on it. Fraud could work, but only for the people who were paying AI for their services. It wouldn't do anything for the people that the AI cited in its output.
Though I think fraud requires intentional deception. I feel like this is more negligent deception (layperson using the word, so don't take that as the actual legal definition of "negligence.") I think these people really do think the AI would produce accurate output, or they would never pay for it and try to pass it off as legit.
The makers of AI, on the other hand, have more than enough evidence to know that any output by their software is likely to have errors. It feels like there's some possibility of making fraud stick there if they know that and still get people to use it.
Misciting reports to get the conclusion you want is exactly the main job of consultancy.

It's striking and worrying. Because a fake citation is easy to verify and fix. And it would be easy to add a "citation corrector" to an LLM that just removed or replaced bogus citations with "real ones". But the fake citations are a minor problem in themselves; they are mostly a canary for the other problems that are much harder to verify. If a report like this cited real research papers but misstated their results, it would be much harder to detect. Still possible, of course, but it requires a lot of work that negates much of the benefit of hiring consultants to make a research report, and it may also require significant subject-matter expertise that the client might not have.
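The "citation corrector" part really is the mechanically easy bit. As a rough sketch (not anything Deloitte or any LLM vendor actually does), a report's reference list can be checked against the Crossref REST API to flag entries with no plausible match; the helper name, the similarity threshold, and the second example title below are made up for illustration. This only catches references that appear not to exist at all, not real papers cited for claims they don't support.

```python
# Sketch: flag likely-hallucinated references by asking Crossref whether a
# closely matching title exists. A "plausible match" is weak evidence the
# paper is real; it says nothing about whether it supports the citing claim.
import requests
from difflib import SequenceMatcher

CROSSREF_WORKS = "https://api.crossref.org/works"

def has_plausible_match(title: str, threshold: float = 0.85) -> bool:
    """Return True if Crossref lists a work whose title closely matches `title`."""
    resp = requests.get(
        CROSSREF_WORKS,
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        for candidate in item.get("title", []):
            if SequenceMatcher(None, title.lower(), candidate.lower()).ratio() >= threshold:
                return True
    return False

# Reference titles as they might appear in a report's bibliography.
# The first is a real, well-known paper; the second is invented.
references = [
    "Attention Is All You Need",
    "A Totally Real Study of Welfare Compliance Algorithms (2023)",
]
for ref in references:
    status = "plausible match" if has_plausible_match(ref) else "no match - check by hand"
    print(f"{ref}: {status}")
```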
The right small consultant can be a godsend. A good friend was the CFO at a medium-sized company that consolidated divisions and laid him off (he saw it coming--no shenanigans or ill will involved). As an interim gig, a vendor he used recommended him to another of the vendor's clients, whose business had been growing and who was looking for consulting help setting up something more robust than his bookkeeper had been able to handle. My friend finished the analysis and made his recommendations, and the company's owner asked him to stick around and help implement it. "I spelled out the structure you need, and part of the recommendation is to hire a CFO instead of trying to do it yourself," my friend said.

The problem with big consulting is they never want to integrate with the clients. They just request a bunch of information, then write a strategy document telling the client what they want. Then they implement the plan, provide two weeks of handover, and piss off. The clients are left with some piece-of-shit platform that isn't fit for purpose and cost three times what on-prem teams could produce.
/small company consultant for the last 15 years
We always joked that we loved big consulting. They gave us endless work fixing their bullshit implementations
No no no, saying they "were made" alludes to a person doing something, whereas mistakes actually just will themselves into existence.

4. Mistakes were made.
Exactly my question; how can the nonexistence of the supporting evidence for conclusion X have no effect on the veracity of conclusion X?

The widespread adoption of AI is revealing how common motivated-reasoning analysis is across every domain and industry. I would think fake citations would warrant a reexamination of the entire document, not just the quiet removal of those fake cites.
I mean, I remember doing that as an undergrad banging out meaningless term papers and thinking "gosh, I'm going to get a great grade on this despite realizing halfway through that I'm wrong in my hypothesis and just ignoring those citations."
But I suppose I shouldn't be surprised. The entire purpose of those giant consulting firms is to send some 28-year-olds who make $300,000/year to go suss out what shitty thing Big Boss wants to do, then write a report justifying why firing everyone, doing a very unpopular thing, or treading heavily on moral and/or legal boundaries is the goal.
Imaginary AI cites fit perfectly into that system when you understand what the real product is. Voila. The Big 4.
Because consultancy companies are not there to provide consultations; they are an expensive exculpation tool. Nobody gets fired for hiring a consultancy company.

This. I absolutely cannot understand why a consultancy, whose entire business model is "pay us large sums of money for our experts' advice," would rely on an LLM for even as much as grammar advice.

If your expert is ChatGPT, why do I pay you? I can write prompts myself. This is an incredibly fast way to sink your entire business model. If I were McKinsey or one of the others, I'd be out there advertising "We know what we're doing, we don't need AI to do it poorly."
But do you think you could show that just having her cited in a paper was damaging (especially when it was one of numerous citations), as opposed to something like claiming she was a co-author of the paper?

From your other post, a common type of damage to allege in defamation suits is reputational harm, and she could likely argue that having her name associated with a fraudulent paper is reputationally damaging here in the US. Her higher bar might be showing that it was at least negligence to list her as the author.
"Don't worry -- we didn't write it" is certainly a slogan a company could have.The AI does enhance the report. I mean the report content is probably as garbage as the citations but at least it cost the recipient nothing now lol, that is the enhancement![]()
Because charging for said experts but not having to pay for them is even better; greed always wants more for less.

This. I absolutely cannot understand why a consultancy, whose entire business model is "pay us large sums of money for our experts' advice," would rely on an LLM for even as much as grammar advice.

If your expert is ChatGPT, why do I pay you? I can write prompts myself. This is an incredibly fast way to sink your entire business model. If I were McKinsey or one of the others, I'd be out there advertising "We know what we're doing, we don't need AI to do it poorly."
Having worked in an engineering management role in a company that was heavy on the use of consultants, I really wish this was the case.

It looks like AI might destroy the consultancy industry. Why pay millions to a fancy consultant when one can ask an LLM to crank out an equally worthless report? Management pays consultants to justify decisions they have already made and provide a way to deflect blame when they go awry. It sounds a lot cheaper to fire up an LLM, get the nonsense one wants, and have a dumb computer to blame.
They consider 10% small?

Kyle Orland said: Deloitte and the DEWR buried that explanation in an updated version of the original report published Friday "to address a small number of corrections to references and footnotes," according to the DEWR website.
They mean producing more pages in less time (and with less staff). I can't wait until Detroit & Touch publish a genAI-enhanced cookery book with genAI recipes mixed in!

Earlier this year, Deloitte declared it would start using generative AI for its reports as a way of enhancing the value provided to its clients.
Using "AI" to

They consider 10% small?
The problem isn't the size of the correction but the fact that we don't know how much AI was used in the production. Are the actual ideas sound? If they didn't bother to check something simple such as references, did they do any checking at all?
George Touche would never have allowed this to happen.
Retrieval Augmented Generation (RAG) systems used to get data from documents into an LLM's context are lossy.

Genuine question - that makes me question the value of the summaries. How can we know the summaries are correct without reading the paper itself? Is there any research on not just lost nuance, but the hallucinations in AI summaries? I'd be interested in seeing it across approaches, such as NotebookLM and Kagi, which can pin to a set of sources, or requests to summarize a single paper across different models.
I occasionally use AI to summarize things, but I don't trust it past summarizing things where the ultimate goal is to point me to the actual authoritative source when I'm having a hard time finding it, so I can verify the summary. Do you trust the summaries you get? And if so, why?
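Not research, but the "verify before trusting" step can be partly scripted. Below is a crude sketch of my own (the function name, the overlap threshold, and the toy texts are invented; this is not how NotebookLM or Kagi work): it flags summary sentences that share little vocabulary with the source document, which catches some invented material while proving nothing about the sentences it lets through.

```python
# Crude faithfulness check: flag summary sentences whose content words barely
# overlap with the source text. Low overlap = worth re-reading the source;
# high overlap does NOT guarantee the sentence is accurate.
import re

def suspicious_sentences(summary: str, source: str, min_overlap: float = 0.5):
    source_words = set(re.findall(r"[a-z]+", source.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        content = {w for w in words if len(w) > 3}  # skip short function words
        if not content:
            continue
        overlap = len(content & source_words) / len(content)
        if overlap < min_overlap:
            flagged.append((round(overlap, 2), sentence))
    return flagged

# Toy example with made-up text: the second summary sentence is unsupported.
source = "The trial reduced average wait times by 12 percent across three regional offices."
summary = ("Wait times fell by 12 percent in regional offices. "
           "The program also doubled employment outcomes nationwide.")
for score, sentence in suspicious_sentences(summary, source):
    print(f"[overlap {score}] {sentence}")
```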
Except for the very likely case that McKinsey and all the other consultancies are doing the exact same thing Deloitte did. After all, consultancies are simply made up of large numbers of MBAs who provide advice to industries in which they have absolutely no experience, getting paid by companies' C-suites to absolve themselves of the repercussions of the decisions they take by saying that they were guided by a world-respected consultancy.

This. I absolutely cannot understand why a consultancy, whose entire business model is "pay us large sums of money for our experts' advice," would rely on an LLM for even as much as grammar advice.

If your expert is ChatGPT, why do I pay you? I can write prompts myself. This is an incredibly fast way to sink your entire business model. If I were McKinsey or one of the others, I'd be out there advertising "We know what we're doing, we don't need AI to do it poorly."
Bullshit.

The secretary of DEWR is former Deloitte partner Natalie James. An earlier Senate estimates hearing was told James was not involved in the decision to hire Deloitte to do the report.
It did enhance value for the clients; they got money back.

Earlier this year, Deloitte declared it would start using generative AI for its reports as a way of enhancing the value provided to its clients. I don't remember if they said it in a specific report or not, but I recall seeing it.
The citation issue continues to trip people up across the spectrum, from lawyers to business analysts. It's striking how many supposedly smart people do not understand the limits of the tools they insist will deliver such amazing value.