Yeah, I think there should be an all-new law for this. Fraud could work, but only for the people who were paying for the AI's services. It wouldn't do anything for the people the AI cited in its output.
Though I think fraud requires intentional deception. This feels more like negligent deception (I'm using the word as a layperson, so don't take that as the actual legal definition of "negligence"). I think these people really do believe the AI will produce accurate output, or they'd never pay for it and try to pass it off as legit.
The makers of AI, on the other hand, have more than enough evidence to know that any output from their software is likely to contain errors. It feels like there's some possibility of making fraud stick there: they know that, and they still market it to people anyway.