There's no IP law that protects model weights once they're exposed through public endpoints. They can't be a trade secret: under trade secret law, if the output is capable of revealing the weights, then the weights are inherently not secret.
And if any form of copyright could conceivably apply to "AI" output, it would only be because of the "AI" input, in which case all these companies are screwed, because they trained their LLMs on unlicensed content.
And while copying a model directly might violate copyright as a static collection of information in a specific format (much like copyrights on telephone directories, databases, etc.), assuming the model weren't itself a mass of derivative works of all the content it was trained on, distilling from a running model doesn't. In that context the weights are functional, and what's being distilled isn't a specific copy of the model's underlying data and format; it's a derivation of the functioning the model performs.
So there's no "stealing". At best there's some "unlicensed use", kind of like the "unlicensed use" of all that material they crawl. Boohoo. This whole LLM craze is both unsustainable and turning into a race to the bottom, where if you don't use it you'll get stomped by it, even though the primary thing it produces is same-same mediocrity. Please break the business model of these commercial thieves burning electricity and every chip in existence on modeling stolen content, the faster the better, so we can get past it. "Stealing" from them to do it is just the icing on the cake.