Blocking scrapers or implementing a sophisticated DoS detection scheme seems a little orthogonal to the security of the api.
If they had an anti-scraping layer in there, it sounds like the api would otherwise be identical. A bad guy and a good guy are calling the api in the same way; the only difference is the frequency of calls.
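If call frequency really is the only distinguishing signal, the fix is rate limiting in front of the api, not an api redesign. A minimal sketch of a per-client token bucket (the names, rates, and `check` helper here are made up for illustration, not anyone's actual implementation):

```python
import time

class TokenBucket:
    """Refills at `rate` tokens/sec up to `capacity`; each call costs one token."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client identity (api key, IP, whatever you key on).
buckets = {}

def check(client_id, rate=5, capacity=10):
    bucket = buckets.setdefault(client_id, TokenBucket(rate, capacity))
    return bucket.allow()
```

A client bursting past its bucket gets throttled while everyone else is untouched, and the api endpoints themselves don't change at all, which is the point.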
Nothing Gibson points out seems to be a fundamental api design flaw, but rather just a missing layer in the overall service. Most every other web service does this the same way. Blocking 3rd party clients isn't cool IMO, and is only an arms race anyway.
Or did I miss the memo that writing a straightforward scraper is an exploit these days? I guess people have gone to jail now for scripting curl...