> Who Validates the Validator?

I dunno - Coast Guard?
> Swiss cheese theory of holes aligning: a bad template instance shouldn't be able to hose the core sensor, but a second bug might make that possible (especially if broken content is typically weeded out at the validation phase).

Nah, it was a comically trivial failure: just a bad file, broken validation code, and a lack of testing. Swiss cheese model? There are more holes than cheese.
https://en.m.wikipedia.org/wiki/Swiss_cheese_model
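The arithmetic behind the Swiss cheese model is worth making explicit. A toy sketch with entirely made-up failure probabilities (none of these numbers come from CrowdStrike): independent defensive layers multiply, so removing a layer, like a canary ring, raises the outage probability by orders of magnitude.

```python
# Toy Swiss cheese model: an outage needs the holes in every defensive
# layer to line up at once, so independent layers multiply.
# All probabilities below are invented for illustration.
layers = {
    "content validator lets a bad file through": 0.01,
    "sensor parser mishandles the bad file":     0.05,
    "rollout hits everyone at once":             1.00,  # no canary ring
}

p_outage = 1.0
for p in layers.values():
    p_outage *= p

# Adding a real staged-rollout layer that contains, say, 99% of bad
# pushes cuts the blast radius a further 100x.
p_with_canary = p_outage * 0.01
```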
> Who Validates the Validator?

An underpaid contractor
> An underpaid contractor

in some other country making $1/day.
Customers will also be able to subscribe to release notes about these updates.
> So they admit they deployed worldwide all at once. That has been against best practices for large-scale deployments for more than two decades. Sue them into oblivion.

Fun little anecdote about Crowdstrike's CEO George Kurtz.
Are there already organisations that will pivot to an A/B setup with two different providers for endpoint security?
Considering that this software runs at kernel level, that the operational impact can be huge, and that this isn't the first time something like this has happened?
A/B might also cover the case where the security software doesn’t protect (yet) but competing software does.
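If an organisation did try that, the mechanics are simple enough. A minimal sketch of a deterministic fleet split (the vendor names and the 50/50 ratio are placeholders, not a recommendation):

```python
import hashlib

def assign_provider(hostname: str,
                    providers: tuple = ("vendor_a", "vendor_b")) -> str:
    """Deterministically split the fleet between two endpoint-security
    vendors, so a faulty update from either one can only reach the
    machines assigned to it."""
    digest = hashlib.sha256(hostname.encode("utf-8")).digest()
    return providers[digest[0] % len(providers)]

fleet = [f"host-{i:03d}" for i in range(100)]
split = {p: [h for h in fleet if assign_provider(h) == p]
         for p in ("vendor_a", "vendor_b")}
```

The obvious cost: you now buy, deploy, and tune two products, and a missed detection on either half of the fleet is still a missed detection.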
> At some point, there was an MBA (or 12) in the decision-making matrix. I promise you they determined it would save a few bucks to ONLY use automated unit tests and cut out the actual test deployments to actual systems.
> That lab and the staff to run and maintain it would have cost them a few hundred thousand dollars a year!!! Can't absorb those kinds of operating costs in a multi-billion-dollar company... what would the almighty shareholders say???

We have a customer now that's trying to automate all QA to save money.
> this is sloppy devops on two major counts. Either one of these things would've saved millions or billions of customer dollars.

And, literally, likely some lives too.
> At some point, there was an MBA (or 12) in the decision-making matrix. I promise you they determined it would save a few bucks to ONLY use automated unit tests and cut out the actual test deployments to actual systems.
> That lab and the staff to run and maintain it would have cost them a few hundred thousand dollars a year!!! Can't absorb those kinds of operating costs in a multi-billion-dollar company... what would the almighty shareholders say???

QA/QC doesn't return tangible value on the quarterly report, so it must be a waste.
> How on earth did they not:
> 1) Have staggered rollout for such mission-critical stuff.
> 2) Test it on LIVE FREAKING WINDOWS SYSTEMS instead of trusting some unit-test content validator that isn't a real end-to-end test?
> It sounds like they test their content updates with a parser. Fine, that's great, but it's insufficient. Proper end-to-end systems testing is absolutely table stakes for this type of stuff. This isn't just some random Node.js module where you aren't necessarily culpable for downstream effects of breaking changes.
> This is sloppy devops on two major counts. Either one of these things would've saved millions or billions of customer dollars.

From the response:

> Implement a staggered deployment strategy for Rapid Response Content in which updates are gradually deployed to larger portions of the sensor base, starting with a canary deployment.

They should have been doing this already. This is what the lawsuits should hinge on.
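For what it's worth, the staggered deployment being promised here is not exotic. A hedged sketch of ring-based rollout with a telemetry gate (the ring sizes and crash budget are invented, and the deploy/telemetry hooks are placeholders for real infrastructure):

```python
# Hypothetical ring-based rollout: each ring receives the update only
# while crash telemetry from the rings before it stays under budget.
RINGS = [0.001, 0.01, 0.10, 0.50, 1.00]   # cumulative fraction of fleet
CRASH_BUDGET = 0.001                       # max tolerated crash rate

def staged_rollout(fleet, deploy, crash_rate):
    """deploy(hosts) pushes the update; crash_rate() reads telemetry
    from hosts already updated. Returns (completed, hosts_updated)."""
    done = 0
    for fraction in RINGS:
        target = int(len(fleet) * fraction)
        deploy(fleet[done:target])
        done = target
        if crash_rate() > CRASH_BUDGET:
            return False, done    # halt: the rest of the fleet untouched
    return True, done
```

With rings like these, a bad push bricks a fraction of a percent of machines before the gate trips, instead of the whole install base at once.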
> tldr "we'll prevent this by using bog-standard industry practices that are literally taught in schools."

I'm waiting to get some more details on the Uber Eats thing. So far there is no official statement, just a few people reporting it on Twitter. To me it smells like a joke or a scam.
Then they give out $10 Uber Eats gift cards as a "thank you," and even those don't work because they themselves cancelled them after issuance. Can't even roll out a fucking gift card.
Just astonishing.
Edit: a little anecdote from me. I recently interviewed there for an infrastructure/platform engineering position; the development pipeline was literally something my team would own (for that specific area/team, not the whole company). I went through the full round and got no offer, but it was a huge red flag to me that the hiring manager, when asked why he liked working at CrowdStrike, wouldn't shut up about the "stock price." He seemed not very interested in much beyond the fact that it was "fast growing." I suspect many managers were there for the IPO and don't give a fuck as long as they get their payout. To be fair, it was an overall good interview process with no BS, and people I know do like working there. But still...
> Fun little anecdote about Crowdstrike's CEO George Kurtz.
> In October 2009, McAfee promoted him to chief technology officer and executive vice president. Six months later, McAfee accidentally disrupted its customers' operations around the world when it pushed out a software update that deleted critical Windows XP system files and caused affected systems to bluescreen and enter a boot loop. "I'm not sure any virus writer has ever developed a piece of malware that shut down as many machines as quickly as McAfee did today," Ed Bott wrote at ZDNet.
> Pulled from the Wikipedia article about him, but verified through the ZDNet article and a couple of others I poked at.
> So, not his first rodeo of insufficient testing and bad practices in a group he is leading...

Someone help me out here. What is the exact dollar-value salary range where you start failing upwards?
> Yea, you can't just rely on unit tests and validators and sub-module checks, etc. They're good practice but insufficient to ship. Seems like someone didn't learn that very important lesson in their coding bootcamp.
> The embarrassing thing? I'm a fucking product manager who doesn't write a line of code and I know this. It's so basic that the "dumb product guys" who don't understand all the details of engineering devops get it.

I read this more as "they're blaming the testing teams."
> to allow its software to "gather telemetry on possible novel threat techniques."

They deployed a broadening data-collection update at midnight to all devices... this deserves a deeper dive as well.
> Ok so is the company liable for any of the downstream damages? (I feel like I know the answer to this.)

This very interesting post goes into the terms of service and basically concludes: "we told you not to use this software on critical systems, and if you did, it's on you."
True, but fuzzing would have greatly increased the likelihood of finding the problem before it caused global chaos. They have accepted this, as they are now promising to do it in the future...
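The shape of that promise is simple: a fuzzer mutates known-good content and asserts the parser only ever fails *cleanly*. A toy harness as a sketch (the file format and parser here are invented stand-ins, not CrowdStrike's actual channel-file format):

```python
import random

def parse_content(blob: bytes) -> list:
    """Toy stand-in for a content parser: a 4-byte little-endian record
    count, then 8-byte records. Trusting the length field is the classic
    failure mode a fuzzer flushes out."""
    if len(blob) < 4:
        raise ValueError("truncated header")
    count = int.from_bytes(blob[:4], "little")
    body = blob[4:]
    if count * 8 > len(body):       # the check fuzzing forces you to add
        raise ValueError("count exceeds payload")
    return [body[i * 8:(i + 1) * 8] for i in range(count)]

def fuzz(seed: bytes, iterations: int = 2000) -> None:
    """Mutate a valid file repeatedly; any exception other than a clean
    ValueError would be the kind of bug that takes a sensor down."""
    rng = random.Random(0)
    for _ in range(iterations):
        blob = bytearray(seed)
        for _ in range(rng.randint(1, 8)):   # flip a few random bytes
            blob[rng.randrange(len(blob))] = rng.randrange(256)
        try:
            parse_content(bytes(blob))
        except ValueError:
            pass                     # rejecting garbage input is the job
```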
Even the most impeccable testing can only assure you that your inputs won't cause your driver to misbehave; it can't assure you that you will always remain in control of which inputs your driver ends up chewing on.
> We have a customer now that's trying to automate all QA to save money.

I worked at a large "northern European" HQ'ed telecoms company (not saying which one...). I chuckle. Whatever. At least I'm not dealing with their idiot asses.
> How on earth did they not:
> 1) Have staggered rollout for such mission-critical stuff.
> 2) Test it on LIVE FREAKING WINDOWS SYSTEMS instead of trusting some unit-test content validator that isn't a real end-to-end test?
> It sounds like they test their content updates with a parser. Fine, that's great, but it's insufficient. Proper end-to-end systems testing is absolutely table stakes for this type of stuff. This isn't just some random Node.js module where you aren't necessarily culpable for downstream effects of breaking changes.
> This is sloppy devops on two major counts. Either one of these things would've saved millions or billions of customer dollars.

This exactly. If they'd tested it on even a single VM or bare-metal Windows machine, they'd have noticed the BSODs.
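That single-VM gate is easy to sketch. A hypothetical pre-release check (the config names and the provision/install/ping helpers are placeholders for whatever hypervisor tooling you actually have):

```python
# Hypothetical end-to-end gate: apply the update to real Windows VMs
# across common configurations and refuse to ship unless every box
# survives a reboot. A BSOD boot loop shows up as "never came back".
CONFIGS = ["win10-x64", "win11-x64", "server2019", "server2022"]

def smoke_test(update, provision, install, reboot_and_ping):
    """provision(config) boots a fresh VM; install(vm, update) applies
    the update; reboot_and_ping(vm) is True iff the VM comes back up.
    Returns the list of configs that failed; an empty list means ship."""
    failures = []
    for config in CONFIGS:
        vm = provision(config)
        install(vm, update)
        if not reboot_and_ping(vm):
            failures.append(config)
    return failures
```

Even this four-VM version would have flagged a file that bluescreens every Windows host that loads it.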
> As has been addressed far too many times, it's laziness and incompetence, not a necessary evil. Some of that should also be attributed to Microsoft, especially OS design.

No, on this Microsoft is stuck, beholden to its EU antitrust agreements from the aughts forcing it to allow third-party software to operate like this.
> How on earth did they not:
> 1) Have staggered rollout for such mission-critical stuff.
> 2) Test it on LIVE FREAKING WINDOWS SYSTEMS instead of trusting some unit-test content validator that isn't a real end-to-end test?
> It sounds like they test their content updates with a parser. Fine, that's great, but it's insufficient. Proper end-to-end systems testing is absolutely table stakes for this type of stuff. This isn't just some random Node.js module where you aren't necessarily culpable for downstream effects of breaking changes.
> This is sloppy devops on two major counts. Either one of these things would've saved millions or billions of customer dollars.

The worst part is you could very easily automate rollout to the test farm. You just need a test stage that deploys the thing to a bunch of common configurations in VMs, and maybe some on bare metal.
> From the response:
> They should have been doing this already. This is what the lawsuits should hinge on.

The option was in the standard deployment management console to have staged deployments of all patches.