submitted 3 months ago by MicroWave@lemmy.world to c/news@lemmy.world
  • Delta Air Lines CEO Ed Bastian said the massive IT outage earlier this month that stranded thousands of customers will cost it $500 million.
  • The airline canceled more than 4,000 flights in the wake of the outage, which was caused by a botched CrowdStrike software update and took thousands of Microsoft systems around the world offline.
  • Bastian, speaking from Paris, told CNBC’s “Squawk Box” on Wednesday that the carrier would seek damages from the disruptions, adding, “We have no choice.”
[-] Poem_for_your_sprog@lemmy.world 65 points 3 months ago

Why do news outlets keep calling it a Microsoft outage? It's only a CrowdStrike issue, right? Microsoft doesn't have anything to do with it?

[-] echodot@feddit.uk 36 points 3 months ago* (last edited 3 months ago)

It's sort of 90% of one and 10% of the other. Mostly the issue is a CrowdStrike problem, but Microsoft really should have it so their operating system doesn't continuously boot-loop if a driver is failing. It should be able to detect that and shut down the affected driver. Of course, equally, the driver shouldn't crash just because it doesn't understand some content it's being fed.

Also there is an argument to be made that Microsoft should have pushed back harder against allowing CrowdStrike to effectively bypass their kernel testing policies, since that obviously negates the whole point of the tests.

Of course both of these issues also exist on Linux, so it's not as if this is a problem unique to Microsoft.
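
To make the "detect and shut down the affected driver" idea concrete, here's a minimal sketch of that kind of safeguard in Python. It's purely illustrative of the logic, not how Windows actually handles boot-start drivers; the state file, driver name, and threshold are all made up.

```python
# Conceptual sketch only: an OS-side safeguard that counts consecutive boot
# failures attributed to a third-party driver and quarantines it once a
# threshold is hit, instead of boot-looping forever.
import json
from pathlib import Path

CRASH_STATE = Path("boot_crash_state.json")  # hypothetical persistent counter
MAX_CONSECUTIVE_FAILURES = 3


def record_boot_result(driver: str, crashed: bool) -> bool:
    """Return True if the driver should be quarantined on the next boot."""
    state = json.loads(CRASH_STATE.read_text()) if CRASH_STATE.exists() else {}
    state[driver] = state.get(driver, 0) + 1 if crashed else 0  # clean boot resets
    CRASH_STATE.write_text(json.dumps(state))
    return state[driver] >= MAX_CONSECUTIVE_FAILURES


if __name__ == "__main__":
    CRASH_STATE.unlink(missing_ok=True)
    # Simulate three bad boots blamed on the same driver.
    for _ in range(3):
        if record_boot_result("csagent.sys", crashed=True):
            print("Quarantining csagent.sys and booting without it")
```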

[-] themeatbridge@lemmy.world 6 points 3 months ago

There's a good 20% of the blame belonging to the penny-pinchers who choose to allow third-party security updates without a testing environment, because the corporation is too cheap for proper infrastructure and disaster-recovery architecture.

Like, imagine if there was a new airbag technology that promised to reduce car crashes. And so everyone stopped wearing seatbelts. And then those airbags caused every car on the road to crash at the same time.

Obviously, the airbags that caused all the crashes are the primary cause. And the car manufacturers that allowed airbags to crash their cars bear some responsibility. But then we should also remind everyone that seatbelts are important and we should all be wearing them. The people who did wear their seatbelts were probably fine.

Just because everyone is tightening IT budgets and buying licenses for panacea security services doesn't make it smart business.

[-] ricecake@sh.itjust.works 7 points 3 months ago

In this case, it's less like they stopped wearing seatbelts, and more like the airbags silently disabled the seatbelts, turning them into nothing more than a fun sash, without telling anyone.

To drop the analogy: the update deployed in a way that didn't inform the owners of the affected systems, and didn't respect any of their configuration regarding update management.

[-] smeenz@lemmy.nz 2 points 3 months ago* (last edited 3 months ago)

The CrowdStrike driver is flagged as boot-critical, which prevents exactly what you describe from happening.

[-] echodot@feddit.uk 1 points 3 months ago

Yeah, I know, but booting into safe mode ignores that flag, so you can still boot even if a driver marked boot-critical is failing. The flag only applies to normal boots.
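
If you want to see this on an actual Windows box, here's a rough Python sketch using the standard winreg module. "CSAgent" as the Falcon sensor's service name is an assumption (check your own install): a Start value of 0 means the driver is a boot-start driver, and safe mode only loads drivers and services listed under the SafeBoot keys, which is why it effectively ignores the flag.

```python
# Rough sketch (Windows only): check whether a driver is boot-start and
# whether safe mode would load it at all. Service name is an assumption.
import winreg

SERVICE = "CSAgent"  # assumed name of the CrowdStrike Falcon sensor service

# Start = 0 means SERVICE_BOOT_START: the boot loader loads this driver.
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                    rf"SYSTEM\CurrentControlSet\Services\{SERVICE}") as key:
    start, _ = winreg.QueryValueEx(key, "Start")
print(f"{SERVICE} Start value: {start} (0 = boot-start)")

# Safe mode only loads what is listed under the SafeBoot keys, so a
# boot-start driver that isn't listed there simply doesn't load in safe mode.
for profile in ("Minimal", "Network"):
    path = rf"SYSTEM\CurrentControlSet\Control\SafeBoot\{profile}\{SERVICE}"
    try:
        winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path))
        print(f"{SERVICE} would load in safe mode ({profile})")
    except FileNotFoundError:
        print(f"{SERVICE} is skipped in safe mode ({profile})")
```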

[-] cheddar@programming.dev 31 points 3 months ago* (last edited 3 months ago)

The answer is simple: they have no idea what they are talking about. And that is true for almost every topic they report on.

[-] RizzRustbolt@lemmy.world 1 points 3 months ago

But... the BSOD!

[-] Rekhyt@lemmy.world 14 points 3 months ago

It was a Crowdstrike-triggered issue that only affected Microsoft Windows machines. Crowdstrike on Linux didn't have issues and Windows without Crowdstrike didn't have issues. It's appropriate to refer to it as a Microsoft-Crowdstrike outage.

[-] ricecake@sh.itjust.works 28 points 3 months ago

Funnily enough, CrowdStrike on Linux had a very similar issue a few months back.

[-] eyeon@lemmy.world 3 points 3 months ago

It's similar. They did cause kernels to crash, but that's because they hit and uncovered a bug in the kernel's eBPF sandboxing, which has since been fixed.

[-] jaybone@lemmy.world 2 points 3 months ago

Are they actually shipping kernel modules? Why is this needed to protect from whatever it is they supposedly protect from?

[-] corsicanguppy@lemmy.ca 2 points 3 months ago

They need a file I/O shim. That's got to be a kernel module or it'll be too slow.

[-] Poem_for_your_sprog@lemmy.world 4 points 3 months ago

I guess Microsoft-CrowdStrike is fair, since the OS doesn't have any kind of protection against a shitty antivirus destroying it.

I keep seeing articles that just say "Microsoft outage", even on major outlets like CNN.

[-] SaltySalamander@fedia.io 3 points 3 months ago

Microsoft did have an Azure outage the day before that affected airlines. Media people don't know enough about it to differentiate the two issues.

[-] Dran_Arcana@lemmy.world -1 points 3 months ago

To be clear, an operating system in an enterprise environment should have mechanisms to access and modify core system functions. Guard-railing anything that could cause an outage like this would make Microsoft a monopoly provider in any service category that requires this kind of access to work (antivirus, auditing, etc.). That is arguably worse than incompetent IT departments hiring incompetent vendors to install malware across their fleets, resulting in mass downtime.

The key takeaway here isn't that Microsoft should change windows to prevent this, it's that Delta could have spent any number smaller than $500,000,000 on competent IT staffing and prevented this at a lower cost than letting it happen.

[-] echodot@feddit.uk 3 points 3 months ago

Delta could have spent any number smaller than $500,000,000 on competent IT staffing and prevented this at a lower cost than letting it happen.

I guarantee someone in their IT department raised the point that they shouldn't just blindly install updates. I can guarantee they advised testing them first, because any borderline-competent IT professional knows this stuff. I can also guarantee they were ignored.

[-] ricecake@sh.itjust.works 5 points 3 months ago

Also, part of the issue is that the update rolled out in a way that bypassed deployments that had auto-updates disabled.

You did not have the ability to disable this type of update or control how it rolled out.

https://www.crowdstrike.com/blog/falcon-content-update-preliminary-post-incident-report/

Their fix for the issue includes "slow rolling their updates", "monitoring the updates", "letting customers decide if they want to receive updates", and "telling customers about the updates".

Delta could have done everything by the book regarding staggered updates and testing before deployment and it wouldn't have made any difference at all. (They're an airline so they probably didn't but it wouldn't have helped if they had).
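
To illustrate what everyone assumed was happening, here's a tiny Python sketch of a policy gate an update client could apply before installing anything. It is not CrowdStrike's code, and the field names are invented; the point is that the rapid-response channel files skipped checks like these entirely.

```python
# Illustrative sketch only -- NOT CrowdStrike code. It shows the kind of
# customer-controlled gate people assumed applied to every update, which the
# rapid-response channel files bypassed in practice.
from dataclasses import dataclass


@dataclass
class UpdatePolicy:
    auto_update: bool = False  # the customer has disabled automatic updates
    ring: str = "canary"       # staged-rollout ring this host belongs to


def should_apply(policy: UpdatePolicy, target_ring: str) -> bool:
    """Return True only if the customer's configured policy allows the update."""
    if not policy.auto_update:
        return False                   # respect the customer's kill switch
    return policy.ring == target_ring  # respect staged rollout rings


if __name__ == "__main__":
    policy = UpdatePolicy(auto_update=False)
    print(should_apply(policy, "broad"))  # False: the update is held back
```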

[-] corsicanguppy@lemmy.ca 0 points 3 months ago

Delta could have done everything by the book

Except pretty much every paragraph in ISO27002.

That book?

Highlights include:

  • ops procedures and responsibilities
  • change management (ohh. That's a good one)
  • environmental segregation for safety (ie don't test in prod)
  • controls against malware
  • INSTALLATION OF SOFTWARE ON OPERATIONAL SYSTEMS
  • restrictions on software installation (ie don't have random fuckwits updating stuff)

...etc. Like, it's all in there. And I get that it's super-fetch to do the cool stuff that looks great on a resume, but maybe, just fucking maybe, we should be operating like we don't want to use that resume every 3 months.

External people controlling your software rollout by virtue of locking you into some cloud bullshit for security software, when everyone knows they don't give a shit about your app's security or your SLA?

Glad Skippy's got a good looking resume.

[-] ricecake@sh.itjust.works 3 points 3 months ago

Yes, that book. Because the software indicated to end users that they had disabled, or otherwise asserted appropriate controls over, the system updating itself and its update process.

That's sorta the point of why so many people are so shocked and angry about what went wrong, and why I said "could have done everything by the book".

As far as the software communicated to anyone managing it, it should not have been doing updates, and CrowdStrike didn't advertise that it updated certain definition files outside of the exposed settings, nor did they communicate that those changes were happening.

Pretend you've got a nice little fleet of servers. Let's pretend they're running some vaguely responsible Linux distro, like CentOS or Ubuntu.
Pretend that nothing updates without your permission, so everything is properly by the book. You host local repositories that all your servers pull from so you can verify every package change.
Now pretend that, unbeknownst to you, Canonical or Red Hat had added a little thing to dnf or apt to let it install really important updates really fast, and it didn't pay any attention to any of your configuration files, not even the setting that says "do not under any circumstances install anything without my express direction".
Now pretend they use this to push out a kernel update that patches your kernel into a bowl of lukewarm oatmeal and reboots your entire fleet into the abyss.
Is it fair to say that the admin of this fleet is a total fuckup for using a vendor that, up until this moment, was generally well regarded and presented no real reason for doubt while being commonly used? Even though they used software that connected to the Internet, and maybe even paid for it?

People use tools that other people build. When the tool does something totally insane that they specifically configured it not to, it's weird to just keep blaming them for not doing everything in-house. Because what sort of asshole airline doesn't write their own antivirus?

[-] rekorse@lemmy.world 1 points 3 months ago

General practices aside, should they really not plan any backup system, though? CrowdStrike did not cause $500 million in damages to Delta; Delta's disaster recovery response did.

Where do we draw the line there, though? I'm not sure. If you set my house on fire but the fire department just stands outside and watches it burn for no reason, who should I be upset with?

[-] ricecake@sh.itjust.works 1 points 3 months ago

Well, in your example you should be mad at yourself for not having a backup house. 😛

There's a lot of assumptions underpinning the statements around their backup systems. Namely, that they didn't have any.
Most outage backups focus on datacenter availability, network availability, and server availability.
If your service needs one server to function, having six servers spread across two data centers, each with at least two ISPs, is cautious but prudent. Particularly if you're set up to do rolling updates, so only one server should ever be "different" at a time, leaving you with a redundant copy at each location no matter what.
This goes wrong if someone magically breaks every redundant server at the same time. The underlying assumption around resiliency planning is that random failure is probabilistic in nature, and so by quantifying your failure points and their failure probability you can tune your likelihood of an outage to be arbitrarily low (but never zero).
If your failure isn't random, like a vendor bypassing your update and deployment controls, then that model fails.
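
A quick back-of-the-envelope in Python shows why that model works when failures are independent and collapses when they're correlated; the per-server failure probability here is made up for illustration.

```python
# Made-up numbers: with independent failures, each redundant server makes a
# total outage roughly 100x less likely; a correlated failure (a bad push
# hitting every box at once) ignores that math entirely.
p_single = 0.01  # assumed chance any one server is down at a given moment

for n in (1, 2, 3, 6):
    print(f"{n} redundant servers: P(all down) = {p_single ** n:.0e}")

# A simultaneous bad update is not independent: it takes out all n at once,
# so P(all down) collapses back to roughly the probability of the bad push.
```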

A second point: an airline uses computers that aren't servers, and requires them for operations. The ticketing agents, the gate crew that manages where people sit and boarding, the ground crew that need to manage routine inspection reports, the baggage handlers that put bags on the right cart to get them to the right plane, and office workers who manage stuff like making sure fuel is paid for, that crews are ready for when their plane shows up and all that stuff that goes into being an airline that isn't actually flying planes.
All these people need computers, and you don't typically issue someone a redundant laptop or desktop computer. You rely on hardware failures being random, and hire enough IT staff to manage repairs and replacement at that expected cadence, with enough staff and backup hardware to keep things running as things break.

Finally, if what you know is "computers are turning off and not coming back online", your IT staff is swamped, systems are variously down or degraded, staff in a bunch of different places are reporting that they can't do their jobs, your system is in an uncertain and unstable position. This is not where you want a system with strict safety requirements to be, and so the only responsible action is to halt operations, even if things start to recover, until you know what's happening, why, and that it won't happen again.

As more details have come out about the issues that Delta is having, it appears that it's less about system resiliency, although needing to manually fix a bunch of servers was a problem, and more that the scale of flight and crew availability changes overloaded that aforementioned scheduling system, making it difficult to get people and planes in the right place at the right time.
While the application should be able to more gracefully handle extremely high loads, that's a much smaller failure of planning than not having a disaster recovery or redundancy plan.

So it's more like I built a house with a sprinkler system, and then you blew it up with explosives. As the fire department and I piece it back together, my mailbox fills with mail and tips over into a creek, so I miss paying my taxes and need to pay a penalty.
I shouldn't have had a crap mailbox, but it wouldn't have been a problem if you hadn't destroyed my house.

[-] rekorse@lemmy.world 1 points 2 months ago

First thank you for taking the time to type all of that out.

I think I follow your theory well enough, but (I know this is 2 weeks later so I won't look up any new information) I was under the impression Delta was an outlier in their response compared to other airlines.

And one point about redundancies: why shouldn't they consider a single operating system a single failure point? If all 6 servers in the multiple locations run Windows, and Windows fails, that's awful, right? Can they not dual boot or have a second set of servers? I do this in my own home, but maybe that's not something that scales well.

I'm interested if your opinion has changed now that there has been a bit of time to have some more data come out on it.

[-] ricecake@sh.itjust.works 1 points 2 months ago

You are correct that Delta was an outlier, but it wasn't with regard to the scale of the outage; it was that their scheduling software was down far longer and they handled a lot of the customer side of things significantly less well.

Generally, your protection against operating system issues is the aforementioned restriction on changes and how they go out.
If something is stable, you can expect it to remain stable unless something changes or random chance breaks something.
The operational cost of running multiple operating systems in production like you describe would be high. Typically software is only written to work on one platform, and while it can be modified to work on others, it's usually a cost with no benefit outside of a consumer environment.
Different operating systems have different performance characteristics you need to factor in for load scaling, different security models, and different maintenance requirements.
Often, but not always, server administrators will focus on one OS, so adding more to the mix can mean people are rusty with whichever is your backup, which can be worse than just focusing on fixing the issue with the primary.
OS bugs are rare, and they usually manifest early or randomly. It's why production deployments tend to use the OS as long as it's supported: change means learning the new issues, and you've probably already encountered all the bullshit with what you're currently using. That's why the Linux distros tend to have long-term-support versions, and Windows Server tends to just get support for a long time with terrible documentation.

I'm a Linux guy, so defending windows feels weird, and I want to include that I don't think anyone should use it, particularly for a server, but the professional in me acknowledges that it's a perfectly functional hammer.

As we've learned more, I've become more disparaging of Delta's choice to not keep the scheduling system modernized in a way that could recover faster, and to not invest enough in making systems homogeneous across different airports. I still think these issues are largely independent of their actual disaster recovery or resiliency plans.
Inevitably, the lawsuits will determine that the blame for the damage is split between the two of them. My bet is 70/30 CrowdStrike/Delta, since it can easily be demonstrated that the issue was fundamentally caused by CrowdStrike and negatively impacted other airlines and businesses in general. Some of it was clearly Delta's fault for failing to keep a system modernized enough to handle a massive shift like this; they would have been similarly disrupted by any outage with flight cancellations.

[-] rekorse@lemmy.world 1 points 2 months ago

Would you say that an OS forced-update error like this is so rare that Delta didn't need to plan for it? If I understand you right, it's not actually a problem that Delta used Windows for their servers, at least not to the point that it would affect liability.

If Delta was the only airline that set up their infrastructure this way, to the point that it was markedly different from other companies, could it be argued that they essentially didn't protect themselves at all?

I'm still having a lot of trouble figuring out how CrowdStrike would even assess a risk like this if the possible payment is based on how well a company recovers and how much income they lost.

I actually agree with your 70/30 split, but unless Delta paid more than the other airlines to justify the payout in damages, it's still confusing to me how the amount CrowdStrike has to pay depends to some degree on Delta's setup and restoration.

I think there's just not any better way to handle this, and I'm searching for an answer that doesn't exist.

[-] ricecake@sh.itjust.works 1 points 2 months ago

Only for the sake of specificity: CrowdStrike forced the update, not the OS. :) And yeah, that's generally unheard of. Like, so unheard of that it's the kind of occurrence that reverses professional recommendations, based purely on how they could release a product that bypassed user expectations so aggressively and without any documentation that it was happening.
I work in the security sector with computers, and before all this I would have said "yeah, CrowdStrike is a widely deployed product and if it fits your requirements it's reasonable to use". Now I would strongly recommend against it, not because of this incident, but because of the engineering, product, and safety culture that thought it was okay to design a product this way without user controls or even documentation around any part of it. Their post-incident report is horrifying in terms of the testing it reveals they weren't doing.

I wouldn't advise someone to use windows for a server, but that's a preference thing, not a "hazard" thing. If they had a working windows setup I wouldn't even comment on it.

What sounds like happened to Delta is that they were set up roughly like other companies. Maybe a little loose on different setups at different airports. That's a forgivable level of slop. Where they differed was in having a piece of software that couldn't handle being entirely shut off and then immediately loaded to 100% with no ease-in.
Scheduling is a type of computer problem that's very susceptible to getting increasingly difficult the bigger the number of things being worked with. Like exponentially more difficult, but it's actually worse than exponential.
I know nothing about their system, but I can guess that it worked fine when it was running because it needed to make a small number of scheduling decisions at a time, and could look at the existing state of things as a decided "fact". Start the system fresh, and suddenly it needs to compare the hundreds of airports, more hundreds of planes and crews, and thousands of possible routes to each other, and it is looking at literally billions of possible schedules which it needs to sort through to pick the best ones.
Other airlines appear to have scheduling systems that were either developed using more modern techniques that can find "good enough" very efficiently, or the application was written to fail less easily or had better hardware so it could work faster.
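
To put rough numbers on "worse than exponential": the number of ways to pair n crews with n flights grows factorially. A toy Python calculation, with sizes that are nowhere near Delta's real operation:

```python
# Toy numbers only: assigning n crews to n flights has n! possible pairings,
# which outgrows plain exponential (2^n) very quickly. A real reschedule also
# has constraints (crew rest rules, aircraft type, positioning), but the
# search space it starts from looks like this.
from math import factorial

for n in (5, 10, 15, 20):
    print(f"{n} crews / {n} flights: {factorial(n):.2e} pairings vs 2^{n} = {2 ** n:.2e}")
```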

For whatever reason, delta was the only one that had the key bit of software fail to come back up.

Delta has higher costs than the other airlines because there are regulations protecting travelers and ensuring they get appropriate refunds and accommodations if their flights are cancelled. Other airlines were able to shift people around and get going again before they had to pay out too much in ticket refunds, food, or hotels.
Delta is arguing that crowdstrike is responsible for the total cost of the incident, which would include all the refunds and hotels, since they caused it.
CrowdStrike recently responded that they think their liability is no greater than $10 million. They seem to be taking the position that they're only responsible for the immediate effects, so things like diverting aircraft, needing to manually poke systems, and all that.

"Yeah I t-boned you when I ran a red light, so I owe you for the damage to your car, but your car was a dangerous piece of crap so I'm not responsible for your broken legs, hospital bills or lost wages".
I think the judge will find that running the red light means they are responsible for the extended consequences of their actions, even if they're vastly in excess of what anyone would have predicted up front, but that the car was pretty dangerous and it was really only a matter of time, so it's not all on them.

If there's one thing I've learned from reading about court cases, it's that a civil suit like this will get really complicated with how they assess damages and responsibilities.

And yeah, there's no perfect answer for computer system stability. You can never get perfect stability, and each 9 you add to your 99.9% uptime costs more than the last one. Eventually you have teams of people whose full-time job is keeping the system up for an additional second per year. And even with that, sometimes Google still goes down, because it's all a numbers game.
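
For anyone who hasn't seen the numbers behind "each nine costs more", here's the downtime budget per year at each level, in a few lines of Python:

```python
# Allowed downtime per year shrinks by 10x with every nine of availability.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(2, 6):
    unavailability = 10 ** -nines  # 1%, 0.1%, 0.01%, 0.001%
    downtime = MINUTES_PER_YEAR * unavailability
    print(f"{(1 - unavailability):.3%} uptime -> ~{downtime:,.1f} minutes of downtime/year")
```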

I didn't mean to ramble so long, but I have opinions and I get type-y before bed. :)

[-] GroupNebula563@lemmy.world 1 points 2 months ago

oh hey you're the vegan cat guy

[-] Dran_Arcana@lemmy.world 2 points 3 months ago

Competent IT staffing includes IT management

[-] SaltySalamander@fedia.io 1 points 3 months ago

Delta didn't download the update, tho. Crowdstrike pushed it themselves.

[-] Dran_Arcana@lemmy.world 2 points 3 months ago

Yes, the incompetence was a management decision to allow an external vendor to bypass internal canary deployment processes.

[-] echodot@feddit.uk 1 points 3 months ago

If you own the network you can prevent anything you want.

[-] corsicanguppy@lemmy.ca 2 points 3 months ago

The key takeaway here isn't that Microsoft should change windows to prevent this, it's that Delta could have spent any number smaller than $500,000,000 on competent IT staffing and prevented this at a lower cost than letting it happen.

Well said.

Sometimes we take out technical debt from the loan shark on the corner.

[-] skuzz@discuss.tchncs.de 11 points 3 months ago

Honestly, with how badly Windows 11 has degraded in the last 8 or 9 months, it's probably good to turn up the heat on MS even if it isn't completely deserved. They're pissing away their operating-system goodwill so fast.

There have been some discussions on other Lemmy threads, the tl;dr is basically:

  • Microsoft has a driver certification process called WHQL.
  • This would have caught the CrowdStrike glitch before it ever went production, as the process goes through an extreme set of tests and validations.
  • AV companies get to circumvent this process, even though other driver vendors have to use it.
  • The part of CrowdStrike that broke Windows, however, likely wouldn't have been part of the WHQL certification anyways.
  • Some could argue software like this shouldn't be kernel drivers, maybe they should be treated like graphics drivers and shunted away from the kernel.
  • These tech companies are all running too fast and loose with software and it really needs to stop, but they're all too blinded by the cocaine dreams of AI to care.

[-] corsicanguppy@lemmy.ca 4 points 3 months ago* (last edited 3 months ago)

They're pissing away their operating system goodwill so fast.

They pissed it away {checks DoJ v. Microsoft} 25 years ago.

[-] skuzz@discuss.tchncs.de 1 points 3 months ago

Windows 7 and especially 10 started changing the tune. Windows 10: Linux and Android apps running integrated into the OS, huge support for very old PC hardware, Android phone integration, stability improvements like moving video drivers out of the kernel, and backwards compatibility with very old apps (1998's Unreal runs fine on it!) by containerizing some of them to maintain stability while still allowing old code to run. For a commercial OS, it was trending towards something worth paying for.

[-] stoly@lemmy.world 2 points 3 months ago

I don’t know that Microsoft has OS goodwill. People use it because the apps are there, not because Windows has a good user experience.

[-] xradeon@lemmy.one 1 points 3 months ago

I think what I was hearing is that the CrowdStrike driver is WHQL approved, but the theory is that it's just a shell to execute code from the updates it downloads, thus effectively bypassing the WHQL approval process.

[-] smeenz@lemmy.nz 1 points 3 months ago

The driver is WHQL approved, but the update file was full of nulls and broke it.
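
For illustration only, here's the sort of sanity check a loader could run before trusting a downloaded content file, sketched in Python. This is not CrowdStrike's actual parser; the file name and magic bytes are invented, and it just shows "validate before you dereference".

```python
# Illustrative only -- not CrowdStrike's parser. Validate a downloaded content
# file before trusting it, rather than dereferencing whatever is inside.
from pathlib import Path

EXPECTED_MAGIC = b"\xaa\xaa"  # hypothetical file signature


def is_sane_channel_file(path: Path) -> bool:
    data = path.read_bytes()
    if not data or data.count(0) == len(data):
        return False  # empty or all null bytes: reject
    if not data.startswith(EXPECTED_MAGIC):
        return False  # wrong signature: reject
    return True


if __name__ == "__main__":
    demo = Path("channel_file_demo.bin")
    demo.write_bytes(b"\x00" * 1024)   # simulate an all-null content file
    print(is_sane_channel_file(demo))  # False: refuse to load it
```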

Microsoft developed an API that would allow anti-malware software to avoid being in ring 0, but the EU deemed it to be anti-competitive and prohibited them from releasing it.

[-] jmcs@discuss.tchncs.de 2 points 3 months ago

Because Microsoft could have prevented it by introducing proper APIs in the kernel, like Linux did when CrowdStrike did the same thing with their Linux solution?

[-] rekorse@lemmy.world 1 points 3 months ago

It's sort of like calling the 9/11 terrorist attacks "the day the towers fell."

Although in my opinion, Microsoft does bear some blame here, not for the individual outage, but for Windows just being a shit system and for tricking people into relying on it.
