Podcast: What the Equifax Hack Tells Us About Cybersecurity


The Equifax data breach quickly became one of the most notorious in history. It was large: over 147 million people had their financial records exposed to hackers. That was the figure as of March 2018; it has been revised upward several times and could grow further. The breach was also severe in data terms because it exposed individuals’ names, birth dates, addresses, driver’s license numbers and, the kicker, Social Security numbers. All of the information needed to commit financial fraud via identity theft was stolen in a single breach.

Equifax Breached With Known Vulnerability In Apache Struts

Everyone agrees this was a bad breach. However, that is only part of the story. The audit of the Equifax breach revealed that the point of entry was a known vulnerability in Apache Struts. It was a vulnerability that even Atomicorp’s free WAF product protected against, and there was a published patch that Equifax had installed…on all but one server. Bad luck? Maybe. The story continues.

Blame it On the Engineer?

Equifax’s CEO then went before the U.S. Congress and said the entire security breach was the fault of a single engineer who inadvertently missed installing the Apache Struts patch on a single server. Everyone needs a scapegoat when the sh*! hits the fan. Atomicorp’s Mike Shinn says, “not so fast.” He knows this vulnerability intimately, since he wrote the original ModSecurity WAF rule to protect against it years ago.

Over Reliance on Patching

The real culprit is a cybersecurity approach overly reliant on patching. Patches always arrive after vulnerabilities exist, so it is only a matter of time before every patching-centric enterprise is exposed. Mike breaks down the Equifax hack, how the breach was conducted and the risks of patching culture in this week’s Linux Security Podcast.

Atomicorp provides unified workload security for cloud, data center or hybrid platforms. Built on OSSEC, the World’s Leading Open Source Server Protection Platform. See our products.

 

Podcast Transcript: What the Equifax Hack Tells Us About Cybersecurity

Bret Kinsella: [00:00:00]  This is Episode 6 of the Linux Security Podcast. Today’s topic: The Equifax hack and what it tells us about cybersecurity today.

Bret Kinsella: [00:00:18]  Hello. Welcome to the Linux Security Podcast. I’m here with Mike Shinn, CEO of Atomicorp and a Linux security expert. One of the things that he and I have been talking about is the Equifax hack. There are a lot of these hacks out there, the ones that target JP Morgan Chase and a few others that we all hear about in the news, and they really focus everyone’s attention on the fact that there are hackers out there trying to do bad things to us. But Equifax is a different type of hack than some of the others that we’ve seen, and Mike is going to talk a little bit today about how it actually is an illustration of a cybersecurity strategy as much as it is about a hack. So Mike, welcome. Tell me a little bit about Equifax. Give us some background on what the hack was.

Mike Shinn: [00:01:05]  Yeah. Well, thanks for having me. I find that Equifax is one of these really fascinating hacks because it engenders some fairly strong opinions in the cybersecurity community, depending on what people believe. And that’s the important word here, as opposed to actual facts: belief. There are some strategies that we use in cybersecurity that are based on nothing more than BELIEF that they’re effective. And the most popular one, one that nearly everyone is familiar with… I’d be surprised if there is anyone that isn’t… is the idea of patching. We’ve all probably patched at least one thing in our lives, and we’ve become accustomed to the idea that patches will come out to remedy security vulnerabilities. And what this has done is create probably one of the worst false senses of security that we’ve seen in cybersecurity.

Mike Shinn: [00:02:13]  There is an almost dogmatic belief that patching is an effective strategy. And what patching really is, is a symptom of a bigger problem. If you have software, it has vulnerabilities in it; that’s just reality. That’s the case for everything. There’s nothing out there that doesn’t likely have some kind of vulnerability in it that we don’t know about. And if we approach cybersecurity from the perspective of knowing that the things we’re using are probably vulnerable, then we need to implement what we call security controls, stuff around those things to protect them. What has happened, though, is we have become so accustomed to patching things that we think that is in fact a solution. It’s about the equivalent of being a terrible skateboarder: falling down and hurting yourself every day, putting the ubiquitous bandaids on your injuries, and not doing anything to improve your ability to skateboard. And then being surprised at the fact that you fall down and hurt yourself the next day, and the next day, and the next day. You’re not doing anything about what’s actually causing the problem, and you’re not doing anything to prevent yourself from getting hurt, like wearing pads or a helmet, or maybe changing the sport that you play. Maybe you’re better at something else. So Equifax, with all of that said, happened because of an over-reliance on patching as an effective strategy for cybersecurity. Equifax was hacked because it was using a technology that had a vulnerability in it. Nothing special or unique there.

Mike Shinn: [00:04:04]  They didn’t, however, have any additional security controls in place to protect that application from attack. And this particular attack was surprisingly not hard to defend against at all. It was a pretty simple thing, without getting too technical. When we go to a website, our web browsers send a request to that website and ask that particular web server for something: a web page, a graphics file, whatever it might be. And it does that in these things called headers, so it sends headers to the remote server. And if you’ve heard me talk about this subject before, you’ll be familiar with a concept called untrusted data, which is that if someone sends you data… and we’ve all been told don’t click on strange attachments… you shouldn’t trust it. That is still a systemic problem in the way that much of the software out there is written.

Mike Shinn: [00:05:13]  There is too much trust in the data that’s coming in. So in that sense, a lot of software out there opens the strange attachment that it gets sent in an email. In the case of Equifax, their web servers are sent a request by a user which includes headers, and those headers will say: this is the file that I want, this is the format that I want it in, maybe this is the part of the document that I want. Maybe I don’t need the whole thing; maybe I just need pages 10 through 12. All of these headers are defined by the bad guy, so the bad guy can say anything in these headers. And this particular piece of software that Equifax was using, which incidentally a lot of people use, trusted these headers. So the bad guys literally put an entire program in a header, and the software ingested the header and executed the program. And Equifax’s CEO testified in front of Congress that this happened because one person forgot to install one patch, apparently on one system.
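The failure Mike describes can be sketched in a few lines of Python. This is a toy illustration of trusting header data, not the actual Struts code; the function name and payload syntax are invented for the example. The point is the shape of the bug: the server treats expression syntax found in a request header as something to evaluate, so an attacker-supplied header becomes an attacker-supplied program.

```python
# Toy illustration (NOT the actual Struts parser): a naive server that
# evaluates expression syntax found in a request header, much as the
# vulnerable Struts code evaluated expressions embedded in a header it
# should have treated as plain text.

def naive_parse_header(header_value):
    """Return the header value, but 'helpfully' evaluate ${...} expressions."""
    if header_value.startswith("${") and header_value.endswith("}"):
        # DANGEROUS: executing attacker-controlled text from a header.
        return eval(header_value[2:-1])
    return header_value

# A legitimate client sends an ordinary media type and gets it back unchanged.
print(naive_parse_header("text/html"))   # text/html

# But the attacker controls the header too, so the "value" can be a program.
print(naive_parse_header("${7 * 191}"))  # 1337 -- attacker code ran
```

The benign arithmetic stands in for what a real attacker sends: code that opens a shell or exfiltrates data. Once the server evaluates header content at all, the attacker decides what runs.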

Mike Shinn: [00:06:26]  And what that illustrates is that patching is not a strategy for cybersecurity. It is the worst possible thing that we can do when we have no other options. It’s rushing as fast as we can to put our finger in the proverbial dike so that the town doesn’t flood. But we shouldn’t be building dams that have holes in them; we should have some other measure in place to hold back the water, or maybe we should know that the water is rising and that it’s going to cause the dam to fail. In the case of Equifax, they didn’t do anything else to protect this particular application, and hopefully nobody thinks, well, that sounds like a really complicated thing to defend against. It really isn’t. It’s a trivially easy thing to defend against; people do it every day with free software. Web servers get sent headers, and web browsers and web servers are not new technology; they have been around for more than 20 years. We understand the protocols very well. So there are things out there called Web Application Firewalls, and people simply say: this is the kind of stuff that is allowed in this particular header. This header should not have a program in it. This header should be no more than thirty-two characters long and should have text in it. And it just so happens that in the case of Equifax, that’s precisely the nature of the header that was exploited and precisely the nature of the technologies that are used to protect it.
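The allow-list idea behind a WAF rule can be sketched conceptually. This is a hypothetical Python illustration, not Atomicorp’s or ModSecurity’s actual rule syntax; the header names, length limits, and patterns are invented for the example. The principle is what Mike describes: declare what a header is allowed to contain, and reject everything else before the application ever sees it.

```python
import re

# Hypothetical allow-list rules: for each header, a maximum length and a
# regex the entire value must match. Real WAF rulesets are far richer,
# but the shape of the check is the same.
HEADER_RULES = {
    "Content-Type": (64, re.compile(r"[A-Za-z0-9!#$&^_.+-]+/[A-Za-z0-9!#$&^_.+-]+(;.*)?")),
    "Accept-Language": (32, re.compile(r"[A-Za-z0-9,;=.\s*-]+")),
}

# Fallback for headers we have no specific rule for: printable ASCII only.
DEFAULT_RULE = (256, re.compile(r"[\x20-\x7e]*"))

def header_allowed(name, value):
    """Return True only if the header value fits its allow-list rule."""
    max_len, pattern = HEADER_RULES.get(name, DEFAULT_RULE)
    return len(value) <= max_len and pattern.fullmatch(value) is not None

# An ordinary request passes...
print(header_allowed("Content-Type", "text/html; charset=utf-8"))  # True

# ...while a header carrying an OGNL-style program does not look like a
# media type at all, so it is rejected before the application parses it.
print(header_allowed("Content-Type", "${(#_='multipart/form-data')}"))  # False
```

Notice that the check never needs to know about the specific vulnerability: a header that should contain a short media type simply is not allowed to contain a program, which is why this class of defense worked against the Struts exploit before any patch existed.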

Mike Shinn: [00:07:56]  But Equifax isn’t alone in this. Patching is one of those sinister things that becomes worse than a crutch; it develops in people this pattern of believing that they’ve done something that is adequately protecting the organization, because they did something, they know that they did something. There was a patch, they installed it, they’re done. Whew! Wipe the sweat off your forehead. But how secure were you the day before you got the patch? You were wildly insecure in some cases, so it begs the question: “If you can be hacked because you didn’t install a patch, are you secure?” No. Real cybersecurity is about embracing the reality of the world that we live in: that the products we use are made by fallible human beings who make mistakes and aren’t perfect. The technologies are incredibly complicated, and therefore it would be really foolish of us to just trust them to be immune somehow, that “well, all the patches are installed, therefore I am currently secure.” I’ve been at this for more decades than I’d like to mention. I think if I had a dollar for every patch I installed, I would be a billionaire many times over. So you’re not getting more secure. We’re just on a treadmill, putting fingers in holes, basically, and not dealing with the systemic problem. So Equifax… I don’t want anybody to think that I’m picking on them or that they’re unique in this regard… this isn’t unique. Everybody. Every single person does this.

Mike Shinn: [00:09:51]  We all install patches because they fix something, or because someone told us it’s really urgent that we do it to prevent our systems from becoming compromised. But it still begs the question: what if we hadn’t installed the patch? Well, bad things would happen. How secure are we really?

Bret Kinsella: [00:10:07]  That’s right you know it.

Mike Shinn: [00:10:09]  One mistake and you’re done.

Bret Kinsella: [00:10:11]  Well, I think what you’re saying is that the engineer who missed the patch is not necessarily at fault here, and the error is the CEO who sits in front of Congress and says it was this one guy who missed this patch, when it’s really B.S. because there’s a systemic problem here.

Mike Shinn: [00:10:35]  That’s right.

Bret Kinsella: [00:10:35]  And it’s fair to say that people like the idea of patching. Yes, because it’s tangible. It’s measurable. You can apply productivity metrics against it. The problem you’re indicating, though, is that when there’s a patch, it wasn’t like you were safe the day before and all of a sudden you have a patch and now you’re still safe. You were not safe, and the patch only helps you potentially catch up. And you can’t do all the patches, because there are so many patches, and there are systems that you can’t patch, because there is no patch or because you can’t take an operational system down. So really, Equifax’s fault wasn’t missing a patch; it was not having other types of protections in place to prevent the attack in the first place.

Mike Shinn: [00:11:25]  Yeah, that’s putting it succinctly. They didn’t have measures in place to address the reality that all of the technologies they’re using, right now at this second while I speak, have some vulnerability in them that we do not know about. And if your idea of security is that you will know about those vulnerabilities before a bad guy does, you will become the next Equifax, because that’s exactly what happened to them. They assumed that they could catch up, that they could stay ahead of the bad guys… who, by the way, have the exact same products that we all do… it’s not like it’s a secret. They’re looking for their own vulnerabilities.

Mike Shinn: [00:12:13]  And why would they tell you if they found one? There are incredible economic incentives for them to find the vulnerabilities and exploit them. So the idea that somehow we can continue as we have been, that we or our vendors will know about these vulnerabilities before the bad guys do… and this is of course the really funny part… and that we will be told we need to patch something before a bad guy is told, and that we can do it before a bad guy can exploit it, is just utter folly.

Bret Kinsella: [00:12:47]  Well, we have the problem that the bad guys often find the exploit before the good guys do. And then we have a second problem: when the good guys do find the problem, they publicly announce it, in some cases with the patch at the same time, because otherwise there’s no way to tell people that there’s a problem out there so that they know to do something like apply a patch. How would people know that they need to patch it, and why, unless you told them? And so then we have this compounding problem where, when you know there’s a vulnerability, even when there is a patch available for it, it’s a race, because the bad guys immediately start pinging every system they know of to see if the patch has been installed yet, and they can get there ahead of you.

Mike Shinn: [00:13:34]  That’s right. And you may have installed the patch, but then maybe somebody had to restore the system from backup, and so the patch isn’t there anymore. Or in some cases maybe the patch breaks something, or maybe you have a reasonable process for testing your patches that causes it to take you longer to get it deployed than it took the adversary to find the opportunity to exploit you.

Mike Shinn: [00:13:59]  And you know, one of the challenges for security researchers is: what do we communicate to the world? There isn’t a universal consensus on this. There are certainly people who fall into the good guy camp who are of the opinion that we should do what’s called full disclosure, which is: we’re going to put all the details out there; we’re not going to keep anything a secret. And there are historical reasons why people do that. Very briefly, that’s because in the past vendors were not always as… how shall we put it… honest about their vulnerabilities. The phrase that tended to be bandied about at the time was, well, that’s a theoretical vulnerability, which then caused people to write what are called exploits to demonstrate that it was not theoretical, presumably hoping that the vendor would then patch the vulnerability. So there are well-intentioned people who put all the details out there. There are people who are somewhere on a spectrum of being maybe sociopaths, who just like to put things out there because they enjoy the grief it causes others. And as you said, there are people out there who are in fact being paid to find vulnerabilities in things and exploit them. They may work for criminal organizations, they may be doing it for their own purposes, they may work for governments. It is its own industry, and it has been around for a very long time. So there are really significant economic advantages to the adversary to find a vulnerability and keep it a secret.

Mike Shinn: [00:15:39]  And you know, we’ll talk about Meltdown in another one of these discussions in more detail, but very briefly, Meltdown is a good example of how that process can fail us, because we had this vulnerability that is systemic. It affects most of the computing platforms that people have, so it needed to be addressed across nearly everything. The details of it needed to be sufficiently secret that an adversary couldn’t figure out a way to exploit it, but with enough detail that people could fix it. And it ended up leaking as a result, because how do you coordinate the entire world effectively to keep a secret? As the old saying goes, if you want to keep a secret, don’t tell anybody. So the whole patching thing is one of these necessary evils. It’s a symptom of the reality that the platforms we use are made by people, and people make mistakes; therefore they have to be fixed occasionally. And over-relying on that to keep organizations secure leads to things like Equifax, because you end up with this false sense of security: we have a process in place, we patch everything, we’re good to go. Well, what happens if you miss a patch? What if you do everything right and you make a mistake? Then you end up being Equifax. And conversely, what if the bad guys figure it out before you do? Well, then you’re definitely going to get hacked.

Bret Kinsella: [00:17:16]  OK, so let’s put a bow on this. What you’re saying is: go ahead and patch. When you find out, you should do that. But what else should you be doing? What is the prophylactic approach here to cybersecurity, to get you ahead of the patching problem?

Mike Shinn: [00:17:35]  Yeah. I mean, from an analytical point of view, the best way to answer that question… because it’s going to differ from patch to patch… is to go ahead and do it. The house is on fire; we need to put it out. That comes first. But then you need to do what in engineering is called a root cause analysis, in other words, ask “Why did this happen?” In this particular case, for any particular thing that you patch, you need to sit down and say, “OK, what was the nature of this particular vulnerability? How did it work? And if we didn’t have a patch in place, what could we have done, in the best case, to prevent it from being successful, and in the worst case, to at least detect that this particular thing was exploited?”

Mike Shinn: [00:18:16]  That’s how you make organizations more secure. And there are well understood frameworks now: NIST has a framework, there’s the PCI DSS framework, and so on, that include this whole range of security controls that give you defense in depth, which is supposed to help protect you against these types of scenarios. But it is more important for an organization to actually understand why than to simply do these things. Compliance for the sake of compliance can end up creating just as much false security as doing nothing. So the answer to “How do you move past this patching treadmill?” is to look at these vulnerabilities and ask yourself a simple question: “If we had not patched, would we have gotten hacked?” And if the answer is “yes,” then whatever you’re doing is not adequate, and you need to sit down, analyze the security measures you have in place, and say: how would we detect and/or prevent this particular thing?

Bret Kinsella: [00:19:25]  And you don’t have to wait until you’re the victim of an exploit to do this. You can actually proactively look at your security posture and you can identify classes of attacks.

Mike Shinn: [00:19:37]  Correct.

Bret Kinsella: [00:19:37]  And identify what your potential vulnerabilities would be, and aside from patching, try to take proactive measures to stop those types of attacks before they can take root.

Mike Shinn: [00:19:48]  That’s exactly right. I mean, real security is proactive. The best security is driven by an understanding of how to break into things. Nothing is a better teacher than knowing how to get in. If you know how to crack a safe, you’re better at designing safes that are harder to crack, so that perspective helps to better inform an organization. But it requires the ability to not game it. If you’re going to determine whether or not your organization’s security measures are adequate, then you have to check your ego at the door and you have to allow the process to be as realistic as it can be. If you game it and you put rules around it and you say, well, we’re not going to test that system because it’s really important, and so on and so forth, you really don’t end up understanding what your security posture is.

Bret Kinsella: [00:20:45]  Yeah. So Equifax is a great example of how a reactive security posture creates problems, really significant problems. Nearly 150 million people’s personal information was exposed. A more proactive posture, where you think about prevention first, about stopping the attacks, is a very viable security strategy even today.

Mike Shinn: [00:21:10]  That is absolutely the case. Real security is proactive. It is driven from the perspective of an adversary. Security is about performance. It’s not about compliance or metrics or anything else. At the end of the day, it’s whether or not you’re able to keep the bad guys out. And the best way to do that is to think like a bad guy, to attack your own organization, and to be proactive about these things. Like you said, for patching, these are good opportunities for the organization to stop and consider, in that moment in time: are we adequately protected? You can look at something like the Apache Struts vulnerability that compromised Equifax as a good example. You could have analyzed that and said, “Oh wow! We don’t have anything in front of these servers that is preventing this untrusted data from just being ingested by our applications. So if they have some kind of a vulnerability, they’re going to get compromised.” And Meltdown is an example of the other end of the spectrum, which is, “Wow, there really isn’t anything we can do proactively in terms of end users,” and even, at the time, we didn’t completely understand the nature of this particular vulnerability.

Mike Shinn: [00:22:27]  But if you were looking at this problem more holistically and saying, “Can I… for example, an intelligence agency… put all of my secrets on one computer? Would that be a good idea?” No. Because we know the system probably has some kind of a vulnerability in it that we don’t understand. So maybe we should separate our secrets into levels, and maybe we don’t allow the stuff at the highest level to be accessible by the stuff at the lowest level. And it turns out, in defense of probably all the world’s intelligence agencies, since they all do that, that was a pretty smart idea. If they had put all of this stuff into one computer… and there was a time when there was a strong push for this, to have these multi-level systems that would hold top secret, secret, confidential, and unclassified data all in one computer… if that had really caught on, then it’s possible this particular vulnerability could have been used to compromise those systems.

Mike Shinn: [00:23:24]  So it really does take a proactive approach to be secure. But it’s not that hard to do. I mean, there are organizations out there that do it right. Patching, though, as important as it can be, is simply treating a symptom. There is some other problem. You’re taking aspirin; you’re not treating whatever the underlying disease is.

Bret Kinsella: [00:23:52]  Mike, we’ll leave it there. Thank you very much.

Mike Shinn: [00:23:54]  Thank you.

 

Learn More About WAFs
