Cyber Security Red Teams have become a common tool for testing enterprise cyber security. They attempt to penetrate security defenses as if they were hackers. Red Teams are motivated to be creative and determine the best way to circumvent the security measures in place, sometimes by any means possible. Mike has been red teaming since the 1990s, before the term existed. He breaks down how Red Teams operate, their objectives, the difference between physical and digital vulnerabilities, and how constraints can limit their value.
Atomicorp provides unified workload security for cloud, data center or hybrid platforms.
Built on OSSEC, the World’s Leading Open Source Server Protection Platform. See our products.
Podcast Transcript: What Are Cyber Security Red Teams and Why They Exist
Bret Kinsella: [00:00:00] This is the Linux Security Podcast Episode 13. Today’s topic: Red Teams. What are they and what do they do?
Bret Kinsella: [00:00:18] Welcome back to the Linux Security Podcast. I'm Bret Kinsella. I'm here once again with Mike Shinn. Mike, say hello to the audience. Hello everybody. Mike is the CEO of Atomicorp and a longtime practitioner of Red Teaming. So what we thought we would talk about today is what a Red Team is, and all the aspects around how corporations put them into practice. Mike, why don't you get us started.
Mike Shinn: [00:00:45] So Red Teaming, very simply, is simulating what an adversary can do to you, and then having trusted people carry out those activities to see if they can defeat your security program. Defeating the program is typically governed by goals that are set at the beginning of the exercise: you need to try to accomplish this. In other cases it may be just, "Let's see what the Red Team can do." How far can they get in? What can they get access to? What kinds of things can they accomplish? Whether or not an organization has done a Red Team before largely governs whether there are any goals defined for the Red Team.
Bret Kinsella: [00:01:31] So is it really more along the lines of unearthing the unknown unknowns? We've got a security posture, and we're probably vulnerable somewhere; we just don't know where.
Mike Shinn: [00:01:42] The best Red Team projects have that as their intent. The intent should be to exercise everything that the organization does to keep itself secure, to see if there's something that you haven't figured out. And in security, imagination is really the key to success in all things. Bad security is almost always caused by a lack of imagination. You think, "I've got it all covered. We've done this. This will stop that," without applying your imagination to it and asking, "What could I do to defeat it? Is there an easier way to get in here?" The first example of that that I remember, early in my career, was when we needed to get into a computer room, and it had a great high-security door and all this good stuff. And we looked up and we saw a hanging ceiling above it, and we thought, "Huh... I wonder if these guys thought to take the wall all the way up to the next floor?" This was a general office building, and the answer was no. So I remember my brother, who worked with me at the time, climbing up on my shoulders and lifting up the tile, and he could get over the wall. We had this stick, and he reached over the wall and pressed the exit button inside the room, and the door opened for us. So I think that's a great example of where imagination sometimes fails you as a designer of a security system. But when you bring in a Red Team, and you give them a goal, and they're motivated, then they start to think of new ways to get around things. Or maybe in that case nobody thought to look: it might not have been that they didn't realize the wall should go all the way up; maybe they never thought to look above the ceiling.
Bret Kinsella: [00:03:37] OK. So I'm really glad you brought that up. First of all, it's a colorful story. I like thinking of you and Scott working together to break into this building, or into this secure room. Very Mission Impossible-ish; I could potentially see you guys with wires dangling from the ceiling, breaking into servers. But you brought up a really cool point, which is that I think a lot of people believe Red Teaming is really just digital attacks, like an attacker in China or Russia or somewhere else in the world. State sponsored or individual, it doesn't matter. Just trying to nail you through digital means.
Mike Shinn: [00:04:18] Right.
Bret Kinsella: [00:04:19] But you're talking about the idea that Red Teaming historically has been by any means necessary: physical, digital, however.
Mike Shinn: [00:04:25] That's right. Yeah. Because at the end of the day, to put this all in context, why are you doing this? You're doing it to protect something and to determine if it's adequately protected. And if you limit yourself to just looking at one way to get to that thing, you're not really performing a Red Team action, because the bad guys aren't going to limit themselves. If it's easier, let's say, for someone to just physically steal something than to try and hack into it, they'll just physically steal it.
Mike Shinn: [00:04:56] I mean, the bad guys aren't dumb. They're going to look at the opportunities in front of them, and they're going to make their own rational calculation about what the level of effort is to achieve a particular goal. Digital attacks are popular because sometimes they are easier. The other reason they're popular is that it's typically very hard to attribute who the source of a digital attack is. The example of the hypothetical attacker in Russia or China or wherever is a great one, because if you're halfway around the world and you hack into somebody's server room and, I don't know, steal some valuable stuff, it's very hard to identify who did that and to reach out and apprehend them. Whereas if somebody physically breaks into your server room, you might have video of it. Maybe somebody stops them as they're doing it, and you certainly know that the person was relatively close to where you are, and therefore they may be someone you could potentially apprehend. But it's sometimes easier to do things physically than digitally. I remember one particular penetration event that we dealt with (I won't mention any details) where an employee had placed a webcam in the office so that they could see what their boss was typing on their screen and on their keyboard. And that's how they stole their password. It wasn't a digital attack at all, in the sense that they hacked anything; it was the oldest trick in the book. They were looking over the person's shoulder. So doing these Red Teaming activities is really important, because it helps you to identify the things that you haven't thought of, or the things that you take for granted. "Our physical security is great. Whoops! The wall didn't go all the way up! Somebody was able to go over the wall." Whereas if you don't do that kind of testing, you just take it for granted. The other thing I'll mention is that sometimes there is a tendency to stovepipe elements of Red Teaming.
That is, you'll do the digital Red Teaming with one group, the physical with another group, maybe social engineering with another group, and so on. The problem with that is you're not giving your team the ability to use all these things together. So maybe the way that you broke in was you tricked somebody into giving you a badge through social engineering, and then you used the badge to physically get into the building, and by physically getting into the building you could plug a USB stick that had malware on it into a computer somewhere. That requires three different types of expertise. But when you break them up, you don't really get to connect the dots and say, what were the consequences of this, and why was this important? "Oh well, so somebody got a badge. That doesn't matter. The computer was in a secure room." Well, it turns out the room didn't have great physical security either, blah blah blah. So good Red Teaming is by whatever means necessary.
Bret Kinsella: [00:07:57] OK. So what is the difference between what a Red Team is doing and a hacker, for example? Because they're really supposed to be a proxy for a hacker.
Mike Shinn: [00:08:07] That's right. Really, if you're doing good Red Teaming, the only key difference between them is that the Red Team does work for you. So they're going to be sharing with you what they did. And you can also put limits on what they can do. That is to say, maybe the adversary's, the actual bad guy's, intention is to cause physical damage. They want to get in and, I don't know, physically break something. The fact that a Red Team can be in a position to do that is usually enough to demonstrate that it's possible. You don't have to prove it, like, "Well, but you didn't actually break it with a hammer." And you probably don't want them to, if the idea is to prevent damage. So a good Red Team should be able to do, or at least simulate, the same things that an adversary you're trying to defend against can do. Otherwise, again, you miss the opportunity to exercise the security program in its completeness. And one of the other reasons to do it isn't just to figure out if they can defeat something; it's to see, in some cases, how the organization responds to things that are occurring. And that is much, much harder to do if you don't bring in people who are trying to simulate what a bad guy is actually doing. If you make these engagements too artificial, you end up coming away with a false sense of security.
Bret Kinsella: [00:09:35] Well, you bring up something interesting around the rules of engagement. There are some Red Team programs where the organization will let people know that there's something going on, and so then you don't get to understand the response, because people are on heightened alert.
Mike Shinn: [00:09:51] Yes. Yeah. One of my colleagues wrote a great paper about this: there is value in simulating or constraining certain parts of the exercise and not constraining other parts. You bring up a good point that you can constrain these things to the point where you get the result that you want. And I have seen that in, sadly, too many places, because what's happened over time is that people have become more educated about the, how shall we say, consequences of failing a Red Team exercise. As they become a little more aware of how it works, you see more and more constraints, like you can't do this and you can't do that, largely for the purposes of getting the result that you want. And one of the things that will be done sometimes is everyone knows that you're coming, so their behavior is fundamentally different. Right? Now they're on guard, now they don't want to get burned, they're paying extra attention. I remember one engagement we did a few years ago where the lead, the alpha geek basically, was actively trying to disable all of the network ports that the onsite Red Team was using. They knew the conference room the team was in, and they knew what ports they were plugged into, so he basically disconnected them from the network. The problem with that is, when you've tied the hands of your testers and said you need to do the work from here, and you make it that easy for everybody to just disrupt them, it's completely artificial. You're not going to have an adversary like that. They're not going to tell you in advance, "Hey, we're gonna be there on Tuesday at 3:00 o'clock and we're going to be trying to hack in." So that's an example of badly constraining.
An example of constraining these things in a good way: if your intention is to determine whether the measures you have in place are themselves adequate, then it's useful to let people know that you're coming and that you're going to be trying to pick every lock in the building, and don't panic, right? Don't call the police on these guys. They're just testing the locks. They're good guys; don't stop them. You want to know if the locks that we bought are strong, or whatever the case may be. So there are good ways of artificially constraining these exercises, and there are bad ways of doing it.
Bret Kinsella: [00:12:25] So I think you've been doing this for 20-plus years, yes?
Mike Shinn: [00:12:29] At least.
Bret Kinsella: [00:12:30] OK. All right. So more than 20 years, we'll say. Is the biggest change you've seen over time the way people construct these? I mean, what were the constraints in the mid-90s when you were doing this?
Mike Shinn: [00:12:43] There were none. Right? The customers only understood it in the broadest of ways: we're gonna have these folks come in, and they're going to see if they can get in. They didn't know how we would do it, other than maybe, if some of them had great imaginations, they could think of it. Pretty much the only thing you might have gotten someone to ask you at the time was, "Please don't break anything." But even that was relatively rare. It was just, "Go for it." The labor pool was also very different back then. There was a very small number of people who did this, and as a consequence they were all relatively experienced at doing it. There weren't any classes you could go to; you couldn't take a class on how to do penetration testing. In fact, there really weren't tools back then. You had to write your own. So you had a fundamentally different kind of person doing Red Teaming than what you need now. Back then you needed someone who understood a lot of things, who could write their own code, who knew how to restrain themselves, who understood the things they were attacking well enough not to break them. The demand for those people grew, and the market, you know, has challenges with that. So we built tools to make it easier, and so on and so forth. So yeah, there were no rules. More than 20 years ago it was just go in and do this stuff. It was cowboy days, I guess, is maybe what you would call it.
Bret Kinsella: [00:14:18] Well and that’s led to a very mature Red Teaming market today.
Mike Shinn: [00:14:21] Certainly has. Certainly has.
Bret Kinsella: [00:14:23] Well, maybe in the future we'll talk a little bit about TTPs that you've used and Red Team tactics that are common. But I think for this week that'll be "What Are Red Teams." Thanks a lot, Mike.
Mike Shinn: [00:14:31] Thank you.