Podcast: What Is File Integrity Monitoring (FIM)?

File Integrity Monitoring is designed to notify you when files have changed on a system. It was one of the very first security detection capabilities in existence and is almost as old as passwords.

FIM for PCI DSS Compliance and Other Security Protocols

FIM has also been incorporated into many regulatory and security protocols. Mike Shinn breaks down the core elements of FIM, how it evolved, where it falls short today, and how open source solutions like OSSEC provide new features that are a big step up from Tripwire and other legacy tools.

Atomicorp provides unified workload security for cloud, data center or hybrid platforms. Built on OSSEC, the World’s Leading Open Source Server Protection Platform. See our products.
 

Podcast Transcript: What Is File Integrity Monitoring (FIM)?

Bret Kinsella: [00:00:00]  This is Episode 2 of the Linux Security Podcast. Today’s topic: File Integrity Monitoring.

Mike Shinn: [00:00:16]  So FIM is probably the second oldest cybersecurity technology we have, although people might debate that and say firewalls came first. It stands for File Integrity Monitoring. FIM’s job is to notify you when files or directories change on a system, or when new files are added to a system, or something’s deleted, altered… whatever the case may be, its job is to tell you if something’s changed, at its base level. And it was really one of the first intrusion detection technologies that existed beyond something simply telling you that a user tried to log in and maybe failed. So it’s been around for a very, very long time. One of the first technologies out there was Tripwire, which originally was open source, and there were some other clones similar to it. And it, as a result, became a fairly regularly cited requirement in most cybersecurity standards because people were already doing it. It made perfect sense to just say, “Yes, this is a best practice and everyone should continue to do it.” So standards like PCI DSS have that requirement specifically spelled out in them. NIST has it spelled out in its cybersecurity standards, and to some lesser degree some other standards that borrow from those two state it as well. The Nuclear Regulatory Commission has requirements for it too. So it’s a fairly ubiquitous technology. As I said, it’s almost as old as passwords, and it definitely falls into that category of something that most people know about in the engineering and cybersecurity world. And because there was a free open source solution for it, there was no reason not to use it.

Bret Kinsella: [00:02:10]  So we had passwords, firewalls and FIM or some…

Mike Shinn: [00:02:14]  That was pretty much it. I mean, that was the state of the art at that point. There wasn’t really log analysis in any meaningful sense yet. Vulnerability scanning really started to come around in about ’94-’95 with Dan Farmer’s SATAN tool. What a master of marketing that guy was. Seriously. Look it up. It’s got a great logo and everything, a very Prince of Darkness kind of “wow” logo. So FIM really was one of the first… I would argue… one of the first true intrusion detection technologies, because it operated independent of the operating system. It wasn’t that somebody had written an application to tell you that Joe tried to log in or that something appeared to be an attack. This was an independent piece of software that would take a snapshot of the system and tell you if something changed, and that was pretty revolutionary, because it gave you a lot of visibility into really any type of system where you could deploy it. You didn’t have to worry that maybe your system was unique or different from someone else’s. It was really a fire-and-forget technology. And as a result a number of clones were created, and that capability became commoditized and added into a lot of other products.

Mike Shinn: [00:03:43]  Again, because it’s one of those best practices. So OSSEC has that capability in it natively, to do the same thing, to detect when files change. One of the first big improvements to File Integrity Monitoring was the ability to detect if files changed in real time. The way the technology originally worked was on a schedule. The software would take a snapshot and then it would run maybe hourly, or, depending on what the resources were on the system, on some period… maybe daily, or if somebody is particularly paranoid maybe every half an hour… but it would actually have to crawl the entire file system, so you could only run it as often as that process took. If it took six hours to crawl through the entire file system and generate a signature… we call them hashes… for every single file, that’s as often as you could run it. So there were disadvantages immediately apparent from that model: it might take a very long time before you knew that something was hacked. There were also performance issues associated with that.
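The snapshot-and-compare model Mike describes can be sketched in a few lines of Python. This is a simplified illustration of the general technique, not how Tripwire or OSSEC are actually implemented:

```python
import hashlib
import os


def hash_file(path):
    """Compute a SHA-256 digest of a file's contents, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def snapshot(root):
    """Crawl a directory tree and record a hash for every file (the baseline)."""
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            baseline[path] = hash_file(path)
    return baseline


def compare(old, new):
    """Report files added, deleted, or altered since the previous snapshot."""
    added = set(new) - set(old)
    deleted = set(old) - set(new)
    changed = {p for p in set(old) & set(new) if old[p] != new[p]}
    return added, deleted, changed
```

The disadvantage Mike mentions is visible here: `snapshot` must re-read every file on each run, so the scan interval is bounded by how long that crawl takes.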

Mike Shinn: [00:04:50]  So you had this piece of software that is effectively reading every single file, or most of the files on the system, depending on what you wanted it to watch. So there was a need for the ability to detect when files are changed, added, or deleted and only inspect them when those events occur. And it took some time before operating systems had native capability to support that. But now most File Integrity Monitoring solutions do this in real time, and OSSEC is one of those. So it knows when a file is changed and it will tell you that it’s changed. One of the neat things about what OSSEC does that’s a little different from traditional File Integrity Monitoring is that it will not only tell you that the file changed, in some cases it will tell you precisely what changed in the file, if it’s some format where that makes sense. For example, if you had two graphics files and maybe somebody made some modification to one of them, it’s challenging to represent that change in some easy way a human being can understand, other than to say the file changed. But if you had something like a configuration file or a password file or a registry, something that’s got a more structured, human-translatable format, like text, it will tell you, “This is precisely what changed in the file.” And the next kind of cool thing it will do is actually keep a copy of all of those changes. That gives you tremendous forensics capability, but also really a configuration management capability, because if you’ve got configuration files changing on a system, or if you’ve got software updates on the system, it will keep a copy of as many changes as you want that have occurred on that particular system. That gives you the ability to roll back and analyze changes on a much deeper level than just being told that a file changed and maybe which user changed it.
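The “what precisely changed” behavior for text files amounts to producing a diff between the stored copy and the current copy. A minimal sketch using Python’s standard library (the sshd-style config content is invented for the example, and this is not OSSEC’s actual diff engine):

```python
import difflib


def describe_change(old_text, new_text, filename):
    """Produce a unified diff showing exactly which lines changed in a text file."""
    diff = difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile=filename + " (previous)",
        tofile=filename + " (current)",
    )
    return "".join(diff)


# Hypothetical before/after contents of a monitored config file.
old = "PermitRootLogin no\nPasswordAuthentication no\n"
new = "PermitRootLogin yes\nPasswordAuthentication no\n"
print(describe_change(old, new, "sshd_config"))
```

For a binary file like an image, this kind of line diff is meaningless, which is the distinction Mike draws: structured, human-readable formats get a precise change report, everything else just gets “the file changed.”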

Bret Kinsella: [00:06:51]  And this is part of the digital footprint of a hacker. So as someone who’s inside the enterprise who’s trying to set up maybe command and control or something. What they’re going to be doing is they’re going to be looking to make changes to certain files on the system to give themselves more access, to cover their tracks whatever it might be.

Mike Shinn: [00:07:14]  Correct. Yeah. Certainly. I mean, that’s not uncommon, which is why this technology is more than a best practice. It’s unusual to not see it used in an enterprise that is trying to be secure. I should be clear that not everybody does this. There are just as many organizations that still believe that being behind a firewall is adequate security. And I won’t mention who those are, but some of those are really large companies that should know better. But when organizations make a decision to try to be more secure, this is one of those first technologies. It tends to get deployed when organizations are required to be secure, that is, when there’s some standard or regulatory framework that applies to them. Almost without exception this gets done, because all of those frameworks either explicitly call out the need to do this, or some reasonably knowledgeable contractor… or consultant, I guess… will come along and say, “Yeah, you should do this,” because of what you just said. It’s not uncommon for a malicious person to make changes to a system, that’s something you would want to detect, and these are a good technology to do that.

Bret Kinsella: [00:08:33]  Well, so when we talk about making these changes, the FIM is going to track all changes, right? Legitimate changes from approved users as well as nefarious changes. How do you determine which one is which? You’ve got the log for the forensics, which is important if you have to do an investigation. Does it have learning capabilities, so that it will detect certain types of activity that’s deemed to be nefarious?

Mike Shinn: [00:08:59]  So all the file integrity monitoring solutions that are out there now, all the reasonably modern ones, give you the ability to do the things you describe. The first thing, of course, they all do is allow you to define the scope of what you want to alert on; not every change on a system is worthy of being notified about. A good example is that you don’t need to know… in most cases… that entries are being added to a log file. That’s its job. But you might want to know that the log file’s been truncated, that is, somebody’s wiped it out. So all of the reasonably modern solutions, including OSSEC, do that. They can differentiate between the types of changes, and you can configure these solutions, and OSSEC is no exception, on what types of changes you want to be alerted on. So for example you could say, for your log files, I want to know if the file is deleted, or if it shrinks in size, which a log file shouldn’t do, or if it’s been replaced… right, the inode has changed… in other words, someone has taken a similarly sized log file and used it to replace the original log file.
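In OSSEC, this scoping lives in the `<syscheck>` section of `ossec.conf`. The fragment below is a hedged sketch from memory of commonly used options (scan frequency, real-time monitoring, change reporting, and ignore rules); the exact paths are examples, and option availability varies by version, so consult the OSSEC documentation before copying it:

```xml
<ossec_config>
  <syscheck>
    <!-- Full-crawl frequency in seconds, as a fallback to real-time events -->
    <frequency>43200</frequency>

    <!-- Watch these directories in real time and keep diffs of text changes -->
    <directories check_all="yes" realtime="yes" report_changes="yes">/etc,/usr/bin</directories>

    <!-- Files whose routine churn shouldn't generate alerts -->
    <ignore>/etc/mtab</ignore>

    <!-- Alert when new files appear in monitored directories -->
    <alert_new_files>yes</alert_new_files>
  </syscheck>
</ossec_config>
```

This is the configuration counterpart to Mike’s point: log directories you might exclude or watch only for truncation and inode changes, while security-sensitive configuration gets full real-time change reporting.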

Mike Shinn: [00:10:13]  Those are things you might want to know. For other parts of the system, you might want to know precisely what changed: a configuration file, or maybe this configuration file defines the security settings for the system. You want to know every single time that changes, whether it’s authorized or unauthorized, because maybe you need a record of that for auditing purposes, to be able to say this individual logged in and made this change, maybe an incident occurred while that change was in place, maybe they degraded the security of the system. And then another potential use case is you just want to record every change, not for the purposes of being alerted but maybe just to keep a copy, in case in a development environment you say, “Oh, you know what? We broke something yesterday. Let’s roll back 24 hours.” Tools like OSSEC and others that record all of the changes that occur on the system, over whatever period of time you want, give you this much greater capacity not only to look at the changes that occurred on the system but to actually work with that data on an operational level. So it’s not something buried in a database somewhere. It actually stores a copy of the original version of the file, and the next version, and the next, and the next, so you can go back.
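The version-keeping idea is simple to illustrate: each time a change is detected, archive a timestamped copy so earlier versions can be inspected or restored. A minimal sketch of the concept (this is not OSSEC’s actual on-disk layout, which keeps its own history under its install directory; the function names here are invented):

```python
import os
import shutil
import time


def archive_copy(path, history_dir):
    """Save a uniquely named copy of a file into a history directory,
    so every detected change leaves a recoverable version behind."""
    os.makedirs(history_dir, exist_ok=True)
    # Nanosecond timestamp keeps successive archives from colliding.
    stamp = str(time.time_ns())
    dest = os.path.join(history_dir, f"{os.path.basename(path)}.{stamp}")
    shutil.copy2(path, dest)  # copy2 preserves timestamps/permissions
    return dest
```

Rolling back is then just copying the chosen archived version back over the live file, which is the “roll back 24 hours” workflow Mike describes.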

Mike Shinn: [00:11:30]  So these technologies have really evolved from just being intrusion detection tools, to forensics tools as you mentioned, to tools that are now utilized for block-and-tackle operational tasks, maybe debugging: somebody broke the machine and, well, John logged in and changed these five files and then it stopped working. OK, we know why this happened. So they’ve got a lot more utility now than they did originally.

Bret Kinsella: [00:11:56]  Is it just system files, or is it end user application files as well?

Mike Shinn: [00:11:59]  Anything. Again, you know, files, right? In this case, hence File Integrity Monitoring, but any kind of file. It could be a user file. It could be mundane documents. It really doesn’t matter. It could literally be anything. And these technologies can be applied outside of the scope of the system they’re deployed on. That is to say, all of the File Integrity Monitoring solutions can also look at network shares, so they can go crawl around the network shares and enumerate changes that have occurred on those as well.

Bret Kinsella: [00:12:31]  So you mentioned Tripwire earlier, which is probably the most well known of the FIM solutions, and yet OSSEC, which is open source software, has equal or greater capability than Tripwire today.

Mike Shinn: [00:12:47]  Yeah, I mean, it is certainly a one-for-one replacement in the sense that you can do the same things that you do with Tripwire. There are presentation-layer technologies that can be applied to OSSEC to give you those reports in almost any format that you want at this point. And it certainly has the same alerting capabilities, the same forensics capabilities, and so on that tools like Tripwire have. And as you mentioned, it’s open source, so you can use these tools without having to pay a vendor.

Bret Kinsella: [00:13:19]  And you can customize it any way you need to as opposed to just taking what you’re given.

Mike Shinn: [00:13:24]  Yes.

Bret Kinsella: [00:13:25]  So have you seen organizations shifting over from Tripwire to using OSSEC? Why is that important for them?

Mike Shinn: [00:13:34]  Yeah, I’ve definitely seen people do it. Certainly the simplest explanation is just cost. There’s a solution out there that’s effectively… I shouldn’t say free. Everything has a cost, right? You still have to deploy it, manage it, and so on, but you don’t have licensing costs associated with it like you do with other ones. OSSEC also does a lot more than just File Integrity Monitoring, so there are other reasons that people use OSSEC. And in some cases they discover that it has this capability, look at their portfolio, and decide, “I don’t really need this other tool, because this thing happens to do this.” And as you mentioned, it’s very extendable. So we’ve seen in some fairly large, famous enterprises that this is definitely used to do a lot more than just File Integrity Monitoring. In some cases they’re using it along with orchestration tools to roll back configuration changes, because it’s just a good automated… we’ll call it an automatic backup system: somebody makes a change to a file, it detects it automatically, it keeps a local copy of that change, and that creates an added layer of redundancy and resilience in those organizations. So it’s kind of hard to argue with. I mean, it’s free, it’s very low overhead. It makes sense why operations folks like to use it.

Bret Kinsella: [00:14:59]  Well, we’ll leave it at that. Thanks Mike.

Mike Shinn: [00:15:00]  Yep. My pleasure.
