New tool seeks to block rootkits by protecting their targets

Status
Not open for further replies.
Rootkits often replace functions provided by an operating system's kernel in order to infect a machine and obscure their presence. A paper describes a way of blocking rootkits by gathering all these functions in one place in memory, then locking down the memory.

Read the whole story: http://meincmagazine.com/business/news/2009/11/new-tool-seeks-to-block-rootkits-by-protecting-their-targets.ars
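The paper's core idea — relocate all kernel hooks onto dedicated pages, then write-protect those pages — can be mimicked at a high level in user-space Python. This is only an analogy (the names `syscall`, `real_open`, etc. are invented for illustration): a real implementation moves the function pointers themselves and uses the MMU or a hypervisor to reject writes, whereas here a read-only mapping proxy merely stands in for the "writes are rejected" property.

```python
import types

# Hypothetical handlers standing in for kernel functions.
def real_open(path):
    return f"open:{path}"

def real_read(fd):
    return f"read:{fd}"

# Step 1 (analogy): gather every hook into one centralized table,
# as the paper gathers scattered kernel hooks onto dedicated pages.
_hook_table = {"open": real_open, "read": real_read}

# Step 2 (analogy): expose only a read-only view of the table,
# standing in for page-level write protection of the relocated hooks.
hooks = types.MappingProxyType(_hook_table)

def syscall(name, arg):
    """Dispatch only through the protected table."""
    return hooks[name](arg)
```

Attempting `hooks["open"] = evil_handler` raises `TypeError`, which is the Python-level analogue of the write fault a rootkit would take when it tries to swap a protected pointer.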
 

kcisobderf

Ars Legatus Legionis
12,037
Subscriptor
"The biggest potential problem here, which is recognized by the authors, is that their database of acceptable hooks will end up being incomplete. This is already a problem in the lab, but could be a nightmare in the real world, where software updates and new drivers may appear on a monthly basis. Still, it's easy to envision systems that update the profile of legitimate hooks as part of a software update process, or provides users with the opportunity to approve changes."

What happens if the database or the updates are corrupted? Rootkit makers have to be more savvy than typical script kiddies, so these methods are vulnerable.

If users are empowered to OK updates and they get lots of them, you run into the UAC problem. The kernel space gains just one more layer of protection rather than becoming invulnerable. Security by obscurity is a poor way to protect things.
 

bartfat

Ars Scholae Palatinae
985
Of course, if we had signed-certificate applications for installing in the first place, we wouldn't even need antivirus, much less have to deal with rootkits. But that's essentially what they're suggesting here: keeping a database of authorized hooks for the kernel. It's no stretch to say the same idea might as well be applied to installing each and every third-party application. Of course, this would mean current applications that aren't code-signed will break (unless someone comes up with a way for the user to override the code-signing check on a per-application basis). Here's to hoping both Apple and Microsoft implement this. Although I bet Apple will do it first, because something similar is already running on iPhone OS.

Basically, the OS vendor becomes a certificate authority and can revoke certificates as needed to prevent any sort of malware outbreak. A side benefit is that software piracy would also become harder, therefore encouraging developers to lower prices... sort of like the App Store.

I'm wondering why they aren't doing this already, actually. To quote Top Gear, how hard can it be?
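The "database of authorized hooks" idea above boils down to an allowlist check: a hook is only acceptable if its target matches a vendor-published record. A minimal sketch, assuming the hypothetical names `AUTHORIZED` and `hook_is_authorized` (nothing here comes from the paper itself), keyed by SHA-256 of the handler's bytes:

```python
import hashlib

# Hypothetical vendor database: hook name -> SHA-256 of the handler
# code it is allowed to point at. In a real system the vendor would
# sign this list and push updates alongside driver/software updates.
AUTHORIZED = {
    "sys_open": hashlib.sha256(b"legit open handler v1").hexdigest(),
}

def hook_is_authorized(name, handler_bytes):
    """Accept a hook only if its target's hash appears in the database."""
    digest = hashlib.sha256(handler_bytes).hexdigest()
    return AUTHORIZED.get(name) == digest
```

An unknown hook name or a tampered handler both fail the check, which is exactly why the database's completeness (the problem kcisobderf raises above) matters so much: legitimate-but-unlisted hooks fail too.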
 

kaleberg

Ars Scholae Palatinae
1,258
Subscriptor
"One ring to rule them all..."

Is that a Multics reference? Thirty years ago you didn't have writable kernels, at least not writable by any programs not running in kernel mode. Multics took it a step further and isolated the kernel in ring 0 while users ran in ring 4. Rings 1, 2, and 3 were for protected subsystems, and rings 5, 6, and 7 were for user subsystems, including emulators for backwards compatibility.

It's a pity we can't do things like this anymore. Ah, the lost technology of the ancients....
 

spotter

Ars Tribunus Militum
2,331
#1, this isn't new. I had this idea a few years back and already saw that it had been done. (Can't find a good reference for what I saw, but I found another paper from 2008 that does something similar: http://www.cs.rutgers.edu/~vinodg/papers/acsac2008a/)

#2, just because you protect the function pointers doesn't mean you really did anything. The system call table is a bunch of function pointers; the easy attack is to replace a pointer with a wrapper that filters the output (or input) of the real function the system call uses. This scheme can protect against that easy attack.

However, another way to do it is not to replace the function pointer at all, but to overwrite the called function's first instruction with a jmp into your malicious wrapper, which then jmp's into the real function; the real function's return is redirected so that it lands back in your malicious wrapper at the end. This is much more complicated to do, but very possible.

Furthermore, there are hooks in the kernel that are dynamic, namely FS hooks, and it's very easy to add an FS to the machine that is transparent to the end user (see stackable file systems). These are hooks that can't be made read-only as the paper wants, because they are modified a lot.

With that said, the author (from looking at his web page; I haven't had a chance to read the paper in depth beyond the abstract, which isn't so different from the description above) seems to be on the same track I would have taken: leveraging no-execute bits, with a hypervisor so that you control those bits even if a malicious attacker gets into kernel mode (instead of the complicated kernel being the TCB, the supposedly simpler hypervisor becomes your TCB).

If I were doing it, I'd move towards a trusted-computing type of setup, where you don't allow kernel modules, all the kernel's executable code is read-only (enforced by the hypervisor), all the kernel's writable memory is always non-executable, and neither of these properties can ever change. This (I think; sort of back-of-the-envelope, and as it's late I could be missing something) prevents an attacker from ever modifying a running kernel (and since these properties are enforced by the hypervisor, it's hopefully simpler to ensure correctness), at the cost of the functionality provided by kernel modules.

Of course, the way to attack this would be to replace the kernel on disk with your modified kernel and then wait for (or induce) a reboot. (Which again is why I say it leans towards the trusted-computing type of setup.)
 

spotter

Ars Tribunus Militum
2,331
Originally posted by Caedus:
"How do I donate to this research? A noble goal if there ever was one."

Pay US taxes. Probably a good amount of their support comes from US government grants.
 

Fake51

Seniorius Lurkius
17
Originally posted by spotter:
"However, another way to do it is not to replace the function pointer, but to overwrite the called function's first instruction with a jmp into your malicious wrapper [...] This is much more complicated to do, but very possible."

And that's very easy to protect against, and current rootkit detection tools already do it. Which is why you won't see it done much anymore: a jmp instruction as the first opcode of a kernel function sets off all the alarm bells when you're looking for rootkits.

Another easy check is simply comparing a signature of the core files on disk against the function bytes in memory. If they differ, something strange has happened: you're likely running a system-level debugger, an antivirus/firewall digging deep into your system, or you're rooted.
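Both checks described above can be sketched as simple byte-level tests. This is a toy model (the function names are invented, and real detectors disassemble properly rather than peeking at one byte, and must whitelist legitimate hot-patch sites): on x86, `0xE9` and `0xEB` are the near-jmp opcodes a scanner would flag at a function's entry point.

```python
# x86 near-jmp opcodes a scanner looks for at a function entry point.
JMP_OPCODES = {0xE9, 0xEB}  # jmp rel32, jmp rel8

def prologue_hooked(code: bytes) -> bool:
    """Flag a function whose first opcode is a jmp (inline-hook signature)."""
    return len(code) > 0 and code[0] in JMP_OPCODES

def differs_from_disk(disk_image: bytes, in_memory: bytes) -> bool:
    """The second check: compare on-disk bytes to the in-memory copy."""
    return disk_image != in_memory
```

A clean prologue like `push ebp` (`0x55`) passes both tests; a patched one fails the first, and any in-memory modification at all fails the second.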
 

drag

Ars Tribunus Angusticlavius
6,861
Originally posted by bartfat:
"Of course, if we had signed certificate applications for installing in the first place, we wouldn't even need antivirus, much less dealing with rootkits."

All my applications are signed before I install them.

The application installer downloads a list of software packages from a trusted location. The list is cryptographically signed by a GnuPG keyring; the client keys are set up during installation.

The list contains the software, descriptions, and other details. Part of that detail is the SHA hash of each software package.

When a package is downloaded, it is hashed and compared against the record in the signed list. This will detect any corruption or tampering.

I have not had any need for antivirus for as long as I can remember. I have never had a single case of viruses, adware, spyware, or anything of that nature. No application has ever tried to install software without my knowledge or consent, and as far as I can tell all network traffic coming out of my machine has always been on the up and up. Oh, and I never had to deal with 'crapware' installed by my machine's OEM either.

Originally posted by bartfat:
"I'm wondering why they aren't doing this already, actually. To quote Top Gear, how hard can it be?"

Not terribly difficult for part of what you're saying, and exceptionally-difficult-to-practically-impossible for the other part.

If your system supports "Mandatory Access Controls" over different system resources, each application developer could then, if they wanted to, create a database of the system resources and accesses needed for the correct operation of their application. Those rules can be included with the software installation package and added to the system at install time.

The OS vendor would then be free to examine these rules before adding the package to their signed list, or before signing the package themselves.

The problem you run into is purely one of practicality. That sort of approach requires massive amounts of work, and locking down a system to the degree where it actually works creates an operating system that is very unwieldy and next to impossible to use.

Users want to be able to modify the system's behavior. They do this by installing software, creating their own software, changing software configurations, and combining software in novel ways. Static lists like the ones you want contradict that normal, desired behavior. So the expense and difficulty of using a locked-down system is so high that a sane user will reject it outright in favor of an OS that is usable, but may have worse security potential.

My OS supports SELinux by default.

Using SELinux you can implement strong:
 * Discretionary Access Control (DAC)
 * Access Control Lists (ACLs)
 * Mandatory Access Control (MAC)
 * Role-Based Access Control (RBAC)
 * Multi-Level Security (MLS)
 * Multi-Category Security (MCS)

DAC is what Windows and OS X support. Windows also supports ACLs. Microsoft claims to have 'mandatory access controls', but they are not in the same league. RBAC is a simpler way of doing restrictions, by creating roles and adding or removing roles from particular users. MLS and MCS are typically of interest only to military folks; they are used for hiding information from non-privileged users. For example, if a top-secret document ends up in a directory that is readable and writable by people with only secret clearance, it is impossible for a 'secret'-level user to even know that the 'top-secret' file exists: no information leakage, not even file names.

Fun stuff. (Not.)

The only OS vendors that try to implement the combination of signed software and MAC on a mainstream OS are Red Hat and Fedora (and CentOS). But they use a 'permissive' model that only lightly protects the system from external attackers and does not really accomplish a whole lot. To truly lock down the system requires a HUGE amount of work from an experienced admin.

My OS ships with SELinux, but it is not enabled by default. I can also run SMACK (a much simpler MAC) or AppArmor (which is designed to make writing rules for applications much, much simpler).

MAC and similar things have existed for YEARS in all sorts of different OSes. But they always run into the same problem: an unacceptable trade-off of security vs. convenience.

It is like having a home with 3-foot-thick steel-reinforced concrete walls, feet-thick bulletproof observation glass, bunker-style garage doors requiring extensive hydraulics to operate, and industrial-grade radiation and poison filtering on all incoming air and water. People are just not that paranoid.

There is a lot of lower-hanging fruit to pick.

For example: fixing broken web applications. Typically, systems get rooted nowadays because people install buggy and poorly written 'web applications' on their servers. Avoiding those would create an instant increase in security while avoiding a lot of issues.
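The hash-against-signed-list step drag describes can be sketched in a few lines. This assumes the signature on the list itself has already been verified (e.g. via GnuPG, as in his setup); the names `SIGNED_LIST` and `verify_package` are invented for illustration:

```python
import hashlib

# Stand-in for the downloaded, signature-verified package list:
# package filename -> expected SHA-256 of its contents.
SIGNED_LIST = {
    "editor-1.0.tar.gz": hashlib.sha256(b"editor package contents").hexdigest(),
}

def verify_package(name: str, data: bytes) -> bool:
    """Hash the downloaded package and compare against the signed record."""
    expected = SIGNED_LIST.get(name)
    if expected is None:
        return False  # unknown package: refuse to install
    return hashlib.sha256(data).hexdigest() == expected
```

Any corruption or tampering of the download changes the hash and fails the check, and a package absent from the list is rejected outright — which is why the trust ultimately rests on the signature over the list, not on the hashes alone.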
 

ronelson

Ars Legatus Legionis
21,399
Subscriptor
Where is the database going to be stored? How will it be protected? How will updates be verified? This sounds like another layer that will increase admin workload and still let problems through.

Possibly a good stepping-off point for something revolutionary; right now it just seems like a small evolutionary step.
 

Sandman_1

Wise, Aged Ars Veteran
187
Sounds like old-skool Interrupt Vector Table hooking to me, which has been around for a long time.

The best defense against this is to not install any kind of executable code from a questionable source, period. If you must install a program, the best way to test it out is on a virtual machine, or to keep it in a sandbox, even though a sandbox isn't a guarantee.
 

aiki42

Seniorius Lurkius
28
Originally posted by bartfat:
"Of course, if we had signed certificate applications for installing in the first place, we wouldn't even need antivirus, much less dealing with rootkits. [...] I'm wondering why they aren't doing this already, actually. To quote Top Gear, how hard can it be?"

I'm not going to argue for or against that idea... honestly, I'm not sure either way. But code signing is already in Mac OS X 10.5 Leopard; it's optional for the developer to add to their apps. The iPhone, on the other hand, requires it for all apps.
 

xeoph

Ars Scholae Palatinae
1,172
Originally posted by Ralf The Dog:
"Is it true that Microsoft Windows 8 ACS will require user permission every time it writes to RAM?"

I was already picturing MS UAC prompts while reading the article. I don't think prompts like this would be completely crazy, but it would be nice if it worked something like the Kerio/Sunbelt firewall, where you could approve certain things so they don't prompt you every time.
 

hobgoblin

Ars Tribunus Angusticlavius
9,070
Originally posted by Ralf The Dog:
"Is it true that Microsoft Windows 8 ACS will require user permission every time it writes to RAM?"

Probably tied into some *AA license checks, to make sure you have the right to copy the data into RAM in the first place.

"Please enter the serial number on your certificate, and put your finger on the DNA scanner so it can extract a drop of blood for proof of user correctness."
 