[Vol-users] 29c3 defeating windows memory forensics
George M. Garner Jr.
ggarner_online at gmgsystemsinc.com
Mon Jan 7 09:10:20 CST 2013
Good to hear from you and thanks again for drawing attention to the
importance of reliability issues in volatile evidence acquisition. The
robustness and reliability of memory acquisition tools have received
scant attention until now. Yet the acquisition phase is the most
important step. You can run an analysis tool and, if it fails, you can
still run a different analysis tool or patch the first analysis tool and
run it again. If an acquisition tool fails silently you are typically
out of luck! Note that the reliability issues addressed in your
presentation are not unique to memory acquisition; they apply equally
to the live acquisition of other storage media. Darren Bilby's 2006
Blackhat presentation (DDefy) addressed both "live" memory and other
storage acquisition. While your current presentation focuses on
physical memory acquisition, your techniques could easily be applied to
the "live" acquisition of fixed storage media as well. And perhaps they
should be so applied.
My apologies if my previous post seemed dismissive. The appropriate
response to a critical work is to view it critically, however. Darren
unfortunately abandoned his work on DDefy. My intention is not that you
abandon Dementia but that you improve it! I wouldn't worry too much
about giving the "bad guys" ideas; they are already implementing
plenty of ideas, including some that haven't been mentioned yet.
Paper documents can be forged. Electronic documents can be tampered
with. Volatile evidence can be modified. We know that evidence can be
falsified. The question is, "What should we do about it?" That is why
the last slide in your presentation is the most important: it
attempts to provide us with a road map for the future, a way forward.
Unfortunately that is where I am having my greatest difficulty, as I
will explain below.
1. Acquisition tools should utilize drivers correctly (i.e., they should
acquire memory into a kernel mode buffer and write that buffer to a file
from kernel mode)!
It is true that a tool such as win32dd will not be susceptible to user
mode DeviceIoControl/NtDeviceIoControlFile or WriteFile/NtWriteFile
hooks. However, win32dd is (or will be) vulnerable to your kernel mode
NtWriteFile hook and to a file system filter driver (assuming that you
are implementing those methods correctly). Also, if you are able to
install an inline hook in the kernel mode NtWriteFile function, you
ought equally to be able to install inline hooks in NtMapViewOfSection,
MmMapIoSpace and MmMapMemoryDumpMdl. The same could be accomplished
using IAT hooks, as Driver Verifier does. Indeed, hooking
those three functions would cover all of the memory acquisition tools
which you tested and render output compression or encryption irrelevant.
So how is the KM/KM paradigm (e.g. win32dd) superior to other methods?
Your PoC itself subverts both UM and KM acquisition, assuming that you
have implemented your NtWriteFile hook correctly. If not then shame on
you! Note that I am not saying this to disparage win32dd. I am just
saying that the approach taken by this tool is not inherently more
reliable than other methods.
2. Use hardware acquisition tools.
> You're absolutely right regarding the 4 GB limit. But from
> an attacker's perspective, this method cannot be used for hiding
> arbitrary object, and it might be difficult to "relocate" the
> driver, allocations and all resources above the specified limit.
Actually it is quite easy to load a driver or arbitrary object (e.g.
encryption key or key schedules) into a reserved physical address space
above 4 GiB. We are currently shipping some software to our customers
that is able to do precisely that. It wasn't written with the intention
of "cheating" firewire memory dumps; however, it should serve that
purpose equally well.
3. Use crash dumps (native!) instead of raw dumps.
I may have been mistaken about Sinowal removing itself from crashdumps
during the dump process. Of the publicly available rootkits, Rustock.C
appears to be the one most often cited as an example of this.
http://www.eicar.org/files/lipovsky_eicar2010.pdf. A very cursory
static analysis of crashdmp!WriteBufferToDisk indicates that
BugCheckDumpIoCallback routines are invoked before data is written to the dump file.
fffff880`00ac450a ff15f85b0000 call qword ptr
fffff880`00ac4510 448b4c2428 mov r9d,dword ptr [rsp+28h]
fffff880`00ac4515 448b442440 mov r8d,dword ptr [rsp+40h]
fffff880`00ac451a 488b542470 mov rdx,qword ptr [rsp+70h]
fffff880`00ac451f b801000000 mov eax,1
fffff880`00ac4524 488bce mov rcx,rsi
fffff880`00ac4527 660944245a or word ptr [rsp+5Ah],ax
fffff880`00ac452c e87f0e0000 call crashdmp!InvokeDumpCallbacks
fffff880`00ac5442 488364242800 and qword ptr [rsp+28h],0
fffff880`00ac5448 33c0 xor eax,eax
fffff880`00ac544a 4889442430 mov qword ptr [rsp+30h],rax
fffff880`00ac544f 4889442438 mov qword ptr [rsp+38h],rax
fffff880`00ac5454 48834c2428ff or qword ptr
fffff880`00ac545a 4c897c2430 mov qword ptr [rsp+30h],r15
fffff880`00ac545f 4489742438 mov dword ptr [rsp+38h],r14d
fffff880`00ac5464 8974243c mov dword ptr [rsp+3Ch],esi
fffff880`00ac5468 448d4818 lea r9d,[rax+18h]
fffff880`00ac546c 4c8d442428 lea r8,[rsp+28h]
fffff880`00ac5471 488bd3 mov rdx,rbx ; rdx = CallbackRecord
fffff880`00ac5474 8d4803 lea ecx,[rax+3]
fffff880`00ac5477 ff5310 call qword ptr [rbx+10h]
The dump IO block contains a pointer to the data that is about to be
written to the dump file. Even if it turns out that this callback is
being called after data has been written to the dump file you could use
the first call (ReasonSpecificData->Type == KbDumpIoHeader) to remove
the rootkit from memory and then ignore subsequent calls.
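The callback trick described above can be sketched as a small simulation. Everything here (the constants, the callback shape, the page-sized buffers) is an illustrative stand-in for the real KBUGCHECK_REASON_CALLBACK machinery, not DDK code; the point is only the state machine: scrub on the first KbDumpIoHeader notification, ignore every later call.

```python
# Hypothetical simulation of a malicious dump-IO callback: it scrubs
# its own pages on the FIRST header notification and ignores all
# subsequent calls, so the data crashdmp later writes no longer
# contains the rootkit. All names here are illustrative stand-ins.

KB_DUMP_IO_HEADER = 0   # stand-in for ReasonSpecificData->Type == KbDumpIoHeader
KB_DUMP_IO_BODY = 1     # stand-in for subsequent dump-IO notifications

PAGE = 4096

class MaliciousDumpCallback:
    def __init__(self, memory, rootkit_pages):
        self.memory = memory              # dict: page number -> bytes
        self.rootkit_pages = rootkit_pages
        self.scrubbed = False

    def __call__(self, reason_type):
        # Scrub only once, on the very first header notification.
        if reason_type == KB_DUMP_IO_HEADER and not self.scrubbed:
            for page in self.rootkit_pages:
                self.memory[page] = b"\x00" * PAGE
            self.scrubbed = True
        # All later calls are ignored.

def write_dump(memory, callback):
    """Mimic the crashdmp ordering: invoke callbacks, then write."""
    callback(KB_DUMP_IO_HEADER)
    dump = {}
    for page, data in memory.items():
        callback(KB_DUMP_IO_BODY)
        dump[page] = data
    return dump

memory = {0: b"clean".ljust(PAGE, b"\x00"),
          1: b"ROOTKIT".ljust(PAGE, b"\x00")}
cb = MaliciousDumpCallback(memory, rootkit_pages=[1])
dump = write_dump(memory, cb)
assert b"ROOTKIT" not in dump[1]   # the rootkit never reaches the dump file
```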
The biggest problem with crash dumps is that the dump process discards so
much valuable evidence. Basically, you only get what is in the OS's
physical memory map with a crashdump. That will miss the real mode
interrupt vector table, many artifacts of ACPI rootkits and BIOS kits,
and rootkits loaded into unmanaged memory. In addition, crashdumps lack
some of the features that we expect from a trusted forensic tool such as
cryptographic checksums and robust error logging. Finally, the
crashdump overwrites the pagefile, which itself may contain valuable
evidence.
Once again, the superiority of crashdumps over other acquisition methods
appears to be largely due to a design decision on your part not to
attack crashdumps as vigorously as you might.
So where does that leave us? Is all hope lost?
Paper documents can be forged. Electronic documents can be tampered
with. Volatile evidence can be modified. Computer forensics scarcely
could go on if the mere possibility or even fact of evidence tampering
were sufficient to invalidate the whole process. If evidence were
inherently reliable we would not need experts to acquire it or to apply
a forensic method to sift the facts. One of the major limitations of
your presentation is that it does not address the methods that
professional forensic tools use (or should use) to ensure the integrity
of evidence as it is being transmitted and archived for future use, e.g.
robust logging and cryptographic checksums. To be successful the "bad
guy" must not only alter volatile evidence, he must also keep the
investigator from knowing that the evidence has been altered. "Bad
guys" try to "hide" or alter evidence in all sorts of different ways.
But as Jesse Kornblum has pointed out, the act of "hiding" itself
produces artifacts which enhance the visibility of the malware that is
being hidden (this is the "rootkit paradox"). Before we begin to ask about
malicious modification of volatile evidence an even more basic question
needs to be asked: Does the memory acquisition tool implement a sound
forensic method in the first place? If the tool does not offer any
means to ensure evidentiary integrity (e.g. robust logging and
cryptographic checksums) there is no need to ask about malicious
modification of evidence.
Once we are satisfied that a tool implements a sound forensic method we
would then like to see whether evidence adulteration is reflected in the
output of the tool in the form of log entries and/or modified
cryptographic checksums. In other words, I would like to see you acquire
memory evidence the way that a professional computer forensic "expert"
would acquire evidence and then see whether the PoC is able to subvert
the acquisition process without any indication in the form of log
entries or altered cryptographic checksums.
All of the techniques which you document, except for the user mode
DeviceIoControl hook, would be defeated simply by adopting a
forensically sound method. If anything, I draw the conclusion that a
cryptographic hash of the data should be generated very early in the
acquisition process, and as close to the actual reading of the data as
possible. If you were to conclude that memory acquisition tools should
generate cryptographic hashes of the evidence in KM I would buy that!
Cryptographic checksums might still be defeated by NtMapViewOfSection,
MmMapIoSpace and MmMapMemoryDumpMdl hooks. But we can detect their
operation using other means.
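The hash-at-read argument can be made concrete with a minimal sketch. Assume the acquisition loop hashes each block in the same pass that reads it, and model an NtWriteFile-style hook as a transformation applied afterwards, on the write path; the block size and tampering model are my assumptions, not anything from the PoC.

```python
# Sketch: hash each block of "memory" at read time, as close to the
# acquisition as possible, and keep a per-block digest log. A write-path
# hook that tampers with the data AFTER hashing then shows up as a
# digest mismatch on verification. Purely illustrative.
import hashlib

BLOCK = 4096

def acquire(memory, write_hook=None):
    """Return (image, per-block digest log). write_hook models a
    malicious NtWriteFile-style filter applied after hashing."""
    image, log = [], []
    for off in range(0, len(memory), BLOCK):
        block = memory[off:off + BLOCK]
        log.append((off, hashlib.sha256(block).hexdigest()))  # hash at read
        if write_hook:
            block = write_hook(off, block)                    # tamper at write
        image.append(block)
    return b"".join(image), log

def verify(image, log):
    """Recompute digests over the stored image and compare to the log."""
    return [off for off, digest in log
            if hashlib.sha256(image[off:off + BLOCK]).hexdigest() != digest]

memory = b"A" * BLOCK + b"ROOTKIT".ljust(BLOCK, b"B")
strip = lambda off, blk: blk.replace(b"ROOTKIT", b"\x00" * 7)
image, log = acquire(memory, write_hook=strip)
assert verify(image, log) == [BLOCK]   # the tampered block is flagged
```

The same sketch also shows the limitation noted above: if the data is modified before the hash is taken (a hook on the mapping functions rather than the write path), the log and the image agree and nothing is flagged.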
While not strictly a part of the acquisition process, robust analysis is
also a means to ensure evidentiary reliability. Even when robust
logging and cryptographic hashes fail, robust analysis still may be able
to detect anomalies which indicate evidence tampering. To draw a couple
of analogies from the world of disk forensics, "bad guys" often tamper
with file system MAC times. But then forensic investigators look for
artifacts of altered MAC times (or the tools which generated them)
elsewhere in the file system image. "Bad guys" also use file or disk
wiping software (e.g. CCleaner) to destroy digital evidence. Forensic
experts then look for artifacts that CCleaner was used on the system
within the time frame that evidence is believed to have been destroyed.
How is using Dementia to wipe certain artifacts from memory different
from using CCleaner to destroy disk evidence, if I can find artifacts
which show that Dementia was used? Inline hooks in a user mode process
leave some definite artifacts even after they have been removed. Kernel
mode inline hooks (e.g. in NtWriteFile) are also highly visible, and
they don't work on 64-bit Windows, as you acknowledge. Even the lower
disk filters used by most bootkits are visible now that everyone knows
to look for them.
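One simple way an analyst spots an inline hook (or its residue) is to diff the in-memory prologue of a suspect function against the bytes of the clean on-disk binary. The byte patterns below are illustrative examples of a common x64 prologue and a `jmp rel32` hook stub, not taken from any specific Windows build.

```python
# Sketch: flag bytes where the mapped image diverges from the on-disk
# original. An x64 inline hook typically begins with jmp rel32 (0xE9)
# or a mov rax / jmp rax stub, so the first bytes differ from the
# compiler-generated prologue. Example bytes only; not real binaries.

CLEAN_PROLOGUE = bytes.fromhex("4883ec28") + b"\x00"          # sub rsp, 28h ...
HOOKED_PROLOGUE = b"\xe9" + (0x1000).to_bytes(4, "little")    # jmp rel32

def prologue_diffs(in_memory: bytes, on_disk: bytes):
    """Return offsets where the mapped image diverges from disk."""
    return [i for i, (a, b) in enumerate(zip(in_memory, on_disk)) if a != b]

assert prologue_diffs(CLEAN_PROLOGUE, CLEAN_PROLOGUE) == []
assert 0 in prologue_diffs(HOOKED_PROLOGUE, CLEAN_PROLOGUE)   # hook detected
```

In practice the comparison must account for legitimate differences (relocations, import thunks), which is exactly why hook *removal* still leaves artifacts: the relocated, re-based copy rarely matches the pristine image byte for byte.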
In our own research we have not had much trouble coming up with ways to
"cheat" physical memory acquisition. What is difficult is finding ways
to "cheat" the acquisition process without anyone knowing about it
(assuming that a sound forensic method was applied during acquisition).
To be successful anti-forensic software must accomplish its purpose
while leaving very few traces behind. One approach may be to place the
rootkit in a memory location that the forensic tool does not acquire.
For example, some tools aren't able to acquire non-cached or
write-combined memory on Windows systems prior to Windows 7 (e.g. MDD).
You might be able to "hide" a rootkit from one of these tools simply
by placing the rootkit in non-cached or write-combined memory without
any hooks whatsoever. (Well, actually, only the appropriate entry in
the PFN database needs to say that the memory is non-cached or
write-combined, not the PTE itself.)
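That asymmetry suggests a detection: if hiding depends on the PFN database entry claiming a cache attribute that the PTE does not share, an analysis tool can flag pages where the two disagree. The structures below are toy stand-ins for the real PFN/PTE layouts, which vary by Windows version; this only illustrates the cross-check.

```python
# Sketch: flag pages whose PFN-database cache attribute disagrees with
# the attribute in the corresponding PTE. Toy dictionaries stand in for
# the real kernel structures; the attribute names are illustrative.

CACHED, NONCACHED, WRITECOMBINED = "cached", "noncached", "writecombined"

def inconsistent_pages(pfn_db, ptes):
    """pfn_db / ptes: dicts mapping page frame number -> cache attribute."""
    return sorted(pfn for pfn in pfn_db
                  if pfn in ptes and pfn_db[pfn] != ptes[pfn])

pfn_db = {0x100: CACHED, 0x101: NONCACHED, 0x102: WRITECOMBINED}
ptes   = {0x100: CACHED, 0x101: CACHED,    0x102: CACHED}
assert inconsistent_pages(pfn_db, ptes) == [0x101, 0x102]
```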
Robust logging and proper use of cryptographic checksums are an
essential attribute of a trusted forensic tool. If your PoC were to
force forensic tool vendors to implement sound forensic methods, or
failing that, to show the perils of not implementing those methods, it
would do a great service to the forensic community. To be fair you
should not ignore suitable and obvious methods of subverting KM
acquisition methods. (Why is it that you chose not to attack
NtMapViewOfSection, MmMapIoSpace and MmMapMemoryDumpMdl directly?) A
very interesting extension of your work would be to modify open source
router firmware to strip malicious artifacts from the data stream as it
is being transmitted over the wire. Then see how many tools notice that
the evidence was modified. :-)
Just for the record, we strongly urge our customers to acquire evidence
to the network when the presence of malware is suspected. Malware that
spreads via removable storage devices is quite common nowadays.
Evidence acquired to a local storage device needs special handling to
avoid spreading the infection to other computers on site and to avoid
the investigator becoming part of someone's botnet. We also strongly
recommend the use of content encryption (and not just channel
encryption) when acquiring over the network, to prevent the investigator
from disclosing the very sensitive data that the "bad guy" was
attempting to acquire. Encryption may also be useful to protect
evidence written to removable storage media from tampering after it has
been written to disk. You didn't really look at that possibility.
However, if the "bad guy" has an advance copy of your tools and the time
to reverse engineer them, you will be toast in any event.
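The tamper-protection half of that recommendation can be sketched with the standard library alone: tag the archived evidence with an HMAC under a key the investigator keeps off the examined machine. This illustrates integrity only; a real tool would use authenticated encryption (e.g. AES-GCM) to add confidentiality as well, and the key handling here is deliberately simplistic.

```python
# Sketch: seal archived evidence with HMAC-SHA256 so post-acquisition
# tampering is detectable. Integrity only -- no confidentiality; the
# key must never be present on the machine being examined.
import hmac
import hashlib

def seal(evidence: bytes, key: bytes) -> bytes:
    return hmac.new(key, evidence, hashlib.sha256).digest()

def check(evidence: bytes, key: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(seal(evidence, key), tag)

key = b"investigator-held-key"        # kept off the examined machine
evidence = b"memory image bytes ..."
tag = seal(evidence, key)
assert check(evidence, key, tag)                  # intact
assert not check(evidence + b"\x00", key, tag)    # tampering detected
```

Note the caveat from the paragraph above still applies: if the "bad guy" has the tool and the key in advance, the seal can be forged; the scheme raises the bar, it does not eliminate the threat.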
So here are my conclusions after reading your presentation:
1. Computer forensic investigators should acquire volatile evidence in a
forensically sound manner.
2. Memory acquisition tools should employ robust error logging and
cryptographic checksums to ensure the integrity of volatile evidence as
it is being transmitted and archived for future use.
3. Cryptographic checksums should be generated for volatile evidence as
early as possible in the acquisition process, preferably in kernel mode.
4. Encryption should be employed during the transmission and archival of
digital evidence to preserve the confidentiality of sensitive data and
to reduce the attack surface.
My apologies to the Volatility mailing list for such a lengthy post.
However, I think that this topic is of sufficient importance to warrant it.
I look forward to the publication of your PoC code bits.
George M. Garner Jr.
GMG Systems, Inc.