Monday, August 11, 2014

AFoD Blog Interview With Andrew Case

I can’t remember how long I’ve known Andrew. It seems like we’ve been corresponding on social media and email for ages now and I’ve been a fan of his work for a very long time. He’s been a heroic contributor to the open source digital forensics community and I figured it was long past time for me to do a proper AFoD Blog interview with him.

Andrew Case’s Professional Biography

Andrew is a digital forensics researcher, developer, and trainer. He has conducted numerous large scale digital investigations and has performed incident response and malware analysis across enterprises and industries. Prior to focusing on forensics and incident response, Andrew's previous experience included penetration tests, source code audits, and binary analysis. He is a co-author of “The Art of Memory Forensics”, a book being published in summer 2014 that covers memory forensics across Windows, Mac, and Linux. Andrew is the co-developer of Registry Decoder, a National Institute of Justice funded forensics application, as well as a core developer of The Volatility Framework. He has delivered trainings to a number of private and public organizations as well as at industry conferences. Andrew's primary research focus is physical memory analysis, and he has presented his research at conferences including Blackhat, RSA, SOURCE, BSides, OMFW, GFirst, and DFRWS. In 2013, Andrew was voted “Digital Forensics Examiner of the Year” by his peers within the forensics community.

AFoD Blog: What was your path into the digital forensics field?

Andrew Case: My path into the digital forensics field started from a deep interest in computer security and operating system design. While in high school I realized that I should be doing more with computers than just playing video games, chatting, and doing homework. This led me to start teaching myself to program, and eventually to taking Visual Basic 6 and C++ elective classes in my junior and senior years. Towards the end of high school I had become pretty obsessive about programming and computer security, and I knew that I wanted to do computer science in college.

I applied to and was accepted into the computer science program at Tulane University in New Orleans (my hometown), but Hurricane Katrina had other plans for me. On the way to new student orientation in August 2005, I received a call that it was cancelled due to Katrina coming. That would be the closest I ever came to being an actual Tulane student, which obviously put a dent in my college plans.

The one bright side to Katrina was that from August 2005 to finally starting college in January 2006, I had almost six months of free time to really indulge my computer obsession. I spent nearly every day and night learning to program better, reading security texts (Phrack, Uninformed, Blackhat/Defcon archives, 29A, old mailing list posts, related books, etc.), and deep diving into operating system design and development. My copy of Volume 3 of the Intel Architecture manuals that I read cover-to-cover during this time is my favorite keepsake of Katrina.

This six month binge led to two areas of interest for me - exploit development and operating system design. Due to these interests, I spent a lot of time learning reverse engineering and performing deep analysis of runtime behavior of programs. I also developed a 32-bit Intel hobby OS along the way that, while not overly impressive compared to others, routed interrupts through the APIC, supported notions of userland processes, and had very basic PCI support.

By the time January 2006 arrived, I was expecting to finally start college at Tulane. These plans came to a quick halt though as in the first week of January, two weeks before classes were to start, Tulane dropped all of their science and engineering programs. Due to the short notice Tulane gave, I had little time to pick a new school without falling behind another semester. The idea of finding a school outside of the city, moving to it, and starting classes all in about ten days seemed pretty daunting. I then decided to take a semester at the University of New Orleans. This turned out to be a very smart decision as UNO’s computer science program was not only very highly rated, but it also had a security and forensics track inside of CS.

For my first two years of college I was solely focused on security. At the time forensics seemed a bit… dull. My security interest led to interesting opportunities, such as being able to work at Neohapsis in Chicago for two summers doing penetration tests and source code audits, but I eventually caught the forensics bug. This was greatly influenced by the fact that professors at UNO, such as Dr. Golden Richard and Dr. Vassil Roussev, and students, such as Vico Marziale, were doing very interesting work and publishing research in the digital forensics space. My final push into forensics came when I took Dr. Golden’s undergraduate computer forensics course. This class throws you deep into the weeds and forces you to analyze raw evidence starting with a hex editor. You eventually move on to more user-friendly tools and automated processing. Once I saw the power of forensic analysis I was hooked, and my previous security and operating systems knowledge certainly helped ease the learning curve.

Memory forensics was also becoming a hot research area at the time, and when the 2008 DFRWS challenge came out it seemed time for me to fully switch my focus to forensics. The 2008 challenge was a Linux memory sample that needed to be analyzed in order to answer questions about an incident. At the time, existing tools only supported Windows so new research had to be performed. Due to my previous operating systems internals interest, I had already studied most of the Linux kernel so it seemed pretty straightforward to extract data structures I already understood from a memory sample. My work on this, in conjunction with the other previously mentioned UNO people, led to the creation of a memory forensics tool, ramparser, and the publication of our FACE paper at DFRWS 2008. This also led to my interest in Volatility and my eventual contributions of Linux support to the project.

AFoD: What was the memory forensics tool you created?

Case: The tool was called ramparser, and it was designed to analyze memory samples from Linux systems. It was created as a result of the previously mentioned DFRWS 2008 challenge. A detailed description of the tool and our combined FACE research can be found here: http://dfrws.org/2008/proceedings/p65-case.pdf. This project was my first real research into memory forensics, and I initially had much loftier goals than would ever be realized. Some of these goals would later be implemented inside Volatility, while some of them, such as kernel-version generic support, still haven’t been done by myself or anyone else in the research community.

Soon after the DFRWS 2008 paper was published, the original ramparser was scrapped due to severe design limitations. First, it was written in C, which made it nearly impossible to implement the generic, runtime data structures that are required to support a wide range of kernel versions. Also, ramparser had no notion of what Volatility calls profiles. Profiles in Volatility allow for plugins to be written generically while the backend code handles all the changes between different versions of an operating system (e.g. Windows XP, Vista, 7, and 8). Since ramparser didn’t have profiles, the plugins had to perform conditional checks for each kernel version. This made development quite painful.
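To make the profile idea concrete, here is a minimal Python sketch of the concept. This is not Volatility's actual API, and the structure names and offsets below are invented purely for illustration: the point is only that a generic plugin asks the profile where a field lives, instead of hard-coding per-version offsets.

```python
# Toy illustration of the "profile" idea (not Volatility's real API).
# Each profile maps logical structure fields to the offsets used by a
# particular OS/kernel version; the offsets here are made up.

PROFILES = {
    "LinuxA": {"task_struct": {"comm": 0x2D8, "pid": 0x1F4}},
    "LinuxB": {"task_struct": {"comm": 0x300, "pid": 0x208}},
}

class Profile:
    def __init__(self, name):
        self.vtypes = PROFILES[name]

    def offset(self, struct, field):
        return self.vtypes[struct][field]

def field_address(profile, base, struct, field):
    # A generic "plugin" never hard-codes per-version offsets;
    # it asks the profile for the layout of the OS being analyzed.
    return base + profile.offset(struct, field)

print(hex(field_address(Profile("LinuxA"), 0x1000, "task_struct", "pid")))  # 0x11f4
print(hex(field_address(Profile("LinuxB"), 0x1000, "task_struct", "pid")))  # 0x1208
```

The same plugin code works against both "kernel versions" because all version-specific knowledge lives in the profile, which is exactly what the conditional checks in the original ramparser plugins lacked.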

ramparser2 (I am quite creative with names) was a rewrite of the original ramparser in Python. The switch to a higher-level interpreted language meant that much of the misery of C immediately went away. Most importantly, dynamic data structures could be used that would adapt at runtime to the kernel version of the memory sample being analyzed. I ported all of the original ramparser plugins into the Python version and added several new ones.

After this work was complete, I realized that, while my project was interesting, I had no real way of getting other people to use or contribute to it. I also knew that Windows systems were of much higher interest to forensics practitioners than Linux systems and that Volatility, which only supported Windows at the time, was beginning to see widespread use in research projects, incident handling, and malware analysis. I then decided that integrating my work into Volatility would be the best way for my research to actually be used and improved upon by other people. Looking back on that decision now I can definitely say that I made the right choice.

AFoD: For those readers who are not familiar with digital forensics or at least not familiar with memory forensics, can you explain what the Volatility Project is and how you became involved with it?

Case: The Volatility Project was started in the mid-2000s by AAron Walters and Nick Petroni. Volatility emerged from two earlier projects by Nick and AAron, Volatools and The FATkit. These were some of the first public projects to integrate memory forensics into the digital investigation process. Volatility was created as the open source version of these research efforts and was initially worked on by AAron and Brendan Dolan-Gavitt. Since then, Volatility has been contributed to by a number of people, and has become one of the most popular and widely used tools within the digital forensics, incident response, and malware analysis communities.

Volatility was designed to allow researchers to easily integrate their work into a standard framework and to feed off each other’s progress. All analysis is done through plugins and the core of the framework was designed to support a wide variety of capture formats and hardware architectures. As of the 2.4 release (summer 2014), Volatility has support for analyzing memory captures from 32 and 64-bit Windows XP through 8, including the server versions, Linux 2.6.11 (circa 2005) to 3.16, all Android versions, and Mac Leopard through Mavericks.

The ability to easily plug my existing Linux memory forensics research into Volatility was one of the main points that led me to more deeply explore the project. After speaking with Brendan about some of my research and the apparent dead end that was my own project, he suggested I join the Volatility IRC channel and get to know the other developers. Through the IRC channel I met Jamie, Michael, AAron, and other people that I now work with on a daily basis. This also got me in touch with Michael Auty, who is the Volatility maintainer and who worked with me for hours a day for several weeks in order to get the base Linux support into the framework. Once this base support was added I could then trivially port my existing research into Volatility Linux plugins.

AFoD: I know we have people who read the blog who aren't day-to-day digital forensics people so can you tell us what memory forensics is and why it's become such a hot topic in the digital forensics field?

Case: Memory forensics is the examination of physical memory (RAM) to support digital forensics, incident response, and malware analysis. It has the advantage over other types of forensics, such as network and disk, in that much of the system state relevant to investigations only appears in memory. This can include artifacts such as running processes, active network connections, and loaded kernel drivers. There are also artifacts related to the use of applications (chat, email, browsers, command shells, etc.) that only appear in memory and are lost when the system is powered down. Furthermore, attackers are well aware that many investigators still do not perform memory forensics and that most AV/HIPS systems don’t thoroughly look in memory (if at all). This has led to the development of malware, exploits, and attack toolkits that operate solely in memory. Obviously these will be completely missed if memory is not examined. Memory forensics is also being heavily pushed due to its resilience to malware that can easily fool live tools on the system, but has a much harder time hiding within all of RAM.
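As a toy illustration of what "examining memory" means in practice, the sketch below packs a few fake process records into a byte buffer and then walks them the way a memory forensics tool walks an operating system's process list. The 16-byte record layout is invented for the example; real tools parse OS-specific structures (such as the linked list of _EPROCESS structures on Windows).

```python
import struct

# Fake "memory image": each record is 16 bytes -- a 4-byte pid, an
# 8-byte null-padded name, and the 4-byte offset of the next record
# (0 marks the end of the list). The layout is made up for this demo.
mem = bytearray(64)

def write_proc(off, pid, name, nxt):
    # struct pads the name out to 8 bytes with nulls for us.
    struct.pack_into("<I8sI", mem, off, pid, name.encode(), nxt)

write_proc(0, 4, "System", 16)
write_proc(16, 412, "smss", 32)
write_proc(32, 988, "evil", 0)

def walk(off):
    # Follow the "next" pointers just like a process-list plugin would.
    procs = []
    while True:
        pid, raw, nxt = struct.unpack_from("<I8sI", mem, off)
        procs.append((pid, raw.rstrip(b"\0").decode()))
        if nxt == 0:
            return procs
        off = nxt

print(walk(0))  # [(4, 'System'), (412, 'smss'), (988, 'evil')]
```

The real work in memory forensics is knowing where these structures live and how their layouts change across OS versions, but the core activity is exactly this: interpreting raw bytes as the kernel's own bookkeeping.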

Besides the aforementioned items, memory forensics is also becoming heavily used due to its ability to support efficient triage at scale and the short time in which analysis can begin once indicators have been found. Traditional triage required reading potentially hundreds of MBs of data across disk looking for indicators in event logs, the registry, program files, LNK files, etc. This could become too time-consuming with even a handful of machines, much less hundreds or thousands across an enterprise. On the other hand, memory-based indicators, such as the names of processes, DLLs, services, and kernel drivers, can be checked by only querying a few MBs of memory. Tools such as F-Response make this fairly trivial to accomplish across huge enterprise environments and also allow for full acquisition of memory if indicators are found on a particular system.
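The lightweight indicator check described above can be sketched in a few lines of Python. The host name, process names, and indicator list here are all made up for the example; in a real deployment the process names would come from a memory analysis tool run against each host.

```python
# Hedged sketch of memory-based triage: compare the process names
# recovered from a host's memory against a small indicator set.
indicators = {"evil.exe", "dump.dll"}

def triage(host, process_names):
    # Returns the host and any indicator hits, sorted for stable output.
    hits = indicators & set(process_names)
    return (host, sorted(hits))

print(triage("ws-042", ["explorer.exe", "svchost.exe", "evil.exe"]))
# ('ws-042', ['evil.exe'])
```

The point of the sketch is the economics: a check like this touches a few MBs of memory per host, so it can be swept across thousands of machines, and only hosts with hits need a full memory acquisition.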

The last reason I will discuss related to the explosive growth of the use of memory forensics is the ability to recover encryption keys and plaintext versions of encrypted files. Whenever software encryption is used, the keying material must be stored in volatile memory in order to support decryption and encryption operations. Through recovery of the encryption key and/or password, the entire store (disk, container, etc.) can be opened. This has been used successfully many times against products such as TrueCrypt, Apple’s Keychain, and other popular encryption products. Furthermore, as files and data from those stores are read into memory they are decrypted so that the requesting application (Word, Adobe, Notepad) can allow for viewing and editing by the end user. Through recovery of these file caches, the decrypted versions of files can be directly reconstructed from memory.
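As a rough sketch of why keys resident in RAM are findable at all, the example below scans a fake memory buffer for small windows of high-entropy bytes, since key material tends to look random against the surrounding data. This is only a crude heuristic invented for illustration; real tools such as aeskeyfind instead verify AES key-schedule relationships, which is far more precise.

```python
import math

def entropy(buf):
    # Shannon entropy (bits per byte) of a small buffer.
    counts = [buf.count(b) for b in set(buf)]
    n = len(buf)
    return -sum(c / n * math.log2(c / n) for c in counts)

# Fake "memory dump": padding, then 32 distinct bytes standing in for
# key material, then more padding.
mem = b"\x00" * 64 + bytes(range(32)) + b"A" * 64

# Scan non-overlapping 32-byte windows (a real scan would use a much
# finer stride) and keep the ones that look key-like.
candidates = [off for off in range(0, len(mem) - 31, 32)
              if entropy(mem[off:off + 32]) > 4.9]

print(candidates)  # [64] -- only the "key material" window stands out
```

Once a candidate key is recovered this way, the encrypted store itself can be unlocked, which is exactly the capability Volatility's TrueCrypt modules provide in a more rigorous form.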

AFoD: The rumor going around town is that you're involved with some sort of memory forensics book? Is there any truth to that?

Case: That rumor is true! Along with the other core Volatility developers (Michael Ligh, Jamie Levy, and AAron Walters), we have recently written a book: The Art of Memory Forensics: Detecting Malware and Threats in Windows, Linux, and Mac Memory. The book is over 900 pages and provides extensive coverage of memory forensics and malware analysis across Windows, Linux, and Mac. While it may seem like it covers a lot of material, we originally had 1100 pages of content before the editor asked us to reduce the page count. The full table of contents for the book can be found here.

The purpose of this book was to document our collective memory forensics knowledge and experiences, including the relevant internals of each operating system. It also demonstrates how memory forensics can be applied to investigations ranging from analyzing end-user activity (insider threat, corporate investigations) to uncovering the workings of the most advanced threat groups. The book also spends a good bit of time introducing the concepts that are needed to fully understand memory forensics. There is an entire chapter dedicated to memory acquisition – a deeply misunderstood topic that can have drastic effects on people’s subsequent ability to perform proper memory analysis. An added bonus of this chapter is that we worked with the authors of the leading acquisition tools to ensure that our representation of the tools was correct and that we accurately described the range of issues that investigators need to be aware of when performing acquisitions.

The book is structured so that we introduce a topic (processes, kernel drivers, memory allocations, etc.) for a specific operating system, explain the relevant data structures, and then show how Volatility can be used to recover the information in an automated fashion. Volatility was chosen due to our obvious familiarity with it, but also due to the fact that it is the only tool capable of going so deeply and broadly into memory. The open source nature of Volatility means that readers of the book can read the source code of any plugins of interest, modify them to meet the needs of their specific environment, and add on to existing capabilities in order to expand the field of memory forensics. With that said, the knowledge gained from the book is applicable to people using any memory forensics tool and/or those who wish to develop capabilities outside of Volatility.

Along with the book, we will also be releasing the latest version of Volatility. This 2.4 release includes full support for Windows 8, 8.1, Server 2012, and Server 2012 R2, TrueCrypt key and password recovery modules, as well as over 30 new Mac and Linux plugins for investigating malicious code, rootkits, and suspicious user activity. In total, Volatility 2.4 has over 200 analysis plugins across the supported operating systems.

AFoD: What do you recommend to people who are looking to break into the digital forensics field? What would you tell someone who is in high school compared to someone who is already in the middle of a career and looking to make the switch?

Case: To start, there are a few things I would tell both sets of people. First, I consider learning programming to be the most important and essential skill. The ability to program removes you from the set of investigators who are limited by what their tools can do for them, and the skill also makes you highly attractive to potential employers. As you know, when dealing with advanced malware and attackers, existing tools are only the starting point, and many customizations and “deep dives” into data outside tools’ existing functionality are needed to fully understand what occurred. To learn programming, I would recommend starting with a scripting language (Python, Ruby, or similar). These are the easiest to learn and program in, and the languages are pretty forgiving. There are also freely accessible guides online as well as great books on all of these languages.

The other skill I would consider essential is at least a moderate understanding of networking. I don’t believe that people can be fully effective host analysts without some understanding of networking and how data flows through the environment they are trying to protect or investigate. If the person wants to become a network forensics analyst, then they obviously need the base set of skills to even be considered. To learn the basics of networking, I would recommend starting by reading a well-rated Network+ study book. This will teach you about routing, switching, physical interfaces, subnetting, VLANs, etc. After understanding the hardware devices and how they interact, you should then read Volume 1 of TCP/IP Illustrated. If you can read C, I would recommend reading Volume 2 as well, but know that the book can be brutal. It is over 1000 pages and walks you literally line-by-line through the BSD networking stack. You will be a master if you can finish and understand it all. It took me a month of reading every day after work to get through it during a summer in college. If someone hasn’t read TCP/IP Illustrated then I seriously question his or her networking background. To quote a document that I find very inspirational related to security and forensics: “have you read all 3 volumes of the glorious TCP/IP Illustrated, or can you just mumble some useless crap about a 3-way handshake”.

As far as specific advice to the different audiences, I would strongly recommend that high school students learn at least some electronics and hardware skills. If you are going to do computer science, make sure to take some electrical engineering courses as electives in order to get hands-on experience with electronics. I plan on expanding on this more in the near future, but I truly think that in the next few years not being able to work with hardware will limit one’s career choices and will certainly affect your ability to do research. In short, current investigators can get away with only interacting with hardware when performing tasks like removing a hard drive or disassembling components by hand. As phones, tablets, and other devices without traditional hard drives or memory become standard (see “Internet of Things”), the ability to perform actions such as removing flash chips, inspecting hardware configurations, and interacting with systems through hardware interfaces will become commonplace. Without these skills you won’t even be able to image a hard drive - for example, if I gave an investigator with the currently most useful skills a “smart TV” and asked him or her to remove the hard drive in a forensically sound manner, do you think it would happen? Would the person grab an electronics kit and start pulling electrical components out? Most people in forensics would have no idea how to do that - myself included.

For people already in the field, I would play to your strengths. If you have a background in programming then use that to your advantage. Explain to your future employer how your programming background will allow you to automate tasks and help out in cases where source code review is needed. Being able to automate tasks is a huge plus and greatly increases efficiency while removing the chance for human error. If a person’s background is networking, then there are many ways he or she could transition into network forensics roles, whether as part of a SOC or a consultant. When transitioning roles I would make sure to ask any prospective employers about training opportunities at the company. If a person with an IT background can really get into the forensics/IR trenches while also getting quality training once or twice a year then he or she will quickly catch up to their peers.

AFoD: So where can people find you this year? Will you be doing any presentations or attending any conferences?

Case: The remainder of the year will actually be quite busy with speaking engagements. Black Hat just wrapped up, and while there we did a book signing, released Volatility 2.4 at Black Hat Arsenal, and threw a party with the Hacker Academy to celebrate the book’s release. In September I will be speaking at Archc0n in St. Louis, and in October I will be taking my first trip to Canada to speak at SecTor. I may also be speaking at Hacker Halted in October. In November I will be speaking at the Open Memory Forensics Workshop (OMFW) and the Open Source Digital Forensics Conference (OSDFC) along with the rest of the Volatility team. I also have pending CFP submissions to BSides Dallas and the API Cyber Security Summit, both in November. I am currently eyeing some conferences for early next year, including Shmoocon and SOURCE Boston, neither of which I have spoken at previously. Finally, if any forensics/security people are ever coming through New Orleans then they should definitely reach out. Several other local DFIR people and I can definitely show out-of-towners a good time in the city and have done so many times.

Wednesday, April 2, 2014

The State Of The Blog

I get enough people asking me about the fate of the blog that I thought it would make more sense to just crank out a blog post. I’m still here, but my time continues to be so limited that I’ve had to continue to put the blog on hold. A couple years back I started a great new job building out a world class cyber investigations team and that continues to take up the majority of my time. I’m planning a few blog posts about what I have learned as a hiring manager to help those who are looking to break into the field and how best to approach things like resumes, cover letters, and interviews.

What really killed my ability to stay on top of the blog was starting an MBA program last fall. It turns out a full-time job coupled with being a full-time MBA student doesn’t leave much free time. Pretty much anything that doesn’t involve my family, job, or school will be on hold until I graduate in the spring of 2015 or the University of Florida punts me out of their MBA program. I’ve managed to survive one term so far and I’m cramming for finals for my second term this week. Three more terms to go after that.

I’m hoping to carve out some time to crank out a couple blog posts now and again before graduation and then get back to my usual blogging schedule of a blog post every two to four weeks. I have a couple blog posts that I really want to get out this year including some interviews.

Monday, September 23, 2013

Ever Get The Feeling You’ve Been Cheated? (Part 2)

I’m back in graduate school these days, which is one of the reasons why I’m long overdue on this blog post. Returning to school has provided me with the perspective of a student when thinking about the issue of digital forensics degrees. The more I think about it, the less I like the idea of the digital forensics academic programs compared to some alternatives.

The last blog post resulted in plentiful public and private feedback. A common question was what I expected from graduates of digital forensics programs. I don’t expect someone with a digital forensics degree and no experience to “hit the ground running” where they are immediately cranking out competent digital forensics exams. What I do expect from undergraduate students is that they will be able to perform basic digital forensics exams with about six months of substantial training from my team. I also expect that they will be able to talk intelligently about file system forensics in the initial job interview. If a candidate doesn’t know digital forensics beyond the tools, they were cheated and they’re yet another digital forensics degree victim. I might as well just draw a chalk outline around the chair they sat in for the interview because it’s a crime scene.

If a candidate has a graduate degree in digital forensics, I have the same six month expectation of when they can start to perform acceptable digital forensics exams. Additionally, they had better be able to keep up in an advanced NTFS discussion during the interview. I won't go into the specifics here because I don't want to give away my hiring methods and questions, but I expect a working knowledge of NTFS from the undergraduate degree holders and much more out of the people with a graduate degree.  If you have that shiny new digital forensics graduate degree, you also better have something you are passionate about and skilled at when it comes to the digital forensics world.

So how do you get to the place where you can be successful in a job interview and land that first job? In general, forget about getting a digital forensics degree at an undergraduate level. You’re better off building a firm intellectual foundation for yourself by mastering the fundamentals of computer hardware and software by going through a program such as computer engineering, electrical engineering, or a similarly structured information technology program. Most digital forensics programs are just warmed over mediocre information technology programs with enough poorly taught digital forensics content so that the school can call it a digital forensics degree.

If you want to be excellent at digital forensics, you need a strong understanding of the fundamentals of the technology that you are going to be investigating. The medical profession figured this out a long time ago when it came to training doctors. Medical school is about teaching students the fundamentals before they move on to their more specialized job roles. Specialties such as radiology and pathology are roughly similar to what we do in the technical world. Both of those jobs require a rigorous general education in medical school before more highly specialized training through residency and fellowship educational processes.

If someone in high school were to come to me today and ask me the best way to prepare for a digital forensics career, I would tell them to find the best value they can in a degree such as computer or electrical engineering and to supplement that education with some specialized digital forensics training. The specialized training could take the form of a strong digital forensics undergraduate minor, a graduate or undergraduate certificate program, or a full digital forensics graduate program. Some of the best programs in the digital forensics world aren’t actually full digital forensics programs. You do not have to get a degree in digital forensics to prepare for and begin a rewarding career in the field.

Value is important when it comes to education, which is why I caution students about taking on excessive student loans. Racking up $80,000 in loans for a mediocre digital forensics degree is senseless. I can understand higher student loans if someone is fortunate enough to get into certain top-tier schools such as Cal Tech, MIT, or Stanford, but the math just isn’t likely to work for an expensive degree in digital forensics from Burning Stump Junction University (BSJU). If you are here in the United States, you likely have very fine options that are being offered in your state schools at in-state tuition prices. You will likely be much better off getting that computer engineering degree from the University of Your State at in-state tuition prices than going into massive debt for a digital forensics degree at BSJU.

Sunday, July 7, 2013

Gary “Doc” Welt (You Can Help Batman)

In the way of warning, this blog post has almost nothing to do with digital forensics and everything to do with something more important. One of the nice things about having my own blog is that I am my own editor and I don’t have to ask permission to write about something that has very little to do with the original purpose of the blog.

I originally set out to write a follow-up to my last post dealing with the deficiencies that I’m seeing in digital forensics education. That blog post generated quite a bit of interest and I’m grateful for all of the responses both in public and private. I’ll get back to that topic in my next blog post and, as an added “bonus”, I’ll even talk about the new CCFP “cyber forensics” certification being offered by ISC2.

But none of that seems all that important to me as I write this on the 4th of July weekend given how many people over the years have sacrificed everything they had to defend the United States of America and the rest of Western Civilization against a whole host of profoundly bad people. Even a cursory glance at world history shows that peace and prosperity is not the natural state of human affairs. Being able to sustain a place like the United States requires an incredible amount of continuous effort by many people with the brunt of the burden falling on the United States military.

By day, I am a mild mannered digital forensics geek who has the honor and privilege to lead a pack of world-class border collies. By night (and sometimes weekends), among other things, I’m a rookie competitive action shooter. I started doing this early this year and it’s been an amazing experience in large part because of the people involved in the sport. They tend to be some of the nicest and most generous people that I've encountered in many years and this generosity reminds me of the digital forensics community in many ways. My primary game is USPSA action shooting and my home club is the Wyoming Antelope Club in Clearwater, Florida.

It’s through the Wyoming Antelope Club that I became aware of a real live superhero by the name of Gary “Doc” Welt. “Doc” Welt spent around thirty years of his life as a United States Navy SEAL. You can read about Gary’s career here, and you will also read why I’m writing this. Gary Welt has been diagnosed with Amyotrophic Lateral Sclerosis (ALS), which is also known as Lou Gehrig’s disease. ALS is a very tough set of cards to be dealt. Gary provides a very clear explanation of what he’s up against in this YouTube video. The life expectancy of someone diagnosed with it tends to be two to five years. There is a small percentage of people who live beyond that time. This is the same disease that Stephen Hawking has and, as CNN explains...

Most people with ALS survive only two to five years after diagnosis. Hawking, on the other hand, has lived more than 40 years since he learned he had the disease, which is also known as Lou Gehrig's Disease in America and motor neuron disease, or MND, in the United Kingdom.

So if anyone has a shot at beating the odds in the face of ALS, it’s a superhero like Gary Welt. What is interesting is that Gary’s military service might be one of the things that increased his risk for getting ALS. The Mayo Clinic reports that:

Recent studies indicate that people who have served in the military are at higher risk of ALS. Exactly what about military service may trigger the development of ALS is uncertain, but it may include exposure to certain metals or chemicals, traumatic injuries, viral infections and intense exertion.

I call Gary Welt a superhero because he is one. Think about it. No one would deny that Batman is a superhero, but he’s a superhero who doesn’t have any intrinsic superpowers. He wasn’t bitten by a radioactive spider or exposed to gamma radiation which provided him special powers. He wasn’t born on Krypton and sent to Earth. Batman is a superhero because he's an exceptionally trained, highly intelligent, and supremely well-conditioned human being with a vast equipment budget. That also describes the US Navy SEALS. Most people can’t even get into their training pipeline much less complete it because of the mental and physical toughness that is required. They do incredibly complicated and challenging work with some of the most sophisticated weapons systems in the world. So even if you are mentally and physically tough enough, you aren’t going to become a SEAL if you are a dullard.

What about equipment? We all know that Batman has all sorts of fantastic equipment like the Batmobile, Batcopter, Batcycle, Batboat, and all the rest of his goodies. The SEALS have their own stuff that might as well be right out of a comic book. Check out the picture below.

[Photo: SEAL Delivery Vehicle, U.S. Navy photo 050505-N-3093M-007 (full caption below)]

That’s right. The SEALS have their own version of the Batsub. They just call it a SEAL Delivery Vehicle. Put some capes on those guys and give it a bit more of a snappy name and you’ve got a picture right out of a comic book.

Not enough to convince you? Fine. The SEALS have their own version of the BatBuggy. Look at this:

[Photo: Desert Patrol Vehicle, U.S. Navy photo 020413-N-5362A-007 (full caption below)]

The SEALS just happen to call their BatBuggy a Desert Patrol Vehicle. Not the most creative name, but it can be equipped with a variety of weapons including a 40mm grenade launcher so it doesn’t need one. Good luck with that, Joker.

The only meaningful difference that I can see between a superhero like Batman and a superhero like Gary Welt is that Batman is fictional and “Doc” Welt and the rest of his SEAL brothers are real. “Doc” Welt is a superhero who has devoted his life to fighting bad guys and protecting the rest of us. Now we have an opportunity to try to return the favor by helping him out when he’s in a tough fight. How often do you get to say that you helped a real life superhero?

As the Red Circle Foundation webpage set up for him explains:

We are raising money to help Gary and his wife modify their home for his condition and for wheelchair access. The VA (Veterans Affairs) does a lot of good, but they are a slow moving bureaucracy and time is critical for the Welt family.

The primary way that you can help Gary is by donating money via the Red Circle Foundation website. I think the current setup is that any money you give via that portal will result in 90 percent going to Gary and 10 percent helping to pay for Red Circle Foundation costs. If you follow Gary’s progress at the HelpGaryWelt Facebook page, you’ll see them discussing how that works.

I know the digital forensics community to be a very generous bunch with a culture of sharing and helping one another out. Here’s an opportunity for us to stand together to help someone who has done so much for others. How often can you say that you got to help Batman? Please consider giving anything you can to help a real live superhero like Gary “Doc” Welt.

Photo Credits and Captions

SDV Photo:

Atlantic Ocean (May 5, 2005) - Members of SEAL Delivery Vehicle Team Two (SDVT-2) prepare to launch one of the team's SEAL Delivery Vehicles (SDV) from the back of the Los Angeles-class attack submarine USS Philadelphia (SSN 690) on a training exercise. The SDVs are used to carry Navy SEALs from a submerged submarine to enemy targets while staying underwater and undetected. SDVT-2 is stationed at Naval Amphibious Base Little Creek, Va., and conducts operations throughout the Atlantic, Southern, and European command areas of responsibility. U.S. Navy photo by Chief Photographer's Mate Andrew McKaskle (RELEASED)

DPV Photo:

Camp Doha, Kuwait (Feb. 13, 2002) - U.S. Navy SEALs (SEa, Air, Land) operate Desert Patrol Vehicles (DPV) while preparing for an upcoming mission. Each Dune Buggy" is outfitted with complex communication and weapon systems designed for the harsh desert terrain. Special Operations units are characterized by the use of small units with unique ability to conduct military actions that are beyond the capability of conventional military forces. SEALs are superbly trained in all environments, and are the masters of maritime Special Operations. SEALs are required to utilize a combination of specialized training, equipment, and tactics in completion of Special Operation missions worldwide. Navy SEALs are currently forward deployed in support of Operation Enduring Freedom (OEF). U.S. Navy photo by Photographer's Mate 1st Class Arlo Abrahamson. (RELEASED)

Friday, May 17, 2013

Ever Get The Feeling You’ve Been Cheated?

The famous John Lydon quote strikes me as an appropriate title for a blog post on the state of digital forensics academic programs in the United States. I have been a hiring manager for high tech investigations teams since about 2007 and was involved in assessing candidates for the teams that I was on before I became a leader. During the early years, it was rare to see applicants who had degrees in digital forensics, but I’m finding it increasingly common in recent years. One of the things that I have been struck by is how poorly most of these programs are doing in preparing students to enter the digital forensics field.

It’s not just undergraduate programs that are failing to produce good candidates. I have encountered legions of people with master's degrees in digital forensics who are “unfit for purpose” for entry level positions, much less for positions that require a senior skill level. The problem almost always isn’t with the students. They tend to be bright and eager people who just aren’t being served all that well. One of the core issues that I see with the programs that aren’t turning out prepared students is the people who are teaching them. It’s almost universal that programs whose professors do not have a digital forensics background turn out students who don’t understand digital forensics. This seems like an obvious and intuitive statement, but given how many digital forensics programs are being led and taught by unqualified people, it apparently isn’t obvious enough.

If you want to learn to be a good digital forensics examiner, you have to be taught by people who are good digital forensics examiners. If you are interested in learning digital forensics from an academic program, it is your responsibility to look beyond the promotional material and be an informed and educated consumer of your education. The last thing you want is a massive student loan and a degree that looks good on a resume, but then falls apart during a technical interview for that great entry level job that you had your heart set on. One of the best ways to make sure you don’t get burned is to carefully study the backgrounds of the professors who will actually be teaching your classes. We’re a bit too early in the development of the digital forensics field to see a host of fully tenured professors with PhDs in digital forensics, but that doesn’t mean you can’t screen out professors who have no earthly clue what they are teaching. Pay very close attention to the curriculum vitae of the people who are going to be teaching your classes. Does the CV show any actual interest in the field of digital forensics? I have seen many CVs for people teaching digital forensics that don’t show any research or training in the digital forensics field. What it looks like is that quite a few institutions have decided that the digital forensics field is hot right now and, to capitalize on it, they press unqualified professors into teaching digital forensics classes just so they can lure paying students (and their tuition money) into their programs. Avoid these programs. Your future depends on it.

We are in a time where there are many fine academic programs available to aspiring digital forensics people who wish to learn digital forensics and launch successful careers. Unfortunately, there are more bad programs than good ones. It’s vital if you are going to spend the time and money getting an education that you don’t get cheated. It’s your life and your responsibility to look beyond the glossy promotional material and make sure you are trusting the right people to get you where you want to go.

Sunday, February 24, 2013

Microsoft Windows File System Tunneling

I’m way overdue on doing a proper blog post so I thought I’d swing back into action by showing everyone some really esoteric Microsoft Windows knowledge that I picked up by accident several years ago while working on a research project. Around 2010 or so, I was working on Adobe Flash cookie research with Kristinn Gudjonsson of log2timeline fame. During that research, I discovered an odd occurrence dealing with Windows date and time stamps which led me into learning about file system tunneling.

So for purposes of this blog post, you don’t need to know anything about Adobe Flash cookies. I presented on them during the CEIC 2010 Conference, and at the end of my presentation I went into the file system tunneling aspect of what I had found. This blog post covers that portion of the presentation. The only thing you need to know for purposes of this blog post is that Adobe Flash cookies, at least back when I was doing the research on them many years ago, can be deleted and then quickly replaced with a file of the exact same name and file location as the previous one.

For purposes of this demonstration, we’re using an Adobe Flash cookie file with the name “settings.sol”. I think I was using EnCase Version 6 for making this presentation, in case anyone is curious about the tool being used.

Here is our baseline “settings.sol” file before any changes are made to it.

[Screenshot: baseline time stamps for settings.sol]

Here is what happens when a change has been made and the original file has been deleted and replaced with a file of the exact same file name and location. I don’t have the path listed in these screenshots, but the paths of the new files of the same name are being placed in the exact same location as the old files of the same name.

Here is the brand new file with the exact same name in the exact same location:

[Screenshot: time stamps for the new settings.sol file]

So we are seeing what we would expect for a brand new file that was created after the old one was deleted…except look at the file creation timestamp. It exactly matches the file created timestamp of the old file that was just deleted. That’s not accurate, is it? We know the file was actually created on 04/01/10 12:17:27PM, but it’s showing the created file time of the deleted file.

Here is another way of looking at it.

[Screenshot: the deleted and new settings.sol entries side by side]

Here we see the old “settings.sol” file that was deleted and we also see the new “settings.sol” file that was created in the exact same location. As you can see, the new file has last accessed, last written, and entry modified time stamps that show when it was created, but it has the wrong file created time. It kept the file creation time of the old file of the exact same name that was in the exact same location.

What on earth is going on?

Trying to figure this out almost drove me completely insane. Like any good digital forensics person, I could absolutely not let this one go and had to run it down until I had an answer. I lost track of how many digital forensics gurus I contacted for help. At first, I was a little reluctant to do so in case it turned out to be something obvious that I had missed. Who wants to humiliate themselves with a flagrant display of ignorance in front of some digital forensics icon? I was able to save face because the responses I received were all along the lines of telling me that I had an interesting problem on my hands and they didn’t have an answer for me. The person who solved the problem in the end was Eoghan Casey. Eoghan listened to me describe what I was seeing and said he thought I might be observing file system tunneling. I did some further research and it turned out that Eoghan was exactly right.

You can find the official Microsoft write up on file system tunneling in their knowledge base 172190 article at http://support.microsoft.com/kb/172190. The relevant text (and the spelling errors are theirs) from that article is:

The Microsoft Windowsproducts listed at the beginning of this artice contain file system tunneling capabilities to enable compatibility with programs that rely on file systems being able to hold onto file meta-info for a short period of time.

When a name is removed from a directory (rename or delete), its short/long name pair and creation time are saved in a cache, keyed by the name that was removed. When a name is added to a directory (rename or create), the cache is searched to see if there is information to restore. The cache is effective per instance of a directory. If a directory is deleted, the cache for it is removed.

I haven’t performed any recent research to update this information, but at the time I was doing this research in 2010, file tunneling impacted a broad range of Microsoft operating systems including 2k, XP (including 64bit), and NT. Microsoft hasn’t updated the article since 2007, but I’d be surprised if it wasn’t an issue for Windows 7 and potentially Windows 8.

The Microsoft 172190 article goes on to explain that file system tunneling is an issue for both FAT and NTFS file systems because:

The idea is to mimic the behavior MS-DOS programs expect when they use the safe save method. They copy the modified data to a temporary file, delete the original and rename the temporary to the original. This should seem to be the original file when complete. Windows performs tunneling on both FAT and NTFS file systems to ensure long/short file names are retained when 16-bit applications perform this safe save operation.
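The “safe save” sequence the article describes is easy to reproduce. Here is a minimal sketch (the file names are illustrative); on a FAT or NTFS volume with tunneling enabled, the file that emerges from the rename would inherit the deleted original’s creation time:

```shell
# The MS-DOS "safe save" pattern: write the new data to a temporary file,
# delete the original, then rename the temporary over the original name.
printf 'original contents' > settings.sol        # the original file
printf 'modified contents' > settings.sol.tmp    # the modified copy
rm settings.sol                    # name removed; its creation time enters the tunnel cache
mv settings.sol.tmp settings.sol   # name recreated within the 15-second window
cat settings.sol                   # new data, but (on Windows) the old creation time
```

To an application the file looks continuously present, which is exactly why the tunnel cache exists, and exactly why the creation time stamp can mislead an examiner.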

Microsoft explains how to disable file system tunneling in the same 172190 article. All you need to do is to go to the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem setting in the registry and add a DWORD called MaximumTunnelEntries which you then set to 0.

So the Microsoft article talks about “file systems being able to hold onto file meta-info for a short period of time”. That short period of time is a default setting of fifteen seconds. However, if you wish to change that time, it’s yet another registry tweak. You just have to head over to the same key you would use to disable file system tunneling, but this time add a new DWORD called “MaximumTunnelEntryAgeInSeconds” and set it to whatever number of seconds you would like. This is all explained in the Microsoft 172190 article.
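For convenience, here is a sketch of those two edits as reg.exe commands run from an elevated Windows command prompt. The key and value names come from the KB 172190 article; the 5 in the second command is just an example, and you will likely need to reboot before the change takes effect:

```shell
:: Disable file system tunneling entirely (zero cache entries):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v MaximumTunnelEntries /t REG_DWORD /d 0 /f

:: Or leave tunneling on but change the cache window from its 15-second default:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v MaximumTunnelEntryAgeInSeconds /t REG_DWORD /d 5 /f
```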

So like any good forensic geek, I wanted to test what I had found because I didn’t want to look foolish in front of a big audience of people at CEIC. I set up an experiment based on making Flash Cookie modifications that would initiate the same deletion and creation process.  Again, all you need to know for purposes of this blog post is that I was making changes that resulted in near instant file deletion and creation of files with the exact same name in the exact same location.

For example, I used a Windows XP SP2 system that had file system tunneling enabled and obtained these results.

[Screenshot: XP SP2 with tunneling enabled; new files keep the old creation time]

I was making changes that caused these “settings.sol” files to be deleted and then replaced with files of the exact same name. Like before, we are seeing the changes that we would expect from new files being created, in that their accessed, written, and modified time stamps reflect the accurate time that they were created. However, all of these new files are keeping the same file created stamp as the original file. This is file system tunneling at work.

How do we know for sure?  What I did next was to follow the registry modification that I explained before to disable file system tunneling and then to run the experiment again.  Again this was a Windows XP SP2 system, but with file system tunneling disabled.

[Screenshot: XP SP2 with tunneling disabled; all time stamps are accurate]

Now we see that all of the file times are matching the time that the files were created including…finally… the file created times.

And because like all good digital forensics people, I’m always suspicious of what I’m seeing, I repeated the experiment a third time, but this time I used the registry to turn file system tunneling back on which resulted in...

[Screenshot: XP SP2 with tunneling re-enabled]

…the exact same file system tunneling behavior that I had observed at the start of all of this.

Sunday, November 25, 2012

AFoD Interview With Carlos Cajigas

I have spent the last several months working on relocating from the New York metropolitan area to Tampa, Florida. Now that I’m starting to get settled into Florida, I will be blogging on a more consistent schedule. Carlos Cajigas is one of the many sharp Florida-based digital forensics people that I have had the privilege to meet in my travels around the state. Carlos is an accomplished digital forensics examiner as well as a bit of an entrepreneur. He’s very passionate about the use of Linux in digital forensics and it didn’t take talking to him for very long to realize that he would be a great interview subject for the blog.

Carlos Cajigas Professional Biography

Carlos, a native of San Juan, Puerto Rico, is the Training Director and Senior Forensic Analyst for EPYX Forensics. Additionally, he is employed by the West Palm Beach Police Department (FL) as a Detective/Examiner assigned to the Digital Forensics Unit, with over 9 years of law enforcement experience. He has conducted examinations on hundreds of digital devices including computers, cell phones, and GPS devices, to go along with hundreds of hours of digital forensics training. His training includes courses offered by Guidance Software (EnCase), National White Collar Crime Center (NW3C), and the International Association of Computer Investigative Specialists (IACIS).

Carlos holds B.S. and M.S. degrees from Palm Beach Atlantic University (FL). In addition, he holds various certifications in the digital forensics field to include EnCase Certified Examiner (EnCE), Certified Forensic Computer Examiner (CFCE) from IACIS, and Certified Digital Forensic Examiner (CDFE) from Mile2. Carlos is a Florida Department of Law Enforcement (FDLE) certified instructor with experience teaching digital forensic classes. He is an active member of both the International Association of Computer Investigative Specialists (IACIS) and Miami Electronic Crimes Task Force (MECTF).

Most recently, Carlos has taken up writing a blog for EPYX Forensics to assist other digital forensic examiners in using free open source Linux-based tools to do their jobs. He hopes to develop and implement course training in this area in the belief that there are alternatives to expensive commercial software and training.

A Fistful of Dongles Blog: What led you into becoming a law enforcement officer?

Carlos Cajigas: Although police work has always appealed to me, the decision to join law enforcement didn’t enter my mind until late in my 20’s.  At the age of 17, I moved from Puerto Rico to Palm Beach County, Florida to pursue a career in baseball. At that time my priorities were anything baseball and my responsibilities were simple: keep making good grades and go to practice. Although I was a fairly talented baseball player, I also knew that I wasn’t the most gifted. From an early age, I learned that hard work could make up for the areas where one lacks talent. That is a lesson that still holds value even to this day. So the answer was simple, I played and practiced as much as I could while working hard every day. Subsequently, I received a baseball scholarship to Palm Beach Atlantic University. 

I continued to work hard and had some success on the field. I broke a few records and became an MVP. It truly was a great experience that I will always remember. Unfortunately, my collegiate baseball career ended after 4 years of eligibility. So there I was: 22 years of age with a decision to make, unsure of what I really wanted to do.

On September 11th, 2001, I was halfway through grad school when the events of that day changed many lives forever. The impact that day had on me led me to join law enforcement. Although law enforcement work always appealed to me, that day I decided that I wanted to make a career out of giving back to the community. I wanted to be part of another team with interests and values similar to mine. Baseball in many ways prepared me for that jump. I finished grad school and applied to the West Palm Beach Police Department. I have been a police officer now for nine years. Throughout my career, I have been part of multiple units and many teams. I have been given opportunities to do some good and have taken advantage of them. Our city is a great city and our Department is top notch. Joining law enforcement is another decision that I am glad I made.

AFoD: What happened once you joined the West Palm Beach police? What was your initial training like and how did you end up doing digital forensics work?

Cajigas: After completing the Academy and once at the Department, I went through an initial series of stages that I had to pass before being allowed inside of a patrol car. They included training in defensive tactics, driving, and firearms, among others. These were the core skills that were taught to trainees. The department was very strict about their minimum qualifications.

I then progressed into a multi-month field training program that required me to go on patrol while a qualified senior officer sat next to me and evaluated me. My trainers were very good, no-nonsense, seasoned officers.  Learning at this stage was done at a very fast pace. I was taught radio procedures, report writing and everything else.  Every new call brought about a new challenge. On-the-fly problem solving skills were a must have and strict emphasis was given to safety. 

Upon graduation from the field training program, I began responding to calls by myself. I completed a one year probationary period and remained as a road patrol officer for about four years. I then joined a specialized unit created to reduce crime in a specific area of our city. Our team was made up of 6 officers and we were responsible for just about any crime or issue in this area. This unit provided me with an opportunity to conduct investigations from beginning to end. Some investigations required travel to neighboring cities and others undercover work. 

When a position opened up for the Digital Forensics Unit, I interviewed.  After a few months, I was notified that I had been awarded the position.  That was a very exciting day. I have always had a passion for computers and now I was being given the opportunity to combine police work with that passion. A few years later with a few hundred hours of digital forensic training under my belt, I enjoy computers even more.   

AFoD: What is life like working on the Digital Forensics Unit?

Cajigas: Life in the Digital Forensics Unit is full of activity. Our unit is part of the Palm Beach County Internet Crimes against Children (ICAC) Task Force and the Palm Beach County Regional Forensics Task Force. The task forces are made up of investigators and examiners from different participating agencies in the county. Our ICAC task force has a very proactive approach towards pursuing individuals who hurt children. As a result, we conduct an average of one search warrant every other week involving child exploitation. 

The operations that we conduct can take us from one side of the county to the other with very short notice. We travel in our mobile forensics van, and we triage and preview devices on location. In most cases, we can retrieve the necessary evidence that the investigator needs to make an arrest on scene.

As part of our duties to the Regional Forensics Task Force, we provide assistance to neighboring agencies without forensic units. The cases that we assist with can range from simple thefts to homicides and everything in between. On any day, it is quite common that we might get a request to process ten phones and five computers. The amount of devices that we see at the lab daily poses challenges and opportunities to learn something new every day. This is the part of the job that I enjoy the most - the process involving identifying a problem, looking for a solution and then implementing that solution.

Recently during a case, we came across a 3TB hard drive with a corrupt GPT. Once the drive was imaged, our tool of choice was unable to see the directory structure of the volume. We needed specific files, and we wanted to be able to access the volume inside of the drive without restoring or editing the image. After a little bit of research, we ended up accessing the volume by mounting the E01 image in Linux and using the program ‘Testdisk’ to point us to the starting sector of the volume. The mount command then mounted the volume, and the volume became accessible.
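As a sketch of the procedure Carlos describes (the image name and starting sector are made up for illustration; ewfmount comes from the libewf tools, and Testdisk or The Sleuth Kit’s mmls reports the real starting sector):

```shell
# Convert the starting sector reported by Testdisk into the byte offset
# that mount needs. Standard drives use 512-byte sectors.
START_SECTOR=264192
echo $((START_SECTOR * 512))   # byte offset of the volume within the image

# The mount itself needs root and libewf installed:
#   sudo ewfmount case.E01 /mnt/ewf    # exposes the raw image as /mnt/ewf/ewf1
#   sudo mount -o ro,loop,offset=$((START_SECTOR * 512)) /mnt/ewf/ewf1 /mnt/volume
```

Mounting read-only through the loop device leaves the E01 untouched, which is the point of the exercise: the evidence image is never restored or edited.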

I have learned that Linux can be a little complicated; however, it is powerful and free. In this instance, once we got past the complicated part, we were left with powerful and free. I see the usefulness and versatility that Linux has when used in forensics. On the days that we have time to catch up, I dedicate a few hours to learning Linux in the hopes that it can become another tool in our tool belt in the battle against online criminals.  

AFoD: So how did you discover that Linux could be a powerful tool for digital forensics examinations?

Cajigas: I first got introduced to Linux back in 2007, before learning forensics. I stumbled across Ubuntu version 7.04 out of curiosity and necessity. During those years, I used to spend a lot of time building and fixing computers. I decided to try Ubuntu when a friend requested my help with recovering family photos from his BSOD’d Windows PC.  I burned the ISO to a CD and booted his “dead” computer from the CD drive. Ubuntu loaded and his computer “magically” came back to life.  His drive was still healthy and the directory structure was intact.  All of his family photos were there waiting to be copied. My friend was happy and I was hooked. To this day it still amazes me how the entire OS can run from a CD, just for the cost of the CD.

I began testing and installing as many variants of Linux on as many computers as I could just to see what I could learn. For instance, it took me about a week of testing different distributions before finally getting the right version of xubuntu working on a PS3. As a result, I installed Ubuntu on a flash drive and carried it with me, just in case of a “dead PC” emergency. 

Fast forward a few years and after some forensics training, I decided to try Linux for forensics. I did it out of curiosity and necessity. I saw that great tools like the SIFT workstation were already out there, so I was curious as to how to use them. I needed a second method of doing forensics, so that at the very least, I could use them to validate procedures. I downloaded as many of the forensic distributions as I could and began testing the programs. Just like with any tool, there is always a learning curve.  Unfortunately when it comes to Linux, that learning curve can sometimes be curvier. Just learning the commands for a specific program could take hours of research and trial and error. But once you learn the program, the results can be very rewarding.

There are programs in Linux for just about every action needed in forensics. Some of the smarter minds in forensics build these programs and release them for free for the benefit of the community. Unfortunately, the documentation on how to use programs in Linux can sometimes be difficult to find. I have found myself reading blog after blog gathering bits from one site and pieces from another, while teaching myself how to use these tools.  As a result, I have decided to document procedures on how to use forensic tools on your own standalone Ubuntu 12.04 machine. My intent is to help other digital examiners in using open source tools during the course of their investigations for free.

I started documenting these procedures at the beginning of the year, and I plan on adding many more. So far, I have documented procedures on how to recover Win7 passwords, using Testdisk, acquiring and mounting E01’s, recovering IE history, file carving, extracting files by record number, registry analysis, and parsing the MFT with analyzeMFT. My attempt is to outline the procedures from beginning to the end for users new to Linux, while explaining every step with screenshots. You can find them at http://epyxforensics.com/blog

AFoD: What can Linux-based tools do for a digital forensics examiner that the increasingly wide range of Windows-based tools can't do?

Cajigas: They might just save you some money! Windows based tools like EnCase, FTK and X-Ways are simply excellent tools. They combine the processes of acquiring, indexing, parsing, searching, recovering, and reporting all into one suite. In my opinion, there is no equivalent single program available in Linux that can compare to these great suites. Every lab should have at least one of these tools. What is available in Linux is an accumulation of different tools that, when put together, can accomplish almost all of the same things that these suites do. Some of the tools can do some tasks better; others, not so well.

But the tools will always accomplish their tasks for free, and they might help you when you don’t have the commercial tool needed for the job. 

Let’s say for example that you and your case could benefit from a timeline analysis and you do not have the tool of choice to build that timeline. Log2timeline is an excellent open-source framework for the automatic creation of super timelines. Log2timeline received the 2012 Forensic 4cast award for computer forensic software tool of the year.

Every examiner that I have talked to can remember that time when their Windows forensic tool of choice crashed and failed to accomplish some sort of task. I can personally recall instances in the lab when Windows based tools failed to image devices, and I reverted to using Guymager in Linux with absolute success. And best of all, these tools can all be run from a ten cent DVD. I recently participated in a Rob Lee webcast where he so accurately described the SIFT workstation as your own mobile computer forensics lab.

Linux-based open-source tools alone can be used to complete forensic examinations. Many of these tools have helped me during my investigations.  They were released free to the community, and I believe that we can all benefit from them.  

AFoD: Let's pick out a couple tools to use as examples. I've sung the praises of log2timeline here on the blog and will continue to do so in the future. Let's focus on a tool that might not be as well known. What is Guymager and why should someone consider using it over a Windows-based tool?

Cajigas: Guymager is an open source forensic imager with an easy to use graphical user interface (GUI). The tool, created by Guy Voncken, was designed to be fast, especially on multi-processor machines. It produces DD, E01, or AFF images and conducts verifications upon completion. In addition, it creates an .info file that stores acquisition details including hashes, bad sectors, and SMART data. Because it runs on Linux, it often succeeds at acquiring those pesky drives that make Windows freeze and/or have trouble showing up in Disk Management.

Since acquisition is the one process that has to be done in 99% of examinations, Guymager is a tool I find myself using a lot of the time. Guymager can be downloaded from the Ubuntu Software Center.
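Guymager itself is GUI-driven, but the acquire-then-verify workflow it automates can be sketched at the Linux command line. The filenames below are placeholders for illustration; on a real acquisition the input would be a device node such as /dev/sdX, with conv=noerror,sync added so imaging continues past bad sectors.

```shell
# Create a stand-in "source" for illustration; a real source would
# be a block device like /dev/sdX.
printf 'raw disk contents' > source.img

# Bit-for-bit copy of the source into an evidence file.
dd if=source.img of=evidence.dd bs=4M

# Hash both source and copy; matching digests verify the acquisition.
md5sum source.img evidence.dd
```

Guymager performs this verification step automatically and records the resulting hashes in its .info file.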

Another open-source Windows analysis tool with an easy-to-use GUI is the Forensic Registry EDitor (FRED). FRED is a registry hive editor created by Daniel Gillen. It can navigate the directory structure of a hive and has a built-in hex viewer and data interpreter. Another handy built-in feature is its automated reporting, which can pull the “RecentDocs” and “TypedUrls” keys out of an NTUser.dat registry hive. FRED can be downloaded from penguin.lu.

For their ease of use, these two tools are a good start and a must-try for anyone interested in using Linux-based tools.

AFoD: What do you recommend for someone who wants to learn Linux and get to a point where they can comfortably leverage it for digital forensics examinations?

Cajigas: There will be a lot of reading involved, but these three steps will get you going in the right direction. The first step towards becoming familiar with Linux is to begin using it. As simple as it sounds, installing and using your favorite distribution will teach you a lot about how the OS works and how the directory structure is laid out. Once you know the layout, you are now able to spot the things that look right and the ones that don’t.

The next step is to become familiar with the shell. This is the stage where you learn commands like “cd, cp, rm, mv” and terms like input/output redirection. Redirecting the results of one command into a second command to get a single set of results is the Linux equivalent of killing two birds with one stone. Redirection is one of the most useful features of the shell. A well-written website on learning the Linux shell can be found at linuxcommand dot org.
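As a small generic illustration of the redirection idea: three commands chained with pipes, with one redirection at the end to save the result.

```shell
# Count how often each value appears, most frequent first, and
# save the combined result to a file in a single line.
printf 'beta\nalpha\nbeta\n' | sort | uniq -c | sort -rn > counts.txt
cat counts.txt   # beta is listed first with a count of 2
```

The same pattern scales up to real forensic work, such as extracting, sorting, and de-duplicating artifacts from a log in one pass.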

The last step in the familiarization process is to start playing with the forensic tools. This is the fun part. I have written some articles to get you started with the basic forensic tools, and many more can be found on the web. Just like with forensics, in Linux there is something new to be learned every day with tools that are available for just about any task.

The more you play with the tools, the more comfortable you get. Figure out what your need is, and learn how to accomplish that task with Linux. Chances are the next time that you need the same task completed, you will revert back to accomplishing it in Linux.

AFoD: Are there any Linux distros that you recommend over others for the beginning Linux user?

Cajigas: In my opinion, before using any of the forensic live distributions, anyone starting out on Linux should begin with Ubuntu. Ubuntu was first released in 2004 by a UK-based company called Canonical. Canonical provides support, patches, and bug and security fixes for eighteen months on each new release, keeping Ubuntu up to date.

Ubuntu was designed with ease of use in mind and comes with GUI based tools for installation, updating, personalization of the OS, and many more. It has built in support for a lot of different hardware, which translates to a good chance that it will boot and recognize the hardware in your computer. Due to its popularity, a web search will often point you in the right direction towards solving most of the problems you may encounter.

During basic Ubuntu use, mandatory interaction with the terminal is minimal. This gives the user time to get to know the OS before being forced to use the terminal for non-GUI tools. Ubuntu has been selected as the platform for popular live DVD distributions like the SIFT and DEFT. After you become comfortable using your installed version of Ubuntu, graduating to using live distributions no longer feels like unfamiliar territory.  

AFoD: Is there anything else that you'd like people to know?

Cajigas: Open source (Linux) forensic utilities are very useful as a supplement to commercial tools. However, there is good and bad when it comes to open source. The good: the tools are free and are just as powerful as, if not more powerful than, commercial tools. The bad: there is a learning curve, and they are harder to use. With that said, as part of EPYX Forensics, my colleagues and I want to bridge that gap so that forensic examiners are able to take full advantage of open source forensic tools. As I mentioned earlier, I have begun doing this by posting tutorials through the EPYX Blog. We are also putting together training courses for law enforcement, government, and private sector personnel that we plan to launch in early 2013.

The world of digital forensics is constantly evolving, and I believe there is a shift toward increased use of open source forensic utilities, especially given the expense of commercial tools. My hope is for all forensic practitioners to sit down and at least try the open source path – you just might like what you find.

Saturday, August 25, 2012

Cyber Pearl Harbor

I’m an amateur student of military history who has written several blog posts in the past discussing physical warfare concepts and how they can be applied to the cyber world. For example, I’ve written about how the Battle of the Atlantic and the Battle of Britain provide lessons that can be applied to cyber warfare. Thus, I’m the last person who will say that it’s inappropriate to apply lessons from the physical world to the information technology world. So while it can be an appropriate thing to do, it can also be done in a haphazard manner that doesn’t correctly respect the historical record. The term “cyber Pearl Harbor” is one that is increasingly being used in a manner that just doesn’t make sense from a military history perspective.

The Pearl Harbor attack involved the nation of Japan engaging its military to strike a substantial blow against the Pacific Fleet of the United States Navy. The damage inflicted on the US Navy gave the Japanese military a decisive military imbalance in the Pacific that it exploited until the American military was able to rebuild and regroup. In my mind, for an attack to be accurately labeled a “cyber Pearl Harbor,” it would need to involve a cyber attack that accomplished a similar objective. A reasonable example of a “cyber Pearl Harbor” would be a nation-state using a cyber attack to substantially degrade another nation’s ability to respond to future attacks in either the physical or cyber world. An incident where an attacker unexpectedly brings down a power plant without any further attacks isn’t a “cyber Pearl Harbor.” That’s certainly a serious incident, but it’s not equivalent to the Pearl Harbor attack.

Another term that I’m having even more trouble with is “cyber terrorism”. The American English version of the Oxford dictionary defines terrorism as “the use of violence and intimidation in the pursuit of political aims.” The Oxford British & World English dictionary defines it as:

the unofficial or unauthorized use of violence and intimidation in the pursuit of political aims:
the fight against terrorism
international terrorism

For a cyber attack to be accurately defined as cyber terrorism, the attack would have to have a violent result or at least some sort of intimidating effect. I just don’t see how a DDoS attack or even something destructive like Stuxnet clears this bar. In my mind, for something in the information technology world to be considered “cyber terrorism”, you’d need a result where you had the loss of life or a substantial and intimidating impact such as taking down a power grid of a major city. An action like a major urban power outage could very well result in indirect loss of life (heat related deaths during summer months) and violence (riots). It’s not that this couldn’t happen, but we just haven’t seen it yet.

Google Takeout

Tom Thomas is the marketing director over at IACIS. He recently posted to the IACIS email list about a Christopher Null article he discovered in PC World that explained Google Takeout. Tom was nice enough to give me permission to pass along what he posted through the blog.

The Google Takeout webpage explains that “Google Takeout allows you to download a copy of your data stored within Google products.” Null explained in his article that:

Google wants you to keep using Search, Docs, and Google+, so it’s trying to play nice, and last June Google introduced a service designed to let you see, in part at least, what Google knows about you with a single click.

Eric Zimmerman’s Tools

Unsurprisingly, the Eric Zimmerman interview generated a tremendous amount of interest in Eric and his tools. One of the popular questions has been how to obtain the various tools that Eric has created and made available to the law enforcement community. You can contact Eric through the FBI’s Salt Lake City Division or you can just send me an email and I’ll pass it along to Eric.

Digital Forensics Email Lists

Another one of the questions that came out of that interview was a request to know which email lists Eric participates in, since those were mentioned in the interview. I won’t disclose which lists Eric participates in, but what I will do is contact some of the people who run the various lists that I am on and get permission to post their membership requirements and how to join. Some of these lists have few requirements for membership, but others are more restrictive, such as excluding people who do criminal defense work.

About The Photo

Photo credit information from the United States Navy:

120702-N-VD564-016 PEARL HARBOR (July 2, 2012) Sailors man the rails aboard the aircraft carrier USS Nimitz (CVN 68) as it passes the USS Arizona Memorial in Pearl Harbor. Nimitz is participating in the biennial Rim of the Pacific (RIMPAC) exercise 2012, the world's largest international maritime exercise. Twenty-two nations, more than 40 ships and submarines, more than 200 aircraft and 25,000 personnel are participating in RIMPAC exercise from June 29 to Aug. 3, in and around the Hawaiian Islands. (U.S. Navy photo by Chief Mass Communication Specialist Keith W. DeVinney/Released)

Thanks to the United States Navy and all of the other services who make these photos available for people like me to use. Thanks to Chief Mass Communication Specialist DeVinney for his service and his excellent photographic work.

Friday, August 17, 2012

AFoD Interview with Eric Zimmerman

Eric Zimmerman is one of the most amazing digital forensics people that I have run across in the last few years. He’s a combination of passion, digital forensics skill, and sharp programming abilities. He has created a whole host of digital forensics tools to help the good guys catch the bad guys. He’s a credit to the digital forensics community as well as his employer who happens to be the Federal Bureau of Investigation (FBI). As Eric points out at the end of this interview, his answers reflect his views and not the views of the FBI unless clearly stated otherwise.

Professional Biography of Eric Zimmerman

Eric Zimmerman is a Special Agent assigned to the Cyber Squad of the Salt Lake City FBI field office where he has been investigating child pornography and computer intrusions since early 2008. He is a member of the Utah ICAC and has provided training and assistance to dozens of local, state, federal and international law enforcement agencies. Eric has a degree in Computer Science and has developed several computer programs to aid in the investigation and prosecution of child exploitation matters.

Eric is an EnCase certified examiner and has several other certifications from CompTIA and SANS.

A Fistful of Dongles Blog: What was your path to the FBI? What made you decide to join?

Eric Zimmerman: My path into the FBI was somewhat out of left field. I moved to Chicago in late 1998 to work at a third-party logistics start-up company called Con-way Integrated Services. I was the 7th employee, and over the years we grew the company to about $70 million/year in revenue. Around 2005, we started the process of merging with a sister company of ours, Menlo Logistics. They were a billion-dollar company and, as such, their culture won. After this happened, what used to take me two days would take two weeks. Things slowed down dramatically and I was no longer able to move at a pace I was comfortable with. As the years went by, I started asking myself "Who am I benefiting being here?" I applied to a few other places and did some interviews, but nothing panned out.

In late 2005, my brother, who has been in the Army since he graduated from high school in 1994, recommended I look into such places as the CIA and Secret Service. Neither of those agencies appealed to me for various reasons. I then took a look at the FBI in January 2006 and felt it was a much better fit. I sent in an application that same month and soon after the pieces started falling into place. In September, 2007, I was given a slot in New Agents Class 08-01. I graduated on March 5, 2008 and reported to Salt Lake City soon after that.

The reason I wanted to join the FBI was to be a part of something bigger than myself and to have the opportunity to serve our country. I had lost my passion for what I was doing in the private sector and if you do not love what you are doing, it becomes very hard to get up every morning to go to work. Now I look forward to going to work every day (well, almost every day!).

In short, I left the red tape of corporate America behind to dedicate my career to serving the public, and now, as a FBI special agent, I am able to help the most innocent among us -- our children.

I know there are a lot of other people who go to work day in and day out in the law enforcement field who have enormous responsibilities and more work than they can address effectively due to budget and time restraints. I try to think of ways to make their jobs easier and provide tools and techniques to make their work more efficient at very little to no cost.

I have been given amazing opportunities at the FBI to solve problems and design things to help a lot of people both in the United States and across the world. It hasn't been without its challenges along the way, but it has definitely been worth it and I think the results speak for themselves.

AFoD: Can you tell us what you are currently doing for the FBI and how you came into that position?

Zimmerman: I am currently assigned to the Cyber squad in Salt Lake. We are responsible for both criminal and national security investigations. I have been assigned to this squad since arriving in Salt Lake City as I am a cyber agent. By this I mean my career path is designated ‘cyber’ as opposed to counter intelligence, counter terrorism, and so on. This isn't to say we do not regularly help other squads with things as there is a computer involved at some level in just about any crime these days. My day to day work involves general case work (both criminal and national security as we do not have dedicated squads for each), some programming and the occasional bout of reverse engineering various things.

I have also been actively involved with the Utah Internet Crimes against Children (ICAC) task force since arriving in the division. A lot of the forensic programs I have written were due to my involvement with the ICAC and the FBI Innocent Images program.

Due to the success of these programs, I also spend time supporting these tools, including basic tech support, training, and so on. I provide my cell phone number and email address to everyone who uses my programs so they can contact me any time should they have an issue. I recall sitting in a presentation at the last ICAC conference in Atlanta where a vendor, in all seriousness, essentially said “If you aren't paying for software, it's junk.” My guess is he was insinuating free software isn’t supported or kept up to date.

Regardless of what he meant, I feel confident the vast majority of the more than 2360 users (in more than 40 countries) of my software would disagree with that statement. What commercial vendor can you call and speak directly to the developer about a problem? I feel a great sense of responsibility to the users of my software and feel I would be doing a disservice to them and the community if I didn’t make myself available when questions or issues arise, especially when the users are in the field conducting a search and whatnot.

I also teach at several national conferences in the United States (specifically, the national ICAC conference and the Dallas Crimes against Children conference) and at a few international ones as well. I will be teaching at Europol's conference in October of this year for the first time. I am also a member of the Interpol peer to peer technical working group which is comprised of people from various law enforcement agencies all over the world who meet regularly to discuss the most efficient way to combat the exploitation of children on the Internet.

I initially started writing the various software programs out of necessity. Other Special Agents and law enforcement personnel were finding themselves in situations where there were no tools or techniques to aid in their investigations, or the tools that were available did not work very well. A lot of the tools we currently have started from a simple phone call and a few hours of work. Over time, more features were added. This process has culminated in a whole suite of cutting-edge tools that provide features no one else has at any price. Some of the tools are simple one-offs and others have been in active development for several years. I maintain all the tools (currently 13 programs) in addition to my case load, so it can get quite hectic!

AFoD: So this gets to the heart of why I wanted to do this interview. You're very active in the digital forensics community and anyone who is on the same digital forensics email lists as you can see how knowledgeable and helpful you are. I also want to talk about your development work and the award you were recently given for this work. However, before we dig into that, we have many people who follow the blog who are interested in breaking into digital forensics and cyber investigations. We know your path into the FBI now, but can you explain how you ended up getting assigned to the Cyber squad? Is that something that you can choose to do as a condition of joining the FBI or is it up to fate and the Bureau once you have completed the academy?

Zimmerman: There is currently a list of "critical skills" that the FBI is looking for, at least in regard to the Special Agent position. Some of these include accounting, engineering, and computer science/information technology. If you have a strong background in one of these fields it tends to help move you through the process a bit quicker. My education is in mathematics and computer science, so I came in under the computer science critical skill.

When I attended the FBI Academy, I was assigned to the cyber career path. Other possibilities included counter intelligence, counter terrorism, and so on. While I had some say in my career path in that I ranked my preferences, at the end of the day someone else made the decision for me based on my background and education.

Part of that decision is also based on how many agents the FBI requires in the various career paths as well. While I was sure I would be tracked cyber due to my background and the fact that I had put cyber as my number one choice, there were some people who were tracked cyber who had little knowledge of computers and whatnot. All hope is not lost though. There is opportunity to change career paths once you graduate.

Once you are assigned a career path, FBI Headquarters will assign you to a field office. As with career paths, there is an opportunity to rank the field offices you want to go to, but at the end of the day it is the needs of the Bureau that make the final determination. In my particular case, Salt Lake City was 17th on my list (out of over 50 field offices).

After getting my orders at the academy (about a third of the way through the 21-week course), I received an email message from my soon-to-be Supervisory Special Agent in Salt Lake welcoming me to the cyber squad. I arrived in the division in March 2008.

There really isn't a lot that is set in stone before you go to the academy and, without a compelling reason, your orders are your orders. Worst case, you could always quit, but given how hard it is to get into the FBI academy (last I heard, 1 in 5000 Special Agent applicants are given a slot), very few people choose to do so (at least based on the people in my class). There are always opportunities to go to a different field office as well should people want or need to do so.

For people looking to get into the FBI and work cyber matters, majoring in computer science or some other information technology discipline would certainly be a good start. Being able to program is also a huge help for solving problems we run into every day but I do not know how much that goes into the selection process.

Beyond one’s choice in a particular major, in my mind it is much more important for a person to have a passion for computers. This, more often than not, leads to quite a bit of personal time being spent improving one's skills beyond what is taught in college. Most college classes are not at the cutting edge of what is out there and so, in my experience at least, personal experience and learning often trump what is taught in school.

In short, the FBI decides your career path based upon your background and college major (you must have a bachelor’s degree or better) once you are at the FBI academy.

AFoD: So what does an FBI cyber special agent in the Salt Lake City field office do on a day-to-day basis?

Zimmerman: Hmm, that’s a difficult one to provide a succinct answer to.

A typical day will almost always involve paperwork of some kind, from reading email to documenting investigative activities to requesting authority for something. Other common activities include various types of training, meetings, and operational stuff like search warrants and arrests.

The squad you are assigned to will determine how many arrests and search warrants you are involved with. I have been lucky to have been involved with the Utah ICAC so we do search warrants and/or arrests just about every week. I have been on hundreds of arrests and searches in the time I have been in Salt Lake City. Depending on the nature of the case, you may have to travel for operational needs as well. I recently executed search warrants and an arrest on an Anonymous related case which required traveling to Ohio.

As for day-to-day activities, I usually have a mountain of email to answer from both internal and external sources. A portion of those emails are support questions for the programs I have written, others are asking for direction on how best to handle a particular computer-related matter, and others are for direct support of some initiative in the office. I also get quite a few phone calls, which come at all hours of the day.

Of course sometimes there are fires to put out, so those have to be dealt with as they come up. For example, a few weeks ago several of us were called out to a search scene to conduct interviews, and that ended up consuming the entire day.

Things tend to move in cycles a lot of the time, so there are times when I will be buried in case work for a few weeks, followed by a period where I am basically waiting for things to come back (a subpoena request, a search warrant return, etc.). When I am in the slow cycle of case work, I typically focus on getting some extra programming done or start looking into a new problem area we are seeing. I like the research and development (R&D) side of things as it constantly presents new challenges. I tend to get bored doing the same thing over and over, so the time I get to spend doing R&D counteracts the often mundane nature of paperwork.

There are a lot of opportunities to use cutting-edge software and techniques as well. Some of these were developed by the FBI and some are commercial off-the-shelf software packages. For example, in a recent case I had a need to examine dozens of computers on a network, but had to do so in a way that limited our exposure, both physically and on the network. Because of this, walking around to various computers and hooking up hardware was out of the question. I ended up using F-Response to facilitate access to all the target machines. This allowed me to view any of the hard drives on the network that were of interest. This access was completely transparent to the users of the workstations. Once the drives were exposed locally, I had a huge amount of flexibility in analyzing and reviewing computers, from taking a forensic image to pulling a few files for review.

Some of the other common activities would be reviewing evidence, imaging computers, reviewing intelligence products, going to firearms to qualify every quarter, and taking various training such as legal training, defensive tactics, and specific computer-related training (general classes or more specialized classes which result in industry certifications, etc.)

Cyber agents have quite a bit of mandatory training, ranging from A+ certification to a wide range of SANS courses. The training is divided into four stages that roughly correspond to the first five years of a cyber agent's career. Outside of the mandatory training, there is opportunity to take various elective courses such as malware analysis and so on. The malware analysis classes are very challenging courses and, like a lot of the other classes, are taught by world-class instructors and very accomplished professionals. There are also ways to pursue training which isn’t a part of the official curriculum if it can be articulated that said training is directly applicable to one’s job. For example, I was able to get my EnCase Certification since I do a lot of forensic work.

With all that said, and as I mentioned at the opening of this question, there really isn’t a typical kind of day (and that’s a good thing)!

AFoD: You are making quite a mark in the digital forensics community through your research and development efforts. What can you say in public about the work that you are doing and the investigative benefits that have come from it?

Zimmerman: The response to the tools and techniques has been overwhelming. As mentioned above, several thousand users in over 40 countries have downloaded one or more of my tools. I regularly get email from law enforcement officers and examiners with success stories as a result of using my software. These stories range from successful prosecution stories to the "I never knew that was there" kind of thing. I also regularly hear from people that they couldn't do their job without some of the software I have written.

To date I have released 13 programs ranging from file parsers and hashing tools to network monitoring tools. All of the software I write is provided 100% free of charge and, in most cases, comes with extensive documentation (some of the simpler programs do not require documentation).

Most of these programs sprang from either necessity or the fact that I was not happy with the tools currently out there. A good example of this is my hashing program. In one of the training classes I was in, we were provided a Java-based hashing program which handled one kind of hash algorithm and was very unintuitive to use. Over the course of a few evenings in my hotel room, I wrote a replacement for it which included many more algorithms as well as many usability improvements. Over time I kept adding features and polishing the interface. As it stands now, Hasher is the fastest and easiest-to-use hashing program I am aware of.
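The multi-algorithm idea Eric describes can be sketched in a few lines of Python using the standard library's hashlib. This is a generic illustration, not Hasher's actual implementation: the file is read once, and every requested digest is fed as you go.

```python
import hashlib

def hash_file(path, algorithms=("md5", "sha1", "sha256")):
    """Compute several digests of one file in a single read pass."""
    digests = {name: hashlib.new(name) for name in algorithms}
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large evidence files never have
        # to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}
```

Reading once and updating all digests together is what makes a multi-algorithm hasher barely slower than a single-algorithm one: on large files the cost is dominated by disk I/O, not the hash computation.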

I strive to provide intuitive interfaces in my programs which are, as we say in the FBI, "Agent proof." By this I mean the software programs are easy to use and hard to break. This isn't to say people haven't found creative ways to use the programs which I never thought of! Because of this, I have invested a significant amount of resources in providing a way to automatically report errors to me so I can fix issues as quickly as possible. This process includes sending an error report with a single mouse click which includes the complete stack trace, value of all variables, and even the line numbers where the error occurred. With this kind of information, correcting issues is much easier.

Related to this is the automatic updating feature of almost all my software. Gone are the days of having to manually check for updates on a website every few months. Now, assuming you are connected to the Internet, the software will tell you if any updates are available, download the update for you, and, when you tell it to, will apply the update and restart the program for you. This has saved thousands of hours of productivity as people do not need to be trained on the particulars of how to look for and apply updates.

Hundreds of people have downloaded my parsing and hashing utilities, but the most popular piece of software I have released is osTriage. To date, thousands have downloaded osTriage and used it to greatly extend their capabilities in the field. I am not aware of any other software at any price that provides as much information as osTriage does in such an easy to understand and consistently presented format. Most of the newer features are the result of user feedback. In most cases, new features can be added in a few hours. I am currently doing about four releases of osTriage a year, but the amount of new stuff depends on the rest of my case load.

Another major benefit is the improvement from the time a search warrant is executed to the time charges are brought against someone. Previous to osTriage, investigators may have to wait months to get back results of a forensic review. With osTriage, you have most of that data available to you before you leave the scene which can be used to pursue charges. In some states, a full forensic review is not even required anymore due to how well osTriage works.

A good portion of the forensic research I have done to date is directly available to law enforcement and forensic examiners for no cost. In any of my programs which deal with various kinds of forensic artifacts, detailed manuals are included which not only explain how to use the program, but also explain the exact layout of the files the parsers are dealing with. This lets examiners validate the tool using whatever means they choose to.

Most of this material is law enforcement sensitive and as such, I cannot get into details about the specifics of my work in this kind of open interview format. I fear we may lose some readers if we went too far down in the weeds anyways!

What I can say is that most of my work involved reverse engineering proprietary, closed source binary files and network protocols. The tools I used to reverse the files and protocols include packet capture tools, a hex editor, and custom testing programs I wrote to aid in decoding various chunks of data from the raw network captures. Some of the files were more trivial than others to reverse depending on what the purpose of the file was. Reversing the wire protocol was a significant investment of time over several weeks, but the result of the work has significantly improved our investigative efforts on certain networks.

I have also taken open source data (from Twitter for example) and written code to parse that information into a usable format. While Twitter provides a vast amount of data (as the result of a search warrant for example), it is essentially unusable in the format they provide it in. Rather than spend hours trying to correlate userIDs to usernames by hand, I spent a few hours and wrote a program to do it for me. Once that was done all it took was a little time working on a nice front end for it and now any investigator can immediately start using the return data for case work instead of busy work.
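The correlation step described above might look something like this in Python. The record layout and field names ("id", "screen_name", "user_id") are illustrative placeholders, not the actual format of a Twitter search warrant return.

```python
def build_user_map(user_records):
    """Index user records so a numeric ID resolves to a screen name."""
    return {rec["id"]: rec["screen_name"] for rec in user_records}

def resolve_ids(event_records, user_map):
    """Annotate each event with a readable name instead of a raw ID."""
    return [
        {**event, "screen_name": user_map.get(event["user_id"], "<unknown>")}
        for event in event_records
    ]
```

Building the lookup table once and then resolving every record against it is exactly the kind of by-hand correlation that eats hours when done manually and seconds when automated.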

This last example is a good illustration of another major area of improvement my software has provided: the ability to automate tasks that used to be very time consuming. This includes deconfliction systems, which are used by law enforcement around the world to coordinate efforts to investigate the exploitation of children online. Before my software, deconfliction was handled manually at just about every agency, and no one had a means to share this data with each other. This resulted in a lot of duplicated effort and wasted time.

Even with all the gains in efficiency and the reduction of the more tedious aspects of various tasks, by far the best thing to come from my work is the rescue of at least 200 children who were being abused in 2011. We also saw over 300 arrests in 2011 in countries across the world.

AFoD: Can you provide a list of your tools and what they do? 

Zimmerman: Sure, here is a generalized list and a brief description where applicable (in no particular order).

osTriage: Live response tool that, among other things, quickly finds image, video, encryption, virtual machine, archive, and P2P files. Live response data is pulled from the registry, via WMI, and from various other files. This is by far the most capable live response tool available, in my opinion. While it was primarily built for investigations related to child exploitation, it can be used for any investigation involving a computer. It reports details such as every USB device ever plugged into the computer (including make, model, and serial number); full browser history and search history for all major browsers; dozens of registry keys such as MRU, PIDL, FirstFolder, and TypedPaths; passwords extracted from P2P, email, chat, and other sources; and full network details including the ARP cache, DNS cache, open ports, running processes, installed software, and on and on. osTriage is very customizable and allows for adding additional items of interest beyond what is included. Anything 'of interest' is highlighted in red and moved to the top for easy visibility. It supports SHA1 base32 or MD5 hashes for image and video matching. There is MUCH more osTriage is capable of, but a full description would run a dozen paragraphs on its own.

Hasher: Calculate file hashes using a multithreaded, easy to use interface. I have not found a faster or easier to use tool to date. Supports SHA1 base16, SHA1 base32, MD4, MD5, eMule/eDonkey, TIGER, WHIRLPOOL, SHA-256, SHA-512, RIPEMD-256, and CRC32 hashes and can recursively process files/folders. Also allows exporting of hash results directly to osTriage supported formats, Excel, etc.
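For readers curious how a multi-hash tool like this works, here is a minimal Python sketch (not Hasher's actual code, which is .net) that computes MD5, SHA1, and SHA1 base32 in a single pass over the file:

```python
import base64
import hashlib

def hash_file(path):
    """Compute MD5, SHA1 (base16), and SHA1 base32 digests in one pass."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB chunks
            md5.update(chunk)
            sha1.update(chunk)
    return {
        "md5": md5.hexdigest(),
        "sha1": sha1.hexdigest(),
        # Base32 encoding of the raw SHA1 digest, the form some
        # hash-set formats use.
        "sha1_base32": base64.b32encode(sha1.digest()).decode(),
    }
```

Feeding every algorithm from the same read loop is what keeps a tool like this fast: the file is only read from disk once no matter how many hash types are enabled.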

Pingaling: Pings one or more IP addresses and/or host names and plays an audible alert when an IP address/host is available. I wrote this tool for use during surveillance of a suspect using a particular IP address at a hotel. Rather than watch command windows and the output of ping, this tool lets you put in the IP, start monitoring, and do whatever else you like. When the IP comes up, you will be alerted and can respond accordingly.

WMISpy: Monitor processes via WMI locally or across the network. Allows for setting up a list of executables to watch. Watched executables have their start and stop times recorded.

Web log parser: Parses Apache access logs, Apache error logs and IIS web server log files and allows for easy analysis. Also supports SSH logs and can perform DNS lookups. Has the ability to set up keywords to search for and will then give you totals of how many times those keywords were seen. I wrote this tool when I was working an Anonymous related case which involved a lot of SQL injection attacks, so I added keywords like sqlmap and Havij and then processed the web logs. I then knew exactly where the attack took place without having to do anything but open the log files. This saved me hours of time.
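The keyword-tally idea is straightforward to sketch. A minimal, hypothetical Python version (the keyword list and sample log lines are illustrative, not taken from the actual tool):

```python
# Hypothetical keyword scanner for web server logs: tally how many
# lines contain known attack-tool signatures.
KEYWORDS = ["sqlmap", "havij", "union select"]

def count_keywords(lines, keywords=KEYWORDS):
    """Return {keyword: number of log lines containing it}."""
    counts = {k: 0 for k in keywords}
    for line in lines:
        lowered = line.lower()
        for k in keywords:
            if k in lowered:
                counts[k] += 1
    return counts

log = [
    '1.2.3.4 - - "GET /item.php?id=1 UNION SELECT 1,2 HTTP/1.1" 500',
    '1.2.3.4 - - "GET /item.php?id=1 HTTP/1.1" 200 "sqlmap/1.0"',
]
print(count_keywords(log))  # {'sqlmap': 1, 'havij': 0, 'union select': 1}
```

Even a scanner this simple points you straight at the lines worth examining, which is the hours-saved effect described above.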

Twitter Parser: Takes a Twitter search warrant return and cross references Twitter IDs to usernames, allows searching and sorting of messages, and exporting into a variety of formats. Without this tool, Twitter search warrant returns are very hard to analyze.

I have several forensic programs which parse artifacts from various programs. I cannot provide details on those as they are law enforcement sensitive.

I have also written programs to generate certain types of files, monitor networks, and provide for global deconfliction of investigative efforts.

AFoD: Who is eligible to get these tools and how would they go about getting them?

Zimmerman: Anyone working for a law enforcement agency (or a company that directly supports law enforcement) as well as the military is eligible to get the tools. With very few exceptions, my software is available to every country in the world. All of the software is available free of charge.

We also provide hands on training at various conferences such as the national ICAC conference and Dallas Crimes against Children conference as well as more specialized training such as the FBI's Innocent Images basic classes.

There are also many subject matter experts around the country who can provide regional training to agencies that cannot afford to send their personnel to conferences or other remote training venues. We are also looking to do some webinars for some of my programs, which will provide another low-cost way to train users.

While live training is often ideal, great effort has gone into the manuals in order to allow for self-paced training. In the case of osTriage, reading the manual is the minimum requirement for FBI agents to use the software.

I understand everyone is at different levels of ability and knowledge and as such it was important to me to be able to provide a wide range of training options to people so as not to prohibit the use of a tool until some type of classroom training occurred. I also realize the things learned at training are not used all day, every day after the training is over. This is another reason why detailed, precise manuals are critical as they allow users to refresh their understanding of a program without having to attend some kind of refresher course or follow up training.

As to how to get the software, the URL for the site is provided in training, or I can provide it directly to people who are interested. Since the site is not public, I cannot divulge the URL here, but anyone interested can email or call me and, after vetting the requester, I will explain how to access the installer, which allows for downloading all of the software. I am easy enough to find on the mailing lists, but calling the Salt Lake City field office is another way to contact me if needed.

AFoD: Your work has been so beneficial to the law enforcement community that you were recently recognized with a major award. Can you tell us about that award?

Zimmerman: I don't think I can explain the award any better than the first paragraph from the press release:

"Every year the National Center for Missing & Exploited Children honors law-enforcement officers who have demonstrated exceptional efforts in the recovery of missing children and combat of child sexual exploitation."

My particular award was under the "Law Enforcement Excellence Award Recipients" category. I was nominated for this award by a peer who filled out an application which was sent to NCMEC and then reviewed by NCMEC personnel.

My wife and I were flown to Washington DC where the award ceremony took place. After the ceremony, we were given a tour of NCMEC headquarters in Alexandria, VA. It was a wonderful experience and it was an honor to even be nominated let alone selected as a recipient of the award.

The full press release from NCMEC can be found at: http://goo.gl/l1wXm

AFoD: What advice would you give someone who is in high school or college and wants to break into the digital forensics field?

Zimmerman: The best advice I can think of is to have a passion for the subject matter. Without a passion for the work, it will quickly become stale and tedious. Passion is what will take you through the mundane and grinding side of forensics. You can only look at hex for so long without passion before you go crazy.

My educational background is in mathematics and computer science, and I (perhaps with some bias, I realize) think those two disciplines serve as a strong base for a career in forensics. Both require significant analytical skills which translate directly to working in the field of forensics.

In my experience I have found much more satisfaction in pursuing things beyond what is found on a curriculum. Planning and getting a curriculum approved takes time, and by the time those two things happen, the information is rarely at the forefront of the discipline. With that said, formal classes certainly serve to provide foundational knowledge and provide the framework for more original work.

I also see huge value in computers being a hobby outside of formal studies. Are you interested in protecting a network? Then setting up and maintaining a mail server, web server, DNS server, etc. on a domain you control is a great way to get your feet wet and learn the basics. Protecting systems and data becomes a lot more meaningful when they belong to you! Similarly, I find it helps immensely in terms of motivation to have some kind of personal stake in your areas of research. For me, part of my job is combating child exploitation. It is very hard to be motivated about something you do not care about.

Another piece of advice I would recommend is this: do not get hung up in the ivory tower of pure academia. What works on paper is rarely 100% possible to accomplish in the real world. There are just too many unknowns (especially in the field, on a search warrant, etc.) not to be somewhat flexible in one's approach to forensics. You will quickly find yourself alone on an island (perhaps with a few others who cannot adapt) if you do not accept that it is perfectly fine to deviate from standard operating procedure if the situation calls for it AND you can justify your decisions. As long as you can articulate your position and back it up, I do not see a reason to be fearful of going outside the box.

In other words, be a forward thinker and do not be afraid to question what is accepted as a standard just because it has been done that way forever. I think huge opportunities are missed more often than we realize because of a perceived need to "stay in the lines at all costs."

A good example of this is the movement toward live response from the "pull the plug and take it to the lab" technique which most of us have been trained to do for so long. The argument for the latter technique is, of course, to avoid making changes to a computer, and so on.

In reality, things are changing on a computer regardless of whether the plug is pulled or not, so why lose data that can be of value? To me, forensics should be based on being "minimally intrusive" vs. "do not change anything." Of course, minimally intrusive means being able to show what changes you made to a system, and the easiest/safest way to be consistent is via automation, i.e. live response programs.

Finally, I would like to comment on certifications, about which I have mixed feelings. Before I joined the FBI, I was burned by people who looked good on paper but in reality could do nowhere near what their certifications suggested. Competency in any field is much less a function of being able to pass a test than of having a practical working knowledge of a subject AND the ability to apply that knowledge to a problem.

I think there is value in certifications, as they can be used to demonstrate knowledge, and of course some employers like to see candidates with certain letters by their name. But do not get so hung up on getting a certification that you shortchange yourself in the long run by simply cramming information into your head to pass a test. For those that do, it is readily apparent to others when it is time to deliver.

AFoD: Let's talk a bit about tool development, and let's go with the same scenario again. You have someone in high school or in college who not only wants to make digital forensics their career, but also wants to follow in your footsteps and develop great tools. How would they prepare themselves to create useful digital forensics tools? Are there certain classes they should take? Are there certain programming languages they should learn?

Zimmerman: As to how to best prepare themselves to create useful tools, the first and most important thing is this: have a problem to solve. I cannot overemphasize that point. It can be anything really, but without a problem to solve, teaching oneself to code is next to impossible, as the motivation will just not be there and there will always be other things vying for your time. If spending a few dozen hours learning to program can save you and others around you hundreds of hours, it becomes easy to justify learning to program. Your goal may range from something as grand as finding the end of the Interwebs to something as simple as a GUI for your kids with a few buttons that tell jokes when clicked. Either way, having that goal is critical.

Some examples of typical problems in law enforcement include something like taking a flat file and converting it to XML, parsing an Apache log file looking for signs of compromise, or writing a class to parse a binary file to be displayed to an end user.

Knowing your audience is also an important aspect to being a good programmer. Anyone can learn to throw some code together, but creating powerful programs that are easy to use is a skill that takes a lot of trial and error to get right. The only way to achieve this is code, code, code! After a few years, you will look back at your first programs and laugh at what a hack you were!

In my experience, one of the biggest weaknesses in programmers in general is the belief that their users are always nice people. One need look no further than the rampant issues with SQL injection attacks these days to see how devastating blindly trusting end users can be. What this problem boils down to is programmers not doing adequate input validation of user-supplied data. (A note to any aspiring programmers: never, EVER concatenate strings to build SQL statements. ALWAYS use parameterized queries!)
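To make the parameterized-query advice concrete, here is a small Python/sqlite3 demonstration; the table and the hostile input are, of course, contrived:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # hostile input

# BAD: concatenation lets the input rewrite the query itself, so the
# WHERE clause becomes: name = 'alice' OR '1'='1' -- always true.
bad = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# GOOD: a parameterized query treats the input purely as data.
good = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(bad)   # the hostile input matched every row
print(good)  # no row has that literal string as its name
```

The same query text with a `?` placeholder is also easier for the database to cache and reuse, so the safe version is usually the faster one as well.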

Closely related to the issue explained above is the belief that end users will use a program exactly how you, the programmer, intended it to be used. It is almost universally true that the programmer will rarely if ever make a mistake when using their own program. By this I mean the programmer knows exactly what they were thinking when writing it and therefore know the exact steps to take when using it.

However, once you give a program to someone else, all bets are off, as people are very good at using a program in ways the programmer never thought of. This ranges from not checking the types of data being passed into the program (string vs. integer, for example) to not ensuring all necessary data is available before continuing.

Finally, finding a mentor or someone to bounce ideas off of is recommended. Anyone who has built anything of significance has failed many times along the way, and being able to leverage such experience can do much more to improve one's skills than the cruel lessons of trial and error alone.

While experience is often the best teacher, finding the balance between asking questions and being pointed in the right direction by a mentor can do wonders to keep motivation high and prevent burnout.

When it comes to classes to take, almost every computer science curriculum will include several programming languages as a requirement, so people will more than likely get exposed to the basics in a few different languages. C and/or C++ will almost always be included in courses as well, especially if the class involves Unix or some BSD derivative. While I took a few C classes here and there, I never really developed anything of any significance with it because it takes a lot of work with little to show as far as GUI development goes (at least for the kind of problems I was trying to solve).

A good database class is always helpful too as you will almost always have to persist data in some way in every application. Learning how to properly create a database schema and then manage that schema is a critical skill which will save a lot of pain down the road. When it comes to databases, it is not normal to not normalize.
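As a toy illustration of why normalization pays off, consider this hypothetical Python/sqlite3 schema, where examiner names live in one table and case records merely reference them by id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Illustrative schema (not from any real tool): a name is stored once
# and referenced by id, rather than repeated in every case row.
conn.executescript("""
CREATE TABLE examiner (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE case_file (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    examiner_id INTEGER NOT NULL REFERENCES examiner(id)
);
""")
conn.execute("INSERT INTO examiner VALUES (1, 'alice')")
conn.execute("INSERT INTO case_file VALUES (1, 'Case A', 1)")
conn.execute("INSERT INTO case_file VALUES (2, 'Case B', 1)")

# A name change touches exactly one row...
conn.execute("UPDATE examiner SET name = 'alice smith' WHERE id = 1")

# ...yet every case sees the update through the join.
rows = conn.execute("""
    SELECT case_file.title, examiner.name
    FROM case_file JOIN examiner ON examiner.id = case_file.examiner_id
    ORDER BY case_file.id
""").fetchall()
print(rows)  # [('Case A', 'alice smith'), ('Case B', 'alice smith')]
```

With the name duplicated in every case row, that same correction would require finding and updating every copy, which is exactly the "pain down the road" a good schema avoids.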

Taking a class on file systems is highly recommended as you will almost always be dealing with a file system when it comes to forensics. File System Forensic Analysis by Brian Carrier is a fantastic resource for jumping into file systems outside of any class that is offered in a formal curriculum.

There are a lot of excellent resources out there for things other than file systems too. Harlan Carvey's Windows Forensic Analysis is one such example that touches on a lot of key areas not only in regard to Windows, but the incident response process in general.

Personally, I bet I came close to reading one or more books like the ones mentioned above for every college class I took. More often than not, I got more out of the extracurricular reading than what was presented in class.

Finally, which programming language someone chooses to focus on really depends to some degree as to what you want to accomplish. If you plan on writing Windows programs with full blown user interfaces, either C# or VB.net would be great choices. I am a VB.net developer myself. I started using Microsoft Access and VBScript about 15 years ago and once I outgrew that, moved to VB6. Once .net came out I migrated to that platform.

Similar to the PC vs. Apple debate (the PC is clearly better of course!), there is quite the battle between C# and VB.net in some circles. C# has traditionally received new features before VB but once Visual Studio 2012 comes out, the gap between VB and C# will be nearly closed. At the end of the day, any .net language would work as it all ends up as the same intermediate language (MSIL) at some point.

If I was going to recommend a language to someone wanting to learn to program it would almost certainly be Python. Not only is it incredibly powerful, but it can be run just about everywhere and that flexibility will come in handy. I would not want to develop any kind of GUI in Python (though I am sure it’s possible, it just doesn't look polished nor does it have the powerful 3rd party components that are available on the .net platform), but for general scripting and parsing programs, Python is hard to beat. Once you learn the basic constructs of programming (loops, if then statements, functions, etc.) you can then simply learn a slightly new syntax in other languages. By choosing something as commonplace as Python you are ensuring you will be able to execute your code on a wide variety of platforms.

At the end of the day the programming language one decides to use doesn't matter as much as what the program can do for people. You can be an amazing programmer and solve difficult problems, but if your program is hard to use or doesn't provide meaningful information to end users you will find people just won’t use it.

For example, if your program pulls a 64-bit timestamp out of the registry or some binary file and reports it as such to an end user, you just alienated the vast majority of computer users out there as most won’t have a clue what to do with such a number.

Now if you are writing some kind of low level API that carves files into objects or something that is one thing (and it can be argued that you should convert it to a meaningful datetime regardless), but if you write programs with the intention of "normal" users consuming such data you have missed the boat.
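As a concrete sketch of the timestamp point: a Windows FILETIME is a 64-bit count of 100-nanosecond intervals since January 1, 1601 UTC, and converting it into something a normal user can read takes only a few lines of Python:

```python
from datetime import datetime, timedelta, timezone

def filetime_to_datetime(ft):
    """Convert a Windows FILETIME -- a 64-bit count of 100-nanosecond
    intervals since 1601-01-01 UTC -- into a readable datetime."""
    epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
    return epoch + timedelta(microseconds=ft // 10)

# 116444736000000000 is the FILETIME value for the Unix epoch.
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00+00:00
```

Showing the user "1970-01-01 00:00:00 UTC" instead of a raw 18-digit integer is the difference between a tool people can use and one they cannot.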

As I mentioned earlier, the moral of the story is to know your audience and tailor the output to that audience. Just as with teaching a class or lecturing, if you aren’t bringing things to a level where most can understand it, you aren't being as effective as you should be.

Finally, I am a believer in agile development vs. the classic waterfall approach. Simply put, agile development means letting the actual end users of the product use the tool as it evolves vs. waiting for a program to be 'finished' and then having end users request a myriad of changes as the program doesn’t do what they envisioned it to do.

Remember, the end users are not programmers (if they were they wouldn’t need you), so they will, more often than not, lack the ability to describe exactly what they want in a way that translates to crafting a program. By putting incrementally changing versions of your program in the hands of the people who will ultimately be using the product you will, in my experience, find it much easier to come up with a product people want to use at the end of the development cycle.

My recommendation after identifying a problem is to find a group of people who understand what they want to get from a program in order to accomplish their job. For example, if your problem is developing software to track pedophiles online, find a group of law enforcement officers who are experienced enough to know what they need through all stages of the investigative process AND be able to identify the gaps they currently have to deal with. It is this group of people who should be the ones using your programs as incremental changes are made so they can steer the development along to best solve the problem that caused you to undertake writing a program in the first place.

AFoD: Any final thoughts? 

Zimmerman: I would like to close by saying that it has been a privilege to be a part of the forensics community for the past few years. I have met many dedicated professionals who deeply care about their craft and are passionate about computers, as well as highly motivated law enforcement officers who are underpaid and underappreciated. Your work impacts people far beyond what you get to see every day.

Be safe out there everyone!

P.S. The opinions stated, unless clearly indicated otherwise, are my own and not that of my employer.