Tuesday, April 28, 2015

Acquiring an Image of an Amazon EC2 Linux Instance

As cloud services continue gaining popularity and become more affordable, more people are learning about what is available and are increasingly opting in to the idea of having computers in the cloud.  This became evident during a recent conversation with my old friend JJ, @jxor2378.  I called JJ to get his opinion on what an ideal password cracking rig would be.  Without hesitation, JJ answered, “Why would you invest in buying the hardware, when you can just rent it!”  And he was right: for what I needed, using cloud computing services was in fact a good match.

It is not like I didn't know about the concept of cloud computing; it was simply that, because I hadn't had a need for it, I hadn't taken the time to do my computing in the cloud just yet.  That afternoon, that changed.  JJ recommended that I play with Amazon's Elastic Compute Cloud, also known as their EC2 service.

Paraphrased from their website, http://aws.amazon.com/ec2/: Amazon's EC2 is a service that provides resizable compute capacity, designed to make cloud computing easier.  The web service interface allows you to obtain and configure capacity with minimal friction, and gives you complete control of your computing resources.  You can quickly scale capacity, both up and down, as your computing requirements change, and you only pay for capacity that you actually use.

That last line is the neat thing about the service.  You only pay for what you use, and they even offer you a chance to try their service for free.  It only took a few minutes to get an instance up and running.  Once I had access to the instance, I couldn't help but wonder how I would go about analyzing it for forensic artifacts.

In this article we are going to go over the steps of acquiring an image of an Ubuntu Server Amazon EC2 instance.  For the purposes of this article I used a live Linux distribution, LosBuntu.  LosBuntu is our own Ubuntu 14.04 distribution that can be downloaded here.

Installing the Tools:

The tools that we will be using during the process are ssh, dd and netcat.  All of these tools come preinstalled in the Live version of LosBuntu, so there is no need to install anything else.

The Plan:

The plan is to fire up an EC2 instance, remotely log in to the instance, and then go through the steps of acquiring an image of the instance back to a remote location of your choice.  Let's get started.
  
The Test:

To set up your instance, you can use your current Amazon username and password to log in to aws.amazon.com.  Once authenticated, navigate to the EC2 dashboard and click on create instance.  The instance that we will use for the test is the Ubuntu Server 14.04 LTS t2.micro instance, which you can try for free.



For added security purposes we created a public/private key pair that can be used to securely SSH into the instance.  Using a key to SSH into a system offers a bit more security than a username and password combination alone.  I named the key ec2key and downloaded the key.


Once you have downloaded the key, simply launch the instance with all of the defaults.  The service shines due to the amount of customization that you can do to your instance, but that is beyond the scope of this article.  For now, all defaults will work.

Once you have your instance running, locate the “Connect” icon and click it.


A set of instructions on how to connect will appear on your screen, including the username and public IP address of the instance.



That's it.  That is all the information that is needed to SSH into the instance.  We now have the information that we need, so let’s take care of some final things.  For the command on the screenshot to work, if your key is not in your current working directory, you will need to provide the path to it.  Also, the permissions of the key must be set so that no other users or groups can read it.  The below command will take care of that requirement (sudo is not needed, since you own the file).

$ chmod 400 yourkey.pem

Now type the below command to SSH into the server.  SSH is the remote login tool that will establish a secure, encrypted connection to the server.  The “-i” option points SSH to your identity file (key), ubuntu is the username, and @52.11.255.55 is the public IP of the instance.  Remember to change the key to the name that you provided and to use the IP of your instance.

$ ssh -i ec2key.pem ubuntu@52.11.255.55


We are now logged on to the EC2 instance.  From here you can remotely control the system and do whatever it is that you intend to do with it.  The possibilities are endless, but there is one caveat...  Since we set up the instance for remote access using a key, any time we want to access the instance we are going to have to present that key so that access can be granted.  This means that if you, for example, want to use Remote Desktop Protocol (RDP) to get a GUI on your screen, you may have to authenticate to the instance first using an SSH tunnel.

Which is exactly what we are going to do to acquire the image of this instance.  We are going to establish a second SSH connection to a second remote server and pass the entire contents of the instance's hard drive through an SSH tunnel.  This can be accomplished by standing up a second EC2 instance with enough space to store your image, or by using an existing publicly accessible server that you control.

If you read the article titled “Analyzing Plain Text Log Files Using Linux Ubuntu” then you may know that I like to run a publicly accessible server to transfer data and serve files.  So I took advantage of the SSH access to this server that I control, and authenticated back into it from the EC2 instance using an SSH tunnel.

The SSH tunnel is an encrypted channel created over an SSH connection that can be used to carry otherwise unencrypted traffic across the network.  The purpose here is to use the SSH tunnel to securely transfer the entire contents of the hard drive using dd and netcat, even though netcat itself does not use encryption.  All of the contents of the drive from the EC2 instance will travel encrypted through the tunnel back to my forensic machine in my lab.

You do not need to go through the trouble of setting up a public server for this.  A second EC2 instance will also work, but you will then have to transfer the now acquired image back to you.  Accessing a server that you control kills two birds with one stone.

In its own terminal window on the EC2 instance, type the below command to set up the SSH tunnel back to your server.  ssh is the command; -p 5678 tells SSH to use a non-default port to connect to the server you control; -N tells SSH not to execute any remote commands, which is useful when just forwarding ports; -L specifies the port on the local host that is to be forwarded to the given host and port on the remote side.  secretuser is the user on my server and 432.123.456.1 is the public IP of the server that will be receiving the data.

$ ssh -p 5678 -N -L 4444:127.0.0.1:4444 secretuser@432.123.456.1
secretuser@432.123.456.1's password:

If everything went well and you entered the password correctly, this shell window is simply going to hang and will not show any output.  From this point forward, any data that is sent to localhost on port 4444 is going to be redirected to the server back in my lab.

On that server you will now need to set up a netcat listening session with the below command.  nc is the netcat command, -l tells it to listen, -p is the port to listen on, and we are piping the incoming data to pv and redirecting it to a file titled ec2image.dd.  pv is a neat utility that measures the data passing through the pipe.  A visual of the incoming data helps in determining whether things are going according to plan.

$ nc -l -p 4444 | pv > ec2image.dd

Finally, on the EC2 instance run dd to image the instance’s hard drive and pipe it to netcat using the below command.  Remember that you are piping to netcat on localhost to port 4444.

$ sudo dd if=/dev/xvda bs=4k | nc -w 3 127.0.0.1 4444

This is an illustration of how the data should flow.

# Image courtesy of Freddy Chid @fchidsey0144 
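If you want to rehearse this dd-to-listener plumbing before touching the real device, a local dry run works well.  In this sketch (all paths are throwaway temp files I made up for the rehearsal), a FIFO stands in for the netcat hop, but the dd side has the same shape as the real acquisition command:

```shell
# Create a small stand-in "disk" instead of /dev/xvda:
dd if=/dev/zero of=/tmp/testdisk.img bs=1k count=64 2>/dev/null

# A FIFO plays the role of the netcat connection for this rehearsal:
rm -f /tmp/acq.fifo && mkfifo /tmp/acq.fifo

# "Listener" side, analogous to: nc -l -p 4444 | pv > ec2image.dd
cat /tmp/acq.fifo > /tmp/testimage.dd &

# "Sender" side, same shape as the real acquisition command:
dd if=/tmp/testdisk.img bs=4k 2>/dev/null > /tmp/acq.fifo
wait

# The copy should be bit-for-bit identical to the source:
cmp /tmp/testdisk.img /tmp/testimage.dd && echo "bit-for-bit match"
```

Once the rehearsal checks out, swapping the FIFO for netcat and the test file for /dev/xvda gives you the real pipeline.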

After about 30 minutes it had sent over 4GB of data to my server located many states away from the EC2 instance.

$ nc -l -p 4444 | pv > ec2image.dd
4.39GB 0:32:28 [2.51MB/s] [           <=>                                      ]

It finished in less than an hour and transferred the entire 8GB (default) bit-by-bit image of the hard drive from the instance.

$ nc -l -p 4444 | pv > ec2image.dd
8GB 0:57:30 [2.37MB/s] [                                  <=>               ]

Speeds will vary depending on bandwidth.  In reference to image verification, since this was a live acquisition, doing a hash comparison at this point will not be of much value.  At the very least, check and compare that the size of your dd image matches the number of bytes contained in /dev/xvda on the instance.  This can be accomplished by comparing the output of fdisk -l /dev/xvda against the size of the acquired dd.

Check the size of the hard drive on the instance:

$ sudo fdisk -l /dev/xvda
Disk /dev/xvda: 8589 MB, 8589934592 bytes

Check the size of the dd on your server:

$ ls -l
total 8388612
-rw-r--r-- 1 secretuser secretgroup 8589934592 Apr 17 18:20 ec2image.dd

We have a match.  And there you have it.  You have acquired an image of an Ubuntu 14.04 Server running on Amazon EC2.
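If you find yourself doing this often, the size comparison can be scripted.  On the real systems the two inputs would be sudo blockdev --getsize64 /dev/xvda on the instance and stat -c %s ec2image.dd on your server; the sketch below uses a stand-in file so the logic is runnable anywhere:

```shell
# Stand-in for the acquired image (8 MiB written with dd):
dd if=/dev/zero of=/tmp/demo.dd bs=1M count=8 2>/dev/null

# On your server:    img_bytes=$(stat -c %s ec2image.dd)
img_bytes=$(stat -c %s /tmp/demo.dd)
# On the instance:   src_bytes=$(sudo blockdev --getsize64 /dev/xvda)
src_bytes=8388608        # 8 MiB, the size we just wrote

if [ "$img_bytes" -eq "$src_bytes" ]; then
    echo "match: $img_bytes bytes"
else
    echo "MISMATCH: image $img_bytes vs source $src_bytes" >&2
fi
```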

Conclusion:


Amazon offers a full set of developer API tools for EC2 that might offer an easier way of accomplishing this task.  If you are in a pinch and have an evening to spare, know that at least you have this option available to get the job done.  If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on Twitter: @carlos_cajigas



Monday, January 5, 2015

Mash That Key Releases LosBuntu







What it is...

   LosBuntu is a Live DVD Linux distribution (distro) that can be used to assist in data forensic investigations. LosBuntu is the result of our desire to have a bootable forensic distro with all of the tools and features that we like, installed by us, controlled by us, and built by us. LosBuntu was built using a clean installation of Linux Ubuntu 14.04 64 bit. Once the foundation was created, many open source forensic tools were installed and tweaks were made to turn the installation into LosBuntu. LosBuntu was then turned into a Live DVD using the tool remastersys.

   LosBuntu has been tweaked to never automount media. Although it will boot a computer without mounting its drives, it does not write block them (LosBuntu may write to a pre-existing swap partition). The distro was primarily designed for analyzing images and not booted computers. Validated Linux distributions designed for acquisition have already been released and should be used for that purpose. Remember to always validate your results.

What it is not....

   LosBuntu is not better and should not be argued as better or worse than any other distro. LosBuntu is simply another forensic distro. One that was designed the way that we like it, and released to the public in an attempt to give back to the forensic community. LosBuntu will save you the time and trouble of having to install the tools listed below.

One neat usable feature...

   Because remastersys was used to turn LosBuntu into a Live DVD, you can re-use remastersys to create your own live DVD distro. This means that you can install LosBuntu to the hard drive, add just about any tool that you wish to add, and then run remastersys to create a new version of LosBuntu with any and all of the tools and tweaks you installed, in essence, creating your very own version of LosBuntu. Change the background, add or remove tools, do anything that you please. LosBuntu was released to you so that you can use it, tweak it, and improve it.



This is the list of packages that we have added to LosBuntu:

7zip, Abiword, Archivemount, Autopsy, Bkhive, Bleachbit, BTRFS-tools, Bulkextractor, Chntpw, Chromium-browser, Clamtk, Dcfldd, Dconf, DFF, Efw-tools, Exfat-fuse, Fileinfo, Filezilla, Flashplugin-installer, Foremost, FRED, Furiusisomount, Gddrescue, Gparted, Guymager, Hexedit, Hfsprogs, Hfsutils, Jacksum, John, libbde-alpha-20141023, libevt-alpha-20141229, libewf-20140427, libfuse-dev, libfvde-experimental-20140907, libfwevt-experimental-20141026, liblnk-alpha-20141026,
libpff-experimental-20131028, libsmraw-alpha-20141026, libvhdi-alpha-20141021, libvmdk-alpha-20141021, libvshadow-alpha-20141023, Log2timeline, Nautilus-open-terminal, Pasco,
Python-TK, Rar, Regripper plus plugins, Rifiuti2, Samdump2, Scalpel, SSH, Testdisk, Truecrypt 7.1a,
Vinetto, VLC, Volatility, W3m, Wine, Wireshark, Xmount, Zenmap

UPDATE on 4/26/2015:  Packages added
aircrack-ng, bless, curl, lvm2, macchanger, nautilus-wipe, proxychains, pv, reaver, seahorse tools, sshfs, tor, traceroute, Volatility 2.4, whois, wifite

UPDATE on 8/29/2015:  Packages added
recordmydesktop, libevtx, vmfs-tools, open-iscsi, boot-up-manager

UPDATE on 11/17/2015:  Packages added
hashcat, ntdsextract_1.3, lxde, libesedb, rdesktop

UPDATE on 12/14/2015:  Packages added
nbd-client, gdebi-core, veracrypt

UPDATE on 02/01/2016:  Packages added
hashid, git, hashcat2.0, newbackground, SMB Access

UPDATE on 05/13/2016:  Packages added
mdadm, dmraid, dos2unix, libqcow, libfvde, TorBundle5.5.5

UPDATE on 09/27/2016:  Packages added
xor.exe, plaso 1.5, ubuntu-zfs

The distro is 2.4 GB in size. The password is “mtk”, without the quotes. The MD5 of the ISO is 90bcdff015f81071283847b9b2916a38 LosBuntu_2018_01_04.iso

Download it from this link:  LosBuntu

Special thanks go out to my old friends at the lab Mark B, Paul I, Pete M, and John T. for the corny name, ideas, and testing. You guys are the best in the State, keep doing what you do best.

If this tool helped you during your investigation, we would definitely like to hear from you. You can leave a comment or reach me on twitter: @carlos_cajigas

Useful Links:

- LosBuntu used to analyze an MS12-020 RDP Crash Dump with Volatility.
   Link to - ComputerSecurityStudent

- LosBuntu used to mount and convert a VMDK virtual disk to raw, on-the-fly.  YouTubeVideo

- LosBuntu used for physical disk, image acquisitions using guymager.  YouTubeVideo

- LosBuntu used to wipe and validate sterilization of physical disks.  YouTubeVideo

- LosBuntu used to Activate & Set Windows 7 Admin Password.
   Link to - ComputerSecurityStudent




Wednesday, December 31, 2014

Analyzing Plain Text Log Files Using Linux Ubuntu

   Analyzing large amounts of plain text log files for indications of wrongdoing is not an easy task, especially if it is something that you are not accustomed to doing all the time.  Fortunately, getting decent at it can be accomplished with a little bit of practice.  There are many ways to go about analyzing plain text log files, but in my opinion a combination of a few built-in tools under Linux and a BASH terminal can crunch results out of the data very quickly.

   I recently decided to stand up an SFTP server at home so that I could store and share files across my network.  In order to access the server from the public internet, I created strong passwords for my users, forwarded the ports on my router, and went out for the weekend.  I accessed the server from the outside and shared some data.  From the time that I fired it up to the time I returned, only 30 hours had passed.  I came back home to discover that the monitor attached to my server was going crazy trying to keep up with displaying large amounts of unauthorized login attempts.  It became evident that I was under attack.  Oh my!

   After some time and a little bit of playing around, I was able to get the situation under control.  Once the matter was resolved, I couldn't wait to get my hands on the server logs. 

   In this write-up, we will go over a few techniques that can be used to analyze plain text log files for evidence and indications of wrongdoing.  I chose to use the logs from my SFTP server so that we can see what a real attack looks like.  In this instance, the logs are auth.log files from a BSD install, recovered from the /var/log directory.  Whether they are auth.log files, IIS, FTP, Apache, firewall, or even a list of credit cards and phone numbers, as long as the logs are plain text files, the methodology followed in this write-up will apply to all and should be repeatable.  For the purposes of the article I used a VMware Player Virtual Machine with Ubuntu 14.04 installed on it.  Let's get started.

Installing the Tools:

  The tools that we will be using for the analysis are cat, head, grep, cut, sort, and uniq.  All of these tools come preinstalled in a default installation of Ubuntu, so there is no need to install anything else. 

The test:

  The plan is to go through the process of preparing the data for analysis and then analyzing it.  Let's set up a working folder that we can use to store the working copy of the logs.  Go to your desktop, right click on it, and select “create new folder”; name it “Test”.



  This will be the directory that we will use for our test.  Once created, locate the log files that you wish to analyze and place them in this directory.  Tip: Do not mix logs from different systems or different formats into one directory.  Also, if your logs are compressed (ex: zip, gzip), uncompress them prior to analysis.



   Open a Terminal Window.  In Ubuntu you can accomplish this by pressing Ctrl-Alt-T at the same time.  Once the terminal window is open, we need to navigate to the previously created Test folder on the desktop.  Type the following into the terminal.

$ cd /home/carlos/Desktop/Test/

   Replace “carlos” with the name of the user account you are currently logged on as.  After doing so, press enter.  Next, type ls -l followed by enter to list the data (logs) inside of the Test directory.  The flag -l uses a long listing format.



   Notice that inside of the Test directory, we have 8 log files.  Use the size of each log (listed in bytes) as a starting point to get an idea of how much data each one of the logs may contain.  As a rule, log files store data in columns, often separated by a delimiter.  Some examples of delimiters are commas, as in csv files, spaces, or even tabs.  Taking a peek at the first few lines of a log is one of the first things that we can do to get an idea of the number of columns in the log and the delimiter used.  Some logs, like IIS logs, contain a header.  This header indicates exactly what each one of the columns is responsible for storing.  This makes it easy to quickly identify which column is storing the external IP, port, or whatever else you wish to find inside of the logs.  Let's take a look at the first few lines stored inside of the log titled auth.log.0.  Type the following into the terminal and press enter.

$ cat auth.log.0 | head

   Cat is the command that prints file data to standard output; auth.log.0 is the filename of the log that we are reading with cat.  The “|” is known as a pipe.  A pipe is a technique in Linux for passing information from one process to another.  Head is the command to list the first few lines of a file.  Explanation: what we are doing with this command is using the tool cat to send the data contained in the log to the terminal screen, but rather than sending all of the data in the log, we are “piping” the data to the tool head, which displays only the first few lines of the file (ten lines by default).



   As you can see, this log contains multiple lines, and each one of the lines has multiple columns.  The columns account for date, time, and even descriptions of authentication attempts and failures.  We can also see that each one of these columns is separated by a space, which can be used as a delimiter.  Notice that some of the lines include the strings “Invalid user” and “Failed password”.  Right away, we have identified two strings that we can use to search across all logs for instances of either one of these strings.  By searching for these strings across the logs we should be able to identify instances of when a specific user and/or IP attempted to authenticate against our server.   

   Let's use the “Invalid user” string as an example and build upon our previous command.  Type the following into the terminal and press enter.

$ cat * | grep 'Invalid user' | head



   Just like in our previous command, cat is the command that prints the file data to standard output.  The asterisk “*” after cat tells cat to send every file in the current directory to standard output.  This means that cat was told to send all of the data contained in all eight logs to the terminal screen, but rather than print the data to the screen, all of the data was passed (piped) over to grep so that grep could search it for the string 'Invalid user'.  Grep is a very powerful string searching tool that has many useful options and features worth learning.  Lastly, the data is once again piped to head so that we can see the first ten lines of the output.  This was done for display purposes only; otherwise, over 12,000 lines containing the string 'Invalid user' would have been printed to the terminal, yikes!

   Ok, back to the task at hand.  Look at the 10th column of the output, the last column.  See the IP address of where the attempts are coming from?  Let's say that you were interested in seeing just that information from all of the logs, filtering only for the tenth column, which contains the IP addresses.  This is accomplished with the command cut.  Let's continue to build on the command.  Type the following and press enter.

$ cat * | grep 'Invalid user' | cut -d " "  -f 10 | head



   In this command, after the data is searched for 'Invalid user' it is piped over to cut so that it may print only the tenth column.  The flag -d tells cut to use a space as a delimiter.  The space is put in between quotes so that cut can understand it.  The flag -f tells cut to print the tenth column only.  Head was again used for display purposes only.  Next, let’s see all of the IP's in the logs by adding sort and uniq to our command.

$ cat * | grep 'Invalid user' | cut -d " "  -f 10 | sort | uniq



   In this command, head is dropped and sort and uniq are added.  As you imagined, sort will sort the text, and uniq is responsible for omitting repeated text.  This is nice, but it leaves us wanting more.  If you wanted to see how many times each one of these IP's attempted to authenticate against the server, the flag -c of uniq will count each instance of the repeated text, like so. 

$ cat * | grep 'Invalid user' | cut -d " "  -f 10 | sort | uniq -c | sort -nr



   In this command, the instances of each IP found in the logs were counted by uniq and then again sorted by sort.  The flag -n is to do a numeric sort and the flag -r is so that the text is shown in reverse order. 

   And there you have it.  Now we can see who was most persistent at trying to get pictures of my dog from my SFTP server. 
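If you want to practice the full pipeline without my logs, you can fabricate a couple of lines in the same shape as the real auth.log entries.  The hostnames, PIDs, and IPs below are made up (the IPs are documentation addresses, not real attackers):

```shell
# Build a tiny sample log in a scratch directory:
rm -rf /tmp/logtest && mkdir -p /tmp/logtest && cd /tmp/logtest
cat > auth.log.0 <<'EOF'
Dec 28 03:12:01 host sshd[111]: Invalid user admin from 203.0.113.7
Dec 28 03:12:05 host sshd[112]: Invalid user admin from 203.0.113.7
Dec 28 03:12:09 host sshd[113]: Invalid user test from 198.51.100.9
EOF

# The same pipeline as above: repeat offenders float to the top.
# 203.0.113.7 should show a count of 2, 198.51.100.9 a count of 1.
cat * | grep 'Invalid user' | cut -d " " -f 10 | sort | uniq -c | sort -nr
```

One thing to watch for on real syslog data: days of the month below 10 are padded with an extra space (“Dec  8”), which shifts the field numbers that cut sees, so always peek at your own logs first.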

   Keep practicing.  Hopefully this helped you in getting started with the basics of cat, grep, cut, sort, and uniq. 

Conclusion:

   This is a quick and powerful way to search for specific patterns of text in a single plain text file or in many files.   If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on twitter: @carlos_cajigas 

Suggested Reading:

- Ready to dive into the next level of command line fun?  Check out @jackcr's article where he implements a for loop to look for strings inside of a memory dump.  Awesome!  Find it here.


- Check out Ken's blog and his experience with running a Linux SSH honeypot.  Find it here.



Thursday, October 2, 2014

Acquiring Images of Virtual Machines From An ESXi Server

ESXi is an enterprise-level computer virtualization product offered by VMware, the makers of VMware Player and Workstation.  ESXi can be used to facilitate the centralized management of many different types of Windows, Unix, and Linux systems.  Unlike its cousins Player and Workstation, ESXi runs directly on server hardware (bare metal) without the need for an underlying operating system.  Management of the ESXi server can be, and often is, done remotely.

As this type of virtualized server environment continues to gain even more popularity, you are bound to encounter it more and more.  It seems that we see one of these in just about every case.  If you are in the DFIR world then you know that it is our responsibility to get usable images out of these things.

In this article we will go through the process of acquiring a Linux Virtual Machine (VM) running on an ESXi server.  Since there is more than one way of accomplishing this, we will talk about some of the options that are available to us.  We will talk about the pros and cons of each so that you get to choose one when your time comes to implement it.   

Tools:

For the purposes of this article, all of the systems except for the ESXi server were installed and run using VMware Player.  Our host machine was an examination computer with Ubuntu 13.10 installed on the hard drive.  Except for the Windows VM, all of the tools used in the article are free and available for download.

The test:

The first thing we need to do is get ESXi up and running.  VMware offers a free version of their ESXi product that can be downloaded here.  To get the download, you are going to have to create an account with them.  Once you have the ISO, go ahead and install it.  There are many tutorials online on how to do this.  I don't want to go over the steps of installing ESXi here due to the many resources available and how much longer it would make the write-up.  We went ahead and installed ESXi version 5.5 on an old laptop with a Core 2 Duo processor and 4GB of memory.  It worked fine.  ESXi was installed using all of the defaults.  Upon startup the ESXi server was given an IP via DHCP.  It was assigned IP address 10.0.1.11


ESXi servers can be managed locally from the command line or remotely from another host using the VMware vSphere Client, which as of this writing only works on Windows hosts.  Using another VMware Player VM with Windows 7, we installed the vSphere client and logged in to the ESXi server. 



We went ahead and installed one VM on the ESXi server.  The VM on the server is a fresh install of Ubuntu 14.04.  We named the VM Ubuntu14_04.  This will be the VM that we will use to go through the process of acquisition.


Now that we have an ESXi server up and running and a VM to acquire, we need to determine what our best course of action for acquisition is.  This will be decided by your conversation with the system administrator (sysadmin) and the company's ability to be able to take the “server” off-line.  In most situations these are production environment servers that need to stay up to not disrupt the business.  If you are lucky and find that the sysadmin is willing to take the server off-line, then suspending the VM is probably the better method of acquiring a VM out of the server.  We will get to do this just a little later.

For now, let's assume that the server is a money-making server that cannot be taken off-line.  When faced with this situation, we have conducted successful acquisitions by using a combination of SSH and an externally available destination to send the data to.  For example, you can SSH into the server and use the Linux-native tool dd to acquire the virtual disks and send the acquisition image to a mounted NFS share.

Let's simulate this scenario by using our previously created Ubuntu14_04 VM.  Notice that our last screenshot tells us that our Ubuntu VM was assigned IP address 10.0.1.12.  The command $ ssh carlos@10.0.1.12 can be used to SSH into the VM.  If you are following along, make sure that your VM has an SSH server installed.  Once the host key has been accepted and the password has been entered, you will be given a command prompt just as if you were on a shell terminal on the system.  Notice that my prompt changed to carlos@esxivm:~$ which is an indication that I am now inside of the VM.


The next step is to mount the NFS share that we will use to receive our acquisition image.  To simulate this part, we used another VM and the free version of the popular open source NAS solution called FreeNAS.  The FreeNAS VM was assigned IP address 10.0.1.13.      

Run the below command to mount the NFS share to the Ubuntu VM.  If you are following along make sure that your Ubuntu VM has nfs-common installed. 

$ sudo mount -t nfs -o proto=tcp,port=2049 10.0.1.13:/mnt/vol1 /mnt/nfs/


Mount is the command to mount the share; -t tells mount which type of mount it is about to do; -o are the options that are passed to mount.  proto=tcp tells mount to use TCP as the protocol, which is done to ensure compatibility with other operating systems, and port=2049 tells it which port to use.  10.0.1.13:/mnt/vol1 is the IP of the FreeNAS VM and the location of the share on the NAS.  /mnt/nfs is a previously created mount point used to mount the NFS.

We are now ready to send the acquisition to the NFS using dd.  We navigated to the share and kicked it off.

$ sudo dd if=/dev/sda of=esxivm.dd bs=4096 conv=sync,noerror


The acquisition completed successfully.  Mine took a while, as I sent the data over a wireless connection rather than through the wire.  Speeds at your client location will vary depending on bandwidth.  In reference to image verification, since this was a live acquisition, doing a hash comparison at this point will not be of much value.  At the very least, check and compare that the size of your dd image matches the number of bytes contained in /dev/sda.  This can be accomplished by comparing the output of fdisk -l /dev/sda against the size of the dd.
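One detail worth knowing when comparing sizes: the conv=sync option in the dd command above pads every input block out to the full block size, so an image can come out slightly larger than its source if the source is not an exact multiple of bs (whole disks usually are, so the sizes should match there).  A quick demonstration with a throwaway 10-byte file:

```shell
# A 10-byte input read with bs=4096 and conv=sync becomes a 4096-byte image,
# because the single partial block is padded with zeros to the full block size:
printf 'ten bytes!' > /tmp/small.bin
dd if=/tmp/small.bin of=/tmp/small.dd bs=4096 conv=sync,noerror 2>/dev/null
stat -c %s /tmp/small.bin    # 10
stat -c %s /tmp/small.dd     # 4096
```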


Great, so now we have an image of our server sitting on a NAS and we need to bring it back to our system.  Well, you have options.  If the image is too large to pull back over the wire, you can have the sysadmin locate another system close to the NAS that has access to the NAS and also has USB 3.0.  This can even be a Windows system.  If this is not an option for you, and you have no choice but to pull it back over the network, use your favorite FTP or SFTP tool to access the NAS and bring the image back to your system.  For the purposes of this article we used FileZilla.


And there you have it.  You have successfully acquired a live image out of a VM running on an ESXi server. 

Now let’s take some time and talk about another way of doing things.  If the sysadmin said he is good with taking the server off-line, then you are in luck and things just got easier for you.  Suspending the VM is also a good way of collecting an image of that VM.  When you suspend a VM on ESXi, the current state of the VM is saved so that it can later be loaded and reused.  Additionally, the VM's memory gets written to a .vmss file that contains the entire memory allotted to the VM.  This, in essence, can be considered a simulated way of acquiring memory from the VM.  The point is that with one click of a button you get the VM's memory and the current state of the VM's virtual disk.  The vmdk file will contain the entire raw virtual disk.


Once you have suspended the VM, the process of getting the data out of the server can be accomplished just like in our first scenario.  So let us first talk about the method which involves the combination of SSH and the NFS share. 

If the sysadmin allowed you to take their server off-line to suspend it, then he may also be willing to give you the password so that you can SSH into the ESXi server. 


Once you have SSH'ed into the server, locate the VM directory by navigating to
/vmfs/volumes/datastore1.  There, we located the directory for our Ubuntu14_04 VM.


The Ubuntu14_04 directory contained the following files.


Ubuntu14_04-167f1682.vmss    is the memory file
Ubuntu14_04-flat.vmdk        is the VM disk

Those are the main files that we need to get out of the server.  If you had to go the mounted share route, then the process of mounting the NFS share goes like this.

# esxcfg-nas -a -o 10.0.1.13 -s /mnt/vol1 nas

esxcfg-nas is the ESXi command for managing NFS shares: -a adds a share, -o specifies the address of the host, -s /mnt/vol1 is the share directory to mount, and nas is the name that we are going to assign to the mounted NFS share.  Be aware that, unlike in Ubuntu, you do not have to create a mount point; the NFS share will appear under /vmfs/volumes.


Once the NFS share is mounted, you can use the cp command to copy your files over to it.  Upon completion, it would be a good idea to implement some way of validating that your data is a true representation of the original.  Hashing the files on the server and comparing them to the ones on the NFS share is one way to do this.
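A minimal sketch of that validation step follows.  The function is our own illustration, not a tool from the article, and the example paths are hypothetical; it assumes md5sum and awk are available in the shell you are working from (both ship with ESXi's BusyBox environment as well as with Ubuntu).

```shell
#!/bin/bash
# Sketch: confirm that a copied file matches its original by MD5 hash.
# Paths in the example call are illustrative.

hashes_match() {
    # $1 = original file on the server, $2 = the copy on the NFS share
    local a b
    a=$(md5sum "$1" | awk '{print $1}')
    b=$(md5sum "$2" | awk '{print $1}')
    if [ "$a" = "$b" ]; then
        echo "VERIFIED"
    else
        echo "HASH MISMATCH"
    fi
}

# Example:
# hashes_match Ubuntu14_04-flat.vmdk /vmfs/volumes/nas/Ubuntu14_04-flat.vmdk
```

A VERIFIED result gives you a documented basis for saying the copy on the NAS is a true representation of the original.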


To get the images out of the NAS, again, use your favorite ftp or sftp tool to retrieve them.  It should be noted that we went over the process of mounting an NFS share on ESXi so that you could have that as an option.  If SSH is enabled on the ESXi server, don't forget that you can access it directly with scp or sftp and drag the images back to your system.  FastSCP on the Windows side also works well.


To disconnect the NFS share from the server, run # esxcfg-nas -d nas.

Conclusion:


There are many options available for acquiring images out of these servers.  Discussed above were only a couple of them.  If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on twitter: @carlos_cajigas



Wednesday, September 10, 2014

Using Curl to Retrieve VirusTotal Malware Reports in BASH

   If you are in the DFIR world, there is a good chance that you often find yourself either submitting suspicious files to VirusTotal (VT) for scanning, or searching their database for suspicious hashes.  For these tasks and other neat features, VT offers a useful web interface where you can accomplish this.  If submitting one file or searching one hash at a time is enough for you, then their web interface should suffice for your needs.  Find the web interface at www.virustotal.com.

   If you are looking for a little bit more functionality or the ability to scan a set of suspicious hashes, you may want to look into using their public API.  VirusTotal's public API, among other things, allows you to access malware scan reports without the need to use their web interface.  Access to their API gives one the ability to build scripts that can have direct access to the information generated and stored by them. 

   To have access to the API you need to join their community and get your own API key.  The key is free and getting one is as simple as creating an account with them.  After joining their community you can locate your personal API key in your community profile. 

   In this article we will go through the process of communicating with their API from the Bourne-Again Shell (BASH) using the program curl.  The API accepts HTTP POST requests.  We will discuss a few curl commands, and once we are familiar with them, we will incorporate them into a script to automate the process.  The commands and the script came courtesy of a tip from my co-worker John Brown.  He gave me permission to talk about his tip and to publish his script.

   BASH is the default shell in Ubuntu.  For the purposes of this article I used a VMware Player virtual machine with Ubuntu 14.04 installed on it.

Installing the tools:

   All of the tools that we will use are already included in Ubuntu by default.  You will not need to download and install anything else.  If you want to follow along, make sure that you have your VT API key available.  We are also going to need suspicious hashes.  Feel free to use your own, or copy these two md5 hashes that I will use for the article: e4736f7f320f27cec9209ea02d6ac695 and 7f16d6f96912db0289f76ab1cde3420b.  One of the hashes belongs to a fake antivirus piece of malware that I use for testing, and the other is the hash of a text file that contains no malicious code.  One of the hashes will return hits; the other one will not.  Let's get started.

The test:

   Open a Terminal window.  In Ubuntu you can accomplish this by pressing Ctrl-Alt-T or by going to the Dash Home and typing in “terminal”.

   In order to communicate with the VT database to retrieve a file scan report, we are going to need two things.  First, we need to know the URL to send the POST request to, which is “https://www.virustotal.com/vtapi/v2/file/report”.  Second, we need to feed the curl command two parameters: your API key and a resource.  The API key is the key that was given to you upon joining the VT community, and the resource is the md5 hash of the file in question.

   The following curl command should satisfy all of those requirements.

$ curl -s -X POST 'https://www.virustotal.com/vtapi/v2/file/report' --form apikey="c6e8f956YOURAPIKEY06a82eab47a0cb8cbYOURAPIKEY53aa118ba0db1faeb67" --form resource="e4736f7f320f27cec9209ea02d6ac695"


   Curl is the command that we will use to send the POST request to the VT URL.  The -s tells curl to be silent, that is, not to print the progress bar.  The -X tells curl which request method to use, in this case POST.  --form apikey= takes your API key, and --form resource= takes the MD5 hash of the aforementioned fake antivirus file.  These are my results.


   It looks like our fake antivirus file’s hash was located in the database and the file had been previously scanned by VT.  Lots of data with positive hits was returned.  The scan report contains no line breaks, so it was sent to our terminal window in a format that is difficult to read.  Let's see if we can fix that.  Notice that the results from each individual antivirus solution end with a combination of a curly brace and a comma, “},”.  Armed with this information, let’s add a newline character after each of those combinations to separate the output so that we can read it better.  Run the same command as above, but this time pipe it to sed 's|\},|\}\n|g'

$ carlos@vm:~$ curl -s -X POST 'https://www.virustotal.com/vtapi/v2/file/report' --form apikey="c6e8f9563e6YOURAPIKEY82eab47a0cb8cb9454824YOURAPIKEYba0db1faeb67" --form resource="e4736f7f320f27cec9209ea02d6ac695" | sed 's|\},|\}\n|g'

   The sed command changes our standard output by replacing every }, with } followed by a newline.  The backslashes escape the curly braces so that sed treats them as literal characters rather than as part of the regular expression, and the \n in the replacement inserts the newline.  These are my results.


   We can now start to see information that we can work with.  From here you can redirect this data to a file or continue using grep, sed, awk and/or any other command line magic that you can throw at this output to continue editing it to your needs.  Personally, I am interested in the bottom area of the screen, the part that says "positives": 23,.  This tells me that this hash was recognized by 23 different antivirus engines on the VT database.  This is the data that I may need to pay attention to during an investigation.  That sed command was just an example of how to manipulate the output. 
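If you want to see the substitution work without spending an API request, the same sed can be exercised on a canned one-line fragment.  The JSON below is made up for illustration and only mimics the shape of a VT report.

```shell
# Canned one-line fragment in the style of a VT report (illustrative only)
echo '{"Engine1": {"detected": true},"Engine2": {"detected": false}}' \
    | sed 's|\},|\}\n|g'
# Prints:
# {"Engine1": {"detected": true}
# "Engine2": {"detected": false}}
```

Every engine result now lands on its own line, which is exactly what happens to the much longer real report.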

   The next command incorporates a combination of awk and sed pipes to filter the output down to a final set of data that we felt comfortable working with.

$ carlos@vm:~$ curl -s -X POST 'https://www.virustotal.com/vtapi/v2/file/report' --form apikey="c6e8f9563e63YOURAPIKEY2eab47a0cb8cb9454824YOURAPIKEYba0db1faeb67" --form resource="e4736f7f320f27cec9209ea02d6ac695" | awk -F 'positives\":' '{print "VT Hits" $2}' | awk -F ' ' '{print $1$2$3$6$7}' | sed 's|["}]||g'



   The first awk command uses the "positives": string as a field delimiter and prints the string “VT Hits” followed by the second field, which contains the count of 23 positive hits.  The second awk command uses a space as a delimiter and prints the first, second, third, sixth and seventh columns, extracting the string md5 and the md5 hash of the file from the output.  The last sed command simply removes any quotes and curly braces from the resulting output.  These are my results.



VTHits23,md5:e4736f7f320f27cec9209ea02d6ac695

   The end result is a string of data that tells us the number of antivirus solutions that recognize the file as malicious, plus the md5 hash of the file, so that we know which file is the suspicious one.
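The filtering stage can also be tried offline.  The canned fragment below is invented for illustration and assumes the same field order seen in the real report above ("positives", then "total", then "md5"); with it, the awk/sed chain can be tested without touching the API.

```shell
# Canned tail of a VT report (illustrative; field order assumed from the
# output shown earlier in this article)
report='{"scan_id": "abc123", "positives": 23, "total": 57, "md5": "e4736f7f320f27cec9209ea02d6ac695"}'

echo "$report" \
    | awk -F 'positives\":' '{print "VT Hits" $2}' \
    | awk -F ' ' '{print $1$2$3$6$7}' \
    | sed 's|["}]||g'
# Prints: VTHits23,md5:e4736f7f320f27cec9209ea02d6ac695
```

Running it against canned input like this is a handy way to tune the column numbers before pointing the pipeline at live results.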

   If by now you are thinking that was way too long a command to remember, or even to wish to type again, then you are like me.  For this reason, John has made a script available that automates this exact process and is extremely easy to use.  Find the script here.

   After making the script executable, run it and give it a hash value as an argument.  It will use the same command as above and search the VT database for the hash that you fed it.  Run the script like this.

$ carlos@vm:~$ ./grabVThash.sh e4736f7f320f27cec9209ea02d6ac695

   These are my results.


   Same results as above.  The script automates the process of sending hash values to VT and sends the results to the screen.  It can even take a file containing multiple hashes as its input.  It will send 4 hashes per minute to VT, as this is the limit set by VT for public access to the API.  You will need to add your API key to the script.
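John's script is the one to grab, but to make the idea concrete, here is a rough sketch of a rate-limited batch loop in the same spirit.  This is our own approximation, not grabVThash.sh itself, and APIKEY is a placeholder you must replace with your own key.

```shell
#!/bin/bash
# Sketch of a rate-limited batch lookup in the spirit of grabVThash.sh
# (John's actual script may differ).  APIKEY is a placeholder.
APIKEY="YOURAPIKEY"

vt_lookup() {
    # one POST per hash, the same request used throughout the article
    curl -s -X POST 'https://www.virustotal.com/vtapi/v2/file/report' \
         --form apikey="$APIKEY" --form resource="$1"
}

scan_hashes() {
    # $1 = file containing one md5 hash per line
    local n=0
    while read -r h; do
        if [ -n "$h" ]; then
            vt_lookup "$h" | sed 's|\},|\}\n|g'
            n=$((n + 1))
            if [ $((n % 4)) -eq 0 ]; then
                sleep 60    # public API limit: 4 requests per minute
            fi
        fi
    done < "$1"
}

# Example: scan_hashes hashes.txt
```

Pausing after every fourth request keeps the loop inside VT's public quota, so a long hash list simply takes longer rather than getting requests rejected.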

Conclusion:

   VT gives us access to its database by allowing us to build scripts that have direct access to the information generated and stored by them.  The script that we published is just one of many ways to add ease of access to the data stored by VT.
   If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on twitter: @carlos_cajigas