ESXi is an enterprise-level virtualization product offered by VMware, the makers of VMware Player and Workstation. ESXi can be used to facilitate the centralized management of many different types of Windows, Unix, and Linux systems. Unlike its cousins Player and Workstation, ESXi runs directly on server hardware (bare metal) without the need for an underlying operating system. Management of an ESXi server can be, and often is, done remotely.
As this type of virtualized server environment continues to gain popularity, you are bound to encounter it more and more. It seems that we see one in just about every case. If you are in the DFIR world, then you know that it is our responsibility to get usable images out of these servers.
In this article we will go through the process of acquiring a Linux virtual machine (VM) running on an ESXi server. Since there is more than one way of accomplishing this, we will cover some of the options that are available, along with the pros and cons of each, so that you can choose the right one when your time comes to implement it.
For the purposes of this article, all of the systems, except for the ESXi server, were installed and run using VMware Player. Our host machine was an examination computer with Ubuntu 13.10 installed to the hard drive. Except for the Windows VM, all of the tools used in this article are free and available for download.
The first thing we need to do is get ESXi up and running. VMware offers a free version of their ESXi product that can be downloaded from their website. To get the download, you will have to create an account with them. Once you have the ISO, go ahead and install it. There are many tutorials online on how to do this, so rather than making this write-up much longer, we will skip the installation steps here. We installed ESXi version 5.5, using all of the defaults, on an old laptop with a Core 2 Duo processor and 4GB of memory. It worked fine. Upon startup, the ESXi server was given an IP via DHCP; it was assigned IP address 10.0.1.11.
ESXi servers can be managed locally from the command line or remotely from another host using the VMware vSphere Client, which, as of this writing, only works on Windows hosts. Using another VMware Player VM, this one running Windows 7, we installed the vSphere Client and logged in to the ESXi server.
We went ahead and installed one VM on the ESXi server: a fresh install of Ubuntu 14.04, which we named Ubuntu14_04. This is the VM that we will use to go through the process of acquisition.
Now that we have an ESXi server up and running and a VM to acquire, we need to determine our best course of action for acquisition. This will be decided by your conversation with the system administrator (sysadmin) and by the company's ability to take the “server” off-line. In most situations these are production servers that need to stay up so as not to disrupt the business. If you are lucky and find that the sysadmin is willing to take the server off-line, then suspending the VM is probably the better method of acquiring it. We will get to do this a little later.
For now, let's assume that the server is a money-making server that cannot be taken off-line. When faced with this situation, we have conducted successful acquisitions by using a combination of SSH and an externally available destination for the data. For example, you can SSH into the server and use the native Linux tool dd to acquire the virtual disks, sending the acquisition image to a mounted NFS share.
Let's simulate this scenario by using our previously created Ubuntu14_04 VM. Notice that our last screenshot tells us that our Ubuntu VM was assigned IP address 10.0.1.12. The command $ ssh carlos@10.0.1.12 can be used to SSH into the VM. If you are following along, make sure that your VM has an SSH server installed. Once the key pairing has been created and the password has been entered, you will be given a command prompt just as if you were at a shell terminal on the system. Notice that my prompt changed to carlos@esxivm:~$, an indication that I am now inside the VM.
The next step is to mount the NFS share that will receive our acquisition image. To simulate this part, we used another VM running the popular open source NAS solution FreeNAS. The FreeNAS VM was assigned IP address 10.0.1.13.
Run the command below to mount the NFS share on the Ubuntu VM. If you are following along, make sure that your Ubuntu VM has nfs-common installed.
$ sudo mount -t nfs -o proto=tcp,port=2049 10.0.1.13:/mnt/vol1 /mnt/nfs/
mount is the command to mount the share; -t tells mount which type of filesystem it is about to mount; -o passes options to mount. proto=tcp tells mount to use TCP as the transport protocol, which is done to ensure compatibility with other operating systems, and port=2049 tells it which port to use. 10.0.1.13:/mnt/vol1 is the IP of the FreeNAS VM and the location of the share on the NAS. /mnt/nfs is a previously created mount point used to mount the NFS share.
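Before kicking off an acquisition, it is worth confirming that the mount actually succeeded, since an image written to an unmounted /mnt/nfs would land on the VM's own disk instead of the NAS. A minimal sketch of such a check (the fallback to the current directory is our own addition, only there to keep the snippet runnable outside this lab setup):

```shell
# Confirm the mount point is NFS-backed before imaging into it.
mp=/mnt/nfs
[ -d "$mp" ] || mp=.                 # fallback so the snippet runs anywhere
fstype=$(stat -f -c %T "$mp")        # report the filesystem type at the path
echo "filesystem type at $mp: $fstype"
# On the real system you would expect "nfs" here; anything else means the
# share is not mounted.
```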
We are now ready to send the acquisition to the NFS using dd. We navigated to the share and kicked it off.
$ sudo dd if=/dev/sda of=esxivm.dd bs=4096 conv=sync,noerror
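A variation worth knowing (our own sketch, not the command used above): piping dd through tee and md5sum hashes the stream as it is written, giving you a reference hash of exactly the data that was read from the live system. The stand-in input file below is an assumption to keep the snippet self-contained; on the real system the input would be /dev/sda and the command would be run with sudo.

```shell
# Stand-in for /dev/sda, so this sketch can run anywhere: 8 blocks of zeros.
dd if=/dev/zero of=src.bin bs=4096 count=8 2>/dev/null

# Image the source while hashing the stream: tee writes the image file,
# md5sum hashes the same bytes, and the digest is saved alongside the image.
dd if=src.bin bs=4096 conv=sync,noerror 2>/dev/null \
    | tee esxivm.dd | md5sum | awk '{print $1}' > esxivm.dd.md5

# The saved stream hash should match a hash of the finished image file.
img_hash=$(md5sum esxivm.dd | awk '{print $1}')
stream_hash=$(cat esxivm.dd.md5)
echo "stream: $stream_hash"
echo "image:  $img_hash"
```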
The acquisition completed successfully. Mine took a while, as I sent the data wirelessly rather than over the wire; speeds at your client location will vary depending on bandwidth. As for image verification, since this was a live acquisition, a hash comparison at this point will not be of much value. At the very least, check that the size of your dd image matches the number of bytes contained in /dev/sda. This can be accomplished by comparing the output of fdisk -l /dev/sda against the size of the dd image.
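That size check can be sketched as follows. The stand-in source file is our assumption to keep the snippet runnable; on the real system the byte count would come from fdisk -l /dev/sda (or sudo blockdev --getsize64 /dev/sda) and the image would sit on the NFS share.

```shell
# Stand-in for /dev/sda: 16 blocks of zeros.
dd if=/dev/zero of=src.bin bs=4096 count=16 2>/dev/null
# Stand-in for the live acquisition performed above.
dd if=src.bin of=esxivm.dd bs=4096 conv=sync,noerror 2>/dev/null

# Compare source size to image size, byte for byte.
src_bytes=$(stat -c %s src.bin)
img_bytes=$(stat -c %s esxivm.dd)
if [ "$src_bytes" -eq "$img_bytes" ]; then
    echo "sizes match: $img_bytes bytes"
else
    echo "SIZE MISMATCH: $src_bytes vs $img_bytes"
fi
```

Because conv=sync pads any short reads out to the block size, a healthy acquisition should produce an image at least as large as the source; a smaller image means dd stopped early.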
Great, so now we have an image of our server sitting on a NAS, and we need to bring it back to our system. You have options. If the image is too large to pull back over the wire, you can have the sysadmin locate another system close to the NAS that has access to it and also has USB 3.0; this can even be a Windows system. If that is not an option and you have no choice but to pull the image back over the network, use your favorite FTP or SFTP tool to access the NAS and bring the image back to your system. For the purposes of this article we used FileZilla.
And there you have it. You have successfully acquired a live image out of a VM running on an ESXi server.
Now let’s take some time to talk about another way of doing things. If the sysadmin said he is good with taking the server off-line, then you are in luck and things just got easier for you. Suspending the VM is also a good way of collecting an image of it. When you suspend a VM on ESXi, the current state of the VM is saved so that it can later be loaded and reused. Additionally, the VM's memory gets written to a .vmss file that contains the entire memory allotted to the VM. This, in essence, can be considered a simulated way of acquiring memory from the VM. The point is that with one click of a button you get the VM's memory and the current state of the VM's virtual disk. The vmdk file will contain the entire raw virtual disk.
Once you have suspended the VM, the process of getting the data out of the server can be accomplished just like in our first scenario. So let us first talk about the method which involves the combination of SSH and the NFS share.
If the sysadmin allowed you to take their server off-line to suspend it, then he may also be willing to give you the password so that you can SSH into the ESXi server.
Once you have SSH'ed into the server, locate the VM directory by navigating to /vmfs/volumes/datastore1. There, we located the directory for our Ubuntu14_04 VM.
The Ubuntu14_04 directory contained the following files.
Ubuntu14_04-167f1682.vmss is the memory file.
Ubuntu14_04-flat.vmdk is the VM disk.
Those are the main files that we need to get out of the server. If you have to go the mounted-share route, the process of mounting the NFS share on ESXi goes like this.
# esxcfg-nas -a -o 10.0.1.13 -s /mnt/vol1 nas
esxcfg-nas is the ESXi command to mount the share; -a adds a share, -o specifies the address of the host, and -s /mnt/vol1 is the share directory to mount; finally, nas is the name that we are going to assign to the mounted NFS share. Be aware that, unlike on Ubuntu, you do not have to create a mount point; the NFS share will appear under /vmfs/volumes.
Once the NFS share is mounted, you can use the cp command to copy your files over to it. Upon completion, it would be a good idea to implement some way of validating that your data is a true representation of the original. Hashing the files on the server and comparing them to the ones on the NAS is one way.
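One way to sketch that validation step (the file and directory names below are stand-ins for the real VM files and the mounted share under /vmfs/volumes; the ESXi shell is BusyBox-based, so md5sum, cp, and awk are normally available there):

```shell
# Stand-ins so the sketch runs anywhere: a fake share directory and fake files.
mkdir -p nas                                   # stands in for /vmfs/volumes/nas
printf 'vmdk contents' > Ubuntu14_04-flat.vmdk
printf 'vmss contents' > Ubuntu14_04-167f1682.vmss

# Copy each file to the share, then compare source and destination hashes.
for f in Ubuntu14_04-flat.vmdk Ubuntu14_04-167f1682.vmss; do
    cp "$f" nas/
    src=$(md5sum "$f" | awk '{print $1}')
    dst=$(md5sum "nas/$f" | awk '{print $1}')
    if [ "$src" = "$dst" ]; then
        echo "$f: hashes match"
    else
        echo "$f: HASH MISMATCH"
    fi
done
```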
To get the images out of the NAS, again, use your favorite FTP or SFTP tool to retrieve them. It should be noted that we went over the process of mounting an NFS share on ESXi so that you could have it as an option. If SSH is enabled on the ESXi server, don't forget that you can access it directly with scp or sftp and pull the images back to your system. FastSCP on the Windows side also works well.
To disconnect the NFS share from the server, run # esxcfg-nas -d nas.
There are many options for acquiring images out of these servers; discussed above were only a couple of them. If this procedure helped your investigation, we would like to hear from you. You can leave a comment or reach me on Twitter: @carlos_cajigas