Friday, August 5, 2022

Velociraptor Playground 2022-08-02

     On Tuesday, August 2nd, 2022, I created a playground consisting of 23 systems: ten Windows 10 machines, ten Windows 11 machines, one Velociraptor server, one Windows Server 2019 machine compromised with a persistent remote access trojan, and one attacker machine hosted outside of the US that the trojan communicated with.

     I created a few questions that I felt would be interesting and posted them as a quasi-CTF/practice, with instructions on GitHub.

     I then made the Velociraptor server publicly accessible, tweeted about it, and stated that it would be available for three days.  


     Interestingly enough, and much to my surprise, many people took me up on the opportunity and requested access to the server.  Everyone who requested access got a chance to play with the telemetry collected by the Velociraptor server.  The playground remained up for the promised three days, and on Friday we did a walkthrough of many of the steps outlined in the CTF instructions.

     Below is the walkthrough of the CTF/practice questions, with a screenshot of the data for each.  Prior to taking down the playground, I used a custom offline Velociraptor collector to create a triage image of the compromised server.  By the time you read this, the infrastructure will no longer be up.  Nevertheless, you can follow along and find the evil by analyzing the triage image with your tools of choice.
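     As an aside, if you want to build a similar triage collection yourself, the Velociraptor binary can collect artifacts offline without a server.  A minimal sketch, run as Administrator on the target; the binary version, artifact choice, and flags here are illustrative and not the exact collector used for this playground:

velociraptor-v0.6.5-windows-amd64.exe artifacts collect Windows.KapeFiles.Targets --args KapeTriage=Y --output triage.zip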

     Download the triage image from here...  

     Watch the walkthrough here...


 

Incident Background

The compromised client is a Windows Server 2019 machine. The server is in the AWS cloud and is currently compromised with an active backdoor that has an ESTABLISHED connection to an attacker machine, also in the AWS cloud. The backdoor was created by us using the Sliver C2 framework. All machines are owned and controlled by us. This is a practice playground: although you are going to be hunting for malware, it is considered a SAFE playground and you will not (should not) get infected.

Hunting for Evil

Objective: Your goal is to find how the system was compromised.
All of the artifacts that you need to get your answers have already been collected; read the collected data to find them. For example:

Look for Running Processes

Objective: Use the artifact's ability to check for unsigned binaries to find untrusted running processes.

Hunt Artifact: Windows.System.Pslist. Post-process your artifact collection with the notebook below.

Notebook:

SELECT Pid,Name,Exe, Hash.MD5 AS md5hash, Authenticode.Trusted AS signer
FROM source(artifact="Windows.System.Pslist")
WHERE signer =~ "untrusted"

And you get...




As you can see, there is one untrusted process that looks very suspicious. This is just the first artifact that you looked at; we recommend that you look at more.

Look for Network Connections

Objective: Use the artifact's ability to check for established connections to see if the process has networking capabilities.

Hunt Artifact: Windows.Network.Netstat. Post-process your artifact collection with the notebook below.

Notebook:

SELECT Timestamp,Pid,Name,Status,`Laddr.IP`,`Laddr.Port`,
geoip(ip=`Raddr.IP`,db='/velo/velo/public/GeoLite2-City.mmdb').country.names.en
AS Country,`Raddr.IP`,`Raddr.Port`,Fqdn
FROM source(artifact="Windows.Network.Netstat")
WHERE Status =~ "ESTAB"
AND NOT Country =~ "United States" 
And you get... 


Did the attacker RDP to the system?

Objective: Use the artifact to examine RDP connections to the server. Look for EventID 4624 with LogonType 10.

Hunt Artifact: Windows.EventLogs.RDPAuth

Notebook:

SELECT EventTime,EventID,LogonType,UserName,SourceIP,
geoip(ip=SourceIP,db='/velo/velo/public/GeoLite2-City.mmdb').country.names.en AS Country,Description
FROM source(artifact="Windows.EventLogs.RDPAuth")
WHERE EventID = 4624 
AND LogonType = 10



How did the attacker get initial access to the server?

Objective: Use the artifact to examine logon attempts to the server. You will see that this was a successful brute-force attack, with over 400 failed attempts and successful logons from the same IP.

Hunt Artifact: Windows.EventLogs.RDPAuth

Notebook:

SELECT EventTime,EventID,UserName,SourceIP,
geoip(ip=SourceIP,db='/velo/velo/public/GeoLite2-City.mmdb').country.names.en AS Country,
count() AS Count
FROM source(artifact="Windows.EventLogs.RDPAuth")
WHERE EventID = 4624
OR EventID = 4625
GROUP BY EventID,SourceIP




When was the malware executed by the attacker?

Objective: Use a custom artifact to get the execution time of the malware. The artifact, named Custom.Windows.EventLogs.SysmonProcessCreationID1, queries the Sysmon event logs on the machine for process creation events (Event ID 1).

Hunt Artifact: Custom.Windows.EventLogs.SysmonProcessCreationID1

Notebook:

SELECT * FROM source(artifact="Custom.Windows.EventLogs.SysmonProcessCreationID1")
WHERE image =~ "dllhost"
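     The custom artifact itself is not included here, but a minimal sketch of the VQL it might contain, assuming Sysmon logs in their default location and standard Sysmon Event ID 1 field names, looks something like this:

SELECT timestamp(epoch=System.TimeCreated.SystemTime) AS EventTime,
       EventData.Image AS image,
       EventData.CommandLine AS CommandLine,
       EventData.ParentImage AS ParentImage
FROM parse_evtx(filename="C:/Windows/System32/winevt/Logs/Microsoft-Windows-Sysmon%4Operational.evtx")
WHERE System.EventID.Value = 1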

Does the malware have persistence?

Objective: Analyze the system to determine persistence. Use a custom artifact to execute Autoruns on the system.

Hunt Artifact: Custom.Windows.Sysinternals.Autoruns

Notebook:

SELECT count() AS Count, Time, Signer, Entry,Category,Profile,Description,`Image Path` AS ImagePath,`Launch String` AS LaunchString, Enabled,MD5 
FROM source(artifact="Custom.Windows.Sysinternals.Autoruns")
WHERE Enabled 
AND (NOT Signer OR Signer =~ "Not verified")
GROUP BY ImagePath,LaunchString


     There were a few more artifacts collected at the time of the practice scenario, but the ones above contain the notebooks with the most value.  We are definitely planning on running another one of these in the future.
     If you thought this was interesting and you want to take part in the next one, please follow me on Twitter for updates on dates. @carlos_cajigas

Wednesday, June 3, 2020

Installing a Velociraptor Server on Ubuntu 18.04

Update on 2023/10/12.  The Velociraptor project is still going stronger than ever, but this post is now old and outdated.  

--- 

     What if I were to tell you that there is a free way to query one hundred hosts or more in a matter of 20 seconds and get visibility into whether they may be compromised?  Would you believe me?  But most importantly, would you take advantage of it?  Would you like to know how to do it with Velociraptor?  If the answer to any of these questions was a “yes”, then you are reading the right blog post.

     Velociraptor from Velocidex is a free, open-source utility created by Mike Cohen, one of the lead developers of Rekall and GRR.  “With a solid architecture, a library of customisable forensic artifacts and its own unique and flexible query language, Velociraptor provides the next generation in endpoint monitoring, digital forensic investigations and cyber incident response.” (Source: Velocidex).

     Mike has many years of experience building forensic utilities, and we think that Velociraptor was created with the incident responder in mind.  Velociraptor allows you to query just about anything on the system and collect just about anything that you want.  It can scale to over 100 thousand hosts, and the coolest thing is that you can extend it by creating your own collection artifacts using the Velociraptor Query Language (VQL).  VQL is an expressive query language, similar to SQL, that allows you to quickly and easily adapt Velociraptor to do what you want it to do without needing to modify any of the source code or deploy additional software.

     Writing VQL queries is beyond the scope of this post, but we may consider writing an article to help you get started with VQL.  The purpose of this post is to walk through standing up a Velociraptor server on a Linux machine so that you can connect a single Windows machine to it.  We hope that once you feel comfortable with the steps described in this post, you will be ready to graduate into using these procedures to connect more clients to the server, and even into putting your Velociraptor server in the cloud.  Let’s get started.

 

What you need:

     In order to get started with Velociraptor and to follow along with this post, you are going to need two computers that can reach each other.  In other words, two systems that can ping each other or be on the same subnet. 

     One system will be a Linux machine and the other will be a Windows machine.  The Velociraptor server will be installed on Ubuntu 18.04, and the Windows machine that we connect to it will be a Windows 10 machine.  For the purposes of this article, we will install each of these machines using the free virtualization software VMware Player.

 

Installing the Velociraptor Server:

     The first step in the process is undoubtedly getting the Velociraptor server up and running and getting the web GUI working.  To accomplish that, we started by creating a new virtual machine with Ubuntu 18.04 and the default hardware configuration provided by VMware Player.  Two GB of memory is more than enough for this test server, but feel free to increase it to improve performance.

     We created one user, “ubuntu”, with a password of 1234, and gave the machine the hostname "velo01".  Yes, the password is short and weak; it is fine for a demo, but it is not secure.  Do not make this a publicly routable machine.


     Once the installation completes, log in to the Ubuntu machine and open a terminal window.  We will be working from the terminal window during the installation of the Velociraptor utility.

     Once the terminal window is open, you should be in the home directory of the user of your machine, mine is user “ubuntu”.  Next, elevate privileges to root with “sudo su”.  This is what my machine looks like at this moment.

     The next step is to download the Velociraptor Linux binary to the Ubuntu machine.  The easiest way to accomplish this is via the "wget" utility.  Navigate to this link, which should contain the latest release.  As of this writing, the latest release was version 0.4.4.

     Copy the link address to the Linux binary and paste it to the Ubuntu Terminal window.
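     It will look something like this (the URL below follows GitHub's release-link pattern; your copied link may differ):

# wget https://github.com/Velocidex/velociraptor/releases/download/v0.4.4/velociraptor-v0.4.4-linux-amd64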


     Once the binary is on our machine, the next step is to add executable permissions to the binary file so that we can, well… execute it.  You can accomplish this via the chmod +x command, like so.

# chmod +x velociraptor-v0.4.4-linux-amd64

     
     Now that the Velociraptor binary has executable permissions, we can move on to installing Velociraptor.  Velociraptor uses two separate locations to store data: one for the “Data Store” and the other for its logs.  In order to keep both of these locations under one directory, we like to create the two directories using the commands below.  Mkdir is the command to make a directory, and the -p creates the parent directory in the event that it doesn’t already exist.

# mkdir -p /velo/velo

# mkdir -p /velo/logs

     The “Data Store” will be stored in “/velo/velo” and the logs will be stored in “/velo/logs”.  The reason why we like to put these directories under the “/velo” directory is simply for organization purposes; the “Data Store” can be placed anywhere that you like.  The “Data Store” is extremely important, as it holds information about your clients and whatever data Velociraptor collects from them, so remember its location!

     With the location of the “Data Store” now defined, let’s move on to the next step, which is to create the pair of configuration files that Velociraptor needs.  One configuration file is for the server and the other is for the clients.  The client configuration file lets the clients know the location of the Velociraptor server and also contains information on how to securely communicate with it.

     We can finally begin the process of creating these configuration files by executing the Velociraptor executable interactively like so.

# ./velociraptor-v0.4.4-linux-amd64 config generate -i

     Use arrows to move


     Press enter as we will in fact be using a Linux server

     The next question is: Will you be using the file-based “DataStore” or “MySQL”?  I have heard Mike mention that at around 10,000 clients you should consider using “MySQL”.  For now, we will just use the FileBaseDataStore; press enter.


     The next question is:  What is the path to the DataStore?  I placed mine under “/velo/velo”, so I entered that location and pressed enter.


     The next question is:  What kind of deployment will you need?  Let us talk about the different options...

> Self Signed SSL.   In a self-signed SSL deployment, communication between you, the user, and the Velociraptor web GUI is established using TLS with self-signed certificates.  When you use the browser to log into the Velociraptor GUI, the communications are authenticated with Basic Auth, and the GUI binds to localhost only, on port 8889 by default.  This means that when you fire up the browser to log into the GUI, you must navigate to https://localhost:8889/.  For the purposes of this test this will work just fine, and we will use it, but as you graduate from testing, the other two forms of deployment offer better options.

> Automatically provision certificates with Lets Encrypt.  In this type of deployment, you assign a domain during the Velociraptor installation, and Velociraptor automatically uses the Let's Encrypt protocol to obtain and manage its own certificates.  This allows you to put a server in the cloud that you can access by going to a domain that you control, but the GUI authentications still use the username and password authentication that you set up, and MFA cannot be used.

> Authenticate users with Google OAuth SSO.  In this type of deployment, the verification of the user’s identity is done via Google’s OAuth mechanism, and you no longer need to assign a password to the users.  Additionally, with this kind of deployment, MFA can be used.  This is where you ultimately want to end up; this type of deployment is sweet, and the authentication is seamless.  As you can imagine, it does take a little bit of setup, time, and practice to get good with it.

     We will use Self Signed SSL because we are just testing; it is fast, and it works.  Press enter.


     The next question is:  What is the port of the frontend?  Leave it at the default and press enter.


     The next question is an important one:  What is the DNS name of the frontend?  In a self-signed deployment, you can leave this at the default and it will bind to localhost, but I want to edit this and assign a DNS name so that our Windows client can resolve a DNS name to an IP, just as if this were a cloud deployment.  This lets us practice using DNS without having to assign a name to a public IP address.

     We gave it a DNS name of “velo01.com” and pressed enter.


     Accept the default port for the GUI, and answer “no”, since we are not using Google DNS.


     This is where you add your users’ usernames and passwords for the GUI.  I kept it simple, with a username of carlos and a password of 1234.  Again, not secure; this is just a demo…   Press enter when you are done adding your users.


     Enter the location of the logs directory and press enter.


     Finish by pressing enter to save the client and server configuration files to your current directory


     Our configuration is now finished, and we are ready to install Velociraptor on this Ubuntu machine.  To do that, we create Linux packages for both the server and the clients.  Execute this command to create the server package...

# ./velociraptor-v0.4.4-linux-amd64 -c server.config.yaml debian server

And this command for the client..

# ./velociraptor-v0.4.4-linux-amd64 -c server.config.yaml debian client

     This grabs the configuration files and packages them inside the .deb files so that they can easily be handled by a package installer like “dpkg”.


     Since the server package is ready for installation, let’s go ahead and install it with “dpkg”.  Execute the below command to install the Debian server package on this Ubuntu machine.  This will get the Velociraptor utility persistently installed on the machine and ready to let us log in using the previously created username and password.
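     A minimal sketch of that command, assuming the generated package follows the velociraptor_<version>_server.deb naming pattern (check the exact file name in your current directory):

# dpkg -i velociraptor_0.4.4_server.deb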


     When the package is installed on your Ubuntu machine, the installation process also creates a new user called “velociraptor”, which should be the “owner” of the directories created during installation and used by the Velociraptor utility.  This doesn’t always occur, since the creation of the directories and the installation of the package were done by user “root”.  My recommendation is to take a quick peek at the “/velo” directory to see who currently owns it.  To check the “/velo” directory ownership, run ls -l on /velo and see…

# ls -l /velo/


     Notice that the “logs” directory is still owned by “root”.  We can fix this with the “chown” command.  Do you remember how we earlier recommended keeping the “DataStore” and the “logs” directories under one parent?  It was also for this reason.  Run the “chown” command in recursive mode on the “/velo” directory to fix ownership, like so!

# chown -R velociraptor:velociraptor /velo


   All fixed now. 

     We are now ready to log into the GUI.  We need to test the GUI and make sure that it works before we can deploy the configuration file to client machines.  You can log into the GUI of a self-signed SSL deployment by navigating to https://localhost:8889/.  Open your browser on the Ubuntu server and navigate to that URL.


     If you get the security warning, you have done everything right so far.  This happens because the GUI is served over TLS with a self-signed certificate, and the browser is letting you know.  This is normal.  Accept the warning and continue to the Velociraptor GUI.  You should see a prompt asking you for the username and password created during the Velociraptor configuration process.


     Enter the username and password created earlier and log in.  Congratulations.  Marvel at the beauty of Velociraptor!


     Awesome! You now have a Velociraptor server that is waiting for clients to connect to it.  If you click on the “magnifying glass” you can see the clients that are currently talking to your server.


     Unfortunately, there are no clients talking to it.  Let us change that.  In order to get a client communicating with your Velociraptor server, you are going to have to extract the “client.config.yaml” file from your Velociraptor server and get it over to the Windows machine, which is our client in this demo.  The client machine needs to know how to communicate with the server, and all the information that it needs is stored in that file.

     We have tested Velociraptor so far on Windows 7, Windows 10, Server 2012, and Server 2016, and the Velociraptor client has worked perfectly on all versions.

     The easiest method of installation that I have found so far has been installing the Velociraptor client on a Windows machine via the Windows MSI installation package.  The MSI can be downloaded from here.  As of this writing, Mike has not made the MSI for 0.4.4 accessible, but the MSI for version 0.4.2 should work for 0.4.4 as well, as you will see below.  The Velociraptor MSI is signed, which means that it should get treated as a trusted executable: Windows Defender should allow the installation and should not quarantine it.  The MSI installs Velociraptor as a service that starts at boot.  When the service starts, it attempts to load a configuration file named velociraptor.config.yaml from the C:\Program Files\Velociraptor directory.

     This means that if the C:\Program Files\Velociraptor directory doesn’t exist, you will need to create it.  It also means that the client.config.yaml configuration file needs to be renamed to velociraptor.config.yaml after it has been added to the C:\Program Files\Velociraptor directory along with the MSI.
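     From an elevated PowerShell prompt, that can look like this (a sketch that assumes client.config.yaml is in your current directory):

New-Item -ItemType Directory -Path 'C:\Program Files\Velociraptor' -Force
Copy-Item .\client.config.yaml 'C:\Program Files\Velociraptor\velociraptor.config.yaml'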

     It should look like this…


     OK.  We are just one step away from getting this Windows machine talking with the Velociraptor server, but in order to do that, we need to tell the Windows machine how to resolve the DNS name that we assigned to Velociraptor.  One easy way to accomplish this is by editing the hosts file on the Windows machine, located at C:\Windows\System32\drivers\etc\hosts.
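     Add one line to the hosts file: the IP of your Ubuntu server followed by the DNS name that we assigned earlier.  The IP below is just an example; use your own server's IP:

192.168.1.50    velo01.com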


     Finally, right click on the MSI and select install. 


     If everything worked your directory should look like this.


     Even if your directory on the Windows machine looks like ours, nothing beats checking the Velociraptor server to see if it worked.  Go back to the Velociraptor GUI and refresh the screen and boom!  Success!


     You are now ready to query the client machine or machines (plural!!!), using Velociraptor.  This was just the first step needed to connect a client to the Velociraptor server so that you can get visibility into the hosts that are talking to the Velociraptor server.  Continue to practice with the utility and you will see that out of the box, it can do a lot.  Use it to query the running processes, established network connections, and even installed services (like Velociraptor) on your client machines.  The possibilities are endless. 



Conclusion:

     Velociraptor is a very powerful free utility that can give you visibility into the host or hosts within your organization.  If this article and its procedures helped you at all, we would like to hear from you.  You can leave a comment and reach me on twitter: @carlos_cajigas or email carlos at CovertBitForensics.com

About the Author: Carlos brings his deep forensic experience from the West Palm Beach Florida Police Department where he served as a digital forensics detective, examiner, and instructor specializing in computer crime investigations. Carlos shares his expertise in his classes on how to directly target specific files and folders that can yield the biggest amount of answers in the least amount of time - "That way you can have answers within minutes rather than within hours," he says, when he teaches the FOR500 and FOR508 courses for the SANS Institute.  Carlos is currently the Managing Partner and Chief Technical Officer of Covert Bit Forensics, a firm specializing in Digital Forensic Investigations.

Saturday, October 12, 2019

Use KAPE to collect data remotely and globally

     If you have been following along with the amazing utility that KAPE is, then you are aware that it is a game changer for the forensics community.  KAPE, from Eric Zimmerman, makes it possible for a first responder to collect anything from a compromised machine and automatically process the collected data.  This allows the first responder to go from nothing to having a triage image and results that can quickly be analyzed.  KAPE is able to accomplish this by giving you complete control of the data to acquire and process, through a series of targets and modules that you can select, edit, and even create.  All of this functionality is available 100% for free.

     Well, if the benefits of using this utility for acquisition and processing are so apparent, can we then use KAPE to acquire data remotely from one or 100 machines?  Allow me to tell you that the answer is an absolute YES!  As a matter of fact, what if I were to tell you that you can use this utility to acquire data from a machine outside of your network, anywhere in the world?  Would you take advantage of it?  Of course you would!  I will show you how.

     Eric has even added functionality to send the acquired and processed data to an SFTP server.  This means that KAPE can be used to collect and process anything from a machine and subsequently send the results to an SFTP server that you control.   This receiving SFTP server can be set up locally in your organization or it can be publicly in the cloud.  How cool is that!! 

     The purpose of this article is to show you how to use KAPE to collect data from a system, or 100 systems, and send the data from all of those systems to an SFTP server in the cloud.  This opens up the possibility of executing KAPE on a machine outside of your environment and still being able to send the data from that machine to a cloud server that you control.  This can be useful if you are in consulting and/or working with a client remotely.  Send your client one command and done.  Let’s get to it!

What we need:

     In order for us to be able to pull this off, and be able to acquire data from a machine and send it to an SFTP server, we are going to need four things:
1) KAPE
   KAPE can be downloaded from here
2) A powershell script
   This PowerShell script is a very simple script containing the list of commands required to download and run KAPE.
3) A web-server
   This web-server is going to host and serve KAPE.  This can be a local web-server or a web-server in the cloud.
4) An SFTP server
   This SFTP server will receive and store the data sent from KAPE.  This can be a local SFTP server or an SFTP server in the cloud.  It can be the same as the web-server.

Setting up the Environment:  

     OK, with the “what we need” out of the way, let us now talk about how to actually do this.  In this section I am going to go over each one of the elements of setting up this technique for data collection.

1) KAPE
     The number one requirement for this technique to work is KAPE.  Download KAPE and unzip it.  Play with it and be sure that you know how to use it.  This article assumes that you are familiar with KAPE and you know how it works.  We will not be talking about how to run KAPE, as this is an article on how to use it remotely, not on how to run it.  If you have created targets and modules for KAPE, add them.  If you have an executable that is required by one of your modules, add it to the BIN directory.  Once your files are where you want them, ZIP it all up.  We will push this ZIP file to the web server in the cloud a little later.

2) A powershell script
     The next step of this technique is an important one: creating a PowerShell script that holds the commands needed to download and run KAPE on the remote machine.  Since the purpose of this article is getting KAPE to run remotely, we need a way to get KAPE to the remote machine before it can run.  While speaking with my friend Mark Hallman about how to accomplish this, he gave me the idea of hosting KAPE on a web-server and simply having the remote machine download and run it.  That is exactly what this PowerShell script is going to do.  It will download KAPE, unzip it, run it, send the data to the SFTP server, and then clean up.  All of that can be accomplished with the following lines.

Invoke-WebRequest -Uri http://10.10.10.10/KAPE.zip -OutFile .\KAPE.zip
     This command downloads the previously zipped-up KAPE.zip file, hosted on a web-server at sample IP 10.10.10.10, and writes it to the current directory as KAPE.zip

Expand-Archive -Path .\KAPE.zip
     This command expands the newly downloaded zip file

KAPE\KAPE\kape.exe --tsource C: --tdest C:\temp\kape\collect --tflush --target PowerShellConsole --scs 10.10.10.10 --scp 22 --scu user --scpw 12345678 --vhdx host --mdest C:\temp\kape\process --mflush --zm true --module NetworkDetails --mef csv --gui
     This command runs KAPE.  This is a sample KAPE execution; KAPE will run your targets and modules of choice, depending on what it is that you intend to capture and process.  The acquired data is going to be sent to an SFTP server listening on port 22 at sample IP 10.10.10.10 with user “user” and password “12345678”.  The KAPE command, server location, port, user, and password need to be changed to match the server that you created.  This SFTP server can be the same as the web server that serves the KAPE zip file downloaded by this PowerShell script.
   
Remove-Item .\KAPE\ -Recurse -Confirm:$false -Force
     This command removes the KAPE directory that was unzipped, the same directory that contained your KAPE executable, your targets and modules, and your files in the BIN directory.

Remove-Item .\KAPE.zip
     This command removes the KAPE zip file that was downloaded by the Invoke-WebRequest at the beginning of the script.

     Folks, that is it.  That is the entirety of the PowerShell script.  Get these five commands together in a text file, save it with a .ps1 file extension, and your script is ready.  Alternatively, you can download a sample script from my github here.
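     Assembled with the sample values from above, the whole script looks like this:

Invoke-WebRequest -Uri http://10.10.10.10/KAPE.zip -OutFile .\KAPE.zip
Expand-Archive -Path .\KAPE.zip
KAPE\KAPE\kape.exe --tsource C: --tdest C:\temp\kape\collect --tflush --target PowerShellConsole --scs 10.10.10.10 --scp 22 --scu user --scpw 12345678 --vhdx host --mdest C:\temp\kape\process --mflush --zm true --module NetworkDetails --mef csv --gui
Remove-Item .\KAPE\ -Recurse -Confirm:$false -Force
Remove-Item .\KAPE.zip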

3) A web-server
     The next step of this technique is to set up a web-server to host the KAPE.zip file.  This web-server can be hosted locally or publicly in the cloud.  For the purposes of this article and for ease of use, I decided to host an Ubuntu web-server in AWS.  I installed Apache on it and made sure that port 80 was publicly accessible.  I then uploaded the KAPE.zip file and the X1027_Get_DoKape script to /var/www/html so that both could be served.



4) An SFTP server
     The last step is to set up an SFTP server that can receive the data that KAPE will send to it.  This requires a server with SSH listening on the port of your choice, traditionally port 22, but it doesn't have to be.  For the purposes of this article, I went ahead and used the same server from step 3.  Now, a couple of things to keep in mind: since this is an SSH server in the cloud, I created a new user with a very strong password and changed the SSH port from 22 to another port.  If you are also going to use AWS, don't forget to edit the sshd_config file so that it can accept password authentication.
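     The relevant sshd_config directives look like this (the port number is just an example; restart the ssh service after editing):

Port 2222
PasswordAuthentication yes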

     That's it.  That is all the setup that is required.  We are now ready to test this.

The Test:

     Now that the environment has been set up, all you have to do is get the remote machine to run the PowerShell script.  If you want to accomplish this task against 100 machines, one way to do it is to push the script to the remote machines via Invoke-Command.  This requires PowerShell remoting to be enabled in your organization, and if it is, you are in luck.

     Prior to pushing the script to the remote machine, I like to test the Invoke-Command execution.  One way that you can do this is by issuing the below command.  This command connects to the remote machines and runs “hostname” to confirm that the machines you want to talk to are in fact responding.

Invoke-Command -ComputerName (Get-Content .\computers.txt) -Credential carlos -ScriptBlock {hostname}



     As you can see, we were able to communicate with two systems via the Invoke-Command execution.  One was labeled FOR01 and the other was labeled FOR02.  These are exactly the two hostnames inside of the computers.txt file.

    We are now ready to push the script over to the machines.  This is an example of how to push the script using Invoke-Command.

Invoke-Command -ComputerName (Get-Content .\computers.txt) -Credential carlos -FilePath .\X1027_Get_DoKape_V0.1.ps1



     This command pushes the PowerShell script named X1027_Get_DoKape_V0.1.ps1 to a list of hosts contained inside of “computers.txt”, using credentials for user “carlos”.  I ran this command against two machines named FOR01 and FOR02.  Two minutes later, the data was waiting for me on the SFTP server in AWS.



     All that is left to do now is to download the data from the SFTP server to your examination machine and begin your analysis.

     There is one more thing that I would like to talk about...  Do you remember that we mentioned that this technique could be used globally as well?  Since this technique is started by running a PowerShell script, we can host that PowerShell script on any web-server and it will do its job.  As a demonstration of this capability, I launched a Windows Server 2016 instance in AWS and connected to it via RDP.


     I then ran the below command in an Administrator prompt from the “Desktop” directory.  This command goes to our web-server, downloads the script, and starts it.
PowerShell "Invoke-Expression (New-Object System.Net.WebClient).downloadstring('http://52.41.38.678/X1027_Get_DoKape_V0.1.ps1')"



     A few minutes later, the data from the server was sitting in our “collection” server, available to be analyzed.



     It should be noted that we ran this command from a CMD prompt, not even a PowerShell prompt.  You can simply email this command to your client, and it can be reused on as many systems as they like, anywhere in the world.  I hope that you like that!

Conclusion:  

     This is an amazingly powerful utility that is, or will become, the de facto standard for triage acquisition and processing.  I hope that you are able to use this technique during your acquisitions.  If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on twitter: @carlos_cajigas



Saturday, December 5, 2015

Mounting the VMFS File System of an ESXi Server Using Linux

  It won't happen very often that you find yourself holding a hard drive that belonged to an ESXi server.  These servers usually house production machines that just don't get shut down very often, and the decision to turn one off is one that I am sure was not made lightly.  Whatever the scenario, it is what it is.  It wasn't your call; the client decided to shut down their ESXi server and subsequently shipped it to you for analysis.  Now you have the drive in your hand, and you have been tasked with extracting the Virtual Machines out of the drive for analysis.

   The underlying file system of an ESXi server is the VMFS file system, which stands for Virtual Machine File System.  VMFS is VMware, Inc.'s clustered file system, used by the company's flagship server virtualization suite, vSphere.  It was developed to store virtual machine disk images, including snapshots.  Multiple servers can read/write the same file system simultaneously while individual virtual machine files are locked (Source: Wikipedia).

   As of the date of this writing, not all of the big forensic suites have the ability to read this file system.  And I can understand why, as it is extremely difficult for the commercial suites to offer support for all available file systems.  Fortunately for us, it is very possible to read this file system using Linux.

   The purpose of this article is to go over the steps required to mount the VMFS file system of the drive from an ESXi server.  Once access to the file system has been accomplished, we will acquire a Virtual Machine stored on the drive. 

Installing the Tools:

   For you to be able to accomplish the task, you will have to make sure that you have vmfs-tools installed on your Linux examination machine.  You can get the package from the repositories by running $ sudo apt-get install vmfs-tools.  Vmfs-tools is included by default in LosBuntu.  LosBuntu is our own Ubuntu 14.04 distribution that can be downloaded here.  If you download and boot your machine with LosBuntu, you will be able to follow along and have the exact same environment described in this write-up.   

The Test:

   To illustrate the steps of mounting the partition containing the VMFS file system, I will use a 2TB hard drive with ESXi 6.0 installed on it.  This drive is from an ESXi server that I own.  The drive is currently housing some virtual machines that we will be able to see once the file system is mounted.  I booted an examination machine with a live version of LosBuntu and connected the drive to the machine.  LosBuntu’s default behavior is to never auto-mount drives.

   Now, fire up the terminal and let's begin the first step of identifying the drive.  Usually the first step involves running fdisk so that we can identify which physical assignment was given to the drive.  Running $ sudo fdisk -l lists the physical drives attached to the system; the flag -l tells fdisk to list the partition table, and sudo gives fdisk superuser privileges for the operations.  Press enter and type the root password (if needed, the pw is "mtk").

$ sudo fdisk -l



   Not shown on the screen is /dev/sda, which is my first internal drive; therefore, /dev/sdb should be the drive of the ESXi server.  The output of fdisk gives us a warning that /dev/sdb may have been partitioned with GPT and that fdisk was unable to read the partition table.  Fdisk is telling us to use parted, so let’s do that.  The following parted command will hopefully get us closer to what we need.

$ sudo parted /dev/sdb print



  From the output, we can see that yes, it is indeed a GPT-partitioned drive containing multiple partitions.  The last displayed partition, which is actually partition number three, looks to be the largest partition of them all.  Although parted was able to read the partition table, it was unable to identify the file system contained in partition three.  We currently have a strong suspicion that /dev/sdb is our target drive containing our target partition, but it would be nice to have confirmation.  Let's run one more command.

$ sudo blkid -s TYPE /dev/sdb*

   Blkid is a command that has the ability to print or display block device attributes.  The flag -s TYPE will print the file system type of the partitions contained in /dev/sdb.  We used an asterisk “*” after sdb so that blkid can show us the file system types of all partitions located in physical device sdb like sdb1, sdb2, sdb3 and so on. 



   Finally, we can now see that /dev/sdb3 is the partition that contains the VMFS volume. 

   To mount the file system, we are going to have to call upon vmfs-fuse, which is one of the commands contained within the vmfs-tools package built into LosBuntu.  But before we call upon vmfs-fuse, we need to create a directory on which to mount the VMFS volume.  Type $ sudo mkdir /mnt/vmfs to create our mount point.

   Mount the VMFS file system contained in /dev/sdb3 to /mnt/vmfs with the below command

$ sudo vmfs-fuse /dev/sdb3 /mnt/vmfs/



   As you can see, the execution of the command simply gave us our prompt back.  As my friend Gene says.  “You will not get a pat on the back telling you that you ran your command correctly or that it ran successfully, so we need to go check.”  True and amusing at the same time…

   Check the contents of /mnt/vmfs by first elevating our privileges to root with $ sudo su, and then listing its contents with # ls -l /mnt/vmfs.



   Great! We can read the volume and we see that we have many directories belonging to Virtual Machines.  From here you can remain in the terminal and navigate to any of these directories, or you can fire up nautilus and have a GUI to navigate.  The following command will open nautilus at the location of your mount point as root.  It is important to open nautilus as root so that your GUI can have the necessary permissions to navigate the vmfs mount point that was created by root. 

# nautilus /mnt/vmfs



   Insert another drive into your examination machine and copy out any of the Virtual Machines that are in scope.

   Another option would be to make a forensic image of the Virtual Machine.  For example, we can navigate to the Server2008R2DC01 directory, which houses the Domain Controller used on the previous write-up about examining Security logs.  Find that article here.



   In this specific instance, this Virtual Machine does not contain snapshots.  This means that the Server2008R2DC01-flat.vmdk is the only virtual disk in this directory responsible for storing the data on disk about this server.  If the opposite were true, you would have to collect all of the delta-snapshot.vmdk files to put back together at a later time. 

   The Server2008R2DC01-flat.vmdk file is a raw representation of the disk.  It is not compressed and can be read and mounted directly.  The partition table can be read with the Sleuth Kit tool mmls, which displays the partition layout of volumes.  Type the following into the terminal and press enter.  The flag -a shows allocated volumes, and the flag -B includes a column with the partition sizes in bytes.

# mmls -aB Server2008R2DC01-flat.vmdk



   You can see that the 50GB NTFS file system starts at sector offset 206848.
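   Because the flat .vmdk is raw, you can even mount that NTFS partition directly by converting the offset to bytes.  A minimal sketch, assuming 512-byte sectors (206848 x 512 = 105906176) and a mount point that you create first:

# mkdir /mnt/win

# mount -o ro,loop,offset=105906176 Server2008R2DC01-flat.vmdk /mnt/win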

   If you want to acquire this virtual disk in E01 format, add the flat .vmdk file to Guymager as a special device and acquire it to another drive.
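   If you prefer to stay in the terminal, the same acquisition can be done with ewfacquire from the libewf package; the target path below is just an example:

# ewfacquire -t /media/evidence/Server2008R2DC01 Server2008R2DC01-flat.vmdk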



And there you have it!

Conclusion:

   Using free and open source tools you have been able to mount and acquire images of Virtual Machines contained in the file system of a drive belonging to an ESXi server.  If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on twitter: @carlos_cajigas