Wednesday, December 31, 2014

Analyzing Plain Text Log Files Using Linux Ubuntu

   Analyzing large amounts of plain text log files for indications of wrongdoing is not an easy task, especially if it is not something that you are accustomed to doing all the time.  Fortunately, getting decent at it can be accomplished with a little bit of practice.  There are many ways to go about analyzing plain text log files, but in my opinion a combination of a few built-in Linux tools and a BASH terminal can crunch results out of the data very quickly.

   I recently decided to stand up an SFTP server at home so that I could store and share files across my network.  In order to access the server from the public internet, I created strong passwords for my users, forwarded the ports on my router and went out for the weekend.  I accessed the server from the outside and shared some data.  From the time that I fired it up to the time I returned, only 30 hours passed.  I came back home to discover that the monitor attached to my server was going crazy trying to keep up with the flood of unauthorized login attempts scrolling by.  It became evident that I was under attack.  Oh my! 

   After some time and a little bit of playing around, I was able to get the situation under control.  Once the matter was resolved, I couldn't wait to get my hands on the server logs. 

   In this write-up, we will go over a few techniques that can be used to analyze plain text log files for evidence and indications of wrongdoing.  I chose to use the logs from my SFTP server so that we can see what a real attack looks like.  In this instance, the logs are auth.log files from a BSD install, recovered from the /var/log directory.  Whether they are auth.log files, IIS, FTP, Apache, or firewall logs, or even a list of credit cards and phone numbers, as long as the logs are plain text files, the methodology followed in this write-up will apply and should be repeatable.  For the purposes of the article I used a VMware Player Virtual Machine with Ubuntu 14.04 installed on it.  Let's get started.

Installing the Tools:

  The tools that we will be using for the analysis are cat, head, grep, cut, sort, and uniq.  All of these tools come preinstalled in a default installation of Ubuntu, so there is no need to install anything else. 

The test:

  The plan is to go through the process of preparing the data for analysis and then analyzing it.  Let's set up a working folder that we can use to store the working copy of the logs.  Go to your desktop, right click on it and select “create new folder”, then name it “Test”.



  This will be the directory that we will use for our test.  Once created, locate the log files that you wish to analyze and place them in this directory.  Tip: Do not mix logs from different systems or different formats into one directory.  Also, if your logs are compressed (ex: zip, gzip), uncompress them prior to analysis.



   Open a Terminal Window.  In Ubuntu you can accomplish this by pressing Ctrl-Alt-T at the same time.  Once the terminal window is open, we need to navigate to the previously created Test folder on the desktop.  Type the following into the terminal.

$ cd /home/carlos/Desktop/Test/

   Replace “carlos” with the name of the user account you are currently logged on as.  After doing so, press enter.  Next, type ls -l followed by enter to list the data (logs) inside of the Test directory.  The flag -l uses a long listing format.



   Notice that inside of the Test directory, we have 8 log files.  Use the size of the log (listed in bytes) as a starting point to get an idea of how much data each one of the logs may contain.  As a rule, log files store data in columns, often separated by a delimiter.  Common delimiters are commas (as in csv files), spaces, or even tabs.  Taking a peek at the first few lines of a log is one of the first things that we can do to get an idea of the number of columns in the log and the delimiter used.  Some logs, like the IIS logs, contain a header.  This header indicates exactly what each one of the columns is responsible for storing.  This makes it easy to quickly identify which column is storing the external IP, port, or whatever else you wish to find inside of the logs.  Let's take a look at the first few lines stored inside of the log titled auth.log.0.  Type the following into the terminal and press enter.

$ cat auth.log.0 | head

   Cat is the command that prints file data to standard output; auth.log.0 is the filename of the log that we are reading with cat.  The “|” is known as a pipe.  A pipe is a technique in Linux for passing information from one program to another.  Head is the command to list the first few lines of a file.  Explanation: what we are doing with this command is using the tool cat to send the data contained in the log to the terminal screen, but rather than sending all of the data in the log, we are “piping” the data to the tool head, which displays only the first few lines of the file; by default it displays ten lines. 
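   If you are not following along with logs of your own, sshd entries in an auth.log generally look something like the two lines below.  These are illustrative only (the IP comes from the reserved documentation range, not from my actual logs):

Dec 27 18:24:01 server sshd[2925]: Invalid user admin from 203.0.113.5
Dec 27 18:24:03 server sshd[2925]: Failed password for invalid user admin from 203.0.113.5 port 49172 ssh2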



   As you can see, this log contains multiple lines, and each one of the lines has multiple columns.  The columns account for the date, the time, and even descriptions of authentication attempts and failures.  We can also see that each one of these columns is separated by a space, which can be used as a delimiter.  Notice that some of the lines include the strings “Invalid user” and “Failed password”.  Right away, we have identified two strings that we can search for across all of the logs.  By searching for these strings we should be able to identify instances of when a specific user and/or IP attempted to authenticate against our server.   

   Let's use the “Invalid user” string as an example and build upon our previous command.  Type the following into the terminal and press enter.

$ cat * | grep 'Invalid user' | head



   Just like in our previous command, cat is the command that prints the file data to standard output.  The asterisk “*” after cat tells cat to send every file in the current directory to standard output.  This means that cat was told to send all of the data contained in all eight logs to the terminal screen, but rather than printing the data to the screen, all of the data was passed (piped) over to grep so that grep could search the data for the string 'Invalid user'.  Grep is a very powerful string searching tool that has many useful options and features worth learning.  Lastly, the data is once again piped to head so that we can see the first ten lines of the output.  This was done for display purposes only; otherwise over 12,000 lines containing the string 'Invalid user' would have been printed to the terminal, yikes!
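   As a quick taste of those options, grep can count matching lines per file all by itself, no cat or head required; the -c flag prints each filename followed by its count:

$ grep -c 'Invalid user' *

   Adding the -i flag would make the search case-insensitive, handy if you are not sure how a string is capitalized in the logs.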

   Ok, back to the task at hand.  Look at the 10th column of the output, the last column.  See the IP address of where the attempts are coming from?  Let's say that you were interested in seeing just that information from all of the logs, filtering for only the tenth column, which contains the IP addresses.  This is accomplished with the command cut.  Let's continue to build on the command.  Type the following and press enter.

$ cat * | grep 'Invalid user' | cut -d " " -f 10 | head



   In this command, after the data is searched for 'Invalid user', it is piped over to cut so that cut may print only the tenth column.  The flag -d tells cut to use a space as the delimiter.  The space is put in between quotes so that cut can understand it.  The flag -f tells cut to print only the tenth field (column).  Head was again used for display purposes only.
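   One caveat worth knowing before you lean on cut: cut treats every single space as a field separator, so syslog dates with a single-digit day (“Dec  1” has two spaces in it) can shift the field numbers.  If your output ever looks off, awk makes a handy substitute since it splits on runs of whitespace; here is a sketch of the same column extraction:

$ cat * | grep 'Invalid user' | awk '{print $10}' | head

   Next, let’s see all of the IP's in the logs by adding sort and uniq to our command.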

$ cat * | grep 'Invalid user' | cut -d " " -f 10 | sort | uniq



   In this command, head is dropped and sort and uniq are added.  As you may have imagined, sort will sort the text, and uniq is responsible for omitting repeated adjacent lines (which is why the data is sorted first).  This is nice, but it leaves us wanting more.  If you wanted to see how many times each one of these IP's attempted to authenticate against the server, the flag -c of uniq will count each instance of the repeated text, like so. 

$ cat * | grep 'Invalid user' | cut -d " " -f 10 | sort | uniq -c | sort -nr



   In this command, the instances of each IP found in the logs were counted by uniq and then sorted again by sort.  The flag -n performs a numeric sort and the flag -r reverses the order so that the highest counts are shown first. 
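   If the resulting list is still long, pipe the output back through head to cap it at the ten most persistent IP's.  The same recipe also works for the attempted usernames, which in my 'Invalid user' lines sit in the eighth column:

$ cat * | grep 'Invalid user' | cut -d " " -f 10 | sort | uniq -c | sort -nr | head
$ cat * | grep 'Invalid user' | cut -d " " -f 8 | sort | uniq -c | sort -nr | head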

   And there you have it.  Now we can see who was most persistent at trying to get pictures of my dog from my SFTP server. 

   Keep practicing.  Hopefully this helped you in getting started with the basics of cat, grep, cut, sort, and uniq. 

Conclusion:

   This is a quick and powerful way to search for specific patterns of text in a single plain text file or in many files.   If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on twitter: @carlos_cajigas 

Suggested Reading:

- Ready to dive to the next level of command line fun?  Check out @jackcr's article where he implements the use of a for loop to look for strings inside of a memory dump.  Awesome!  Find it here.


- Check out Ken's blog and his experience with running a Linux SSH honeypot.  Find it here.



Thursday, October 2, 2014

Acquiring Images of Virtual Machines From An ESXi Server

ESXi is an enterprise level computer virtualization product offered by VMware, the makers of VMware Player and Workstation.  ESXi can be used to facilitate the centralized management of many different types of Windows, Unix, and Linux systems.  Unlike its cousins Player and Workstation, ESXi runs directly on server hardware (bare metal) without the need for an underlying operating system.  Management of the ESXi server can be, and often is, done remotely. 

As this type of virtualized server environment continues to gain popularity, you are bound to encounter it more and more.  It seems that we see one of these in just about every case.  If you are in the DFIR world then you know that it is our responsibility to get usable images out of these things.

In this article we will go through the process of acquiring a Linux Virtual Machine (VM) running on an ESXi server.  Since there is more than one way of accomplishing this, we will talk about some of the options that are available to us.  We will talk about the pros and cons of each so that you can choose one when your time comes to implement it.   

Tools:

For the purposes of this article, all of the systems except for the ESXi server were installed and run using VMware Player.  Our host machine was an examination computer with Ubuntu 13.10 installed to the hard drive.  Except for the Windows VM, all of the tools used in the article are free and available for download.

The test:

The first thing we need to do is get ESXi up and running.  VMware offers a free version of their ESXi product that can be downloaded here.  To get the download, you are going to have to create an account with them.  Once you have the ISO, go ahead and install it.  There are many tutorials online on how to do this.  I don't want to go over the steps of installing ESXi here due to the many resources available and how much longer it would make the write-up.  We went ahead and installed ESXi version 5.5 on an old laptop with a Core 2 Duo processor and 4GB of memory.  It worked fine.  ESXi was installed using all of the defaults.  Upon startup the ESXi server was given an IP via DHCP.  It was assigned IP address 10.0.1.11.


ESXi servers can be managed locally from the command line or remotely from another host using the VMware vSphere Client, which as of this writing only works on Windows hosts.  Using another VMware Player VM with Windows 7, we installed the vSphere client and logged in to the ESXi server. 



We went ahead and installed one VM on the ESXi server.  The VM on the server is a fresh install of Ubuntu 14.04.  We named the VM Ubuntu14_04.  This will be the VM that we will use to go through the process of acquisition.


Now that we have an ESXi server up and running and a VM to acquire, we need to determine what our best course of action for acquisition is.  This will be decided by your conversation with the system administrator (sysadmin) and the company's ability to take the “server” off-line.  In most situations these are production servers that need to stay up so as not to disrupt the business.  If you are lucky and find that the sysadmin is willing to take the server off-line, then suspending the VM is probably the better method of acquiring a VM out of the server.  We will get to do this a little later.

For now, let's assume that the server is a money making server that cannot be taken off-line.  When faced with this constraint, we have conducted successful acquisitions by using a combination of SSH and an externally available destination to send the data to.  For example, you can SSH into the server and use the Linux native tool dd to acquire the virtual disks and send the acquisition image to a mounted NFS share. 

Let's simulate this scenario by using our previously created Ubuntu14_04 VM.  Notice that our last screenshot tells us that our Ubuntu VM was assigned IP address 10.0.1.12.  The command $ ssh carlos@10.0.1.12 can be used to SSH into the VM.  If you are following along, make sure that your VM has an SSH server installed.  Once the host key has been accepted and the password has been entered, you will be given a command prompt just as if you were on a shell terminal on the system.  Notice that my prompt changed to carlos@esxivm:~$ which is an indication that I am now inside of the VM.


The next step is to mount the NFS share that we will use to receive our acquisition image.  To simulate this part, we used another VM and the free version of the popular open source NAS solution called FreeNAS.  The FreeNAS VM was assigned IP address 10.0.1.13.      

Run the below command to mount the NFS share to the Ubuntu VM.  If you are following along make sure that your Ubuntu VM has nfs-common installed. 

$ sudo mount -t nfs -o proto=tcp,port=2049 10.0.1.13:/mnt/vol1 /mnt/nfs/


Mount is the command to mount the share, -t tells mount which type of mount it is about to do, and -o passes options to mount.  proto= tells mount to use tcp as the protocol, which is done to ensure compatibility with other operating systems.  port=2049 tells it which port to use.  10.0.1.13:/mnt/vol1 is the IP of the FreeNAS VM and the location of the share on the NAS.  /mnt/nfs is a previously created mount point used to mount the NFS.   
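Before writing anything to the share, a quick sanity check with df confirms that the mount succeeded and shows how much room is available on it:

$ df -h /mnt/nfs/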

We are now ready to send the acquisition to the NFS using dd.  We navigated to the share and kicked it off.

$ sudo dd if=/dev/sda of=esxivm.dd bs=4096 conv=sync,noerror


The acquisition completed successfully.  Mine took a while as I piped the data wirelessly rather than through the wire; speeds at your client location will vary depending on bandwidth.  In reference to image verification, since this was a live acquisition, doing a hash comparison at this point will not be of much value.  At the very least, check and compare that the size of your dd image matches the number of bytes contained in /dev/sda.  This can be accomplished by comparing the output of fdisk -l /dev/sda against the size of the dd.
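As a sketch of that comparison, the two commands below will do: fdisk prints the total byte count of /dev/sda from inside the VM, and ls -l shows the byte count of the image sitting on the share (esxivm.dd being the filename we used above).

$ sudo fdisk -l /dev/sda | grep Disk
$ ls -l /mnt/nfs/esxivm.dd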


Great, so now we have an image of our server sitting on a NAS and we need to bring it back to our system.  Well, you have options.  If the image is too large to pull back over the wire, you can have the sysadmin locate another system close to the NAS that has access to the NAS and also has USB 3.0.  This can even be a Windows system.  If this is not an option for you, and you have no choice but to pull it back over the network, use your favorite ftp or sftp tool to access the NAS and bring the image back to your system.  For the purposes of this article we used FileZilla.


And there you have it.  You have successfully acquired a live image out of a VM running on an ESXi server. 

Now let’s take some time and talk about another way of doing things.  If the sysadmin said he is good with taking the server off-line, then you are in luck and things just got easier for you.  Suspending the VM is also a good way of collecting an image of that VM.  When you suspend a VM on ESXi, the current state of the VM is saved so that it can later be loaded and reused.  Additionally, the VM's memory gets written to a .vmss file that contains the entire memory allotted to the VM.  This, in essence, can be considered a simulated way of acquiring memory from the VM.  The point is that with one click of a button you get the VM's memory and the current state of the VM's virtual disk.  The vmdk file will contain the entire raw virtual disk. 
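If you already have SSH access to the ESXi host itself, the suspend can also be triggered from its shell with vim-cmd.  The VM ID fed to the second command (1 in this sketch) is whatever the first command reports for your VM:

# vim-cmd vmsvc/getallvms
# vim-cmd vmsvc/power.suspend 1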


Once you have suspended the VM, the process of getting the data out of the server can be accomplished just like in our first scenario.  So let us first talk about the method which involves the combination of SSH and the NFS share. 

If the sysadmin allowed you to take their server off-line to suspend it, then he may also be willing to give you the password so that you can SSH into the ESXi server. 


Once you have SSH'ed into the server, locate the VM directory by navigating to
/vmfs/volumes/datastore1.  There, we located the directory for our Ubuntu14_04 VM.


The Ubuntu14_04 directory contained the following files.


Ubuntu14_04-167f1682.vmss        the memory file
Ubuntu14_04-flat.vmdk            the VM disk

Those are the main files that we need to get out of the server.  If you had to go the mounted share route, then the process of mounting the NFS goes like this.

# esxcfg-nas -a -o 10.0.1.13 -s /mnt/vol1 nas

Esxcfg-nas is the ESXi command to mount the share: -a is to add a share, -o is the address of the host, and -s /mnt/vol1 is the share directory to mount; finally, nas is the name that we are going to assign to the mounted NFS share.  Be aware that unlike in Ubuntu, you do not have to create a mount point; the NFS share will be created under /vmfs/volumes.
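You can verify the result with the -l flag, which lists the NAS mounts currently configured on the host:

# esxcfg-nas -l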


Once the NFS is mounted you can use the cp command to copy your files over to the NFS.  Upon completion, it would be a good idea to implement some way of validating that your data is a true representation of the original.  Hashing the files on the server and comparing them to the ones on the NAS is one way.
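As a sketch of that validation, assuming md5sum is available in your ESXi shell (it is part of the busybox environment on 5.x), hashing the originals and then the copies looks something like this:

# md5sum Ubuntu14_04-167f1682.vmss Ubuntu14_04-flat.vmdk
# md5sum /vmfs/volumes/nas/Ubuntu14_04-167f1682.vmss /vmfs/volumes/nas/Ubuntu14_04-flat.vmdk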


To get the images out of the NAS, again, use your favorite ftp or sftp tool to retrieve them.  It should be noted that we went over the process of mounting an NFS on ESXi so that you could have that as an option.  If SSH is enabled on the ESXi server, don't forget that you can access it directly with scp or sftp and drag the images back to your system.  FastSCP on the Windows side also works well.


To disconnect the NFS from the server run # esxcfg-nas -d nas. 

Conclusion:


There are many available options of acquiring images out of these servers.  Discussed above were only a couple of these options.  If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on twitter: @carlos_cajigas



Wednesday, September 10, 2014

Using Curl to Retrieve VirusTotal Malware Reports in BASH

   If you are in the DFIR world, there is a good chance that you often find yourself either submitting suspicious files to VirusTotal (VT) for scanning, or searching their database for suspicious hashes.  For these tasks and other neat features, VT offers a useful web interface where you can accomplish this.  If submitting one file or searching one hash at a time is enough for you, then their web interface should suffice for your needs.  Find the web interface at www.virustotal.com.

   If you are looking for a little bit more functionality or the ability to scan a set of suspicious hashes, you may want to look into using their public API.  VirusTotal's public API, among other things, allows you to access malware scan reports without the need to use their web interface.  Access to their API gives one the ability to build scripts that can have direct access to the information generated and stored by them. 

   To have access to the API you need to join their community and get your own API key.  The key is free and getting one is as simple as creating an account with them.  After joining their community you can locate your personal API key in your community profile. 

   In this article we will go through the process of communicating with their API from the Bourne-Again Shell (BASH) using the program curl.  The chosen format to communicate with the API is HTTP POST requests.  We will discuss a few curl commands, and once we become familiar with them, we will incorporate the commands into a script to automate the process.  The commands and the script were courtesy of a tip that I got from my co-worker John Brown.  He gave me permission to talk about his tip and to publish his script.   

   BASH is the default terminal shell in Ubuntu.  For the purposes of this article I used a VMware Player Virtual Machine with Ubuntu 14.04 installed on it. 

Installing the tools:

   All of the tools that we will use are already included in Ubuntu by default.  You will not need to download and install any other tools.  If you want to follow along, make sure that you have your VT API key available.  Also, we are going to need suspicious hashes.  Feel free to use your own hashes, or copy these two md5 hashes that I will use for the article: e4736f7f320f27cec9209ea02d6ac695 and 7f16d6f96912db0289f76ab1cde3420b.  One of the hashes belongs to a fake antivirus piece of malware that I use for testing, and the other one is the hash of a text file that contains no malicious code.  One of the hashes will return hits; the other one will not.  Let's get started. 

The test:

   Open a Terminal window.  In Ubuntu you can accomplish this by pressing Ctrl-Alt-T at the same time or by going to the Dash Home and typing in “terminal”. 

   In order to communicate with the VT database to retrieve a file scan report we are going to need two things.  First, we need to know the URL to send the POST request to.  That URL is https://www.virustotal.com/vtapi/v2/file/report.  Second, we need to feed the curl command some parameters: your API key and a resource.  The API key is the key that was given to you upon joining the VT community, and the resource is the md5 hash of the file in question.

   The following curl command should satisfy all of those requirements.

$ curl -s -X POST 'https://www.virustotal.com/vtapi/v2/file/report' --form apikey="c6e8f956YOURAPIKEY06a82eab47a0cb8cbYOURAPIKEY53aa118ba0db1faeb67" --form resource="e4736f7f320f27cec9209ea02d6ac695"


   Curl is the command that we will use to send the POST request to the VT URL.  The -s tells curl to be silent and not print the progress meter.  The -X tells curl which request method to use, which in this instance is POST.  --form apikey= is your API key, and --form resource= is the MD5 hash of the aforementioned fake antivirus file.  These are my results.


   It looks like our fake antivirus file’s hash was located in the database and the file had been previously scanned by VT.  Lots of data with positive hits was returned.  The scan report contains no line breaks, so it was sent to our terminal window in a format that is difficult to read.  Let's see if we can fix that.  Notice that the results from each individual antivirus engine end with a combination of a curly brace and a comma “},”.  Armed with this information, let’s add a newline character at the end of each one of those lines to separate the output so that we can read it better.  Run the same command as above, but this time let’s pipe it to sed 's|\},|\}\n|g'

$ curl -s -X POST 'https://www.virustotal.com/vtapi/v2/file/report' --form apikey="c6e8f9563e6YOURAPIKEY82eab47a0cb8cb9454824YOURAPIKEYba0db1faeb67" --form resource="e4736f7f320f27cec9209ea02d6ac695" | sed 's|\},|\}\n|g'

   The sed command changes our standard output by replacing every }, with }\n, which is the curly brace followed by a newline.  In the sed expression, the backslashes escape the braces so that sed treats them as literal characters, and \n is interpreted as the newline character.  These are my results.


   We can now start to see information that we can work with.  From here you can redirect this data to a file or continue using grep, sed, awk and/or any other command line magic that you can throw at this output to continue shaping it to your needs.  Personally, I am interested in the bottom area of the screen, the part that says "positives": 23,.  This tells me that this hash was recognized by 23 different antivirus engines in the VT database.  This is the data that I may need to pay attention to during an investigation.  That sed command was just an example of how to manipulate the output. 
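   Since the response is JSON, another option that ships with Ubuntu by default is Python's json.tool module, which pretty-prints the entire report with proper indentation:

$ curl -s -X POST 'https://www.virustotal.com/vtapi/v2/file/report' --form apikey="YOURAPIKEY" --form resource="e4736f7f320f27cec9209ea02d6ac695" | python -m json.tool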

   The next command will incorporate a combination of awk and sed pipes to filter the output down to a final set of data that we felt comfortable working with.  We chose to filter the data with this combination of awk and sed commands.

$ curl -s -X POST 'https://www.virustotal.com/vtapi/v2/file/report' --form apikey="c6e8f9563e63YOURAPIKEY2eab47a0cb8cb9454824YOURAPIKEYba0db1faeb67" --form resource="e4736f7f320f27cec9209ea02d6ac695" | awk -F 'positives\":' '{print "VT Hits" $2}' | awk -F ' ' '{print $1$2$3$6$7}' | sed 's|["}]||g'



   The first awk command uses the "positives": string as a field delimiter and tells awk to print the string “VT Hits” followed by the second field, which contains the 23 positive hits.  The second awk command uses a space as a delimiter and tells awk to print the first, second, third, sixth and seventh columns in order to extract the string md5 and the md5 hash of the file from the output.  The last sed command simply removes any quotes and curly braces from the resulting output.  These are my results.



VTHits23,md5:e4736f7f320f27cec9209ea02d6ac695

   The end result is a string of data that tells us the number of antivirus solutions that recognize the file as being malicious, plus the md5 hash of the file, so that we know which file is the suspicious one.

   If by now you are thinking that was way too long of a command to remember, or even wish to type again, then you are a lot like me.  For this reason, John has made a script available that automates this exact process and is extremely easy to use.  Find the script here.

   After making the script executable, run the script and give it a hash value as an argument.  It will use the same command as above and will search the VT database for the hash that you fed it as an argument.  Run the script like this.

$ ./grabVThash.sh e4736f7f320f27cec9209ea02d6ac695

   These are my results.


   Same results as above.  The script automates the process of sending hash values to VT and sends the results to the screen.  It even has the ability to take a file containing multiple hashes as its input.  It will send 4 hashes per minute to VT, as this is a limitation set by VT for public access to the API.  You will need to add your API key to the script. 
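   For the curious, the core of such a script does not need much.  Below is a minimal sketch of the same idea, not John's published script: it reads one md5 per line from the file given as its first argument, reuses the curl/awk/sed chain from above, and sleeps 15 seconds between requests to stay under the 4 per minute limit.

#!/bin/bash
# Minimal sketch, NOT John's published grabVThash.sh script.
# Usage: ./vt_lookup.sh hashes.txt  (one md5 hash per line)
APIKEY="PUTYOURAPIKEYHERE"    # replace with your personal VT API key
URL="https://www.virustotal.com/vtapi/v2/file/report"

while read -r hash; do
    # Query VT for the report and trim it down to "VTHitsN,md5:HASH"
    curl -s -X POST "$URL" --form apikey="$APIKEY" --form resource="$hash" \
        | awk -F 'positives\":' '{print "VT Hits" $2}' \
        | awk -F ' ' '{print $1$2$3$6$7}' \
        | sed 's|["}]||g'
    sleep 15    # stay under VT's public API limit of 4 requests per minute
done < "$1"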

Conclusion:

   VT gives us access to its database by allowing us to build scripts that can have direct access to the information generated and stored by them.  The script that we published is just one of many ways that we can add ease of access to the data stored by VT.  
   If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on twitter: @carlos_cajigas



Thursday, July 3, 2014

Random Offset Hiding of TrueCrypt Containers

   Have you ever had the need to completely hide encrypted data and know that, if you had to, you could successfully deny its existence?  Have you ever dreamed of hiding an encrypted container in unpartitioned free space?  No...  Neither have I, but we stumbled across a neat little trick and wanted to share it with you.

   While playing around with the command line, we wanted to see if one could use dd to consecutively write two individually created filesystems to a thumb drive (without partitioning the drive), and if so, subsequently mount and use either one of the filesystems to store and save files.  We learned that the answer is yes, so we decided to make one of the filesystems an encrypted one.

   The process is a bit time consuming, but the end result is that you will end up with a device that will contain a fully functioning, usable, unencrypted filesystem followed by a completely hidden encrypted one.  The second encrypted filesystem will reside in the free space of the drive.  Since the device will not be partitioned, only the first filesystem will be seen by the operating system.  Without the existence of a partition table, not even forensic software will be able to spot the existence of the second encrypted filesystem, so don't forget its starting sector offset!

   At the time that we tested this theory, the credibility of the TrueCrypt software was still strong.  For that reason, we used TrueCrypt to encrypt the second filesystem.  If you are of the belief that TrueCrypt is no longer secure, then you are welcome to use other tools.  The process should be repeatable and you can adjust it to your needs.  For the purposes of this article I used a VMware Player Virtual Machine with Ubuntu 14.04 installed on it.  Let's get started. 

Installing the tools:

   With the exception of TrueCrypt, all of the tools that we will use are either included in Ubuntu by default or can be downloaded from the Ubuntu Software Center.  TrueCrypt is still available for download.  Steve Gibson @SGgrc, the creator of SpinRite, stores the latest version of TrueCrypt on his website.  Find it here.  After downloading, if you want a bit of peace of mind, check the hashes against known file hashes independently stored on other sites, like this one.

   The other tools that we will need to accomplish this scenario are dd and sleuthkit.  Dd comes pre-installed in Ubuntu, so let's install sleuthkit. 

   We will install the tools from the command line.  Open a Terminal window.  In Ubuntu you can accomplish this by pressing Ctrl-Alt-T at the same time or by going to the Dash Home and typing in “terminal”.  Type the following into the terminal to install sleuthkit (along with hexedit, a handy hex editor) from the apt-get repositories. 

$ sudo apt-get install hexedit sleuthkit



   You will be prompted for your password.  Enter it and wait for the program(s) to install.  This procedure will install sleuthkit version 3.2.3, which will work fine for what we need.  As of this writing the latest version is 4.1.3.  If you want to use the latest, you have to compile it from source.  We can go over that a different day. 

   Now that we have the tools that we need, the next step is to prepare a working folder to store the working copies of our data.  Go to your desktop, right click on it and select “create new folder”, then name it “Test”.


   Navigate to the previously created Test folder on the desktop.  We will use the cd command to change directory into the folder.  Type the following into the terminal.

$ cd /home/carlos/Desktop/Test/

   Replace “carlos” with the name of the user account you are currently logged on as.  After doing so, press enter.  You should receive these results. 


   This will be the directory where we will create our filesystems and/or whatever data we plan on using as a working copy of a file. 

The test:

   The first step in our process is to create the first filesystem that will later be written to our drive.  It will be an unencrypted filesystem that you can use to store any non-sensitive data, or any data for that matter. 

   Since two independent filesystems will be residing on our drive, you will have to decide how large each of your filesystems will be.  For the purposes of this article, I will be using a 64MB thumb drive to store our filesystems.  I chose a very tiny thumb drive so that I could make the image available to you, if you wish to download it.  Find it here.  Each of my filesystems will be 25MB in size.  The password is "mtk".  Continuing...

   From the terminal, type the below command.

$ dd if=/dev/urandom of=1st25mb_unencrypted.dd bs=512 count=50000

   Dd is a common Linux program whose primary purpose is the low level copying and conversion of raw data.  The if= tells dd to read from a file; in this instance the file is /dev/urandom, which is an interface to the kernel's random number generator.  The of= tells dd to write to a file, which will be a file called 1st25mb_unencrypted.dd.  Bs is the block size of data to be used, and count tells dd to write fifty thousand blocks of 512 bytes.  The result is that we will end up with a data chunk that contains exactly 50,000 blocks of 512 bytes.  Multiply 50,000 by 512 and you get 25,600,000 bytes, or roughly 25MB. 

   This should be the resulting data chunk.  The ls -l command lists files, with the -l argument using the long listing format.


   Once the data chunk has been created, use the below command to format it into a functioning FAT-16 filesystem with the volume label “MTK”.  FAT-16 worked best for the data chunk that we created.  You can use a different filesystem if you choose. 

$ mkfs.vfat -F 16 -n MTK -v 1st25mb_unencrypted.dd

   Mkfs.vfat is the command to create an MS-DOS filesystem under Linux: -F specifies the type of file allocation table used, -n sets the volume name (label) of the filesystem, and -v gives us verbose output. 


   Success, we have formatted the data chunk.  If you want to look at the filesystem statistics of the data chunk, you can use the sleuthkit command fsstat, which provides just that.  Type the below command and press enter.

$ fsstat 1st25mb_unencrypted.dd


   The next step is to create our encrypted filesystem using TrueCrypt.  There are many tutorials online on how to do this.  I don't want to go over the steps of creating one here due to the many options available and how much longer it would make the write-up.  A good article on how to create a container can be found on the Howtogeek site.  I went ahead and created a 25MB container using all the defaults and the password mtk (lowercase).  I named the container 2nd25mb_encrypted.dd. 


   I also mounted the encrypted container and added a file titled SecretFile.txt.  You can use the same article from Howtogeek for instructions on this.  Once you have added a file to the container use TrueCrypt to unmount it. 



   Now comes the fun part.  Find a thumb drive that we can write these filesystems to.  I inserted my 64MB thumb drive into my VM and ran the below command to identify it.

$ sudo fdisk -l

   Fdisk is a partition table manipulator for Linux.  The flag -l tells fdisk to list the partition tables.  Sudo gives fdisk superuser privileges for the operation.  Press enter and type your password (if needed).


   Ubuntu assigned my 64MB media as /dev/sdb.  /dev/sda is the internal virtual HDD that Ubuntu is installed on. 

   We finally get to write these filesystems to the media.

   The below command will write each filesystem consecutively to my 64MB thumb drive.

$ cat 1st25mb_unencrypted.dd 2nd25mb_encrypted.dd | sudo dd of=/dev/sdb

   Cat is the command to concatenate (combine) the two chunks one after the other.  The “|” is known as a pipe.  A pipe is a technique in Linux for passing information from one program process to another.  Dd saw the two chunks that were passed to it and wrote them contiguously to the 64MB thumb drive assigned as physical device /dev/sdb.  


   Our thumb drive is now ready for deployment and use.  Both of the filesystems now reside on the drive.  If you insert this drive into a Windows machine, this is what you would see.  Windows only recognized the unencrypted filesystem and assigned it the logical drive letter (F:).  


   At this point the drive is fully functioning, with a 25MB filesystem that is capable of storing data.  Even the “MTK” volume label that we assigned is being recognized.  To the untrained eye the drive looks perfectly normal, and if you were forced to show your files, only the non-sensitive data would be seen.  The trained eye, or anyone with a little know-how, will be able to see that a 64MB drive only contains a 25MB filesystem.  Inspection of the free space of the drive would only show noise and nothing else, similar to the noise in the unallocated space of the unencrypted filesystem, hence the reason why we chose /dev/urandom for that data chunk.  You can picture them scratching their heads, but that may be the extent of it.  Have you ever heard of an examiner segregating the free space of a drive into its own file to look for encrypted containers and run brute force attacks?  If you have, please tell us about it in the comments.

Accessing the encrypted container:

   To access the encrypted container we are going to have to create a loop device and then mount that loop with TrueCrypt.  Because the free space of the drive contains an entire filesystem, that specific offset of the drive can be mounted as if it were a disk device.

   Accomplish it with the below command.

$ sudo losetup -o 25600000 /dev/loop0 /dev/sdb

   Losetup is the command to set up and control loop devices; -o is the argument that specifies the offset to the encrypted container.  This location has to be specified in bytes.  The container starts 25,600,000 bytes into the drive, which is the 50,000 sectors of the unencrypted filesystem times 512 bytes.  /dev/loop0 is the first available loop device and /dev/sdb is the 64MB thumb drive. 
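   Shell arithmetic makes quick work of computing that offset, in case your sector count differs from mine:

$ echo $((50000 * 512))
25600000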


   Lastly we need to decrypt the loop mount and mount it to a previously created directory using TrueCrypt.  This step can be accomplished with the below command.

$ sudo truecrypt -t /dev/loop0 /mnt/tc/

   Truecrypt -t runs the TrueCrypt program in text mode; /dev/loop0 is the location of the loop device and /mnt/tc is a previously created mount point that I created using mkdir (no screenshot). 



   By running ls -l on /mnt/tc we can see that our previously created file titled SecretFile.txt is now decrypted and available to us.  The loop device was mounted in read/write mode, which means that any data that you add to this filesystem will be stored and encrypted upon dismount. 
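   When you are done, reverse the steps: dismount the TrueCrypt volume (-d with no argument dismounts all mounted TrueCrypt volumes) and then detach the loop device.

$ sudo truecrypt -t -d
$ sudo losetup -d /dev/loop0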

Conclusion:

   This was a time consuming process for creating a hidden encrypted container to store sensitive data.  It probably doesn't offer any more security than the methods already available through TrueCrypt.  It was simply an exercise in how to manipulate data chunks and filesystems using the shell. 

   If this procedure helped your investigation, we would like to hear from you.  You can leave a comment or reach me on twitter: @carlos_cajigas

-----

Update 07/10/14:  Bored?  Notice that the last screenshot above tells us that "SecretFile.txt" contains 11 bytes of data.  Download the image, mount the encrypted container, and be the first one to tell us what data is stored in the "SecretFile.txt".  Leave it in the comments section.  Remember to share a tip.

-----

Update 07/18/14:  Gerry Stephen found the data inside of the txt file.  He went about finding the data using his preferred method and even described his approach.  Go to the comments section to see his tip.