TryHackMe | Juicy Details

6 min read · Jun 22, 2021


Juicy Details on TryHackMe

This challenge is listed as a free room on TryHackMe created by GEEZET1.

The backstory tells us we were hired as a SOC Analyst for one of the biggest Juice Shops in the world and an attacker has made their way into our network. We need to investigate and find out what happened.

The first task is to download the task files and take a look at them.

The room tells us an IT team has sent you a zip file containing logs from the server. Download the attached file, type in “I am ready!” and get to work! There’s no time to lose!

Q1: Are you ready

A1: I am ready!

Q2: What tools did the attacker use? (Order by the occurrence in the log)

A2: Provided below

The first thing we need to do is unzip the file to see what we have to work with.


Since we are looking for the tools the attacker used for reconnaissance, it makes sense to look at the access.log first.

This looks like a standard Apache access.log, and right off the bat I can see the first tool as indicated in the screenshot above. Since we are looking at an Apache webserver access.log, the majority of the tools will declare a specific user-agent string we can parse out of the log.

We can parse out the user-agent string using the cut command.

cat access.log | cut -d'"' -f6 | uniq

The above command prints out access.log using cat and then pipes | it to cut. We use the double quote " as the delimiter via -d'"', wrapping it in single quotes because we are working in the shell and stray quotes are problematic if they are not escaped or enclosed in an additional set of quotes. Next we tell cut to print the 6th field using the -f6 switch, which shows us only the user-agent string. Finally we pipe | the output to uniq, which filters out consecutive duplicate user-agent strings.
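To illustrate how the pipeline behaves, here is the same command run against a couple of fabricated log lines (the IPs, paths and user-agent strings below are invented for demonstration and are not the real challenge data):

```shell
# Fabricated Apache-style access.log entries, for demonstration only.
cat > /tmp/ua_demo.log <<'EOF'
10.0.0.5 - - [11/Apr/2021:09:00:01 +0000] "GET / HTTP/1.1" 200 123 "-" "Mozilla/5.0 (Nmap Scripting Engine)"
10.0.0.5 - - [11/Apr/2021:09:00:02 +0000] "GET /robots.txt HTTP/1.1" 404 45 "-" "Mozilla/5.0 (Nmap Scripting Engine)"
10.0.0.5 - - [11/Apr/2021:09:01:00 +0000] "GET /search?q=1 HTTP/1.1" 200 99 "-" "sqlmap/1.5 (http://sqlmap.org)"
EOF

# Splitting on " makes field 6 the user-agent; uniq collapses consecutive repeats.
cut -d'"' -f6 /tmp/ua_demo.log | uniq
# Mozilla/5.0 (Nmap Scripting Engine)
# sqlmap/1.5 (http://sqlmap.org)
```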


Q3: What endpoint was vulnerable to a brute-force attack?

Well, if we are looking for a URL / endpoint that was brute-forced, the logs should show it being accessed a lot of times. We can use some command-line magic with cut, sort and uniq to find the vulnerable endpoint.

cat access.log | cut -d'"' -f2 | sort | uniq -c | sort -n | tail -n 10

The above prints out access.log using cat and then pipes | it to cut, using the double quote " as the delimiter and printing only field number 2 with -f2 (field 2 is the GET/POST request line). We then pipe | the output to sort, because uniq only collapses adjacent duplicates, and use uniq -c to print the unique entries along with the number of occurrences of each request. sort -n then performs a numerical sort on those counts, and finally tail -n 10 prints the last 10 lines of the output.
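As a sketch of why this surfaces the brute-forced endpoint, here is the pipeline against a handful of fabricated request lines (the /login endpoint below is invented, not the room's answer):

```shell
# Fabricated log: one endpoint hammered repeatedly, one visited once.
cat > /tmp/bf_demo.log <<'EOF'
10.0.0.5 - - [11/Apr/2021:09:10:01 +0000] "POST /login HTTP/1.1" 401 0 "-" "hydra"
10.0.0.5 - - [11/Apr/2021:09:10:02 +0000] "POST /login HTTP/1.1" 401 0 "-" "hydra"
10.0.0.5 - - [11/Apr/2021:09:10:03 +0000] "POST /login HTTP/1.1" 200 831 "-" "hydra"
10.0.0.5 - - [11/Apr/2021:09:10:04 +0000] "GET / HTTP/1.1" 200 120 "-" "curl/7.68.0"
EOF

# sort groups identical request lines so uniq -c can count them;
# sort -n then floats the most-requested endpoint to the bottom.
cut -d'"' -f2 /tmp/bf_demo.log | sort | uniq -c | sort -n | tail -n 10
# the hammered "POST /login" line ends up last, with a count of 3
```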

Brute forced URL

Q4: What endpoint was vulnerable to SQL injection?

A4: Provided below

Well, we know from the previous questions and the user-agent in the logs that the attacker is using sqlmap as their SQL injection attack tool.

First we use cat to print out the access log, then grep sqlmap to show only the requests with a user-agent string containing sqlmap. Next we grep 200 to grab only entries that have a response code of 200. Then cut -d'"' -f2 uses " as the delimiter and prints the 2nd field. Finally we use a “sort sandwich” (sort | uniq | sort) and show the last 10 entries with tail -n 10.

cat access.log  | grep sqlmap | grep 200 | cut -d'"' -f2 | sort | uniq | sort | tail -n 10
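A quick sketch with invented entries shows what each filter keeps (the search path and parameter below are made up, not the room's answer; note that grep 200 is a loose filter and can also match a 200 elsewhere on a line):

```shell
# Fabricated entries: one sqlmap request that returned 500, one that
# returned 200, and one unrelated browser request.
cat > /tmp/sqli_demo.log <<'EOF'
10.0.0.5 - - [11/Apr/2021:09:20:01 +0000] "GET /search?q=1' HTTP/1.1" 500 0 "-" "sqlmap/1.5 (http://sqlmap.org)"
10.0.0.5 - - [11/Apr/2021:09:20:02 +0000] "GET /search?q=1 AND 1=1 HTTP/1.1" 200 99 "-" "sqlmap/1.5 (http://sqlmap.org)"
10.0.0.5 - - [11/Apr/2021:09:20:03 +0000] "GET / HTTP/1.1" 200 120 "-" "Mozilla/5.0"
EOF

# Keep sqlmap traffic, keep 200s, then show just the request line.
grep sqlmap /tmp/sqli_demo.log | grep 200 | cut -d'"' -f2 | sort | uniq | sort | tail -n 10
# → GET /search?q=1 AND 1=1 HTTP/1.1
```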

Q5: What parameter was used for the SQL injection?

A5: We actually already found that from the previous command.

Q6: What endpoint did the attacker try to use to retrieve files? (Include the /)

A6: Provided below

For this I just took a quick look at the access.log file, and at the end of the file something popped out that fit with the scenario.

Q7: What section of the website did the attacker use to scrape user email addresses?

Here I just looked through the log until I found something that would show an area containing user email addresses. I figured the attacker had moved on to either manual scraping or another tool, so I filtered out the noise from sqlmap and nmap with grep -Ev 'nmap|sqlmap'.

The answer can be inferred from Products and Reviews.
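As a sketch, running that filter over fabricated entries drops every line that matches either pattern anywhere on the line, leaving only the manual browsing (the review path below is invented; the Nmap user-agent string is the one the Nmap Scripting Engine typically sends):

```shell
# Fabricated entries: scanner noise plus one manual browser request.
cat > /tmp/noise_demo.log <<'EOF'
10.0.0.5 - - [11/Apr/2021:09:25:01 +0000] "GET / HTTP/1.1" 200 99 "-" "Mozilla/5.0 (compatible; Nmap Scripting Engine; https://nmap.org/book/nse.html)"
10.0.0.5 - - [11/Apr/2021:09:25:02 +0000] "GET /search?q=1 HTTP/1.1" 200 99 "-" "sqlmap/1.5 (http://sqlmap.org)"
10.0.0.5 - - [11/Apr/2021:09:25:03 +0000] "GET /product/1/reviews HTTP/1.1" 200 412 "-" "Mozilla/5.0"
EOF

# -E enables extended regex (alternation with |), -v inverts the match.
grep -Ev 'nmap|sqlmap' /tmp/noise_demo.log
```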

Q8: Was their brute-force attack successful? If so, what is the timestamp of the successful login? (Yay/Nay, 11/Apr/2021:09:xx:xx +0000)

A8: See below…

cat access.log  | grep -i hydra | egrep -v '401|501|500'

If we cat the access log, grep for hydra and then use egrep -v '401|501|500' to exclude all of the 401, 501 and 500 error responses, what remains should show the successes.
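A sketch with fabricated traffic (the endpoint, user-agent and timestamps below are invented and are not the room's answer) shows how the exclusion isolates the one successful attempt:

```shell
# Fabricated brute-force traffic: failures return 401, the success returns 200.
cat > /tmp/hydra_demo.log <<'EOF'
10.0.0.5 - - [11/Apr/2021:09:30:01 +0000] "POST /login HTTP/1.1" 401 0 "-" "Mozilla/4.0 (Hydra)"
10.0.0.5 - - [11/Apr/2021:09:30:02 +0000] "POST /login HTTP/1.1" 401 0 "-" "Mozilla/4.0 (Hydra)"
10.0.0.5 - - [11/Apr/2021:09:30:03 +0000] "POST /login HTTP/1.1" 200 831 "-" "Mozilla/4.0 (Hydra)"
EOF

# -i matches Hydra regardless of case; -Ev drops the failed attempts.
grep -i hydra /tmp/hydra_demo.log | grep -Ev '401|501|500'
```

The single surviving line carries the timestamp of the successful login.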


Q9: What user information was the attacker able to retrieve from the endpoint vulnerable to SQL injection?

After some searching through the logs the answer showed up, but thinking back to the attack tools, we know the attacker used curl to retrieve some information… maybe that's the tool they used here ;)

Q10: What files did they try to download from the vulnerable endpoint? (endpoint from the previous task, question #6)

I believe this is in reference to the earlier question about retrieving files; if so, the endpoint was an FTP server, so we need to look at the vsftpd.log file.

cat vsftpd.log  | grep -i DOWNLOAD
Files Downloaded
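As a sketch, vsftpd log entries look roughly like the fabricated lines below (the format is approximated from vsftpd's default logging; the filename, PIDs and IP are invented), and grep -i DOWNLOAD keeps only the transfer lines:

```shell
# Fabricated vsftpd.log entries (format approximated, demo data only).
cat > /tmp/vsftpd_demo.log <<'EOF'
Sun Apr 11 09:35:00 2021 [pid 1234] CONNECT: Client "::ffff:10.0.0.5"
Sun Apr 11 09:35:02 2021 [pid 1233] [anonymous] OK LOGIN: Client "::ffff:10.0.0.5"
Sun Apr 11 09:35:10 2021 [pid 1235] [anonymous] OK DOWNLOAD: Client "::ffff:10.0.0.5", "/backup.zip", 2048 bytes, 12.34Kbyte/sec
EOF

# Case-insensitive match keeps only the transfer entries.
grep -i DOWNLOAD /tmp/vsftpd_demo.log
```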

Q11:What service and account name were used to retrieve files from the previous question? (service, username)

A11: ftp, anonymous

By inspecting the vsftpd.log we can find all the users that logged into the site and see how they authenticated.

cat vsftpd.log | grep OK
vsftpd.log filtered with actual logins

We can see that all the logins contained within the vsftpd.log file were anonymous logins, and the last anonymous login downloaded the files in question.
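Sketching this with invented entries: vsftpd marks successful operations with OK and puts the account name in square brackets, so grep OK drops the failures and leaves the authenticated activity (the usernames, PIDs and IP below are fabricated):

```shell
# Fabricated vsftpd.log: one failed login and two successful anonymous operations.
cat > /tmp/ftplogin_demo.log <<'EOF'
Sun Apr 11 09:36:00 2021 [pid 1240] [admin] FAIL LOGIN: Client "::ffff:10.0.0.5"
Sun Apr 11 09:36:05 2021 [pid 1241] [anonymous] OK LOGIN: Client "::ffff:10.0.0.5"
Sun Apr 11 09:37:00 2021 [pid 1242] [anonymous] OK DOWNLOAD: Client "::ffff:10.0.0.5", "/backup.zip", 2048 bytes, 12.34Kbyte/sec
EOF

# "OK" marks successful operations; the bracketed field is the account used.
grep OK /tmp/ftplogin_demo.log
```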

Q12: What service and username were used to gain shell access to the server? (service, username)

Shell access is going to be logged in the auth.log file, so we should be able to filter for successful attempts and go from there.

cat auth.log  | grep Accept

By running cat auth.log | grep Accept we can see both the service and the username that logged into the server.
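As a sketch with fabricated auth.log lines (the hostname, username and IP below are invented; the "Accepted password" wording is OpenSSH's standard success message), the surviving line names both the daemon and the account:

```shell
# Fabricated auth.log lines: a failed root attempt, then a success.
cat > /tmp/auth_demo.log <<'EOF'
Apr 11 09:40:01 juiceshop sshd[1337]: Failed password for root from 10.0.0.5 port 40000 ssh2
Apr 11 09:40:12 juiceshop sshd[1337]: Accepted password for alice from 10.0.0.5 port 40002 ssh2
EOF

# "Accepted" lines show the service (sshd) and the account that got in.
grep Accept /tmp/auth_demo.log
```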

Great room! Thank you to GEEZET1 for a great room to sharpen up our log parsing skills.




…I have no idea what I’m doing.