Modern Honey Network & Raspberry Pi

I attended a talk years ago where Duke University described a robust network of sensors managed via Modern Honey Network. It motivated me to reuse my old Raspberry Pi as a sensor that alerts when anyone scans the network looking for live hosts during the reconnaissance phase.

Server Installation & Configuration

In this first section, we’ll install Modern Honey Network (MHN) onto our main server. This server will collect logs/events and let us centrally manage the honeypots we deploy.

Install MHN

user@mhn:~$ sudo apt install git
user@mhn:~$ cd /opt/
user@mhn:/opt$ sudo git clone https://github.com/threatstream/mhn.git

Cloning into 'mhn'...
remote: Enumerating objects: 10, done.
remote: Counting objects: 100% (10/10), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 7318 (delta 2), reused 0 (delta 0), pack-reused 7308
Receiving objects: 100% (7318/7318), 3.64 MiB | 15.93 MiB/s, done.
Resolving deltas: 100% (3956/3956), done.

user@mhn:/opt$ cd mhn/
user@mhn:/opt/mhn$ sudo ./install.sh

This will take a while and fill the screen with output. Eventually you’ll get to the configuration portion:
===========================================================
MHN Configuration
===========================================================
Do you wish to run in Debug mode?: y/n n
Superuser email: user@domain.com
Superuser password: *************
Server base url ["http://1.2.3.4"]:
Honeymap url ["http://1.2.3.4:3000"]:
Mail server address ["localhost"]: 192.168.100.200
Mail server port [25]:
Use TLS for email?: y/n n
Use SSL for email?: y/n n
Mail server username [""]:
Mail server password [""]:
Mail default sender [""]:
Path for log file ["mhn.log"]:

Confirm nginx, supervisor, and the MHN processes are running:
user@mhn:/opt/mhn$ sudo /etc/init.d/nginx status
user@mhn:/opt/mhn$ sudo /etc/init.d/supervisor status
user@mhn:/opt/mhn$ sudo supervisorctl status

Enable HTTPS

user@mhn:/opt/mhn$ cd /etc/nginx/sites-enabled/
user@mhn:/etc/nginx/sites-enabled$ sudo vi default
server {
    listen               80;
    listen              443 ssl;
    server_name         _;
    ssl_certificate     /etc/ssl/private/mhn.domain.com.crt;
    ssl_certificate_key /etc/ssl/private/mhn.domain.com.key;

    if ($ssl_protocol = "") {
        rewrite ^ https://$host$request_uri? permanent;
    }

    location / {
        try_files $uri @mhnserver;
    }

    root /opt/www;

    location @mhnserver {
      include uwsgi_params;
      uwsgi_pass unix:/tmp/uwsgi.sock;
    }

    location  /static {
      alias /opt/mhn/server/mhn/static;
    }
}
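
The if ($ssl_protocol = "") block above redirects any plain-HTTP request to HTTPS. If you’d rather avoid nginx’s if directive, a dedicated port-80 server block achieves the same redirect; this is just a sketch of an alternative, not part of the MHN-shipped config:

```nginx
# Hypothetical alternative: a separate port-80 server that only redirects.
server {
    listen      80;
    server_name _;
    # 301 permanently sends clients to the HTTPS version of the same URL
    return 301 https://$host$request_uri;
}
```

If you use this form, remove the listen 80 and if lines from the HTTPS server block so the two don’t conflict.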

user@mhn:/etc/nginx/sites-enabled$ cd ~
user@mhn:~$ openssl genrsa -out mhn.domain.com.key 2048
user@mhn:~$ openssl req -new -key mhn.domain.com.key -out mhn.domain.com.csr
Send the certificate signing request to your certificate authority. Once you receive the cert, SCP it to the MHN server’s home directory, then move the key and cert to /etc/ssl/private/.
user@mhn:/opt/mhn$ sudo /etc/init.d/nginx restart
Access server at https://mhn.domain.com.

Enable Logging to Splunk

user@mhn:~$ cd /opt/mhn/scripts/
user@mhn:/opt/mhn/scripts/$ sudo ./install_hpfeeds-logger-splunk.sh
This will configure logs to go to /var/log/mhn/mhn-splunk.log.
Configure a Splunk Universal Forwarder to monitor this file.
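
For reference, a minimal monitor stanza for the Universal Forwarder’s inputs.conf might look like the following; the sourcetype and index names are my assumptions, so adjust them for your environment:

```ini
[monitor:///var/log/mhn/mhn-splunk.log]
sourcetype = mhn-splunk
index = main
disabled = false
```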

Honeypot Installation & Configuration

In this next section, we’ll set up our honeypot. Install a Debian-based OS on the honeypot; I used a Raspberry Pi running Raspbian. Ensure HTTPS, TCP/3000, and TCP/10000 are open from the honeypot to the MHN server.

On the Raspberry Pi, install your CA certificate so that it can connect to the MHN server via HTTPS:
pi@honeypi:~ $ sudo mkdir /usr/share/ca-certificates/local
pi@honeypi:~ $ openssl x509 -inform PEM -in mhn_domain_com_interm.cer -outform PEM -out mhn_domain_com_interm.crt
pi@honeypi:~ $ sudo cp mhn_domain_com_interm.crt /usr/share/ca-certificates/local/
pi@honeypi:~ $ sudo dpkg-reconfigure ca-certificates

Go to the server at https://mhn.domain.com > Deploy > select the OS of your honeypot.
The command to configure the honeypot will look like this, but the ID will differ:
pi@honeypi:~ $ wget "https://mhn.domain.com/api/script/?text=true&script_id=2" -O deploy.sh && sudo bash deploy.sh https://mhn.domain.com SidrT31L

This installation can take a while, especially if you’re using an old Raspberry Pi like mine with little horsepower. My RPi2 took almost 30 minutes to install/configure!

Fix UTC Timezone in Logs

By default, your logs will be in UTC, but if your log server expects logs in a certain timezone, the logs will show up in the future (or past; either way, the timestamps won’t be correct). To fix this, you’ll need to modify how the timestamps are formatted. Refer to https://github.com/threatstream/mhn/issues/148 for more details.

user@mhn:~$ sudo vi /opt/mhn/server/mhn/__init__.py
Add the following:
  # fix the UTC timezone confusing logs
  from mhn.common.templatetags import format_local_date
  mhn.jinja_env.filters['fldate'] = format_local_date

user@mhn:~$ sudo vi /opt/mhn/server/mhn/common/templatetags.py
Replace the existing contents with the following:

from dateutil import tz
def format_local_date(dt):
    from_zone = tz.tzutc()
    to_zone = tz.tzlocal()
    utc = dt
    utc = utc.replace(tzinfo=from_zone)
    dt2 = utc.astimezone(to_zone)
    return dt2.strftime('%Y-%m-%d %H:%M:%S')
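
If you’re curious what format_local_date is doing, here’s a self-contained Python 3 sketch of the same UTC-to-local conversion using only the standard library (the MHN code above runs on Python 2 with dateutil; this is just an illustration):

```python
from datetime import datetime, timezone

def format_local_date(dt):
    # MHN stores naive UTC datetimes; tag the value as UTC first,
    # then convert it to the system's local timezone for display.
    utc = dt.replace(tzinfo=timezone.utc)
    local = utc.astimezone()  # no argument = convert to local timezone
    return local.strftime('%Y-%m-%d %H:%M:%S')

print(format_local_date(datetime(2020, 1, 1, 12, 0, 0)))
```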

Back on your MHN server, modify the GUI so the logs display correctly locally on the dashboards:
user@mhn:/opt/mhn/server/mhn/common$ cd /opt/mhn/server/mhn/templates/ui
Change rules.html and attacks.html from this:
<td>{{ ru.Rule.date|fdate }}</td>
To this:
<td>{{ ru.Rule.date|fldate }}</td>

Fix mhn-splunk.log timestamps:
sudo cp /opt/hpfeeds-logger/lib/python2.7/site-packages/hpfeedslogger/formatters/splunk.py /opt/hpfeeds-logger/lib/python2.7/site-packages/hpfeedslogger/formatters/splunk.py.orig
sudo vi /opt/hpfeeds-logger/lib/python2.7/site-packages/hpfeedslogger/formatters/splunk.py


Change this line:
timestamp = datetime.datetime.isoformat(datetime.datetime.utcnow())
to:
timestamp = datetime.datetime.isoformat(datetime.datetime.now())
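
To see why this one-line change matters, compare the two calls; this is purely illustrative. Neither string carries a timezone offset, so a downstream parser like Splunk assumes the timestamp is local time, which is why emitting now() instead of utcnow() avoids the skew:

```python
import datetime

# The original line: a naive UTC timestamp with no timezone suffix
utc_ts = datetime.datetime.isoformat(datetime.datetime.utcnow())
# The fix: a naive local timestamp, matching what the parser assumes
local_ts = datetime.datetime.isoformat(datetime.datetime.now())

print(utc_ts)
print(local_ts)
```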

Restart MHN: sudo supervisorctl restart all

Troubleshooting

You may occasionally notice logs not coming in from one of your honeypots. I haven’t pinpointed the cause yet, but for me it is usually the celery worker. Here are steps to troubleshoot.

Check the MHN server’s status to confirm everything is running, and look for any FATAL processes (especially mhn-celery-worker):
sudo supervisorctl status
If you see that the celery worker has failed, restart it:
sudo supervisorctl restart mhn-celery-worker
If mhn-celery-worker STILL won’t start, check these logs:
tail /var/log/mhn/mhn-celery-worker.err
tail /var/log/mhn/mhn-celery-worker.log
tail /var/log/mhn/mhn.log

user@mhn:/var/log/mhn$ tail mhn-celery-worker.err
    mhn.config['LOG_FILE_PATH'], maxBytes=10240, backupCount=5)
  File "/usr/lib/python2.7/logging/handlers.py", line 117, in __init__
    BaseRotatingHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib/python2.7/logging/handlers.py", line 64, in __init__
    logging.FileHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib/python2.7/logging/__init__.py", line 920, in __init__
    StreamHandler.__init__(self, self._open())
  File "/usr/lib/python2.7/logging/__init__.py", line 950, in _open
    stream = open(self.baseFilename, self.mode)
IOError: [Errno 13] Permission denied: '/var/log/mhn/mhn.log'
If you see the permission denied error, reset the log file ownership:
cd /var/log/mhn/
sudo chown www-data mhn.log
sudo supervisorctl start mhn-celery-worker

Conclusion

Modern Honey Network is an amazing free tool for placing sensors on your network. My honeypot is set up so that anyone port-scanning it will see all ports open. Attempts to connect to the honeypot (and even the original port scan) send logs to the MHN server, which in turn forwards them to my Splunk server. Splunk has an alert set up so that any logs from the MHN server trigger an email, letting me know about activity without having to keep MHN open all day. The infrequency of anyone tripping the honeypot sensor makes the alerts a necessity for me. But if you had more honeypots deployed, especially public ones, you could spend a good chunk of your day looking at logs. To take it up a notch, I’d like to deploy Docker containers as my honeypots, especially if I allowed an attacker to log in and try to compromise the honeypot: just throw the container away and spin up a new one, taking the honeypot logs as lessons learned on the attacker’s methods.
