Deploy a Django app under systemd
This blog is the first part of a series of articles intended for interns inside UbuViz, but also for anyone interested in programming as we do it at UbuViz.
First of all, you need to know that I’m Allan Stockman Rugano. I have been building Open Source software since 2013 in Bujumbura (Burundi). Back then I picked Linux as my main operating system for daily use, and I haven’t looked back since. Linux helped me, as an African dude, to understand the quirks and complexity of building something as efficient as an OS. More importantly, its open source model allowed me to realize that there is nothing so special about building an OS. Even you and I can build this stuff.
That's why I made it my goal to share knowledge with my peers, so today let us discuss how we build some of the software at UbuViz.
Backend logic
We normally deploy many servers on DigitalOcean running Linux-based OSes. And as I mentioned back in 2014, we have been influenced by the stack of technologies behind a platform I was running back then for UNICEF in Burundi, named U-report:
- PostgreSQL for the DB,
- Redis for caching,
- Celery as workers for asynchronous tasks,
- and a few other services such as CloudFlare for network traffic, Let's Encrypt for SSL, fail2ban, and WireGuard and StrongSwan for VPN …
The beauty of this is that all of those technologies help you launch your business, they are free to use, and you can actually look into how they were made and adapt them to your needs, without paying any fee.
We mainly deploy our Django apps using an improved version of the famous DigitalOcean tutorial, How To Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu 20.04. It is very handy, and you should use it as a reference for deploying a web app. But we like to make some modifications to the way Gunicorn and Redis are handled by systemd. We previously used Supervisord, but we have lately moved to systemd, mainly because it allows us to centralize all the logs under the OS's common logging system.
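For context, on the Nginx side of that tutorial, requests are proxied to Gunicorn over a unix socket. A minimal sketch of the relevant server block, assuming the socket path we use below and a placeholder domain name:

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    location = /favicon.ico { access_log off; log_not_found off; }

    # Serve collected static files directly from disk
    location /static/ {
        root /home/sammy/myprojectdir;
    }

    # Forward everything else to the Gunicorn unix socket
    location / {
        include proxy_params;
        proxy_pass http://unix:/home/sammy/myprojectdir/myprojectenv/myproject.sock;
    }
}
```

This mirrors the tutorial's layout; only the socket path differs to match the script below.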
Gunicorn process under systemd
We usually create a file at /home/sammy/myprojectdir/myprojectenv/bin/gunicorn_execute that the user www-data can execute, and put the following lines in it:
#!/bin/bash
NAME="myproject"                                              # Name of the application
DJANGODIR=/home/sammy/myprojectdir                            # Django project directory
SOCKFILE=/home/sammy/myprojectdir/myprojectenv/myproject.sock # we will communicate using this unix socket
USER=www-data                                                 # the user to run as
GROUP=www-data                                                # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=myproject.settings # which settings file should Django use
DJANGO_WSGI_MODULE=myproject.wsgi # WSGI module name
VIRTUALENV=/home/sammy/myprojectdir/myprojectenv
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source $VIRTUALENV/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under systemd should not daemonize themselves (do not use --daemon)
exec $VIRTUALENV/bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--bind=unix:$SOCKFILE \
--log-level=debug \
--timeout 300
We then put that under systemd by creating a file named /etc/systemd/system/myproject.service and entering the code below:
[Unit]
Description=myproject
After=network.target
[Service]
User=www-data
Group=www-data
ExecStart=/bin/bash /home/sammy/myprojectdir/myprojectenv/bin/gunicorn_execute
Restart=on-abort
# make sure log directory exists and owned by syslog
PermissionsStartOnly=true
ExecStartPre=/bin/mkdir -p /var/log/myproject
ExecStartPre=/bin/chown syslog:adm /var/log/myproject
ExecStartPre=/bin/chmod 755 /var/log/myproject
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=myproject
[Install]
WantedBy=multi-user.target
It is really important to separate the logs of a particular process from the rest of the system's logs, so that we can easily monitor our process without hassle. Notice that we have instructed systemd to run the process as the user “www-data”, which is created when we install the Nginx/Apache2 web servers.
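As a side note, the wrapper script is not strictly required: systemd itself can set the working directory, environment, and user. A minimal sketch of an equivalent self-contained unit, assuming the same paths and settings as above:

```ini
[Unit]
Description=myproject
After=network.target

[Service]
User=www-data
Group=www-data
WorkingDirectory=/home/sammy/myprojectdir
Environment=DJANGO_SETTINGS_MODULE=myproject.settings
ExecStart=/home/sammy/myprojectdir/myprojectenv/bin/gunicorn \
    --name myproject \
    --workers 3 \
    --bind unix:/home/sammy/myprojectdir/myprojectenv/myproject.sock \
    --log-level debug \
    --timeout 300 \
    myproject.wsgi:application
Restart=on-abort
SyslogIdentifier=myproject
StandardOutput=syslog
StandardError=syslog

[Install]
WantedBy=multi-user.target
```

Calling the virtualenv's gunicorn binary directly removes the need to activate the environment; we keep the script variant in production because it is easier to tweak without touching the unit.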
We first tell systemd to pick up the new unit file:
sudo systemctl daemon-reload
We then run:
sudo systemctl start myproject.service
to start the process, and:
sudo systemctl enable myproject.service
to enable the process to be started whenever we restart our server. You should then start monitoring the logs by tailing the systemd journal:
sudo journalctl -f -u myproject
The journal is stored as a binary file, so it cannot be tailed directly. But we have syslog forwarding enabled on the systemd side, so now it is just a matter of configuring our syslog server. While journalctl does provide the logs, what we really want is to have them available in the standard “/var/log/<service>” location. So we tell systemd to send them to syslog, and then have syslog write our files out to disk. Finally, since we enabled the service above, it is part of the boot process and will automatically start after a reboot.
To do that, first modify “/etc/rsyslog.conf” and uncomment the lines below, which tell the server to listen for syslog messages on port 514/TCP.
module(load="imtcp")
input(type="imtcp" port="514")
Then, create “/etc/rsyslog.d/30-myproject.conf” with the following content:
if $programname == 'myproject' or $syslogtag == 'myproject' then /var/log/myproject/myproject.log
& stop
Now restart the rsyslog service and check that the syslog listener is on port 514. Then restart myproject.service, and you should see log events being written to the file every few seconds.
$ sudo systemctl restart rsyslog
$ netstat -an | grep "LISTEN "
$ sudo systemctl restart myproject
$ tail -f /var/log/myproject/myproject.log
You should see the logs coming in.
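One caveat: with Gunicorn running at --log-level=debug, /var/log/myproject/myproject.log grows quickly. A minimal logrotate sketch you could drop into /etc/logrotate.d/myproject (the rotation frequency and retention count here are assumptions, adjust to taste):

```conf
/var/log/myproject/myproject.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        # rsyslog reopens its output files on SIGHUP
        systemctl kill -s HUP rsyslog.service
    endscript
}
```

Since rsyslog, not Gunicorn, holds the file open, the postrotate signal goes to rsyslog.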