Tuesday, June 18, 2019

Development Update on the Unofficial OTRS docker images

Lately there has been some work on the unofficial OTRS docker images with the help of some new contributors. I have added some improvements and some new pull requests have been merged (thanks for those!). Here is a review of the latest updates to the images:

OTRS container

Docker secrets support

Merged PR #60, which adds support for Docker Swarm secrets, allowing you to store variable values in files instead of passing them as environment variables when running in a swarm cluster.
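A minimal sketch of how this could look in a swarm stack file. The secret name and the `_FILE` variable convention below are assumptions for illustration, not the image's documented interface; check the image's README for the variables that actually support secrets:

```yaml
# Hypothetical stack fragment: the secret name and the _FILE variable
# convention are illustrative assumptions, see the image's README
version: "3.1"
services:
  otrs:
    image: juanluisbaptiste/otrs:latest
    secrets:
      - otrs_db_password
    environment:
      # assumed convention: point the variable at the mounted secret file
      OTRS_DB_PASSWORD_FILE: /run/secrets/otrs_db_password
secrets:
  otrs_db_password:
    external: true
```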

Email sending configuration is more flexible

Merged PR #64, which makes the OTRS SMTP settings completely configurable. These settings are no longer hardcoded to use this SMTP relay container to send emails (it is still referenced in the example .env file in case you plan to use it).

Now, in addition to the SMTP server, port, username and password values, you can also choose the email module to use by setting the variable OTRS_SENDMAIL_MODULE=(SMTP, SMTPS, Sendmail) to one of the values supported by OTRS.
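For example, a docker-compose fragment could look like this. Only OTRS_SENDMAIL_MODULE is named in this post; the other variable names are assumptions based on the settings described above, so check the image's documentation for the exact names:

```yaml
# docker-compose fragment; only OTRS_SENDMAIL_MODULE is documented here,
# the other variable names are illustrative assumptions
services:
  otrs:
    environment:
      OTRS_SENDMAIL_MODULE: SMTP
      # assumed names for the server/port/credential settings:
      OTRS_SMTP_SERVER: smtp.example.com
      OTRS_SMTP_PORT: "587"
      OTRS_SMTP_USERNAME: otrs@example.com
      OTRS_SMTP_PASSWORD: changeme
```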

The reason we no longer set a default for these settings is that any setting defined in Config.pm becomes read-only in the SysConfig GUI, and some people prefer using SysConfig over modifying configuration files.

So now if you don't manually configure the SMTP settings, email sending will not work until you configure an SMTP server through SysConfig.

Major updates improved

There have also been some improvements in the major version upgrade process: part of the code was reorganized to avoid some rare upgrade failures I was facing; now addons are upgraded before running the database upgrade script.

Also some new variables were added to improve the control during the upgrade:
  • New environment variable OTRS_UPGRADE_BACKUP=(yes|no) to enable or disable the automatic backup before starting a major version upgrade. The default is yes.
  • Additional SQL files can be loaded before the database upgrade.
  • New environment variable OTRS_UPGRADE_XML_FILES=(yes|no) to enable upgrading of XML configuration files during a major version upgrade.
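Put together, the upgrade controls above could be set in a compose fragment like this (variable names from the post; the values are just one possible choice):

```yaml
services:
  otrs:
    environment:
      OTRS_UPGRADE_BACKUP: "yes"      # take a backup before the major upgrade (default)
      OTRS_UPGRADE_XML_FILES: "yes"   # also upgrade XML configuration files
```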

Some more general changes

  • Another new feature is that addons can be installed at container start. Just map /opt/otrs/addons to a directory on your host, download the addons from whatever repository you like and put the .opm files there. The container will pick them up on boot and install them (and automatically upgrade them when a new version of OTRS is released).
  • Also, all password values printed to stdout are now masked by default so they aren't displayed on container boot.
  • Merged PR #57, which adds a new variable MYSQL_ROOT_USER to configure the database root username, which was hardcoded to root before.
  • Merged PR #66, which adds an example systemd service file to run the containers with docker-compose at server boot.
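The addon auto-install feature above only needs a volume mapping; a sketch (the host path is an arbitrary example):

```yaml
services:
  otrs:
    volumes:
      # put your downloaded .opm files in ./otrs_addons on the host;
      # the container installs them on boot
      - ./otrs_addons:/opt/otrs/addons
```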

Database container

For a long time there was an issue with the database container when starting a new OTRS service. The container expected the owner of the database mount point to be the same user running the database process, so you had to manually set the correct permissions on the docker host prior to starting it.

That is no longer necessary: the container now works without any filesystem permission setup. We also moved from the CentOS/MariaDB image to the official MariaDB image.

That's all for now, check the CHANGELOG for the complete list of changes.

Monday, May 20, 2019

New docker images for the upcoming Mageia 7

I have added new docker images for the upcoming Mageia 7 release. Thanks to the latest work on our image build tools, the images are available for all the architectures Mageia 7 supports:
  • x86_64
  • armv7hl
  • aarch64
The images are based on Mageia 7 beta 3 and will be periodically updated as new releases become available.

Next step, automation.

Friday, May 10, 2019

armv7hl support for Mageia official docker images

After some months of on-and-off work with @Conan-Kudo on improving Mageia's docker image build tools to support multi-arch builds, we were finally able to add armv7hl support to Mageia 6.

Usage is completely transparent to the user: when pulling the image, the docker daemon takes care of downloading the correct image for the host server architecture.

Also, now that our build tools support multi-arch builds, armv8 (aarch64) images will be available the moment Mageia 7 is released, at the same time as the x86_64 image.

We are also working on a periodically updated cauldron build. With the latest changes to the build tools it should be easier to automate, for example, a weekly or daily cauldron build.

Sunday, March 17, 2019

Some improvements on my docker simple SMTP relay

I had not done any improvements to one of my first docker containers, an SMTP relay, despite the fact that it has become quite popular on docker hub (1M+ downloads to date!!). Lately I have done some work and received some pull requests with improvements that I want to talk about.

The first feature I want to introduce is the addition of rsyslog to enable logging the container output to stdout and to log to remote logging systems. This means that now you can see the logs of the emails being processed by the container on its standard output with the docker logs command.

Also, for modifying postfix's config file, instead of using sed we now use the postconf command, which is safer. There are also some new configuration options:


Added by PR #4. Adds a new variable to configure the SMTP server port to use.


Added by PR #7. This adds a header for tracking messages upstream, which is helpful for spam filters. It will appear in the email headers as:
RelayTag: yourheadertag

Added by PR #12. This variable sets the postfix parameter mynetworks, allowing you to add additional, comma-separated subnets that can use the relay. The default value allows the following networks:


The values you set will be appended to that list, so there is no need to re-add the defaults to the variable.
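As an illustration, the relay options described above could be combined in a compose fragment like this. The variable names shown are hypothetical placeholders, since this post does not list them; check the image's README for the real names:

```yaml
# docker-compose fragment for the SMTP relay; ALL variable names below are
# hypothetical placeholders, the post does not name them, check the README
services:
  smtp:
    image: juanluisbaptiste/postfix:latest
    environment:
      SMTP_PORT: "587"                 # PR #4: SMTP server port to use
      SMTP_HEADER_TAG: yourheadertag   # PR #7: tracking header value
      SMTP_NETWORKS: 192.168.100.0/24  # PR #12: extra subnets for mynetworks
```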

Many thanks to all the authors!!

Saturday, December 15, 2018

Automating backups of OTRS docker containers

Another missing feature of my OTRS docker containers was the automation of backups. OTRS already includes a backup script, and we also included a convenience script that runs it with default parameter values (full backups, gzip compressed), which you could run on an already running container or automate from the docker host using cron and docker exec.

This approach wasn't flexible enough and needed configuration on the host side, which in some way goes against the point of keeping an application's configuration and runtime in one place.

Backing Up

So now a new feature has been added to configure and run the backup process which can be controlled with the following environment variables:

  • OTRS_BACKUP_TIME: Sets the backup execution time, in cron format. If set to "disable", automated backups will be disabled.
  • OTRS_BACKUP_TYPE: Sets the type of backup; it receives the same values as the OTRS backup script:
    • fullbackup: Saves the database and the whole OTRS home directory (except /var/tmp and cache directories). This is the default. 
    • nofullbackup: Saves the database plus only the configuration and article data, instead of the whole OTRS home directory.
    • dbonly: Only the database will be saved.
  • OTRS_BACKUP_COMPRESSION: Sets the backup compression method to use; it also receives the same values as the OTRS backup script (gzip|bzip2). The default is gzip.
  • OTRS_BACKUP_ROTATION: Sets the number of days to keep the backup files. The default is 30 days.
So, for example, to do database-only backups, compress them using bzip2 and run them twice a day, set these variables in your docker-compose.yml file like this:

  OTRS_BACKUP_TIME="0 0,12 * * *"
  OTRS_BACKUP_TYPE=dbonly
  OTRS_BACKUP_COMPRESSION=bzip2

This feature is available since the 6.0.15 build.


To restore a backup file (not necessarily one created with this container), the following environment variables must be added to docker-compose.yml (or the env file, if you are using one as you should):
  • OTRS_INSTALL=restore: Will restore the backup specified by the OTRS_BACKUP_DATE environment variable.
  • OTRS_BACKUP_DATE is the backup name to restore. It can have two values:
    • Uncompressed backup: A directory with its name in the same date_time format that the OTRS backup script uses.
    • Compressed backup file: A gzip tarball of the previously described directory with the backup files. These tarballs are created by this container when doing a backup.
For example, setting OTRS_BACKUP_DATE=2018-12-15_00-30 will look for that directory inside /var/otrs/backups/ and restore the backup files inside it. Or you could set OTRS_BACKUP_DATE=otrs-2018-12-15_00_30-fullbackup.tar.gz and the script will look for that file in the same directory and restore it.
A backup file created with this image or with any OTRS installation will work (the backup script creates the directory with that name format). This is useful when migrating from another OTRS installation to this container. I know some of the variable names are a little confusing; they will probably be renamed in a later version.
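The restore settings described above boil down to a compose fragment like this (values taken from the example in the post):

```yaml
services:
  otrs:
    environment:
      OTRS_INSTALL: restore
      # directory (or tarball) under /var/otrs/backups/ to restore from
      OTRS_BACKUP_DATE: 2018-12-15_00-30
```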

Monday, May 7, 2018

Doing major version upgrades of OTRS docker containers

One missing feature of my OTRS docker containers was automating a major version upgrade without any manual configuration. It should be as easy to do as launching them. Minor version upgrades are easy: just pull the new image and restart your containers.

For example, if you are running OTRS 6.0.1 and want to upgrade to the latest version (6.0.7 ATM):
sudo docker-compose -f docker-compose-prod.yml pull
sudo docker-compose -f docker-compose-prod.yml stop
sudo docker-compose -f docker-compose-prod.yml rm -f -v
sudo docker-compose -f docker-compose-prod.yml up   
That's it. Minor version upgrades only involve security and bug fixes, so no database schema or module upgrades are needed; the new image contains the latest OTRS packages with the latest fixes.

Major version upgrades need much more work: in addition to updating the software, other components need updating too:
  • Database schema
  • Cronjobs
  • Configuration rebuild
  • Cache delete
So I have added a new major version upgrade feature controlled by the environment variable OTRS_UPGRADE=yes. When this variable is set in the docker-compose file, the major version upgrade process will be started. Also modify your docker-compose file and make sure that the OTRS docker image has the latest tag (juanluisbaptiste/otrs:latest) on both the otrs and its data container.
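A sketch of those docker-compose changes (the service names are illustrative; use the names from your own compose file):

```yaml
services:
  otrs:
    image: juanluisbaptiste/otrs:latest   # latest tag required for the upgrade
    environment:
      OTRS_UPGRADE: "yes"                 # remove after the upgrade finishes
  otrs-data:
    image: juanluisbaptiste/otrs:latest   # same tag on the data container
```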

Then like with the minor version upgrade, pull the new OTRS image and restart your containers:
sudo docker-compose -f docker-compose-prod.yml pull
sudo docker-compose -f docker-compose-prod.yml stop
sudo docker-compose -f docker-compose-prod.yml rm -f -v
sudo docker-compose -f docker-compose-prod.yml up   
The upgrade procedure will pause the OTRS container boot process for 10 seconds to give the user the chance to cancel the upgrade. 

The first thing the upgrade process does is a backup of the current version before starting. Backups will be stored in /backups (don't forget to map that directory to one on your host so you can access them).

Then it will follow the official upgrade instructions: 
  • Run database upgrade scripts
  • Upgrade cronjobs
  • Upgrade modules 
  • Fix file permissions 
  • Rebuild OTRS configuration and delete cache.
The software components were updated when pulling the new image. Also, there was no need to stop/start services, as the upgrade occurs on container boot, which means no services are running yet. Remember to remove the OTRS_UPGRADE variable from the docker-compose file afterwards.

By the way, you could also use this container to upgrade from non-docker installations.

This feature was added to both the OTRS 5 & 6 images, so upgrades from OTRS 4 can be performed too.

WARNING: this feature is experimental and is still in heavy testing, use at your own risk !!

Wednesday, May 2, 2018

Ansible installation role for BigBlueButton

I was looking for an ansible role to install BigBlueButton with SSL support, but it seems there aren't many roles out there for this. Searching the Internet, the most complete one I found was this one, but it was outdated (last commit from two years ago) and broken, and the PR to fix it was from almost a year ago with no answer from the developer, so I figured it was abandoned and forked it.

In addition to fixing the broken stuff, this fork has the following additional features:
  • Installs latest BigBlueButton stable version, currently 1.1, but it will be updated to 2.0 when it comes out of beta.
  • Installation behind a firewall (NAT setup support).
  • Automatic SSL configuration using LetsEncrypt certificates.
  • Optionally installs the bbb-demo and bbb-check packages.

Let's see an example playbook to do a BigBlueButton install with SSL support:
- hosts: bbb
  remote_user: ansible
  become: True
  become_user: root
  gather_facts: True
  roles:
    - role: ansible-bigbluebutton
      bbb_server_name: bbb.example.com
      bbb_configure_ssl: True
      bbb_ssl_email: foo@bar.com
Replace bbb_server_name with your server's hostname and bbb_ssl_email with your email address for the LetsEncrypt certificate generation, and that's it. 

The role will install BigBlueButton according to the official installation instructions, generate SSL certificates using LetsEncrypt, and configure BigBlueButton to use those certificates. Remember, your hostname has to resolve to a public IP address; otherwise, LetsEncrypt certificate generation will not work.

If your server is behind a firewall the variable bbb_configure_nat: True needs to be added to the playbook to enable NAT configuration:
- hosts: bbb
  remote_user: ansible
  become: True
  become_user: root
  gather_facts: True
  roles:
    - role: ansible-bigbluebutton
      bbb_server_name: bbb.example.com
      bbb_configure_ssl: True
      bbb_ssl_email: foo@bar.com
      bbb_configure_nat: True
This will reconfigure BigBlueButton components to use the local IP address instead of the one the server publicly resolves to. 

If you want to install the demo package or the health check package, you can use bbb_install_demo: True and bbb_install_check: True respectively.
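Those two switches go in the same role block as the earlier examples; a minimal fragment:

```yaml
    - role: ansible-bigbluebutton
      bbb_server_name: bbb.example.com
      bbb_configure_ssl: True
      bbb_ssl_email: foo@bar.com
      bbb_install_demo: True    # installs the bbb-demo package
      bbb_install_check: True   # installs the bbb-check package
```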

There is still some missing stuff I want to do before I consider this role complete:
  • Push it to Ansible Galaxy.
  • Install the new greenlight interface to create meetings.
  • Install the new HTML5 client for testing.

As an alternative to greenlight there's another project called Mconf-web, a web portal from which you can create public and private rooms, each with its own videoconference room on the BigBlueButton server. Check out my mconf-web docker container for an easy way to use it.