Macadamian Blog

Sending Error Logs using Docker, Vagrant and SumoLogic

Christian Nadeau


Sending Error Logs to Sumo Logic from Docker

The software development team at Macadamian is continually exploring new ways to improve our DevOps practices. Over the past few weeks, as we were building some small internal apps, we noticed that there was no easy way to access our error logs. Our initial approach was to deploy our NodeJS server to a VM and log errors with console.error. When something went wrong it was easy enough to ssh to the VM, switch to the web app server user, and open the error log file from the terminal: (ssh user@server; sudo su - webUser; less path/filename.log).

In retrospect, it was not that easy to access the error logs using this approach, especially as the team grows. Our approach did not provide an easy path to receiving alerts when something went wrong. We realized that having ssh open on our web server is not that secure, and perhaps we shouldn’t have started from a vanilla VM at all. This is where Docker and Sumo Logic come in. We decided to try using Docker to package our app into something that would be easy to deploy and that would integrate with Sumo Logic, using their agent to send our logs to their servers.

Using docker-compose, it became really simple to redirect a container’s syslog and application logs to Sumo Logic. Sumo Logic already provides collector Docker images, and you can create a simple NodeJS app using the default node images. Here is the final result:

version: '2'
services:
  nodejsapp:
    # Folder containing the Dockerfile to build the NodeJS app
    build: ./nodejsdocker
    ports:
      - '8080:8080'
    volumes:
      # Create a mapping between host and container to write log files
      - /var/log/container-logs:/var/log/container-logs
    links:
      - 'sumodocker'
    # Configure the nodejsapp container to send its syslog to the Sumo Logic
    # collector (reachable locally since this container is linked with the
    # sumodocker one)
    logging:
      driver: syslog
      options:
        # TODO use a wildcard address or DOCKER_HOST
        syslog-address: 'tcp://localhost:514'
  sumodocker:
    # Folder containing the Dockerfile to build the Sumo Logic collector image
    build: ./sumodocker
    ports:
      - '514:514'
    volumes:
      # Create a mapping between host and container to read log files
      - /var/log/container-logs:/var/log/container-logs
    environment:
      SUMO_ACCESS_ID: 'YOUR_ACCESS_ID'
      SUMO_ACCESS_KEY: 'YOUR_ACCESS_KEY'
      SUMO_COLLECTOR_NAME: 'nodejsapp-collector'
      SUMO_SOURCES_JSON: '/etc/sumo-sources.json'
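The compose file references a build folder for the NodeJS app. As a hypothetical sketch (the actual Dockerfile is not shown in this post), the ./nodejsdocker folder could contain something like this, built on the default node image mentioned above:

```dockerfile
# Hypothetical Dockerfile for ./nodejsdocker; image tag and entry point
# (server.js) are illustrative assumptions.
FROM node:6

WORKDIR /usr/src/app

# Install dependencies first so they are cached across code changes
COPY package.json .
RUN npm install

# Copy the application code and expose the port mapped in docker-compose
COPY . .
EXPOSE 8080

CMD ["node", "server.js"]
```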

And sumo-sources.json configures the collector to listen for syslog on TCP/UDP port 514 and to watch for log files in /var/log/container-logs:

{
 "api.version": "v1",
 "sources": [
  {
   "sourceType": "Syslog",
   "name": "syslog-collector-container-tcp",
   "port": 514,
   "protocol": "TCP",
   "encoding": "UTF-8",
   "forceTimeZone": false,
   "category": "collector-container"
  },
  {
   "sourceType": "Syslog",
   "name": "syslog-collector-container-udp",
   "port": 514,
   "protocol": "UDP",
   "encoding": "UTF-8",
   "forceTimeZone": false,
   "category": "collector-container"
  },
  {
   "sourceType": "LocalFile",
   "name": "localfile-collector-container",
   "pathExpression": "/var/log/container-logs/my_app_*.log",
   "multilineProcessingEnabled": false,
   "automaticDateParsing": true,
   "forceTimeZone": false,
   "category": "collector-container"
  }
 ]
}

The Sumo Logic dashboard allows us to view our error logs in a configurable format and to add additional collectors depending on our needs. The dashboard also delivers alerts, which are really useful when the system goes down or anomalies are detected. This is definitely less stressful, and more revealing about what is going on, than an email from the CEO.

Given that we wanted our development process to be more modular and easier to deploy into a continuous development and integration model, Docker seemed to fulfill this need. The main goal was to stay as close to the production environment as possible to avoid “It works on my dev machine!” when it just crashes in production. We did, however, encounter a few problems using Docker out of the box:

  1. Docker came with the boot2docker image, which creates a local VM to run containers. That is great, and using it on a Unix-based system is a no-brainer, but we mainly develop on Windows and OS X, where a limitation of boot2docker blocks us from using it as a development environment: “localhost” is not directly accessible!
  2. A Dockerfile alone is great for describing part of a containerized system, but it’s not enough for the system’s deployment. This is where docker-compose comes in, exposing and linking all services of the whole system in a single, easy-to-understand file.
  3. We would have had to set up the VM’s shared folders manually each time to sync the code into the VM and then into the containers.

During this process, we came across some blog posts from ActiveLamp and Delicious Brains which pointed out that Vagrant is basically a portable environment creation tool. This really helped us build what we needed. Vagrant allowed us to:

  • build virtual machines from public images
  • install docker and docker-compose automatically
  • sync local files into the VM easily
  • configure CPU, RAM and expose ports easily
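The points above can be sketched in a Vagrantfile. This is a hypothetical example, not our actual file: the box name, resource sizes, and docker-compose version are illustrative assumptions.

```ruby
# Hypothetical Vagrantfile covering the points above: public base image,
# CPU/RAM and port configuration, file syncing, and Docker provisioning.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                       # public base image
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  config.vm.synced_folder ".", "/vagrant", type: "rsync"  # sync local files into the VM

  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
    vb.memory = 2048
  end

  # The docker provisioner installs Docker; docker-compose needs a shell step
  config.vm.provision "docker"
  config.vm.provision "shell", inline: <<-SHELL
    curl -L "https://github.com/docker/compose/releases/download/1.8.0/docker-compose-$(uname -s)-$(uname -m)" \
      -o /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
  SHELL
end
```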

Despite using Vagrant, the team ended up encountering the following problems:

  1. The performance of Vagrant’s built-in synced folders was not acceptable. We turned off the synced folder in favor of rsync, which let us sync local files to the VM directly in a much more efficient way.
  2. Installing Vagrant on OS X is very simple: download it, install it, and everything works out of the box. On Windows, however, it’s kind of tricky. First, Git needs to be installed with Git CMD available, because GitBash does not seem to respect the Windows environment variables, which prevents Vagrant’s default rsync from working properly. We also had to patch Vagrant’s rsync manually because of a known bug. Once all this patching is done, the solution works well.

Once this was completed, Docker launched something amazing to replace all of this: Docker for Mac and Docker for Windows, which basically run a native application that:

  • allows us to deploy containers locally
  • syncs files easily and quickly using the same volumes we already configured
  • uses docker.local (Mac) or docker (Windows) instead of localhost as the local endpoint URL

This solution is now in beta. Once it is officially released, it is sure to become our main development environment, helping us achieve our main goal: deploying containerized applications that run the same code in development and in production.

Once everything was deployed, we needed feedback about what was going on, and there were several options available. Since we were already familiar with Sumo Logic, and it has a Docker container that can be deployed alongside our solution, we decided to use it. All we did was use their Dockerfile and add our API keys, and we started receiving logs from the deployed containers.

Author Overview

Christian Nadeau

Christian is a veteran software developer at Macadamian with specialties in .NET (WPF, Silverlight, Windows Phone 8), Java (J2EE, JBoss), and C++ (Qt, BB10). He holds a bachelor’s degree in computer engineering from the University of Sherbrooke.