All posts by andrewmallett

Installing MongoDB on Ubuntu 14.04




Increasingly we are seeing the rise of so-called NoSQL database servers supporting the needs of Big Data. Before we look at installing MongoDB, a NoSQL database server, we first need some understanding of what we mean by NoSQL.


Well, if SQL is the Structured Query Language then NoSQL is unstructured. As a quick and easy demonstration:


  • in SQL to select data from a table we would use: SELECT * FROM employees
  • in NoSQL we would select similar data using: db.employees.find()


I do agree this is very much semantics; the difference lies less in the query language and more in the way the data is stored.

Another difference demonstrating the reduced structure is that in SQL the table and schema have to be defined before we add data. In SQL this would be the table with defined columns. In NoSQL we use collections, and the collections store documents instead of rows. A collection is created when a document is added to it. If you were to add a document to an existing collection and the new document had additional fields (elements in the document array), those fields would be added dynamically.
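As an illustrative sketch in the mongo shell (the collection and field names here are hypothetical), documents with differing fields can be inserted into the same collection with no schema change:

```
> db.employees.insert({ name: "Bob", dept: "Sales" })
> db.employees.insert({ name: "Sue", dept: "IT", phone: "555-0100" })  // extra field, added dynamically
> db.employees.find()
```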


Installing MongoDB

We will be installing MongoDB onto an Ubuntu 14.04 LTS system. In this demonstration we will look at installing MongoDB from the standard repositories. In a later tutorial we can see how this is achieved with the latest version of MongoDB by adding their repositories.

The standard repo will supply version 2.4 of MongoDB. Later versions of the database use a different format for the configuration file; when we edit the configuration we use the 2.4 format as supplied by Ubuntu. When we use the latest version of MongoDB supplied by the vendor we will see the new-style configuration.

It is important to note that MongoDB will not start with the default configuration unless you have in excess of 30GB of free disk space available to it. For a live system that is not an issue but if you are installing MongoDB for testing and learning purposes you may hit a stumbling block here. We will show how to adjust the configuration to support less disk space so that you can run and test MongoDB with minimal configuration.


To install MongoDB 2.4 from the standard repositories:

$ sudo apt-get update
$ sudo apt-get install mongodb



The mongodb package is a meta-package that will install the server, client and shared components.

Server Fails to Start

Running through this you may now be rather annoyed with me, thinking this is just another blog that hasn't been tested, written by someone who has never used the database. The server will indeed fail to start, and if we check the service status we will see that the server failed to start.

Using the following command we should see that the server has failed:

$ /etc/init.d/mongodb status

If we check the log file, which will be /var/log/mongodb/mongodb.log, we should be able to see that it failed because of insufficient disk space:


Fix the Configuration

One solution would be to give an extra 30GB of disk space to the database directory; alternatively we could configure the option smallfiles. I rather like the latter option for a small test system. The configuration file for version 2.4.9 that we use is /etc/mongodb.conf and it is a straightforward flat file. Later versions change to more hierarchical settings within the configuration.

We need to edit this file as root or with sudo and add in the following line:

smallfiles = true

The file should look something similar to this:

Note: This is for the 2.4 version of MongoDB and earlier; later versions use a different format.
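As a sketch of what the 2.4-style flat file might contain (paths as typically shipped by Ubuntu; only the smallfiles line is our addition):

```
# /etc/mongodb.conf (MongoDB 2.4 flat format - a sketch)
dbpath=/var/lib/mongodb
logpath=/var/log/mongodb/mongodb.log
logappend=true
bind_ip = 127.0.0.1
smallfiles = true
```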

Start the Server and Test Access

With the settings saved we can now start the MongoDB server:

# /etc/init.d/mongodb start
# /etc/init.d/mongodb status


The server should now report as being running. By default it will listen on the localhost interface only. We can connect to the server using the mongo shell client. Just type mongo as a standard user:

$ mongo

The output should be similar to the following screenshot:


We can check the version at any time, even though the client does report it when it starts:


We connect to the test database by default and we can return data from it using:


We can see the output in the following screenshot:


We can use the command exit to leave the mongo shell client.
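Putting the pieces together, a minimal shell session might look like this (the version string and output will vary with your install):

```
$ mongo
> db.version()         // e.g. 2.4.9 on this install
> db                   // shows the current database, "test" by default
> db.employees.find()  // returns documents once some have been inserted
> exit
```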

Auditing Logins with last


Auditing Logins with last: LPIC-1 Objective 110.1

In this module we take a look at the command /usr/bin/last and how we use it to audit user logins and system runlevel changes. The command last reads from the data file /var/log/wtmp by default. This database holds all of the login, logout and runlevel-change details for our system. In this way we can see that auditing logins with last is a simple procedure.

Basic usage

Just using the command last on its own, without arguments or options, will print detail from the file /var/log/wtmp. How far back it shows login details depends on how often the file is rotated. On my system it is rotated monthly, so the current file shows logins from the 1st of September (I am writing this in September).

$ last

We can see from the final line of output when the file was started.

If we want to read from a previous file we can use the option -f and the path to the file:

$ last -f /var/log/wtmp.1

On my system this will show August’s logins.

Show Reboots

To see reboots on the system and how long the system has been up we can use the following command:

$ last reboot

There is a pseudo user called reboot, so we can see when the system has been rebooted, if at all. In the following screenshot we can see that the system has been up for 7 days and 17 hours with the last reboot being on September 12th:

We can also display this with last -x. Using last pi would show login details just for the user pi.
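Pulling the options together, some useful invocations (all read wtmp-format files):

```
$ last                     # all entries from /var/log/wtmp
$ last -f /var/log/wtmp.1  # read a rotated file
$ last reboot              # entries for the pseudo user reboot
$ last -x                  # include shutdown and runlevel changes
$ last pi                  # entries for the user pi only
```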

The following video steps you through a demonstration:

Docker Custom Images


Creating Docker Custom Images on the Raspberry Pi

In this tutorial we are expanding on the previous video where we looked at using Docker and the Docker engine on the Raspberry Pi. Here we are still working with the basics of Docker at an overview level, but we will gain a better understanding of how and why we use Docker by building Docker custom images. We will stick with the Raspberry Pi 2 as the Docker host, but you may be using any Docker host system. As I am using ARM hardware I will use the armbuild/debian image as my base, but you may just use debian if you are using standard Intel hardware. At the end of this module you will be able to create a Docker custom image from a Dockerfile.

Select Your Base Image

In Docker, images are read-only templates that can be used to provision containers. Containers are vaguely comparable to virtual machines in other technologies, but only vaguely. One major way that Docker containers differ from traditional virtual machines is that they are designed to run one process only. This may be your web server or your database server, etc. Containers have a thin read-write layer that overlays the underlying image they were provisioned from. In this scenario we want to deploy an Apache HTTPD server with PHP. We will use a Debian base image for this. Later we will add the required packages to the base image to create the Docker custom image.

$ docker pull armbuild/debian:8.0 # I am using ARM hardware; use just debian:8.0 for Intel

Create the Dockerfile

A Dockerfile is a text file that contains instructions on how to build the new image. It has to be called Dockerfile, in that exact case. Ideally you will create this in its own empty directory, as contents from the directory that the Dockerfile is located in can be added to the image you are building.

For the purpose of this we will create a new test directory in our HOME directory:

$ mkdir $HOME/test ; cd $HOME/test

From within the new directory we can create the $HOME/test/Dockerfile with the editor of choice:

FROM armbuild/debian:8.0
RUN apt-get update && apt-get install -y php5 && apt-get clean
EXPOSE 80
CMD ["/usr/sbin/apache2ctl","-D","FOREGROUND"]


FROM – This instruction defines the image that our new image is based on.


RUN – These are instructions run inside a temporary container during image creation. We have combined the commands together to reduce the number of layers created in the resulting image. Installing PHP5 on the Debian image will install the Apache HTTPD package as a dependency.


EXPOSE – We tell a container run from this image to listen on port 80, or to open port 80. We need this to talk to the web server. We will later map port 80 on the host to port 80 on the Docker container to allow access from external hosts.


CMD – This defines the command that will be PID 1, or Process ID 1, when the container starts. We can use CMD or ENTRYPOINT, but CMD allows us to override the command from the command line whereas ENTRYPOINT does not. This is sometimes useful in fault-finding a container, in that we can start it with a bash shell when ENTRYPOINT is not defined.
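For example, because we used CMD rather than ENTRYPOINT, a container can be started with a shell in place of Apache for fault-finding (debian/web is the image name we build in this tutorial):

```
$ docker run -it --rm debian/web /bin/bash
```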

Create the Docker Custom Image

Now we have the Dockerfile we can build the new image. From the test directory:

$ cd $HOME/test
$ docker build -t debian/web .

We use the -t option to set the tag or name of the image. As it is based on debian I use that, and then /web as it is a web server image. These names are fine so long as you do not intend to upload them to Docker Hub, where they will need to be named after your user ID. The dot or period at the end denotes that we look for the Dockerfile in the current directory. When we run the command it may take a few minutes installing the software.

$ docker images

Using the above command we should see the new image once it is created.

Building Containers

We can run a test container to see that it works:

$ docker run -d -p 80:80 --name test debian/web

We should now be able to browse to the Docker host on port 80 and see the standard Debian welcome page. To add our own content we need to ensure the website is available in a directory, such as $HOME/www. We will first stop the test container and then start a new one with the $HOME/www directory on the host mapped to /var/www/html/ on the container. Remember we must create the website on the Docker host and mount it into the container at runtime:

$ docker stop test
$ docker rm test
$ docker run -d -p 80:80 -v /home/pi/www:/var/www/html --name test debian/web

Now when browsing to the site we should see the content of the website we created, served from our brand new container.
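As a quick sketch of adding content and checking it from the Docker host itself (the index.html content and curl check are illustrative):

```
$ echo '<h1>Hello from Docker</h1>' > $HOME/www/index.html
$ curl http://localhost/
```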

The video follows; please take a look.

Raspberry Pi Docker Host


Using an RPI Docker Host

In this blog we look at using a Raspberry Pi 2 as a Docker host device. Yes, an RPI Docker host. In the video we use the RPI 2, but I also have it running on a Model B with the single core and 512MB RAM. The version 2 has 4 cores and 1GB RAM so is better suited to this type of work, but for simple learning the Model B or B+ is fine. I am also running this with just an 8GB SD card, or MicroSD card in the Model 2. We don't need a lot of space either for the host OS or for Docker containers and images.

Setting the Hostname

This is a little different in HypriotOS. Their boot loader sets the hostname, and we set it in the /boot partition. This means that it can be set before you boot the system if you access the /boot device on another system, even Windows. As my system is up and running we can configure this by editing the /boot/occidentalis.txt file and changing the hostname from black-pearl to your own hostname. A reboot will then configure the hostname and add entries to /etc/hosts.
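The relevant entry in the file is the hostname setting; a sketch with a hypothetical name:

```
# /boot/occidentalis.txt
hostname=mypi
```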

Docker is Pre-Installed

Docker 1.6 is pre-installed on this system, so this is an RPI Docker host out of the box. Whilst 1.8 is the latest version, 1.6 is not old and is newer than many current distributions supply. The user pi is a member of the docker group and so can manage Docker; any member of the docker group can manage the Docker host. The password for pi is raspberry and the root password is hypriot.

Show the client version:
$ docker -v

Show the client, server and golang versions:
$ docker version

Display more detailed information:
$ docker info

What is Docker

Docker is a container virtualization product, allowing quick and easy deployment of services in separate micro operating systems that share the host kernel. The downloaded images are very small and customizable. If you want a web server or MySQL server you can spin up a container in seconds and the service will be running.

Take a Look

We will now fire up a web server in its own OS with its own IP address. This will be hidden behind a NAT network on the Docker host. To access the web server running in the container we map port 80 on the host to port 80 on the container. As we don't have any images yet, it will download and spin up with the one command:

$ docker run -d -p 80:80 hypriot/rpi-busybox-httpd

This literally takes a few seconds, and then we can browse to the Docker host and see the website. Sure, the content is not our own, but it is easy to add content to the container as it starts.

The video will step you through what we have discussed:

Configuring a CentOS 7 Kerberos KDC




We will now configure a Kerberos KDC that we can use for authentication. In this tutorial we will configure a CentOS 7.1 host as a KDC and also use it as a Kerberos client to authenticate SSH logins. In a later tutorial we will add in a second client server. By the end of this tutorial you will be comfortable with configuring a CentOS 7 Kerberos KDC.

What is Kerberos

Kerberos is an authentication mechanism. Once authenticated to the Kerberos server, a client is issued a token. This token can be used to authenticate the client to Kerberized services such as SSH. In this way we can log in to the Kerberos server once and use the token for password-less logins. In practice this means that when using SSH to remotely manage many servers I can log in once a day to the KDC and use the token to authenticate to all the other servers without the need for further passwords. By default a token lasts 24 hours, but this is adjustable.


Time services will need to be configured on the KDC and the servers that make use of it. In my scenario the servers that I will use are on the same VM host and so share the same time. We will also need to resolve host names through DNS; in the demonstration we will use local hosts files. In this demo we will use just the one server, but in the next tutorial we will introduce the 2nd client server. The KDC will be and the 2nd server will be On both servers I have hosts file entries for both servers.

Install Software on KDC

# yum install -y krb5-server krb5-workstation pam_krb5

Edit Server Configuration Files

The configuration files for the server are located in the directory /var/kerberos/krb5kdc. We have two files to edit, so we will move to this directory:

cd /var/kerberos/krb5kdc

There will be two files in this directory:

  • kdc.conf
  • kadm5.acl

The conf file is the server configuration and the acl file is, well, the ACL. The ACL grants all privileges to anyone with the admin role. In each file we need to change the Kerberos realm from EXAMPLE.COM to your own realm name. In our case this is TUP.COM.

The kadm5.acl is listed in the following screenshot:
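For the TUP.COM realm the ACL amounts to a single line, granting all privileges to any principal with the admin instance:

```
*/admin@TUP.COM *
```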

The kdc.conf file is shown below:

Edit the Client Configuration

The server that we are using as the KDC will also be a Kerberos client, allowing users to authenticate via Kerberos to services that it hosts. For this we edit the file /etc/krb5.conf. In this file we remove the comment (#) at the start of each of the lines. We then again replace all instances of EXAMPLE.COM with our own realm, in my case TUP.COM. We do this both for the DNS name in lower-case and the realm name in upper-case. My file is shown in the following screenshot:

Take care in editing the file to ensure all changes have been made. This file can act as the default for any new client added to the realm; we simply copy this file to the next client.
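As a sketch, the key stanzas of /etc/krb5.conf after editing for the TUP.COM realm would look like the following (the KDC hostname here is a placeholder):

```
[libdefaults]
 default_realm = TUP.COM

[realms]
 TUP.COM = {
  kdc = kdc.tup.com
  admin_server = kdc.tup.com
 }

[domain_realm]
 .tup.com = TUP.COM
 tup.com = TUP.COM
```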

Create the KDC Database

When creating the KDC database we will need an entropy pool of random data. Please check that your rngd service is running correctly, as it does not start by default in CentOS 7 or 7.1; you can read my post on setting this up here. You will be prompted to set a secure password for the database key when you run the following command:

kdb5_util create -s -r TUP.COM

Start and Enable Kerberos

We are now ready to start and enable the two services:

systemctl start krb5kdc kadmin
systemctl enable krb5kdc kadmin

Create Principals

Objects in the KDC database are known as principals and can be users or hosts. We will assign root admin rights, and our standard user tux will be added as a user so they can use Kerberized services:

# kadmin.local
kadmin.local: addprinc root/admin
kadmin.local: addprinc tux
kadmin.local: quit

We will now add the host so that it may host Kerberized services. We must add principals for each user and server that will use Kerberos. We will also copy the encrypted Kerberos keytab file to the new host.

# kadmin.local
kadmin.local: addprinc -randkey host/
kadmin.local: ktadd host/
kadmin.local: quit

Configure SSH Client

On the Kerberos client server we need to edit the SSH client file to allow all clients by default to use Kerberos authentication. To do this, edit the file /etc/ssh/ssh_config and add the lines:

GSSAPIAuthentication yes
GSSAPIDelegateCredentials yes

Allow Kerberos Authentication to the Server

From the command line we can allow Kerberos authentication either using authconfig-tui or simply with the command:

# authconfig --enablekrb5 --update

This will make the changes to the PAM configuration. I would also reload SSH at this stage to be certain everything is in place:

# systemctl reload sshd

I have no firewall in place, but if you use a host-based firewall you will need to open port 88 on TCP and UDP.

Test SSH Authentication Using Kerberos

Log in as the standard user you added to the Kerberos database; in my case this was the user tux. Connect to the KDC and authenticate to receive a token with the following command:

$ kinit

You can list your token with klist:

$ klist

The output from klist is shown in the following screenshot:


You will now be able to authenticate using Kerberos to the SSH server:

$ ssh

You may have to accept the public key if this is the first connection, but you will not be prompted for a password. If we have more SSH servers in the realm the behavior would be the same: we would run kinit just the once and we would be able to access all of the servers in the realm with the one single password authentication.

The following video can step you through the process:

The Urban Penguin is your comprehensive provider for professional Linux software development, training and services. Every day decision makers are barraged with information on Windows vs. Open Source. Making a decision on which platform to bet your business on is a critical decision and significant investment. We offer industry-leading cost-effective business solutions using the Linux platform. World-renowned Linux expert, Andrew Mallett, believes in the Open Source platform. Let The Urban Penguin help you make the best decisions for your software development and business needs.

CentOS 7 rngd Will Not Start


When rngd Will Not Start

Do you ever have one of those Monday mornings? Yes, one of those! You would believe that with CentOS 7.1, no less, little niggles would have been ironed out and the world would be a wonderful place. Apparently not, and we find that on CentOS 7 rngd will not start by default.

OK, there is a lot to look after and perfection is never there, even with my spelling. So believe me, I am not throwing rocks, but I do want to explain how and why we start the rngd service.

Firstly: The Why

Many user and system programs in Linux need entropy when working with cryptography. Entropy in Linux is defined as randomness collected by the operating system. Originally this was collected from the pseudo-device /dev/random, from data generated by device drivers and services. The data sent to /dev/random is known as the entropy pool, and when the pool is empty a cryptographic service or user program may stop. This would not be great on your HTTPS-enabled web server.

To ensure the entropy pool is not exhausted, the device /dev/urandom is now used by default before falling back to /dev/random. Rather than collecting data from device drivers, /dev/urandom can have random data fed directly to it by the rngd service. This is part of the rng-tools package on both Debian and Red Hat based systems.

As a simple demonstration of the exhaustion of the entropy pool when the service is not running, we can try to generate a new gpg (GNU Privacy Guard) key. If this is executed whilst the service is failed or not running, entropy will be gathered from /dev/random and we will most often be prompted for more random data.

If the rngd service is running there is always enough entropy in the pool.

Secondly: The Problem

OK, so I am sold on the idea of a limitless entropy pool. What is the problem with the service? It doesn't start, that is what the problem is!

The command that the service runs from the service unit is: rngd -f

This is just a little wrong. The -f option runs it in the foreground, which is what systemd expects for a simple service; the real error is that the unit file does not specify the -r option with the path to the device file to use. Without it rngd defaults to /dev/hwrandom, which does not exist.

Thirdly: The Fix

We can easily rectify the problem by editing the service unit file /usr/lib/systemd/system/rngd.service. The ExecStart line should be edited so that it reads as follows:

ExecStart=/sbin/rngd -f -r /dev/urandom

This is also shown in the following screenshot:


We will need to reload the unit file once it has been edited. We can use the following command to achieve this:

# systemctl daemon-reload
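Alternatively, as a sketch, rather than editing the packaged unit file (which a package update may overwrite), the same change can live in a systemd drop-in file; the empty ExecStart= first clears the packaged value:

```
# /etc/systemd/system/rngd.service.d/override.conf
[Service]
ExecStart=
ExecStart=/sbin/rngd -f -r /dev/urandom
```

After creating the drop-in, the same systemctl daemon-reload applies.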

With the new unit loaded we can now start the service and check the status:

# systemctl start rngd
# systemctl status rngd

The following video will step you through the process.

Training Tasters with Linux Format



Linux Format Publishes theurbanpenguin Videos

If you want a little taster of my training style then you will find my work on YouTube, where we have over 800 videos and have been the home of the penguin since 2009. This month Linux Format magazine has commissioned 16 videos to be included in LXF202, the September 2015 issue.

The videos are based on Ubuntu 15.04 and include tutorials on:

  • End User
    • LibreOffice: Calc, Writer and Impress
    • Chrome and Firefox
    • Installing Ubuntu
  • Developer
    • Using vim
    • GoogleGo
    • Python
    • Perl
  • Administrator
    • Systemd
    • ufw
    • apt

In this way you can taste my teaching style and have the videos at home to keep.


Kernel Runtime Management and Troubleshooting


This objective of LPIC-2 201 is mainly about understanding how to load and unload kernel modules. Of course, much of the time this is handled automatically as the hardware is detected and there is little that we have to do. But for those times it goes wrong, or where we are perhaps testing different drivers, we have to know what happens behind the scenes. To begin we can run the command:

uname -r

This will report the kernel version. We could also read this from the file in the procfs:

cat /proc/version

The uname command can be useful. If we want to go to the directory for the running kernel modules we can use the command:

cd /lib/modules/$(uname -r)
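As a tiny sketch, the release string from uname -r can be captured once in a variable and reused when building module paths:

```shell
# Capture the running kernel release and derive the module directory.
rel=$(uname -r)
moddir="/lib/modules/$rel"
echo "Running kernel:   $rel"
echo "Module directory: $moddir"
```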

To list the running modules, that is the modules loaded by the kernel, we can use the command:

lsmod
From the output we can see that the module cdrom is in use by the module sr_mod; note the "Used by" column. We cannot unload the cdrom module until the module sr_mod is unloaded. We can also view the loaded modules via the file /proc/modules.

To unload modules we can use rmmod or modprobe -r:

modprobe -r sr_mod   # or: rmmod sr_mod
modprobe -r cdrom    # or: rmmod cdrom

Modules can be loaded with modprobe or insmod. Modprobe is far more convenient:

  • It does not need the full path to the module, whereas insmod does
  • It will load any required dependency modules, whereas insmod needs you to load each module in the correct order

To show this we can use modprobe with the -v option. With both the sr_mod and cdrom modules unloaded:

modprobe -v sr_mod


We can see that insmod is used in the background, that it needs the full path to the module, and that the dependency module is also loaded. The dependencies are maintained via the file:

/lib/modules/$(uname -r)/modules.dep

The command modinfo can be used to display information about a module, including the full path to the module file and any options that it may accept during the load phase. These options can be applied automatically each time the module loads through configuration files in /etc/modprobe.d/. We can also apply aliases to module names here; this is often used when a program will call one module but you would like another used:

alias kangaroo cdrom

Now we can use:

modprobe kangaroo

And the cdrom module will load.


Compiling a Linux Kernel on CentOS 6.5


Getting ready for the LPIC-2 201 exam we look at objective 201.2, compiling a Linux kernel. For this we use the 3.14 kernel on CentOS 6.5, changing from the original 2.6.32 kernel that ships with CentOS.

Having downloaded the kernel we expand it into the directory /usr/src:

tar -Jxvf linux-3.14.3.tar.xz -C /usr/src/

From there the steps are:

  1. cd /usr/src
  2. ln -s linux-3.14.3 linux
  3. cd linux
  4. yum install gcc ncurses-devel
  5. make mrproper
  6. make menuconfig
  7. make bzImage
  8. make modules
  9. make modules_install
  10. make install

The last step, make install, is the greatest time saver. This copies the kernel to /boot, creates a grub entry and uses dracut to make a new initramfs. Should we want to check on the make targets we can use:

make help

uptime, w, top and sar -q


The final elements of objective 200.1 of the LPIC-2 exam 117-201 that we will look at are the load averages that we can read from a range of tools on the Linux command line interface. Using CentOS 6.5 we will show examples of this using uptime, w, top and sar -q. We will also throw a little memory monitoring in with free and swapon -s.


Starting off with uptime: the output will show, well, the uptime, but will end with the load average over the last 1, 5 and 15 minutes. The values for these are considered high if they exceed the number of CPUs we have for any lengthy amount of time. For 1 CPU we do not want to see a load value over 1, certainly not in the 15-minute column.


This same load average information can be seen with the commands top and w. If you have the package sysstat installed then sar -q will display load averages over a time period.


You may also read the load averages from the file /proc/loadavg.
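The file holds the three averages followed by a running/total process count and the last PID; a small shell sketch reading the fields and comparing against the CPU count:

```shell
# /proc/loadavg fields: 1min 5min 15min running/total last-pid
read one five fifteen rest < /proc/loadavg
cpus=$(nproc)
echo "Load averages: $one (1m) $five (5m) $fifteen (15m) on $cpus CPU(s)"
```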