OpenStack Cloud

Overview

The purpose of this project is to deploy a functional OpenStack cloud across five servers.

Motivation
Our motivation for building the OpenStack cluster comes from the original cluster, which has become far too dated to keep up with the demands placed on most networked systems today. The old cluster was a "Beowulf" cluster, a design that has become much less popular over the years in favor of cloud computing platforms. By implementing a cloud computing cluster we have given ourselves the opportunity to learn about large-scale server deployment using current standards, while also giving the college something useful that will remain helpful after we have graduated.
Prior to receiving the systems needed to implement OpenStack, we spent several weeks gathering information on the following questions:
  • What are some of the common industry standards for implementing cloud computing?
  • What advantages do specific deployment choices have, and do they provide functionality the college will find useful?
  • How easy will the system be for the college to maintain for future uses, such as LDAP integration and academic web hosting services such as Moodle running on Apache?
The options we narrowed it down to were AWS (Amazon Web Services), VMware, and OpenStack. The first advantage of OpenStack over VMware or AWS is that with OpenStack we could deploy a service called JuJu, described below, which addresses our third question concerning maintainability. The second reason we chose OpenStack was the ease of integration, especially for contingencies such as the failure of a node, since the system is built with these problems in mind. We also gain the option to run much more dynamic virtual machines using MaaS, which has the best documentation when used in conjunction with OpenStack.

Technical Specifications

We have deployed five systems so far, each with the following specifications.

Dell PowerEdge 420
Intel Xeon E5-2430 @ 2.20GHz
16 GB RAM
6 NICs
2 x 1 TB HDDs set up with RAID 1 for redundancy

The reason for this hardware choice was to have systems capable of supporting multiple virtual machines while also having redundancy in case of failure. It was more important to have five systems with identical specifications than to have more powerful systems, so we looked for equipment that satisfied both needs.

Software

MaaS - Metal as a Service
Metal as a Service is described as a "provisioning construct" that automates the deployment of nodes into a centralized pool of resources, allowing for dynamic scaling based on need. This is ideal for systems with large workloads, such as a cloud.
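As a rough illustration of what this looks like in practice (these commands assume a MaaS profile named maas that has already been logged in, as shown in the commands later on this page):
maas-cli maas nodes list ##Lists every node MaaS has enlisted along with its current state
maas-cli maas node-groups list ##Lists the node groups (clusters) MaaS is managing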


JuJu - Service Management
JuJu is service management software that runs alongside a deployed OpenStack main node, providing a drag-and-drop interface to connect services and spot problems quickly using visual elements. It also supports a command-line interface for quick remote troubleshooting of the various services, with status logging for failed or stopped services and their connections to other services or nodes.
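As a minimal sketch of the command-line side (the charm names below are illustrative examples, and keystone/0 refers to the first deployed unit of that service):
juju status ##Summarizes every deployed service, its units, and any failed states
juju deploy mysql ##Deploys the MySQL charm onto a free node provided by MaaS
juju deploy keystone ##Deploys the Keystone identity charm
juju add-relation keystone mysql ##Connects Keystone to its database; JuJu handles the configuration
juju ssh keystone/0 ##Opens a shell on the first Keystone unit for remote troubleshooting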
OpenStack - Cloud Deployment
OpenStack is a suite of software from various developers designed to work in conjunction to quickly deploy a cloud computing platform. The central components, with their roles, are as follows.
Horizon
The Horizon component of OpenStack provides the OpenStack Dashboard, giving users and administrators a quick and flexible way to manage a deployed cloud cluster, with support for multiple accounts at different levels of access as well as features such as custom scripting and cron job control.
Glance
Glance includes a registry and database for managing the images that are pushed onto boxes as they are added or replaced. In our case we keep only two images, because attempting to maintain several distributions working in conjunction is problematic: all of the nodes run Ubuntu 12.04 LTS, and the MaaS node runs Ubuntu 13.04 for the best JuJu support.
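For illustration, registering an image with Glance from the command line looks roughly like the following (the file name here is the stock Ubuntu 12.04 cloud image and is only an example):
glance image-list ##Shows the images currently registered with Glance
glance image-create --name="Ubuntu 12.04 LTS" --disk-format=qcow2 --container-format=bare --is-public=True < precise-server-cloudimg-amd64-disk1.img ##Registers a downloaded cloud image so instances can be built from it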
Nova
Nova includes the API and console for the compute service, which has the role of scheduling and running virtual machine instances across the boxes and carrying out cluster-wide operations such as mass commands.
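A hypothetical example of launching an instance through Nova (the instance name and flavor are placeholders):
nova list ##Lists the instances currently running on the cloud
nova boot --image "Ubuntu 12.04 LTS" --flavor m1.small test-instance ##Launches a small instance from the Glance image above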
Cinder
Cinder is the volume service; it includes a database and scheduler for writing the cloud's bulk data to dynamically allocated chunks of space.
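A sketch of how a volume would be created and attached (the names are placeholders, and the volume id comes from the create step):
cinder create --display-name test-vol 10 ##Creates a 10 GB volume in the dynamically allocated pool
cinder list ##Shows all volumes and their status
nova volume-attach test-instance *volume id* /dev/vdb ##Attaches the volume to the instance as a second disk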
Quantum
Quantum includes the quantum-server and handles the networking between boxes, provided proper DNS and DHCP settings have been configured.
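A minimal example of defining a network through Quantum (the network name and address range are placeholders):
quantum net-create private ##Creates a network for instances to attach to
quantum subnet-create private 10.5.5.0/24 ##Gives the network a subnet to allocate addresses from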
Keystone
Keystone has the job of maintaining authentication across services for several different parts of the OpenStack backend, such as the KVS and SQL stores, and login services such as PAM or LDAP.
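For illustration, creating a tenant and user in Keystone uses the same placeholder style as the MaaS commands further down this page:
keystone tenant-create --name demo ##Creates a tenant (project) to group users and resources
keystone user-create --name *user* --tenant-id *tenant id* --pass *password* ##Creates a user inside that tenant
keystone user-role-add --user-id *user id* --role-id *role id* --tenant-id *tenant id* ##Grants the user a role within the tenant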

The following diagram shows these services and their relationships within an OpenStack platform.
Openstack-logical-arch-folsom.jpg

Procedures and Complications

In order to deploy the OpenStack cloud we had to install the suite of software on a main system that would act as the head of the cluster. We had the option of declining to use MaaS, which is separate software that maintains the nodes' connections to one another and manages resources.

Pros
If we use MaaS then we can use JuJu, which must be deployed in conjunction with both OpenStack and MaaS. This gives both ourselves and the college maintainability, while also providing a good troubleshooting tool and a quick recovery solution.
Cons
We were unfamiliar with MaaS and unsure how easy it would be to configure, or what complications would be involved in tying it to the OpenStack software. We also saw a large number of people having trouble with various details of this configuration, and Launchpad provided no real help for most of the problems they encountered.

Despite our reservations, we agreed that it would be much better to attempt to implement MaaS; if we realized it was not going to be feasible, we could choose a cut-off point and move forward with either manually configuring the node connections or selecting a less powerful but easier-to-configure set of software.

Problems with MaaS
As we had predicted, the deployment of MaaS did not go smoothly at first. We went through several problems in the course of the first month, including DNS names failing to resolve, which we attempted to fix by manually filling in the /etc/hosts and /etc/resolv.conf files that control hostname resolution and nameserver selection.
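As a rough illustration only (the real addresses depended on our subnet), the manual entries looked something like this:
##Hypothetical /etc/hosts entries mapping node hostnames to fixed addresses
10.0.0.2   maas-master.local maas-master
10.0.0.3   node01.local node01
##Hypothetical /etc/resolv.conf entry pointing nodes at the MaaS DNS server
nameserver 10.0.0.2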
In the meantime we were still able to bring the MaaS browser interface up using the following commands.
apt-get install maas maas-dhcp maas-dns ##Installs the base components of MaaS
maas createadmin --username=*user* --email=*email* --password=*password* ##Creates the main user in control of MaaS
maas-cli login maas http://*ip address of host*/MAAS/api/1.0 *the key generated* ##Authenticates with the MaaS API
maas-cli maas node-groups import-boot-images ##Downloads the images used to install Ubuntu 12.04 LTS (our choice) on the nodes
maas-import-pxe-files ##Imports and generates the PXE boot files used by MaaS to boot nodes with specific configurations

The second issue, which resulted from the manual edits to /etc/resolv.conf and /etc/hosts, was that we could get access to the internet through the main node but not through any of the subsequent nodes. After several weeks of frustration with the online community, Canonical, and the vast number of syntax changes across revisions, we finally repaired the DNS by emptying the Default Domain field in the MaaS interface, which had previously been set to 'local'. This allowed MaaS to appropriately configure the DNS zones, which in turn fixed the networking issues, as they turned out to be the result of IP addresses failing to resolve across nodes. The only remaining issue at this point was forwarding traffic, which we fixed using the following script. The script allows us to use the main node as a router, passing network traffic through it to the nodes behind it.
  • Note: we did not write this script; we simply found it convenient for our use.
echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
DEPMOD=/sbin/depmod
MODPROBE=/sbin/modprobe

EXTIF="em1"
INTIF="em2"
#INTIF2="em1"
echo "   External Interface:  $EXTIF"
echo "   Internal Interface:  $INTIF"

#======================================================================
#== No editing beyond this line is required for initial MASQ testing == 
echo -en "   loading modules: "
echo "  - Verifying that all kernel modules are ok"
$DEPMOD -a
echo "----------------------------------------------------------------------"
echo -en "ip_tables, "
$MODPROBE ip_tables
echo -en "nf_conntrack, " 
$MODPROBE nf_conntrack
echo -en "nf_conntrack_ftp, " 
$MODPROBE nf_conntrack_ftp
echo -en "nf_conntrack_irc, " 
$MODPROBE nf_conntrack_irc
echo -en "iptable_nat, "
$MODPROBE iptable_nat
echo -en "nf_nat_ftp, "
$MODPROBE nf_nat_ftp
echo "----------------------------------------------------------------------"
echo -e "   Done loading modules.\n"
echo "   Enabling forwarding.."
echo "1" > /proc/sys/net/ipv4/ip_forward
echo "   Enabling DynamicAddr.."
echo "1" > /proc/sys/net/ipv4/ip_dynaddr 
echo "   Clearing any existing rules and setting default policy.." 

iptables-restore <<-EOF
*nat
-A POSTROUTING -o "$EXTIF" -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A FORWARD -i "$EXTIF" -o "$INTIF" -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT 
-A FORWARD -i "$INTIF" -o "$EXTIF" -j ACCEPT
-A FORWARD -j LOG
COMMIT
EOF

echo -e "\nrc.firewall-iptables v$FWVER done.\n"
Problems with OpenStack
Our first problem was that there have been several revisions of OpenStack, so the solutions we found often used outdated syntax. The networking module, for example, was renamed from Quantum to Neutron after a trademark dispute, which led to syntax errors we could not track down for two days.
Problems with JuJu

Most of the problems that we encountered while integrating JuJu with OpenStack and MaaS were related to Python and its changes in syntax across revisions. In order to make JuJu work we had to use Joomla, which required that we roll back the installed version of Python to an earlier one.

Future Plans

Currently we have deployed the OpenStack cloud with five total systems. The remaining jobs include LDAP integration with the Keystone authentication module, so that we can examine the possibility of using the cloud to host home directories and more. In addition, we plan to bridge our network connections so that we can assign IP addresses to virtual machines and reach them remotely through the cloud; a sketch of one possible bridge configuration follows below. Lastly, if time allows, we would like to examine the limits of our cluster and what loads it can handle while still providing optimal efficiency.
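The bridging has not been done yet, so the following is only a sketch of the kind of /etc/network/interfaces stanza we expect to use on the main node (the bridge name and addresses are placeholders, and the bridge-utils package is assumed to be installed):
##Hypothetical bridge stanza for /etc/network/interfaces
auto br0
iface br0 inet static
    address 10.0.0.2
    netmask 255.255.255.0
    bridge_ports em2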
