In this document you will find instructions on how to build Ubuntu, Fedora, and CentOS images with Apache Hadoop versions 1.x.x and 2.x.x.
As of now, the vanilla plugin works only with images that have a pre-installed version of Apache Hadoop. To simplify the task of building such images, we use Disk Image Builder.
Disk Image Builder builds disk images using elements. An element is a particular set of code that alters how the image is built, or runs within the chroot to prepare the image.
Elements for building vanilla images are stored in the Sahara extra repository.
To create vanilla images follow these steps:
Clone the repository https://github.com/openstack/sahara-image-elements locally.
Run the diskimage-create.sh script.
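Taken together, the two steps above can be sketched as follows (the checkout directory name is simply the repository's default):

```shell
# Fetch the image elements and run the build script from the checkout.
# The script must be run with root privileges (see below).
git clone https://github.com/openstack/sahara-image-elements
cd sahara-image-elements
sudo bash diskimage-create.sh
```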
You can run the diskimage-create.sh script from any directory (for example, your home directory). By default, the script attempts to create cloud images for all versions of the supported plugins and all supported operating systems (a subset of Ubuntu, Fedora, and CentOS, depending on the plugin). The script must be run with root privileges.
sudo bash diskimage-create.sh
NOTE: If you don't want to use the default values, pass your own parameter values to the script.
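For example, you can restrict the build to a single plugin, base OS, and Hadoop version by passing flags to the script. The exact flags and the values they accept are listed by `diskimage-create.sh -h` and depend on the script version; the invocation below is illustrative:

```shell
# Build only an Ubuntu image for the vanilla plugin with Hadoop 2.x
# (-p selects the plugin, -i the base OS, -v the Hadoop version).
sudo bash diskimage-create.sh -p vanilla -i ubuntu -v 2
```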
The script will then create the required cloud images using image elements that install and configure all the necessary packages. You will find the created images in the current directory.
Note
Disk Image Builder will generate QCOW2 images, which are used by the default OpenStack QEMU/KVM hypervisors. If your OpenStack deployment uses a different hypervisor, convert the generated image to the appropriate format.
The VMware Nova backend requires the VMDK image format. You can use the qemu-img utility to convert a QCOW2 image to VMDK:
qemu-img convert -O vmdk <original_image>.qcow2 <converted_image>.vmdk
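After converting, you can confirm the output format with `qemu-img info` (the placeholder file name matches the command above):

```shell
# Inspect the converted image; the "file format" line should read "vmdk"
qemu-img info <converted_image>.vmdk
```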
For finer control of diskimage-create.sh, see the official documentation or run:
$ diskimage-create.sh -h