Table of Contents
- Custom Images for Cloudbreak
This section covers advanced topics for building Custom Images.
If you plan to use your own hardened base image, it must meet the following requirements:
- The /tmp/** directory is used by the Packer provisioners to copy temporary install scripts and other binaries, so Packer must have write access to it.
- Packer communicates with its plugins via RPC calls. By default, it tries to allocate ports in the range 10,000 - 25,000. If security restrictions in your custom image prevent this, you can adjust the range with the PACKER_PLUGIN_MIN_PORT and PACKER_PLUGIN_MAX_PORT environment variables (see https://www.packer.io/docs/other/environment-variables.html).
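If you need to restrict the plugin port range, a minimal sketch looks like the following (the bounds shown are examples only; choose values permitted by your security policy):

```shell
# Restrict the port range Packer's plugin RPC may use.
# The bounds below are illustrative, not required values.
export PACKER_PLUGIN_MIN_PORT=10000
export PACKER_PLUGIN_MAX_PORT=11000
```

Export these in the shell that invokes the image burning make target so that Packer inherits them.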
This section describes the ways in which the image burning process can be customized.
You can override the disk size (in GB, 25 GB by default) of the VM used for Packer burning by modifying the IMAGE_SIZE parameter in the Makefile. For example:
IMAGE_SIZE = 50
After saving the Makefile, the modified value is applied to all subsequent image burns.
You can set the cloud provider regions the image is copied to by editing the value of the following parameter in the Makefile. By default, burnt images are copied to all available regions.
Cloud Provider | Parameter Name | Default value |
---|---|---|
AWS | AWS_AMI_REGIONS | ap-northeast-1,ap-northeast-2,ap-south-1,ap-southeast-1,ap-southeast-2,ap-southeast-3,ca-central-1,eu-central-1,eu-west-1,eu-west-2,eu-west-3,sa-east-1,us-east-1,us-east-2,us-west-1,us-west-2,ap-east-1,eu-south-2,eu-central-2 |
Azure | AZURE_STORAGE_ACCOUNTS | East Asia, East US, Central US, North Europe, South Central US, North Central US, East US 2, Japan East, Japan West, South East Asia, West US, West Europe, Brazil South, Canada East, Canada Central, Australia East, Australia South East, Central India, Korea Central, Korea South, South India, UK South, West Central US, UK West, West US 2, West India |
After saving the Makefile, the modified values are applied to all subsequent image burns.
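For example, to copy the AWS image to only two regions, you could set the parameter in the Makefile like this (the region list here is purely illustrative):

```makefile
AWS_AMI_REGIONS = eu-west-1,us-east-1
```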
Note: If you experience a failure during the SSH connection step, you might need to adjust the SUBNET_ID and VPC_ID environment variable values.
For example:
export SUBNET_ID=subnet-aaaaaaaaaaaaaaaa
export VPC_ID=vpc-aaaaaaaaaaaaaaaa
By default, the transient EC2 instance is created in the same VPC and subnet as the process building the image.
You can burn a prewarmed image compatible with the FreeIPA service. To do so, export the following variable before invoking the image burning make target:
export CUSTOM_IMAGE_TYPE=freeipa
Running this installs the packages necessary for FreeIPA and skips the modifications required only for Cloudbreak-compatible custom images.
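Put together, the FreeIPA flow is a sketch like the following (the make target name is a hypothetical placeholder; substitute your usual image burning target):

```shell
# Select the FreeIPA-compatible image flavour for the next burn.
export CUSTOM_IMAGE_TYPE=freeipa
# Then invoke your usual image burning target, e.g.:
#   make build-aws-centos7   # target name is a placeholder, not from the docs
```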
If you would like to start from a customized image, you can:
- Set Packer to start from your own custom image
- Add your custom logic, either as a custom script or as a custom Salt state
- Use a preinstalled JDK
To start from your own pre-created source image, modify the relevant section of the builders in the packer.json file. The following table lists the properties to be modified to start from a custom image:
Cloud Provider | Builder Name | Base Source Image Properties |
---|---|---|
AWS | aws-centos7 | source_ami: "ami-061b1560" and "region": "..." |
Azure | arm-centos7 | driven by input parameters: image_publisher, image_offer and image_sku |
Azure | arm-redhat7 | driven by either the input parameter image_url for a VHD image, or image_publisher, image_offer and image_sku for a Marketplace image |
Note: For Azure, you can list popular images as described in the documentation, but please note that only CentOS and RedHat are supported.
- For this example, suppose you have your own CentOS 7 AMI ami-XXXXXXXX in region us-east-1 in your AWS account.
- Open the packer.json file.
- Find the builders section and the entry with "name": "aws-centos7".
- Modify the source_ami and region properties to match the AMI in your AWS account.
- Save the packer.json file.
- Proceed to AWS and run the Build Command for CentOS 7.
Cloudbreak can use custom repositories to install Ambari and the HDP cluster. The easiest way to configure these is to place the necessary repo files (ambari.repo and hdp.repo are required for installing the cluster) on your image and start the custom image creation with that image set as the base image. For more information on how to set up a local repository, please refer to the documentation.
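As a sketch, an ambari.repo placed on the base image might look like the following (the baseurl points at a hypothetical local mirror; substitute your own repository URL, and adjust gpgcheck to match your setup):

```
[ambari]
name=Ambari
baseurl=http://your-local-mirror.example.com/ambari/centos7/
enabled=1
gpgcheck=0
```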
Cloudbreak uses SaltStack for image provisioning. You have an option to extend the factory scripts based on custom requirements.
Warning: This is a very advanced option. Understanding the following content requires a basic understanding of SaltStack concepts. Please read the relevant sections of the documentation.
The provisioning steps are implemented with Salt state files, and there is a placeholder state called custom. The following section describes the steps required to extend this custom state with either your own script or your own Salt state file.
- Check the contents of the saltstack/salt/custom directory; it provides extension points for implementing custom logic. The contents of the directory are the following:

Filename | Description |
---|---|
init.sls | Top-level descriptor for the state; it references other state files |
custom.sls | Example custom state file; by default it contains an example of copying and running custom.sh with some basic logging configured |
/usr/local/bin/custom.sh | Placeholder for custom logic |
- You have the following options to extend this state:
  - You can place your scripts inside custom.sh
  - You can copy and reference your own scripts the same way custom.sh is referenced from custom.sls. For each new file, a file.managed state is needed to copy the script to the VM and a cmd.run state is needed to actually run and log the script.
  - You can create and reference your own state file the same way custom.sls is referenced from init.sls. You can include any custom Salt states; if your new sls files are included in init.sls, they will be applied automatically.
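As an illustration of the second option, a state that copies and runs an additional script could look like this (the script name and log path are hypothetical; the pattern mirrors how custom.sh is handled in custom.sls):

```yaml
# Copy the script from the Salt file tree onto the VM.
/usr/local/bin/my-extra-step.sh:
  file.managed:
    - source: salt://custom/my-extra-step.sh
    - mode: 744

# Run the script and append its output to a log file.
run_my_extra_step:
  cmd.run:
    - name: sh /usr/local/bin/my-extra-step.sh 2>&1 | tee -a /var/log/my-extra-step.log
    - require:
      - file: /usr/local/bin/my-extra-step.sh
```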
Warning: Please ensure that your script runs without any errors or mandatory user inputs.
By default, OpenJDK is installed on the images. Alternatively, if you have an image with a preinstalled JDK, you can pass its JAVA_HOME, which disables the installation of OpenJDK. To set your custom JAVA_HOME, export the PREINSTALLED_JAVA_HOME environment variable:
export PREINSTALLED_JAVA_HOME=/path/to/installed/jdk
Cloudbreak allows you to register an existing database instance to be used as a database for certain supported cluster components. If you are planning to use an external database, specifically MySQL or Oracle, you must download the JDBC connector's JAR file and provide it to Cloudbreak. Typically, this is done when registering the database with Cloudbreak by providing the "Connector's JAR URL".
However, if you are burning your own custom image, you can simply place the JDBC driver in the /opt/jdbc-drivers directory. If you do this, you do not need to provide the "Connector's JAR URL" when registering an external database.
To add an additional level of security, you can enable the noexec option for the /tmp partition, which disallows the execution of any binaries under /tmp.
By default it is turned off; to enable it, set the OPTIONAL_STATES environment variable as follows:
export OPTIONAL_STATES="noexec-tmp"
By default, all Packer postprocessors are removed before the build. This behaviour can be changed by setting:
export ENABLE_POSTPROCESSORS=1
For example, a postprocessor could be used to store image metadata in HashiCorp Atlas for further processing.
If you don't know how postprocessors work, you can safely ignore this section, and please do NOT set ENABLE_POSTPROCESSORS=1 unless you know what you are doing.
Salt is installed into a separate Python environment using virtualenv. You can specify the Salt version with the SALT_VERSION environment variable. The Salt services run with the Python interpreter of the virtual environment, so you cannot execute Salt-related commands by default; you have to activate the environment first:
source /path/to/environment/bin/activate
By default, the path of the virtual environment is /opt/salt_{SALT_VERSION}. Alternatively, you can use the predefined binary to activate the environment:
source activate_salt_env
When you are finished working with Salt, deactivate the environment:
deactivate
If you want to upgrade the Salt installation, activate the environment and then execute the following command:
pip install salt=={DESIRED_SALT_VERSION} --upgrade
Do not forget to deactivate the environment:
deactivate
Be aware that the ZMQ versions should match on every instance within a cluster; if they differ, you have to install ZMQ manually using the package manager. To do so, the package manager must have access to a repository that provides the desired ZMQ package.
After the update, restart the Salt-related services:
Service type | Command |
---|---|
systemd | systemctl restart salt-master |
systemd | systemctl restart salt-api |
systemd | systemctl restart salt-minion |
amazonlinux | service salt-master restart |
amazonlinux | service salt-api restart |
amazonlinux | service salt-minion restart |
upstart | initctl restart salt-master |
upstart | initctl restart salt-api |
upstart | initctl restart salt-minion |
To upgrade salt-bootstrap, download the new release package (with curl or another preferred tool) from the releases of the salt-bootstrap GitHub repository and extract it into the /usr/sbin/salt-bootstrap folder. Then restart the service:
Service type | Command |
---|---|
systemd | systemctl restart salt-bootstrap |
amazonlinux | service salt-bootstrap restart |
upstart | initctl restart salt-bootstrap |