Deploy Single-Node RedHat OpenShift 4.9 Cluster on AWS

Poojan Mehta
7 min read · Jul 12, 2022

In this article, I’ll guide you through the installation steps in order to deploy OpenShift 4.9 on a single node EC2 server.

Before jumping into the details, let’s understand what OpenShift actually is. OpenShift is an enterprise-grade container orchestration platform by RedHat, built on Kubernetes. It provides much more than core Kubernetes capabilities and covers almost every phase of the container management lifecycle. In this article, we will install the open-source flavour of the OpenShift Container Platform 4.9, also known as the OKD project. A basic understanding of containers and Kubernetes is recommended before learning and working with OpenShift. We will use the Full-Stack Automation approach to install a single-node cluster on the AWS EC2 service.

Basically, the Full-Stack Automation approach lets us leverage the power and availability of existing cloud services, club multiple things together in a Terraform-driven installer, and deploy a cluster-based solution with just a few clicks. The user can also customize the cluster configuration by setting a few parameters in the install config. Since OpenShift is an enterprise version of Kubernetes, it comes with more features and prerequisites. One of them is a public domain, which you will need if you plan to deploy the solution on a public cloud provider and run publicly accessible workloads.

I’ve already purchased a domain and maintain a DNS Hosted Zone in the Route53 service, so this prerequisite is fulfilled. If you’ve purchased a domain from a provider other than Route53, you should create a Hosted Zone and update the nameservers on the original domain provider’s site.

We will provision the single-node OpenShift cluster via an EC2 instance acting as a client node. The client node simply works as an intermediary between the cluster and the admin and enables communication between them.

Step:1) Create an SSH key in the client node

→ The first step is to create an SSH key on the client node so we can later bind it into the Full-Stack Automation config. The SSH key lets us log in to the master node in the future and perform cluster operations if needed. Follow the steps below to create one.

ssh-keygen
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

Give an appropriate name to the SSH key (optional) and press Enter. You can verify that the key has been created by looking in the ~/.ssh folder. With the ssh-add command, we register the created key file’s identity with the SSH agent.

Generate and add SSH key
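If you prefer a non-interactive run, the key generation and verification above can be scripted as follows; the key path, size, and comment here are my own examples, not values from this walkthrough:

```shell
# Non-interactive sketch of the key creation above
# (the key path and comment are example values)
ssh-keygen -t rsa -b 4096 -N "" -f ./okd_id_rsa -C "okd-sno-demo"

# Verify that both the private and public key files were created
ls -l ./okd_id_rsa ./okd_id_rsa.pub
```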

Step:2) Configure an AWS profile for programmatic access

→ In this step, we will configure an AWS profile with the Access and Secret keys so that the Full-Stack Automation script can make API calls to AWS on our behalf and perform infrastructure provisioning tasks.

aws configure --profile dev
export AWS_PROFILE=dev

Configure AWS Profile

After running the first command, enter the AWS access and secret keys 🔒, then run the export command to set the profile for the current shell.
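Under the hood, `aws configure --profile dev` writes these values to files under ~/.aws. A sketch of what they end up containing (the key values are obviously placeholders):

```
# ~/.aws/credentials
[dev]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[profile dev]
region = ap-south-1
output = json
```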

Step:3) Install OpenShift binaries from GitHub

→ Since we are using the open-source version of OpenShift, we will get the installation binaries from the official GitHub repo of the OKD project. In this demo we are downloading OpenShift v4.9 as tar archives; verify the same with the screenshot below. Download the binaries and extract the archives using the tar command.

tar -xvf openshift-client-linux-4.9.0-0.okd-2022-02-12-140851.tar.gz
tar -xvf openshift-install-linux-4.9.0-0.okd-2022-02-12-140851.tar.gz

OpenShift installer zip file
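For reference, the tarballs can be fetched from the OKD releases page on GitHub before extracting. The release tag below is taken from the filenames above, while the URL pattern is my assumption of the standard OKD release layout:

```shell
# Download the OKD 4.9 client and installer tarballs from GitHub releases
# (URL pattern assumed from the standard OKD release layout)
OKD_VER="4.9.0-0.okd-2022-02-12-140851"
BASE="https://github.com/openshift/okd/releases/download/${OKD_VER}"
wget -q "${BASE}/openshift-client-linux-${OKD_VER}.tar.gz"
wget -q "${BASE}/openshift-install-linux-${OKD_VER}.tar.gz"
```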

Step:4) Create an installation config YAML file using the script

→ Using the openshift-install executable with the required arguments, we will now create a configuration file in YAML format. Run the command below to initiate the config file creation and select the respective options from the interactive prompts.

./openshift-install create install-config --dir=.

→ Don’t forget to select the SSH key we created in Step 1. In the next option, we have to select the resource provider. The Full-Stack Automation approach supports almost all major cloud providers, and we will select AWS in this case.
→ Afterwards, the script expects the AWS region and the base domain for the cluster; I’ve given ap-south-1 and my own domain, which has a hosted zone in Route53. In addition, we have to provide a unique cluster name, which will be bound to the base domain and eventually exposed as a publicly routed URL for the console and the workloads deployed on the cluster.

→ The last remaining field is the Pull Secret. Earlier OpenShift 4.x versions were not open source, so back then, in order to pull the OpenShift images from the registry, the user had to authenticate with a unique pull secret obtained from their RedHat developer account. The requirement became obsolete once the OKD project moved under open-source guidelines, but the script still asks for a pull secret. A workaround to bypass this authentication is mentioned in this GitHub issue. I’ve followed the same and passed a null authentication secret: {"auths":{"fake":{"auth":"aWQ6cGFzcwo="}}}.

→ After feeding in all the inputs, a config file named install-config.yaml will be created in the target directory. Here the current directory is the target dir.

OpenShift Install script

Step:5) Customize the existing file and edit the number of nodes

→ Open the file in an editor and change the replica count for the worker nodes from 3 to 0. Also, change the replica count for the master nodes from 3 to 1. This means the provisioned cluster will operate on a single node, i.e. the same node will act as both master and worker.

→ We are keeping all other fields at their defaults, though there is much more to customize based on requirements. You can refer to the article for a detailed explanation of customized installation.

Edited YAML file
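For reference, the relevant portion of the edited install-config.yaml would look roughly like this; the base domain and cluster name are example values, not the ones used in this demo:

```yaml
# Sketch of the edited fields in install-config.yaml
# (baseDomain and metadata.name are example values)
apiVersion: v1
baseDomain: example.com        # your Route53 hosted-zone domain
compute:
- name: worker
  replicas: 0                  # no dedicated worker nodes
controlPlane:
  name: master
  replicas: 1                  # single control-plane node
metadata:
  name: okd-sno                # unique cluster name
platform:
  aws:
    region: ap-south-1
```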

Step:6) Run the create cluster command and initiate the installation

→ We’ve fulfilled the prerequisites to launch our single-node OpenShift v4.9 cluster on AWS, and the config file has been edited to match our requirement. Now let’s run the cluster creation command.

→ Here we give the directory containing the created config file as the --dir argument. The create cluster command will take the inputs from the config file and provision the single-node cluster using the access and secret keys provided by the admin.

./openshift-install create cluster --dir=.

2 nodes provisioned. One master and one bootstrap.

→ The installation will take somewhere around 30–40 minutes. Hold tight and grab a coffee until then.
→ Also, the installation script will launch an intermediate node for bootstrapping. This node is responsible for configuring the cluster and bringing all the cluster components up and running. The bootstrap node is destroyed once the master node is up and in a healthy state.

→ During the installation process, the cluster performs internal health checks to make sure each component is up and running in an operable state.

→ The installation also creates a default cluster user named kubeadmin and a corresponding password (stored in the auth/kubeadmin-password file under the install directory). We can use this username and password to log in to the cluster and perform operations. A publicly routable URL for the OpenShift Web UI will also be printed in the shell output.

post Installation output

→ Now, export the kubeconfig file so that the oc client can fetch the cluster information and context from it and perform cluster operations against the right endpoints.

export KUBECONFIG=/root/okd/auth/kubeconfig

Step:7) Copy the oc binaries to the executable directory

→ In order to run the oc client from the command line, we first have to copy the client binaries to an executable location on the Linux system (a directory on the PATH). Use the commands below.

cp oc /usr/bin/
cp kubectl /usr/bin/

oc client

Finally, all the prerequisites and the cluster installation process are complete, and we now have a running single-node OpenShift v4.9 cluster working as expected. We can verify this by running the status command.

oc status

→ Also, with the printed console URL, we can log in to the Web UI using the temporary kubeadmin user and explore the console.
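The same kubeadmin credentials work from the command line too. A sketch, assuming the install directory is the current directory; the API endpoint is a placeholder for whatever URL the installer printed:

```
# Log in as kubeadmin; the API endpoint below is a placeholder
oc login -u kubeadmin -p "$(cat auth/kubeadmin-password)" \
  https://api.<cluster-name>.<base-domain>:6443
oc whoami
```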

Done and Dusted. The OpenShift installation is completed. That’s it from my side for today. There is a lot more to learn in the ecosystem.

THANKS A LOT FOR READING THIS SO ATTENTIVELY

I’ll be grateful to have connections like you on LinkedIn 🧑‍💼
