Getting CKA Certified

This post is several months overdue; I actually got my CKA back in September 2019. As you may have heard, Kubernetes is everywhere and will sooner or later take over the world.

I’ve been working with Kubernetes and containerized workloads since July 2017, and it has been a blast. Back in 2017, it was fun to talk to customers who had been told to containerize their workloads but didn’t really know how to fully operationalize containers, let alone monitor or secure such workloads. Initially, I got my hands dirty with Docker Swarm and eventually learned to use Kubernetes and some of its derivatives, such as Red Hat OpenShift.

When I joined Sysdig in October of 2018, I started out as Principal Technical Account Manager before I took over Professional Services. Our Professional Services team is made up of experienced and highly technical Cloud Engineers/Architects who know their way around Kubernetes pretty damn well. As I started working with them, I realized I knew a lot about Kubernetes, but not nearly as much as they did. I do believe that to truly bring change to an organization, you first have to fully understand the ins and outs of the people you’ll lead. Being fairly technical, I thought: “What better way to understand what these folks do on a daily basis than shadowing them and eventually trying to replicate their work in a lab environment?”

Sure enough, I spent several months deploying Kubernetes, breaking it, and re-deploying it. Eventually, I learned the patterns in which Kubernetes breaks and how to troubleshoot it rather quickly, without lengthy Google searches.

I purchased the exam on September 1st, 2019 and took it on September 3rd, 2019. I was confident that it couldn’t be as hard as people described on the many blogs out there. A couple of hours after taking the exam, I received an email stating that I had failed, scoring 72% against the 74% required to pass. Seriously… 2%…

The next day, I rescheduled the exam for September 9th and doubled down on the questions I previously could not answer. Obviously, I did pass on the second try.

Lastly, here are some common questions I’ve received from friends and coworkers.

Why should I get CKA certified?

As I mentioned at the beginning, Kubernetes is taking over the world and has become the orchestrator of choice for many. The number of people who are CKA certified is still limited, but it’s increasing steadily. I believe achieving the CKA certification still helps you stand out from the crowd. However, this will surely change over the next 9-18 months.

What was your exam experience?

This exam was a bit weird because someone is actually watching your screen and webcam while you take the exam from your own computer at home. Overall, it worked well: I didn’t have any technical difficulties, and I liked the fact that you can take the exam from home. Another interesting aspect of the exam was that it’s very hands-on.

How did you prepare for the exam?

I highly recommend the Cloud Native Certified Kubernetes Administrator (CKA) course from Linux Academy. It was well done and covered pretty much everything that was part of the test.

Additionally, my #1 tip would be to familiarize yourself with the Kubernetes docs and how to find things in them quickly. Why? Because you can use the docs during the exam, and if you know how to search for things, it will greatly speed up the time it takes to find answers. I used the docs for probably 2-3 exercises and was able to quickly find the solutions.

Any tips for passing the exam?

Be creative! Remember your kubectl commands. Don’t try to write all the YAML from scratch; instead, remember the -o yaml option in kubectl. It will save you a lot of time and avoid syntax errors.
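For example, you can have kubectl generate a manifest skeleton for you instead of typing it out by hand (a quick sketch; on the kubectl versions current at the time, --dry-run was a plain boolean flag, while newer releases expect --dry-run=client):

# generate a pod manifest without creating anything on the cluster
kubectl run nginx --image=nginx --restart=Never --dry-run -o yaml > pod.yaml

# same trick for a deployment
kubectl create deployment nginx --image=nginx --dry-run -o yaml > deployment.yaml

You can then tweak the generated YAML and apply it with kubectl apply -f, which is much faster and less error-prone than writing the files from scratch.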

Running Usenet Stack as Docker Containers

In Running Usenet Stack on Kubernetes, I covered how to deploy a Usenet stack onto Kubernetes.


As it turned out, Kubernetes has won the race as the orchestrator of choice, but not everyone is running it in their home lab just yet. I received multiple requests on how to translate my YAML files into docker run commands.


sudo docker run -d --name=radarr -e PUID=1000 -e PGID=1000 -e TZ=America/New_York -p 7878:7878 -v change_me:/config -v change_me:/movies -v change_me:/downloads --restart unless-stopped linuxserver/radarr

The above command launches a container from the linuxserver/radarr image and publishes the application on port 7878.
Before you run it, make sure to change the following paths (a filled-in example follows the list):

/config
stores the configuration files

/movies
location of the downloaded movies after it has been moved from the /downloads folder

/downloads
download folder where your NZBGet or SABnzbd app will store the downloads
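For example, assuming a hypothetical /opt/usenet folder structure on the Docker host (pick whatever layout fits your environment):

# /opt/usenet/... are hypothetical host paths; adjust them to your environment
sudo docker run -d --name=radarr -e PUID=1000 -e PGID=1000 -e TZ=America/New_York -p 7878:7878 -v /opt/usenet/radarr/config:/config -v /opt/usenet/movies:/movies -v /opt/usenet/downloads:/downloads --restart unless-stopped linuxserver/radarr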

sudo docker run -d --name=sonarr -e PUID=1000 -e PGID=1000 -e TZ=America/New_York -p 8989:8989 -v change_me:/config -v change_me:/tv -v change_me:/downloads --restart unless-stopped linuxserver/sonarr

This does basically the same as the Radarr container, except the application is published on port 8989. As with Radarr, Sonarr also needs a few paths updated before you launch the above command:

/config
stores the configuration files

/tv
location of the downloaded tv shows after they have been moved from the /downloads folder

/downloads
download folder where your NZBGet or SABnzbd app will store the downloads


sudo docker run -d --name=nzbget -e PUID=1000 -e PGID=1000 -e TZ=America/New_York -p 6789:6789 -v change_me:/config -v change_me:/downloads --restart unless-stopped linuxserver/nzbget

In my other post, I covered SABnzbd, but I’ve recently switched to NZBGet on Docker as it has been more reliable in my lab. NZBGet’s default port is 6789, and unless you have a good reason to change it, I would just keep the default. NZBGet has one path fewer to configure than Sonarr and Radarr:

/config
stores the configuration files

/downloads
download folder where your NZBGet is going to store all downloads. This folder needs to be accessible by Sonarr and Radarr.
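Since the /downloads folder is shared, make sure the -v mapping points at the same host folder you used for Radarr and Sonarr. Sticking with the hypothetical /opt/usenet layout from the earlier example:

# /opt/usenet/downloads matches the Radarr/Sonarr mappings, so all three apps see the same files
sudo docker run -d --name=nzbget -e PUID=1000 -e PGID=1000 -e TZ=America/New_York -p 6789:6789 -v /opt/usenet/nzbget/config:/config -v /opt/usenet/downloads:/downloads --restart unless-stopped linuxserver/nzbget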

Create Rubrik SLAs from a CSV File

If you own a Brik, you are familiar with creating new SLAs, as it is one of the first things you do after getting a Brik deployed.

In the world of Rubrik, everything is built around Service Level Agreements. With Rubrik’s unique approach of using SLAs for backup and recovery tasks, we have dramatically simplified the backup admin’s daily work. Traditionally, a backup admin would create multiple backup jobs for full, incremental, hourly, daily, weekly, monthly, and yearly backups, plus archival and replication jobs on top. With Rubrik’s SLA approach, all of this can be done within a single SLA, which dramatically reduces the operational overhead associated with backup jobs.

Sometimes, a single SLA or even a handful of SLAs might not be enough. In that case, using the Rubrik interface becomes time-consuming. Luckily, Rubrik has an API-first architecture, which means everything you can do in the GUI can also be done via the APIs.

To make Rubrik’s API even easier to digest, we have a built-in API Explorer. You can access it via https://<cluster-address>/docs/v1/playground, where <cluster-address> is the IP or hostname of your Rubrik cluster.
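To give you an idea of what that looks like, here is a rough sketch of creating an SLA with curl. Treat the endpoint and payload below as assumptions on my part and confirm the exact schema in the API Explorer for your CDM version:

# hypothetical example; verify the endpoint and fields in the API Explorer first
curl -k -u admin:change_me -X POST "https://<cluster-address>/api/v1/sla_domain" \
  -H "Content-Type: application/json" \
  -d '{"name": "Bronze", "frequencies": [{"timeUnit": "Daily", "frequency": 1, "retention": 30}]}'

Loop over the rows of a CSV file with a command like this, and you can create dozens of SLAs in seconds.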

Continue reading “Create Rubrik SLAs from a CSV File”

Running Usenet Stack on Kubernetes

Some of the most common applications in a Usenet stack are SABnzbd, Sonarr, and Radarr. SABnzbd is a binary newsreader that handles downloads from Usenet. Sonarr acts as a PVR for TV shows, and Radarr, a fork of Sonarr, is a PVR for movies. This post will show how to deploy a Usenet stack on Kubernetes.

I recently helped out a friend to get these apps deployed on Kubernetes, and I published the YAML files. You can find them on my GitHub repo, but I will share them further down as well.
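If you want a quick preview before grabbing the full manifests, kubectl can scaffold similar workloads for you (a rough sketch, not the exact YAML from the repo, and without the persistent volumes you will want for real use):

# scaffold a deployment and service for Radarr; repeat for sonarr (port 8989) and sabnzbd (port 8080)
kubectl create deployment radarr --image=linuxserver/radarr
kubectl expose deployment radarr --port=7878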


Continue reading “Running Usenet Stack on Kubernetes”

Kubernetes with Mixed CPU Architecture

Creating a Raspberry Pi Kubernetes cluster is straightforward; however, it becomes more complicated when you want to mix and match Kubernetes nodes of different CPU architectures.

I recently deployed a new Kubernetes cluster in my home lab, which initially consisted of two Ubuntu nodes deployed with kubeadm and running as VMs on VMware ESXi. Last night, I tried to add a Raspberry Pi and ran into a couple of issues, which I resolved and describe further down to save you some time.

kubectl get nodes -o wide
NAME               STATUS    ROLES     AGE       VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
k8s-raspberry-01   Ready     <none>    4h        v1.11.2   192.168.1.216   <none>        Raspbian GNU/Linux 9 (stretch)   4.14.62-v7+         docker://18.6.1
k8s-ubuntu-01      Ready     master    5d        v1.11.2   192.168.1.127   <none>        Ubuntu 18.04.1 LTS               4.15.0-33-generic   docker://18.6.1
k8s-ubuntu-02      Ready     <none>    5d        v1.11.2   192.168.1.156   <none>        Ubuntu 18.04.1 LTS               4.15.0-33-generic   docker://18.6.1
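The root of the problem is the CPU architecture: container images built for amd64 will not run on the ARM-based Raspberry Pi. A quick way to see which node runs which architecture is to print the corresponding node label (on v1.11 it is still beta.kubernetes.io/arch; newer releases use kubernetes.io/arch):

kubectl get nodes -L beta.kubernetes.io/arch

From there, you either need multi-arch images or a nodeSelector that pins each workload to a compatible architecture.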

Continue reading “Kubernetes with Mixed CPU Architecture”