This blog entry has some command line examples in it that you can try yourself. If you go to the OpenShift Origin project (the open source community edition of OpenShift), you will find various ways to run OpenShift directly on your own machine. If you go the oc cluster up route, you can check out my troubleshooting page if you run into problems. You can also run the examples using the OpenShift Interactive Tutorial, powered by Katacoda.
We’re Finally Here!
Last week I talked about Kubernetes, and now I’ll talk about OpenShift. This blog series has been a long journey, starting with the concepts of containers, then to images that are the pieces you use to build containers, on to how you define images, and we just passed orchestration and workload management. That’s a lot of things that go together to build and run container applications. Conceptually it isn’t terribly hard, but when you start doing the process from scratch and you try to migrate an existing application into containers, it can get complicated. You might need some help.
The folks at Red Hat have a solution for you. It’s called OpenShift, and it is what has inspired me to write this whole blog series. It’s open-source and uses open standards like OCI so that you aren’t locked into using any particular container vendor for building your images. I was asked to research it and prepare some demos on how to use it. When I presented my first demo, the audience was stunned. Building and deploying apps is often a complicated process, but I took a simple app that some interns of mine had written over the summer, built an image consisting of a JBoss EAP base and my war, and then deployed it onto the OpenShift platform. The whole process literally only took a few commands on the command line and less than 5 minutes.
Before I dive in and show you OpenShift, it is important that I talk a little bit about the architecture. Like Kubernetes, there is a master that has an API server, an etcd database, a scheduler, and other management services. There are also node servers where the containers run. It kind of looks like this:
Why does OpenShift resemble Kubernetes so much? Because it actually is Kubernetes under the hood. OpenShift builds upon the Kubernetes base, adding developer and operations-centric tools to accelerate development, allow for easier deployment and scaling, and provide increased security. The versions of OpenShift are tied closely to the versions of Kubernetes, so as new versions of Kubernetes are released, updated versions of OpenShift follow close behind.
Note: OpenShift before version 3 did not use Kubernetes, but now that Kubernetes is the de facto standard for container orchestration, I wouldn’t recommend using anything less than version 3, and ideally at least 3.6. I won’t be talking about the previous technology stack used before version 3 (i.e., Gears).
Building an Application With OpenShift on the Command Line
Thankfully, because we have already discussed a lot of the details of Kubernetes, we can focus more on what OpenShift adds to the platform. We’re going to start with the command line because OpenShift provides a really powerful tool called oc. The oc command is similar to the kubectl command, and anyone already using kubectl will find migrating to oc relatively simple. I’m going to use that command to demonstrate building an app in OpenShift, and you will find that the experience, at least on the command line, is very similar to the experience of using kubectl.
Starting an OpenShift Instance
You can easily use oc to start your own OpenShift instance. The great thing is that it uses containers to do this, so once you have something like Docker installed, you are good to go. All you need to do is run this:
$ oc cluster up
Using Docker shared volumes for OpenShift volumes
Using 127.0.0.1 as the server IP
Starting OpenShift using openshift/origin:v3.9.0
OpenShift server started.

The server is accessible via web console at:
    https://127.0.0.1:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin
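Once the cluster is up, a quick sanity check never hurts. These two commands confirm who you are logged in as and what is running in your current project (the exact output will vary with your environment):

```shell
# Confirm which user you are logged in as (developer by default)
oc whoami

# Summarize the current project and any resources deployed in it
oc status
```

On a fresh cluster, oc status will simply report that you have no services, deployment configs, or build configs yet.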
If you see some warnings, I have a troubleshooting page that addresses some of these.
Hello From Nginx in OpenShift
Now I’m going to start the same demo I did in Kubernetes. As you recall, this was to launch a container running nginx.
Before continuing on, I would suggest running this command:
$ oc completion bash > ~/.oc_completion.sh
$ source ~/.oc_completion.sh
This will enable bash completion for oc commands and will save you valuable time. This completion even works for the names of resources like pods, which often have randomly generated names.
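Note that sourcing the file only enables completion for your current shell session. If you want it available in every new shell, a common approach is to append the source line to your ~/.bashrc (this assumes bash is your login shell):

```shell
# Generate the completion script once...
oc completion bash > ~/.oc_completion.sh

# ...and source it automatically in every new shell
echo 'source ~/.oc_completion.sh' >> ~/.bashrc
```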
Also, OpenShift wisely refuses to let containers run normally when they require some kind of root privilege. Our nginx container doesn’t require much, but it does open a privileged port (80) by default. We could extend the default nginx image to use a non-privileged port, but for the purposes of this demo inside our simple cluster, we’ll just tell OpenShift it is ok if any user runs privileged containers. Use this with care, however; normally you want to avoid having containers ever do anything as root. To do this, you can run:
$ oc adm policy add-scc-to-user anyuid -z default
scc "anyuid" added to:
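If you later want to see what that policy change did, or reverse it on a cluster you care about, these commands are useful (the remove command simply undoes the add):

```shell
# Inspect the SCC to see which users/service accounts it now includes
oc get scc anyuid -o yaml

# Reverse the change when you no longer need it
oc adm policy remove-scc-from-user anyuid -z default
```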
We did that with a single command in Kubernetes:
$ kubectl run demo-nginx --image=nginx --port=80
We can do it in one line in OpenShift too:
$ oc new-app docker.io/nginx --name=demo-nginx
--> Found Docker image cd5239a (2 days old) from docker.io for "docker.io/nginx"

    * An image stream will be created as "demo-nginx:latest" that will track this image
    * This image will be deployed in deployment config "demo-nginx"
    * Port 80/tcp will be load balanced by service "demo-nginx"
      * Other containers can access this service through the hostname "demo-nginx"
    * WARNING: Image "docker.io/nginx" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    imagestream "demo-nginx" created
    deploymentconfig "demo-nginx" created
    service "demo-nginx" created
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/demo-nginx'
    Run 'oc status' to view your app.
Just like with Kubernetes, it has created a bunch of resources for us:
$ oc get deploymentconfigs
NAME REVISION DESIRED CURRENT TRIGGERED BY
demo-nginx 1 1 1 config,image(demo-nginx:latest)
$ oc get pods
NAME READY STATUS RESTARTS AGE
demo-nginx-1-gdqmz 1/1 Running 0 10m
$ oc get replicationcontrollers
NAME DESIRED CURRENT READY AGE
demo-nginx-1 1 1 1 11m
$ oc get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-nginx ClusterIP 172.30.145.143 <none> 80/TCP 11m
$ oc get imagestreams
NAME DOCKER REPO TAGS UPDATED
demo-nginx 172.30.1.1:5000/myproject/demo-nginx latest 12 minutes ago
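Rather than querying each resource type one at a time, oc also supports a handy shortcut that lists the common resource types in your current project in one go:

```shell
# Show the common project resources (deployment configs, pods,
# replication controllers, services, image streams, routes) together
oc get all
```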
So now you may be wondering about some of these resources that you didn’t see when you were using kubectl. The first is the deploymentconfig, which is essentially OpenShift’s equivalent of the Deployment resource in Kubernetes. The next is the replicationcontroller; these are the same replication controllers found in Kubernetes, but as I mentioned in the Kubernetes post, replica sets are replacing them (replica sets add a more powerful way of selecting the things that get replicated), so with kubectl you likely saw replica sets instead. The last is the imagestream, which is essentially a resource that maps to an image (it contains info about where the image is pulled from).
Unlike with Kubernetes, creating our resources also created a service for us. Remember that with Kubernetes we had to do this as an additional step when we ran:
$ kubectl expose deployment demo-nginx
You can run that command in OpenShift also, but it actually does something different.
$ oc expose svc/demo-nginx
route "demo-nginx" exposed
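Once the route exists, you don’t even have to copy its hostname by hand; you can pull it out of the route resource with a jsonpath query and curl it directly (your hostname will of course differ from mine):

```shell
# Grab the route's hostname and request the nginx welcome page
ROUTE_HOST=$(oc get route demo-nginx -o jsonpath='{.spec.host}')
curl "http://$ROUTE_HOST"
```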
You could say that OpenShift essentially takes Kubernetes a step further. The service that was created as part of our new-app command exposes our nginx server at a 172.30.x.x IP. If we are on the same network as the cluster, we can curl the service IP, just like we did with Kubernetes:
$ curl 172.30.145.143
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
...
Now, however, we have a resource called a route:
$ oc describe routes
Created: 16 minutes ago
Requested Host: demo-nginx-default.2886795390-80-simba02.environments.katacoda.com
As you can see, we now have not only an IP but also a hostname that is exposed externally (don’t try to go to this one, it won’t be there when I am done here). I did all of this from the command line: built a container, a service, a deployment, and even a URL route to it with just a handful of commands.
I’m not going to dive too far into depth here, because OpenShift resources are just Kubernetes resources. You can get the YAML for them just like you can in Kubernetes:
$ oc get svc/demo-nginx -o yaml
apiVersion: v1
...
You can delete all the resources and recreate them, just like you can with Kubernetes. Many of the resources have shortcut names, just like in Kubernetes. And you can explain all the resources, just like you can with Kubernetes. Why? Because OpenShift is Kubernetes.
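For example, since new-app labels everything it creates with app=demo-nginx, you can tear the whole application down with one command and recreate it just as quickly (as always, double-check the label selector before deleting anything on a shared cluster):

```shell
# Delete every resource carrying the app=demo-nginx label
oc delete all -l app=demo-nginx

# Recreate the whole application from scratch
oc new-app docker.io/nginx --name=demo-nginx
```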
But Wait, There’s More!
There’s a whole lot more in OpenShift to talk about. I have kept you long enough for this week, so I’m going to continue the series on to next week where I will talk about the OpenShift UI which allows you to do a lot of what I have described but in a more visual way. I’ll see you next week after I’m done with DockerCon in San Francisco.
Originally published on June 11, 2018.