Using OpenShift Routes with Microshift
Microshift is a research project that aims to pare OpenShift down so that it can run on IoT devices and other resource-constrained machines.
I’m working on a GitHub action that makes Microshift available as an OpenShift cluster in a GitHub workflow.
Making Routes available with Microshift
Since Microshift is a new project, it still has rough edges and does not yet provide the same seamless experience as Minikube or CRC.
You can create an OpenShift Route, but it won’t be reachable at its assigned hostname right away.
(This article assumes that you have installed Microshift on Vagrant.)
To test this, let’s deploy an app to Microshift and publish a route for it.
$ oc create deploy hello --image=quay.io/tasato/hello-js
$ oc expose deploy hello --port 8080
$ oc expose svc hello
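Optionally, you can wait until the deployment is ready before testing it (oc rollout status is a standard command, not part of the original steps):
$ oc rollout status deploy/hello
deployment "hello" successfully rolled out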
A service and a route will be created.
$ oc get svc,route
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/hello   ClusterIP   10.43.122.106   <none>        8080/TCP   6s

NAME                             HOST/PORT                     PATH   SERVICES   PORT   TERMINATION   WILDCARD
route.route.openshift.io/hello   hello-default.cluster.local          hello      8080                 None
However, we still can’t access hello-default.cluster.local as it is.
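For example, a plain curl from the client machine fails because the hostname is not resolvable yet (the exact error message may vary):
$ curl hello-default.cluster.local
curl: (6) Could not resolve host: hello-default.cluster.local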
Exposing the Router
The first thing we need to do is expose the OpenShift Router, which is already running in the openshift-ingress namespace, as a NodePort service.
$ oc get pods -n openshift-ingress
NAME                              READY   STATUS    RESTARTS   AGE
router-default-6d8c9d8f57-wjfk7   1/1     Running   1          4d21h
Publish this as a NodePort service with the mappings 30080->80 (HTTP) and 30443->443 (HTTPS). This will allow access to the Router from outside the cluster on ports 30080 and 30443.
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: router
  namespace: openshift-ingress
spec:
  type: NodePort
  selector:
    ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
EOF
(Thanks to Miguel Angel Ajo Pelayo for the original solution.)
By the way, in addition to the router service we just created, there is also a ClusterIP service called router-internal-default.
$ oc get svc -n openshift-ingress
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
router                    NodePort    10.43.205.107   <none>        80:30080/TCP,443:30443/TCP   4d2h
router-internal-default   ClusterIP   10.43.95.166    <none>        80/TCP,443/TCP,1936/TCP      36d
If you just want to access Routes from within a Microshift cluster node (e.g. for use with GitHub Actions), you can use router-internal-default without creating a router service.
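For example, the following sketch looks up the ClusterIP of router-internal-default with oc and passes the route name in the Host header, just like the NodePort example below:
$ curl -H 'Host: hello-default.cluster.local' \
    $(oc get svc router-internal-default -n openshift-ingress -o jsonpath='{.spec.clusterIP}')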
Configuring the client-side DNS
At this point, you should be able to access hello-default.cluster.local via port 30080 on the Vagrant VM. Given that the IP address of the VM is 192.168.99.11, you can access it by specifying the route name in the HTTP Host header as the dispatch destination, as follows:
$ curl -H 'Host: hello-default.cluster.local' 192.168.99.11:30080
Hello ::ffff:10.42.0.51 from hello-64c47c44c8-5bbjg
Now, all you need is to make hello-default.cluster.local resolve to the Vagrant VM's IP address 192.168.99.11 on the client side.
dnsmasq
The most straightforward way to assign a specific IP address to dynamically created hostnames like *.cluster.local is to use dnsmasq. If dnsmasq is not installed yet, install it first.
Fedora/RHEL/CentOS:
$ sudo dnf install dnsmasq
Ubuntu:
$ sudo apt install dnsmasq
Then edit /etc/dnsmasq.conf and assign the Vagrant VM's IP address 192.168.99.11 to the cluster.local domain by adding the line address=/cluster.local/192.168.99.11. Also, uncomment #bind-interfaces to enable interface binding.
/etc/dnsmasq.conf
address=/cluster.local/192.168.99.11
...
bind-interfaces
Interface binding is needed because otherwise dnsmasq won’t start: it would conflict with 127.0.0.53:53, which is occupied by the default DNS service systemd-resolved.
Restart dnsmasq.
$ sudo systemctl restart dnsmasq
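You can verify the wildcard resolution by querying dnsmasq directly (dig is provided by bind-utils on Fedora and dnsutils on Ubuntu; assumed to be installed):
$ dig @127.0.0.1 hello-default.cluster.local +short
192.168.99.11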
systemd-resolved
On recent Linux distributions, systemd-resolved is the default DNS service. Modify /etc/systemd/resolved.conf so that it queries dnsmasq (127.0.0.1).
/etc/systemd/resolved.conf
DNS=127.0.0.1 8.8.8.8
Then switch the symlink /etc/resolv.conf from /run/systemd/resolve/stub-resolv.conf to /run/systemd/resolve/resolv.conf.
$ sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
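After the switch, /etc/resolv.conf should contain the servers from resolved.conf (possibly along with per-link DNS servers):
$ grep nameserver /etc/resolv.conf
nameserver 127.0.0.1
nameserver 8.8.8.8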
/etc/nsswitch.conf
If you are using Fedora, you will also need to tweak the settings in /etc/nsswitch.conf.
Since *.local resolution is primarily handled by mDNS, if your nsswitch.conf is configured to return immediately after mDNS resolution, you will need to change that as well.
Specifically, if the hosts: line contains [...=return] actions as follows, remove all of them so that host resolution always reaches dns.
/etc/nsswitch.conf (before change)
hosts: files myhostname mdns4_minimal [NOTFOUND=return] resolve [!UNAVAIL=return] dns
/etc/nsswitch.conf (after change)
hosts: files myhostname mdns4_minimal resolve dns
Restart systemd-resolved.
$ sudo systemctl restart systemd-resolved
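You can check that the whole resolution chain now works; unlike dig, getent follows nsswitch.conf:
$ getent hosts hello-default.cluster.local
192.168.99.11   hello-default.cluster.local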
Configuring port-forwarding on the VM side
By now, you can access the application as hello-default.cluster.local:30080.
$ curl hello-default.cluster.local:30080
Hello ::ffff:10.42.0.51 from hello-64c47c44c8-5bbjg
To make it accessible with a plain curl hello-default.cluster.local, you need to forward port 80 to 30080 on the VM side.
redir
The easiest way to do this is with redir. Let's install it.
Fedora/RHEL/CentOS:
$ sudo dnf install redir
Ubuntu:
$ sudo apt install redir
Start redir in the background on the VM.
$ sudo redir -l debug :80 localhost:30080
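To confirm that redir is listening on port 80, you can check with ss from iproute2 (available by default on most distributions); you should see a LISTEN entry owned by redir:
$ sudo ss -tlnp | grep ':80 '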
You can check the log of redir in the syslog on the VM.
$ sudo tail -f /var/log/syslog | grep redir
Oct 18 06:29:31 ubuntu-focal redir[142824]: peer IP is 192.168.99.1
Oct 18 06:29:31 ubuntu-focal redir[142824]: peer socket is 33402
Oct 18 06:29:31 ubuntu-focal redir[142824]: target IP address is 127.0.0.1
Oct 18 06:29:31 ubuntu-focal redir[142824]: target port is 30080
Oct 18 06:29:31 ubuntu-focal redir[50856]: target is localhost:30080
Oct 18 06:29:31 ubuntu-focal redir[50856]: Waiting for client to connect on server socket ...
Oct 18 06:29:31 ubuntu-focal redir[142824]: Connecting 192.168.99.1:33402 to 192.168.99.1:30080
Oct 18 06:29:31 ubuntu-focal redir[142824]: Entering copyloop() - timeout is 0
Oct 18 06:29:31 ubuntu-focal redir[142824]: Disconnect after 0 sec, 332 bytes in, 91 bytes out
Conclusion
Now you will be able to access the app simply as hello-default.cluster.local.
$ curl hello-default.cluster.local
Hello ::ffff:10.42.0.51 from hello-64c47c44c8-5bbjg