Accessing your application using a browser


Accessing your application

This page gives a simple explanation of how your application can be accessed with a web browser. For the technical explanation, see How internet connects to pods in OpenShift.

TL;DR

OpenShift has a concept of Routes, which allows a normal user to create a URL that can be accessed with a web browser. Each Route connects the browser to a Pod through a Service. For Tike Container Platform clusters, the possible Routes are of the form wildcard.< apps | ext >.cluster-name.k8s.it.helsinki.fi. As long as the wildcard portion is not already in use, the Route will be accepted and a URL will be generated. Adding the snippet below gives the Route a wildcard certificate, which is managed by cluster administration.

spec:
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect

By default, the Route uses the Ingress apps, which is accessible only from University of Helsinki networks or over a VPN. The Ingress ext is open to the public internet, and a Route can be directed to it by labeling the Route with type: external.
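
Putting the pieces together, a complete Route might look roughly like the sketch below. The name my-app is a placeholder, and the hostname follows the pattern used on the test cluster:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app                  # hypothetical name
  labels:
    type: external              # expose via the ext Ingress; omit for apps
spec:
  host: my-app.ext.ocp-test-0.k8s.it.helsinki.fi
  to:
    kind: Service
    name: my-app                # the Service to route traffic to
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect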

If you want a custom URL, you will need to provide your own certificate and also a DNS record that points to host.< apps | ext >.cluster.k8s.it.helsinki.fi. For example:
$ nslookup luupeli-test.it.helsinki.fi
...
Non-authoritative answer:
luupeli-test.it.helsinki.fi canonical name = host.ext.ocp-test-0.k8s.it.helsinki.fi.
Name: host.ext.ocp-test-0.k8s.it.helsinki.fi
Address: 128.214.137.153

Longer explanation

Since Pods are usually managed by an owner resource and are meant to be temporary, their names and IP addresses keep changing. To combat this, Kubernetes uses objects called Services, which handle the distribution of traffic to the actual backend. Services have a static IP address and name, so they can be referenced without knowing the names or IP addresses of the underlying Pods.
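
As an illustration, a minimal Service could look like the sketch below; the selector label and the ports are assumptions and need to match your own Deployment:

apiVersion: v1
kind: Service
metadata:
  name: dummyroutes
  namespace: test-dummies
spec:
  selector:
    app: dummyroutes        # must match the labels on your Pods (example label)
  ports:
    - port: 8080            # port the Service listens on
      targetPort: 8080      # port the Pod is serving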

Inside the same cluster, the name of the Service can be used to access the application using the naming scheme <service>.<namespace>.svc.cluster.local. For example, connecting to the Service "dummyroutes" in the namespace "test-dummies" would require us to connect to dummyroutes.test-dummies.svc.cluster.local.
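
For example, from another Pod in the same namespace (or one allowed in by a NetworkPolicy, see below), the Service could be reached like this, assuming curl is available in the image and the Service listens on port 8080:

# open a shell in a running pod
oc rsh <pod-name>

# inside the container
curl http://dummyroutes.test-dummies.svc.cluster.local:8080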

Information

In Tike OpenShift clusters, the admins have created NetworkPolicy objects that deny all incoming traffic from other namespaces. At the bottom of the page are examples of creating a NetworkPolicy that allows another namespace to connect to your Services.

Traffic from outside the cluster

Route

The easiest way to expose your Service to the outside world is by creating a Route. This can be done manually, or with one of the following commands:

# a default Route
oc expose service <servicename>

# Route that is publicly available
oc expose service <servicename> --labels type=external

# Route with a lot of customizations
oc expose svc <servicename> -n <projectname> --name <routename> --labels type=external --hostname <servicename>.ext.<clustername>.k8s.it.helsinki.fi

Adding the label type: external makes the Route accessible from the public internet; without it, the Route is reachable only from University of Helsinki networks and VPN.

Information

The mechanism that manages Routes uses a Route object's creation date when deciding whether a certain hostname is available. As a side effect, spec.host cannot be edited on any existing Route.

Routes can also use the default certificates provided by the platform administration, as long as the hostname is of the form my-app.apps.cluster.k8s.it.helsinki.fi.

Ingress

While Route is an OpenShift-specific object, Ingress is the vanilla Kubernetes way of bringing traffic into a cluster from outside.
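
An Ingress pointing at the same kind of Service might look roughly like the sketch below; the name, host, and port are placeholders. On OpenShift, the platform translates an Ingress into a Route behind the scenes:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                  # hypothetical name
spec:
  rules:
    - host: my-app.apps.ocp-test-0.k8s.it.helsinki.fi
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app    # the Service to route traffic to
                port:
                  number: 8080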

Port forwarding

When diagnosing issues in Routes or Services, oc provides the port-forward command.

oc port-forward <pod-name> <local-port>:<pod-port>
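
For example, with hypothetical names, the following would make a Pod serving on port 3000 reachable at localhost:8080 on your own machine, bypassing Routes and Services entirely:

oc port-forward my-app-6d5f7c9b8-abcde 8080:3000

# in another terminal
curl localhost:8080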

Custom host names

In order to use a custom hostname, there needs to be a DNS record that targets the correct Ingress and the correct OpenShift cluster. DNS records ending in .helsinki.fi can be requested using the Onify form. More info here.

For other custom names, the user needs to create the DNS record somewhere else. The table below lists the currently available Ingresses.

Cluster + access                  DNS-record CNAME
test-cluster, HY-network + VPN    host.apps.ocp-test-0.k8s.it.helsinki.fi
test-cluster, public internet     host.ext.ocp-test-0.k8s.it.helsinki.fi
prod-cluster, HY-network + VPN    host.apps.ocp-prod-0.k8s.it.helsinki.fi
prod-cluster, public internet     host.ext.ocp-prod-0.k8s.it.helsinki.fi

Since custom names are not covered by the wildcard certificates, the user also needs to provide certificates. Let's Encrypt and the ACME protocol are recommended over manually created certificates. More info here.
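
Once you have a certificate, it goes into the Route itself. A rough sketch, reusing the luupeli-test.it.helsinki.fi name from the earlier example; the certificate contents are of course placeholders:

spec:
  host: luupeli-test.it.helsinki.fi
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
    certificate: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
    key: |
      -----BEGIN PRIVATE KEY-----
      ...
      -----END PRIVATE KEY-----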

NetworkPolicies

Since traffic between Namespaces (Projects) is limited by the container administration, users need to create a NetworkPolicy that allows traffic into a Namespace. The easiest and least secure method is to allow connections from all Pods in a namespace; in the example below, all Pods from namespace-b can connect to all Services in namespace-a.

Information

The blocking only applies to traffic within the cluster from other namespaces. Traffic arriving through Routes or Ingresses is not limited by the administration at all, since creating these is an intentional choice.

The NetworkPolicy only needs to be created on the receiving end, as outgoing requests to other namespaces are not limited.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-namespace-b
  namespace: namespace-a
spec:
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: namespace-b
          podSelector: {}
  podSelector: {}
  policyTypes:
    - Ingress
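
A tighter variant only lets specific Pods talk to specific Pods. The app: backend and app: frontend labels below are examples, not something the platform requires:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-from-namespace-b
  namespace: namespace-a
spec:
  podSelector:
    matchLabels:
      app: backend            # only these Pods in namespace-a accept the traffic
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: namespace-b
          podSelector:
            matchLabels:
              app: frontend   # only these Pods in namespace-b may connect
  policyTypes:
    - Ingress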

Common issues

If everything looks fine but traffic is not going through, check the following things:

  • The pod is actually working. Port forward directly to the port the pod is serving:
    • oc port-forward <pod-name> <local-port>:<pod-port>
    • and in another terminal: curl localhost:<local-port>
  • Your Service has correct endpoints. It is possible that the selector labels in your Service do not match the correct pods. The command "oc get endpoints" should list all Services and their endpoints. At least one endpoint resembling 10.12.*:<pod-port> should exist.
  • The software in your container is bound to the correct network interface (as seen from inside the container). localhost may not work; try 0.0.0.0 instead. See the sketch after this list for checking this.
  • "Port not available" and similar errors inside OpenShift.
    • Only ports above 1024 are allowed for customer applications.
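
To check which interface the software is actually bound to, you can inspect the listening sockets from inside the container, assuming the image ships a shell and the ss tool:

oc rsh <pod-name>

# inside the container
ss -tln
# 0.0.0.0:<port> means all interfaces, 127.0.0.1:<port> means localhost only
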
Information

The EXPOSE command in a Dockerfile does not actually do anything; it is only there to help a user know which ports the application SHOULD be listening on.