Introduction
An OpenShift cluster that uses the OpenShift Software Defined Network supports the use of network policies to control ingress and egress traffic. This allows teams to create rules to manage the communication between pods running in the same namespace, between pods running in different namespaces, and between pods and external entities.
This article presents a simple example to show how network policies can be used.
Access to Source Assets
The source assets used in this example are located here. To use the assets, clone the repository, change into the newly cloned repository directory, and then switch to the branch called cross-project using the command:
git checkout cross-project
Change to the newly cloned network-policy directory using the command:
cd network-policy
All assets required are in this directory.
Alternatively, simply copy the contents of the network-policy directory to your local machine.
Application Architecture
The application architecture used in the example is shown in figure 1, using names of people to indicate the required communication path. Communication should flow alphabetically from liam to mark to richard and finally to stuart.
Figure 1 : Cross project communications
The left and right namespaces each contain two applications and associated services. Each service routes traffic to the similarly named application and the applications route traffic to the next service to the right such that:
- In the left namespace the route called liam forwards traffic to the service called liam
- Service liam routes traffic to application liam
- Application liam communicates with service mark
- Service mark routes traffic to application mark
- Application mark communicates with service richard in the right namespace
- Service richard routes traffic to application richard
- Application richard communicates with service stuart
- Service stuart routes traffic to application stuart
The right namespace has no routes defined, and all communication to it will originate within the cluster. This represents an isolated service, such as a database platform, for which external communication is not required. In the rogue namespace, a rogue application has been created to try to gain access to the information held by the applications in the right project (for example, data held within a database). This application has an external route through which a third party can interact with it.
Each of the applications (liam, mark, richard, and rogue) is aware of the service with which it needs to communicate and the namespace that contains that service. The application called stuart has no onward service to which it sends requests, so the communication chain stops there. The applications are configured to send requests via OpenShift service names in the format:
<service-name>.<service-namespace>.svc.cluster.local
When each application is created, environment variables are used to identify the name of the application and the name of the service and namespace to which requests should be sent.
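The exact environment variable names are defined by the assets in the repository; purely as an illustration, the deployment for liam might carry settings of the following form (the variable names below are hypothetical):
# Hypothetical env section for the liam deployment; variable names are illustrative only
env:
  - name: APP_NAME
    value: liam
  - name: TARGET_SERVICE          # next service in the chain
    value: mark
  - name: TARGET_NAMESPACE        # namespace that contains the target service
    value: left
# The application would then send requests to
# http://<TARGET_SERVICE>.<TARGET_NAMESPACE>.svc.cluster.local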
Creating the Sample Applications
To create the sample applications, execute the shell script asset-creation.sh. This uses the OpenShift Source-to-Image (S2I) process to create the five applications and five services. Commands within the script then expose the routes for the liam and rogue services.
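The script itself is not reproduced here. As an illustration of the kind of commands it runs for a single application (the builder image, repository URL, and options below are placeholders, not the exact script contents):
# Illustrative only; the real commands are in asset-creation.sh
oc new-project left                                                # create the left namespace
oc new-app <builder-image>~<repository-url> --name=liam -n left    # S2I build and deployment for liam
oc expose service/liam -n left                                     # publish the external route for liam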
Check that the build process has been completed with the commands:
oc get pods -n left | grep Running
oc get pods -n right | grep Running
oc get pods -n rogue | grep Running
Two running pods should be reported in the left and right namespaces and one in the rogue namespace.
Testing Communications
To test the communications, identify the routes using the commands:
export LEFTROUTE=$(oc get route -n left \
-o jsonpath='{.items[0].spec.host}{"/call-layers"}')
export ROGUEROUTE=$(oc get route -n rogue \
-o jsonpath='{.items[0].spec.host}{"/call-layers"}')
Testing the Allowed Path
Use the curl command to send a request to the left route using:
curl $LEFTROUTE
The result should be similar to:
liam (v1) [10.129.4.230] ----> mark (v1) [10.129.4.231] ----> richard (v1) [10.129.4.229] ----> stuart (v1) [10.129.4.228]
The above shows the chain of communication from liam to mark, then across the project boundary to richard and then stuart.
Testing the Rogue Path
Use the curl command to send a request to the rogue route using:
curl $ROGUEROUTE
The result should be similar to:
rogue (v1) [10.129.4.232] ----> richard (v1) [10.129.4.229] ----> stuart (v1) [10.129.4.228]
The above shows the chain of communication from the rogue application across the project boundary to richard and then stuart.
Protecting Namespaces and Pods
The strategy adopted for the protection of critical resources within the microservice application is to control what is allowed to connect to resources in the ‘right’ namespace. By instructing pods within that project to only accept connections from specific locations, it is possible to create a firewall around the project.
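One common way to establish such a firewall, shown here purely as an illustration and not as one of the repository assets, is to start from a deny-all ingress policy in the right namespace and then layer explicit allow rules on top of it:
# Illustrative deny-all ingress policy (not part of the repository assets)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # an empty selector matches every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules are listed, so all inbound traffic is denied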
Network Policy Resource
The network policy resource is described in detail in the Kubernetes documentation here. The critical questions to be answered when constructing a network policy are:
- To which project (namespace) should the policy apply?
- To which pods within the project should the policy apply, and how will those pods be identified?
- What type of control is to be applied? Ingress controls the incoming data, and egress controls outgoing data.
- If controlling incoming connections, from where do we want to allow traffic? This can be controlled based on:
- the IP address of the source
- the namespace (also known as the project on OpenShift)
- a pod based on applied labels
The control can be based on a single criterion, on an AND relationship between criteria placed together within one from element, or on an OR relationship between separate from elements. Note that a pod selector on its own only matches pods in the policy's own namespace; to accept a matching pod from any namespace, an empty namespace selector must be added alongside it in the same from element:
namespaceSelector: {}
This is shown in a later example below.
- If controlling outgoing connections, to where do we want to allow traffic to flow? This can be controlled in the same manner as described for incoming connections above. Outgoing control is useful for managing communication within the cluster, but it is also key for controlling connectivity to external resources beyond the cluster boundary.
Network Policy Examples
A number of network policies are presented below that provide varying levels of control over the communication between the projects and pods shown in figure 1.
Allow Traffic From a Specific Namespace
To allow traffic from only the left namespace, create the network policy shown below. This file is also included in the GitHub repository described above.
File: np-allow-from-left-namespace.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-left-namespace
spec:
  podSelector:
    matchLabels:
      deployment: richard
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              projectName: left
The above rules select the pod labelled deployment=richard and allow inbound communication to it only from the namespace carrying the label projectName=left.
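Note that the namespaceSelector only matches if the left namespace actually carries the projectName=left label (this may already have been applied when the namespaces were created). If in doubt, the label can be checked and, if necessary, added with commands such as:
oc get namespace left --show-labels          # confirm that projectName=left is present
oc label namespace left projectName=left     # add the label if it is missing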
Apply the network policy using the command shown below:
oc project right
oc apply -f np-allow-from-left-namespace.yaml
Switch to the OpenShift web user interface and select the project called ‘right.’ As shown in figure 2, select the administrator view (step 1) on the left-hand side menu and then select Networking (step 2) and then NetworkPolicies (step 3). You will then see the single network policy that has been applied, and if selected (step 4), you can view the yaml for the policy (step 5).
Figure 2 : Viewing network policy in OpenShift web UI
The YAML view shown in figure 2 is a good place to make quick changes to a policy during development. However, remember that any policy should be stored in a secure repository for future use.
Testing
To test the connectivity, repeat the process described above to issue curl commands against the $LEFTROUTE and $ROGUEROUTE URLs. You should see that the left route is successful, while the rogue route no longer works and the request will eventually time out.
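To avoid waiting for the blocked request to hang, curl can be given an explicit timeout, for example:
curl --max-time 10 $ROGUEROUTE    # give up after 10 seconds rather than waiting for the connection to time out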
Feel free to make a change to the network policy. For example, change the target pod (the deployment: richard label under spec.podSelector) to stuart and test again; both tests should fail. Switch the target pod back to richard and change projectName: left to projectName: rogue in the namespaceSelector, and you should see that the rogue test works while the left test fails.
Further Restricting the Scope for Communication
To enhance the firewall effect of the network policy, further rules can be added. The new policy shown below restricts not only the namespace but also the specific pod from which communication may come.
File: np-allow-from-left-namespace-specific-pod.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-left-namespace-from-specific-pod
spec:
  podSelector:
    matchLabels:
      deployment: richard
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              projectName: left
          podSelector:
            matchLabels:
              deployment: mark
The above rule now uses an AND relationship between the two elements of the ingress rule, further tightening the scope of the control.
To apply this network policy, first remove the old policy using the command:
oc project right
oc delete networkpolicy/allow-from-left-namespace
You may want to use the curl commands to test that communication is once again wide open:
curl $LEFTROUTE
curl $ROGUEROUTE
Apply the new policy with the command:
oc apply -f np-allow-from-left-namespace-specific-pod.yaml
Test the communication again, and you should see that the left route is successful and the rogue route fails.
Switch to the web user interface of OpenShift, and once again open the YAML of the network policy. Extend the policy rule by adding an additional rule as shown in figure 3.
Figure 3: Adding an OR selector in the ingress rule
The rule can now be described as:
[A pod with the label “deployment=mark” AND within a project that has the label “projectName=left”] OR [a pod within a namespace that has the label “projectName=rogue”]
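In YAML terms the extended ingress section takes a form similar to the snippet below. This is a sketch of what figure 3 shows, assuming the rogue namespace carries the label projectName=rogue; each entry in the from list is ORed, while selectors within a single entry are ANDed:
ingress:
  - from:
      - namespaceSelector:          # first entry: namespace AND pod must both match
          matchLabels:
            projectName: left
        podSelector:
          matchLabels:
            deployment: mark
      - namespaceSelector:          # second entry: ORed with the first
          matchLabels:
            projectName: rogue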
Test the communication once again to confirm that both the left route and the rogue route now work.
If you want to allow network communication from the pod with the label “deployment=mark” from any namespace, then you may think that you can simply remove the three lines of the namespace selector to leave a rule as shown below:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-left-namespace-from-specific-pod
spec:
  podSelector:
    matchLabels:
      deployment: richard
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              deployment: mark
Unfortunately, the above will not work: a pod selector on its own only matches pods within the policy's own namespace (right in this case), so the mark pod in the left namespace is never selected. A blank namespace selector must be added alongside the pod selector, as shown in the snippet below:
ingress:
  - from:
      - podSelector:
          matchLabels:
            deployment: mark
        namespaceSelector: {}
Egress Rules
Egress rules operate in a similar manner to the ingress rules shown above. Frequently, an ipBlock egress rule is used to control the connections that can be made to the world outside the cluster. ipBlock rules are created using CIDR address definitions, as shown in the example below:
egress:
  - to:
      - ipBlock:
          cidr: 192.173.0.0/16
          except:
            - 192.173.10.12/32
The above definition allows the pods selected by the network policy to communicate with any address in the range 192.173.0.0 to 192.173.255.255, with the exception of 192.173.10.12, to which communication remains blocked.
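For an ipBlock rule to take effect, it must sit inside a policy whose policyTypes list includes Egress. A minimal sketch of such a policy, shown here for illustration rather than as one of the repository assets, applied to the richard pods might look like:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress-example     # illustrative name
spec:
  podSelector:
    matchLabels:
      deployment: richard
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 192.173.0.0/16
            except:
              - 192.173.10.12/32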
Summary
This article has shown how network policies can be used to create firewalls around microservices within specific namespaces. As with all resources, there are a variety of ways the rules can be constructed for the restriction of communication.
Red Hat Advanced Cluster Security for Kubernetes can be used to validate and create network policies within clusters. For more information on that, please take a look here.