Continuing our closer look at Day 2 Operations in part two of two, today we reviewed the network and storage configuration tasks to be done after the cluster is deployed. No matter how well and thoroughly the cluster is configured for the application workload, if the application administrators, developers, and users can't access those resources, it's all meaningless! We ran out of time before we could look in depth at authentication and authorization, but please find the important links below - and we’re scheduling a dedicated session on that topic in the future!
As always, please see the list below for additional links to specific topics, questions, and supporting materials for the episode!
If you’re interested in more streaming content, please subscribe to the OpenShift.tv streaming calendar to see the upcoming episode topics and to receive any schedule changes. If you have questions or topic suggestions for the Ask an OpenShift Admin Office Hour, please contact us via Discord, Twitter, or come join us live, Wednesdays at 11am EDT / 1500 UTC, on YouTube and Twitch.
Episode 25 recorded stream:
Supporting links for today’s topic:
As we promised on the stream, here is the full list of topics and links discussed related to day 2 node and cluster configuration. You can find last week’s links in the blog post here. Please remember this is only a subset of the information found in the documentation under the “post-installation configuration” section.
- Network
- Configure secondary+ interfaces that weren’t configured during deployment, preferably using the Kubernetes NMState Operator (see the first sketch after this list)
- If needed, add DNS forwarders using the DNS Operator (sketched below)
- If you’re using OVN-Kubernetes and want to use IPsec encryption, enable it
- Replace the certificates for the API and Ingress with your own (sketched below)
- Configure Ingress sharding and scale up if needed (sketched below)
- Review the documentation sections on optimizing networking and routing
- Storage
- If you’re using OpenShift Container Storage (OCS), and/or other hosted storage offerings from the partner ecosystem, deploy them now
- Configure the default storage class (sketched below) - OpenShift deploys and configures a default storage class with both IPI and UPI. You can customize that storage class based on the infrastructure type the cluster is deployed to.
- Configure additional CSI provisioners, for example from partners, as needed for your environment
- Monitoring
- Utilize the OpenShift monitoring dashboards to make adjustments and decisions as applications are deployed and utilization goes up
- Users
- Configure authentication and authorization for your cluster (a minimal identity provider sketch follows this list)
- Configure the default Project template, used whenever new Projects are created (sketched below)
- Using the Image configuration, add the allowed, blocked, and insecure registries that may be needed for your applications (sketched below)
- Prepare to educate and onboard users by explaining how resource allocation works, along with how Pod scheduling hints and Pod priority affect availability (a PriorityClass sketch follows this list)
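To make a few of the items above more concrete, here are some minimal, hedged sketches; the names, labels, addresses, and domains in them are illustrative placeholders, not values from the stream. First, secondary interface configuration with the Kubernetes NMState Operator is expressed as a NodeNetworkConfigurationPolicy. The interface name and addressing below are assumptions for illustration, and depending on the operator version the API may be nmstate.io/v1 or an earlier beta version.

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: worker-eth1-static              # hypothetical policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""  # apply to all worker nodes
  desiredState:
    interfaces:
      - name: eth1                      # placeholder secondary interface
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.0.2.10            # example static address (documentation range)
              prefix-length: 24
```

The same approach applies to the day 2 static IP question in the Q&A section further down.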
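Next, DNS forwarding is configured on the DNS Operator’s cluster resource; forwarding queries for a specific zone to an upstream resolver looks roughly like the following, where the zone and upstream address are assumptions.

```yaml
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
    - name: example-corp-dns      # hypothetical forwarder name
      zones:
        - example.com             # queries for this zone are forwarded upstream
      forwardPlugin:
        upstreams:
          - 10.0.0.53             # placeholder upstream resolver
```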
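Replacing the default certificates is generally a matter of creating a TLS secret and pointing the relevant configuration at it. A sketch for the default IngressController (assuming a secret named custom-ingress-cert already exists in openshift-ingress) and for the API server (assuming custom-api-cert in openshift-config and a placeholder API hostname):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  defaultCertificate:
    name: custom-ingress-cert            # hypothetical TLS secret in openshift-ingress
---
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  servingCerts:
    namedCertificates:
      - names:
          - api.cluster.example.com      # placeholder API FQDN
        servingCertificate:
          name: custom-api-cert          # hypothetical TLS secret in openshift-config
```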
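Ingress sharding is done by creating additional IngressController resources with route or namespace selectors, and scaling is just the replicas field; the shard name, domain, and label below are assumptions.

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal-apps                    # hypothetical shard name
  namespace: openshift-ingress-operator
spec:
  replicas: 3                            # scale the router pods for this shard
  domain: internal.apps.example.com      # placeholder wildcard domain served by the shard
  routeSelector:
    matchLabels:
      type: internal                     # only Routes labeled type=internal use this shard
```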
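The default storage class is selected with a standard Kubernetes annotation; the class name and provisioner below are examples only and depend entirely on your infrastructure (remember to clear the annotation on the previous default if there was one).

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                                          # hypothetical class name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the cluster default
provisioner: kubernetes.io/aws-ebs                        # example provisioner; platform dependent
parameters:
  type: gp2                                               # example volume type for that provisioner
```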
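Authentication will get its own session, but for reference, identity providers are added to the cluster OAuth resource. An htpasswd provider is shown purely as an illustration; the provider and secret names are assumptions, and the secret would need to be created in openshift-config first.

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: local-htpasswd        # hypothetical provider name shown on the login page
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpass-secret     # secret in openshift-config containing the htpasswd file
```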
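For the default Project template, the documented flow is to generate the bootstrap template with `oc adm create-bootstrap-project-template -o yaml`, customize it (for example, to add default NetworkPolicies or quotas), create it in the openshift-config namespace, and then reference it from the cluster Project configuration; the template name below is an assumption.

```yaml
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestTemplate:
    name: project-request    # name of the customized Template created in openshift-config
```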
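Registry sources live on the cluster Image configuration; note that the allowed and blocked registry lists are mutually exclusive, so the sketch below only blocks registries and marks one as insecure, using placeholder hostnames.

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    insecureRegistries:
      - registry.internal.example.com:5000   # placeholder registry reachable without valid TLS
    blockedRegistries:
      - untrusted.example.com                # placeholder registry that pulls should be rejected from
```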
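Finally, Pod priority is expressed with PriorityClass objects that workloads reference via priorityClassName; the name, value, and description below are assumptions to illustrate the shape of the object.

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: app-critical          # hypothetical class name for important application workloads
value: 100000                 # higher values schedule (and can preempt) ahead of lower ones
globalDefault: false          # do not apply to Pods that omit priorityClassName
description: "Example priority for business-critical application Pods"
```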
Other links and materials referenced during the stream:
- Use this link to jump directly to where we start talking about today’s topic
Questions answered during the stream:
- What options do I have if I don’t want to use the built-in load balancer deployed with on-prem IPI?
- Can the Ingress controllers be scaled across multiple infra nodes? (see the sketch after this list)
- When deploying using IPI, how do I use a DHCP reservation for control plane nodes when I don’t know the MAC addresses they’re using before deployment? Red Hat encourages using DHCP reservations for control plane nodes to prevent the IP from changing, which could lead to etcd failing. The answer is both simple and a bit frustrating: convert the lease to a reservation after deployment using your DHCP server’s tools.
- Note that if you’re using bare metal IPI and the DHCP lease time is set to “infinite”, the node will automatically configure a static IP address - but this only works with bare metal IPI!
- Is it required to have DNS entries for the OpenShift nodes?
- How can I configure the node network on day 2+, particularly configuring static IPs for secondary+ interfaces? (the NodeNetworkConfigurationPolicy sketch above applies here as well)
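On the question about spreading the Ingress Controllers across infra nodes: the default IngressController supports both replica count and node placement. The sketch below assumes the infra nodes are labeled (and optionally tainted) with node-role.kubernetes.io/infra; adjust to match your own labels and taints.

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 3                                 # for example, one router pod per infra node
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/infra: ""     # assumes infra nodes carry this label
    tolerations:
      - key: node-role.kubernetes.io/infra    # assumes infra nodes are tainted with this key
        operator: Exists
        effect: NoSchedule
```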