OpenShift v4.5 Setup Hints
All configuration and commands in this blog have been verified and tested against the OpenShift 4.5 release.
How to Install
Follow the official guide for installing OpenShift on bare-metal servers: installing-bare-metal-network-customizations. Most of the guide is straightforward, so I will only point out a few confusing points:
- The new installer requires a temporary bootstrap server (bare metal) to initialize the cluster control plane; this server can be removed after bootstrap finishes and can later rejoin the cluster as a worker node.
- For a multi-network bare-metal environment, I prefer to use the 1G management bond for PXE, one 10G bond for the cluster network (app data, so the default gateway lives here), and another 10G bond for Ceph storage.
- You don’t really need a fixed bastion server for the install; each part (DHCP, httpd for PXE, DNS) can be distributed elsewhere, even run from a CI/CD pipeline.
- openshift-install creates the manifests and converts them into Ignition files, which you can modify at any time. However, some useful parameters, such as bond interface configuration, are not exposed there yet; this kind of missing config can be added through the Ignition networkd or storage files sections (see the command sketch after this list).
- PXE boot on bare-metal servers may have trouble if interfaces are not defined properly; the server will spend a long time waiting for IPs. A solution is to predefine them in the PXE config so that all network settings are in place during first boot, and Ignition can take over control with the same IPs after the server fully boots.
- OpenShift redefines the ingress controller pods to use host-network ports 80 and 443, and configures HAProxy to use them as endpoints for its exposed Services. This can potentially conflict with nodes’ local services on ports 80 and 443.
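To see where those Ignition customizations fit, here is the usual flow; a minimal sketch, assuming the install assets live in ./cluster (the directory name is an example):
openshift-install create manifests --dir=./cluster
(edit the generated manifests, e.g. add MachineConfig files carrying networkd/storage sections)
openshift-install create ignition-configs --dir=./cluster
(serve the resulting bootstrap.ign, master.ign, and worker.ign over HTTP for PXE)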
A working example of a PXE config with bonding:
default menu.c32
prompt 1
timeout 9
ONTIMEOUT 1
menu title ######## PXE Boot Menu ########
label 1
menu label ^1) Install master1 Node
menu default
kernel rhcos/kernel
append initrd=rhcos/initramfs.img nomodeset console=tty0 console=ttyS0 rd.neednet=1 ip=10.240.x.101:::255.255.255.0::eno1:none nameserver=172.17.x.5 bond=bond1:eno49,eno50:mode=802.3ad,lacp_rate=fast,miimon=100,updelay=1000,downdelay=1000 vlan=bond1.2556:bond1 ip=172.17.x.101::172.17.x.1:255.255.255.0:master1:bond1.2556:none coreos.inst=yes coreos.inst.install_dev=sda coreos.inst.image_url=http://10.240.x.5:8080/install/bios.raw.gz coreos.inst.ignition_url=http://10.240.x.5:8080/ignition/master1.ign
LDAP Authentication
Enable debug logging for the OAuth pods:
oc patch authentications.operator.openshift.io cluster --type=merge -p '{"spec":{"logLevel": "Debug"}}'
Remove debug logging for the OAuth pods:
oc patch authentications.operator.openshift.io cluster --type=json --patch '[{ "op": "remove", "path": "/spec/logLevel"}]'
Synchronising LDAP groups
LDAP Syncing – Sync LDAP AD Config
We use an AugmentedActiveDirectoryConfig for group synchronisation; memberOf:1.2.840.113556.1.4.1941: is a fixed value (the LDAP_MATCHING_RULE_IN_CHAIN matching rule OID) and works for AD.
ldap-sync-config.yaml
kind: "LDAPSyncConfig"
apiVersion: "v1"
url: "ldap://x.x.x.x"
insecure: true
bindDN: "admin"
bindPassword: "<password>"
augmentedActiveDirectory:
groupsQuery:
derefAliases: "never"
pageSize: 1000
groupUIDAttribute: "dn"
groupNameAttributes: ["cn"]
usersQuery:
baseDN: "DC=xx,DC=xx"
scope: "sub"
derefAliases: "never"
filter: "(objectClass=user)"
pageSize: 1000
userNameAttributes: ["sAMAccountName"]
groupMembershipAttributes: ["memberOf:1.2.840.113556.1.4.1941:"]
whitelist.txt
CN=admins,OU=CloudOps,DC=xx,DC=xx
CN=Apps,OU=CloudOps,DC=xx,DC=xx
To test the group sync, run the command without the --confirm flag (dry run):
oc adm groups sync --sync-config=ldap-sync-config.yaml --whitelist=whitelist.txt
apiVersion: v1
items:
- metadata:
    annotations:
      openshift.io/ldap.sync-time: 2020-10-08T17:19:17-0400
      openshift.io/ldap.uid: CN=admins,OU=CloudOps,DC=xx,DC=xx
      openshift.io/ldap.url: x.x.x.x:389
    creationTimestamp: null
    labels:
      openshift.io/ldap.host: x.x.x.x
    name: tdlabadmins
  users:
  - admin
  - openshift
- metadata:
    annotations:
      openshift.io/ldap.sync-time: 2020-10-08T17:19:17-0400
      openshift.io/ldap.uid: CN=Apps,OU=CloudOps,DC=xx,DC=xx
      openshift.io/ldap.url: x.x.x.x:389
    creationTimestamp: null
    labels:
      openshift.io/ldap.host: x.x.x.x
    name: Apps
  users: null
kind: List
metadata: {}
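If the dry-run output looks correct, add the --confirm flag to actually create the groups:
oc adm groups sync --sync-config=ldap-sync-config.yaml --whitelist=whitelist.txt --confirm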
Creating a RHEL7 pod for testing:
oc run rheltest --image=registry.access.redhat.com/rhel7/rhel-tools --restart=Never --attach -i --tty
Updating the cluster to trust a new PKI root/intermediate CA
Replacing Default Ingress Certificate
Steps to follow:
- Create a configmap in openshift-config containing the CA cert bundle
- Update the proxy/cluster spec to set trustedCA to reference the configmap created in the previous step (see the command sketch below)
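A sketch of both steps; the configmap name custom-ca and the bundle path are examples, while the key must be ca-bundle.crt:
oc create configmap custom-ca --from-file=ca-bundle.crt=/path/to/ca-bundle.crt -n openshift-config
oc patch proxy/cluster --type=merge -p '{"spec":{"trustedCA":{"name":"custom-ca"}}}'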
If installing a new cluster, you can always add the CA cert bundle to install-config.yaml via the additionalTrustBundle field.
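For example, in install-config.yaml (certificate body elided):
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----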
If you need pods scheduled in the cluster to be able to trust certificates issued by the custom PKI:
- Create an empty configmap named “trusted-ca” (it can be any name)
- Label the configmap with config.openshift.io/inject-trusted-cabundle=true
- Update your pod definitions to mount this configmap so the injected bundle lands at /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
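The first two steps as commands; a sketch, assuming the same namespace as the deployment below:
oc create configmap trusted-ca -n my-example-custom-ca-ns
oc label configmap trusted-ca -n my-example-custom-ca-ns config.openshift.io/inject-trusted-cabundle=true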
e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-example-custom-ca-deployment
  namespace: my-example-custom-ca-ns
spec:
  ...
  spec:
    ...
    containers:
    - name: my-container-that-needs-custom-ca
      volumeMounts:
      - name: trusted-ca
        mountPath: /etc/pki/ca-trust/extracted/pem
        readOnly: true
    volumes:
    - name: trusted-ca
      configMap:
        name: trusted-ca
        items:
        - key: ca-bundle.crt
          path: tls-ca-bundle.pem
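To verify the injection, check that the Cluster Network Operator has populated the ca-bundle.crt key on the labelled configmap:
oc get configmap trusted-ca -n my-example-custom-ca-ns -o yaml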
Working with and controlling default project creation
- adding quotas when projects are created
- adding default network policies
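Both are handled through the project request template. A minimal sketch, assuming the default template name project-request; the quota values and policy are examples:
oc adm create-bootstrap-project-template -o yaml > template.yaml
Add objects such as these to template.yaml (a default quota plus a deny-all-ingress NetworkPolicy for every new project):
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: default-quota
    namespace: ${PROJECT_NAME}
  spec:
    hard:
      pods: "10"
      requests.cpu: "4"
      requests.memory: 8Gi
- apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-deny-ingress
    namespace: ${PROJECT_NAME}
  spec:
    podSelector: {}
    policyTypes:
    - Ingress
Then create the template in openshift-config and point the cluster project config at it:
oc create -f template.yaml -n openshift-config
oc patch project.config.openshift.io/cluster --type=merge -p '{"spec":{"projectRequestTemplate":{"name":"project-request"}}}'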