Authentication and Authorization

Create a local htpasswd file:

htpasswd -c -B -b ./htpasswd admin redhat
htpasswd -b ./htpasswd developer developer
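
If you need to change the file later, htpasswd can update or delete entries in place (shown here against the same local file):

# update a user's password
htpasswd -b -B ./htpasswd developer newpassword
# delete a user
htpasswd -D ./htpasswd developer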

Then log in with a cluster administrator account and create the secret:

oc create secret generic localusers \
 --from-file htpasswd=./htpasswd \
 -n openshift-config

You can also use oc set data to update the secret:

oc set data secret/localusers -n openshift-config --from-file htpasswd=./htpasswd

Add the cluster-admin role to the new admin account; it is OK to ignore the warning since the admin user does not exist in the system yet:

oc adm policy add-cluster-role-to-user cluster-admin admin
Warning: User 'admin' not found
clusterrole.rbac.authorization.k8s.io/cluster-admin added: "admin"

Create identityprovider.yaml:

apiVersion: config.openshift.io/v1
kind: OAuth
...output omitted...
spec:
  identityProviders:
  - htpasswd:
      fileData:
        name: localusers
    mappingMethod: claim
    name: myusers
    type: HTPasswd
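
A handy way to produce this file is to export the existing OAuth resource (it is cluster-scoped and named cluster) and add the identityProviders entry to it, so any settings already in place are kept:

oc get oauth cluster -o yaml > identityprovider.yaml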

Apply the changes with oc replace and make sure the pods in the openshift-authentication namespace are respawned. The most important fields here are the fileData secret name and the htpasswd key inside that secret; any wrong name prevents the OAuth operator from detecting the change.

oc replace -f identityprovider.yaml
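
To confirm the operator picked up the change, watch the pods in the openshift-authentication namespace get replaced:

oc get pods -n openshift-authentication -w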

Now you should be able to log in as admin and developer and notice that only admin can see cluster-scoped resources, for example with oc get nodes. Use the following to pull the htpasswd content and save it to a local file:

oc extract secret/localusers -n openshift-config --to ~/ --confirm
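
For the login check above, something like the following works (the API URL is the usual classroom endpoint; substitute your own):

oc login -u admin -p redhat https://api.ocp4.example.com:6443
oc get nodes        # allowed
oc login -u developer -p developer https://api.ocp4.example.com:6443
oc get nodes        # forbidden for developer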

RBAC

You can simply add a role binding for a single user:

# admin is the role name, leader is the username
oc policy add-role-to-user admin leader

Or create a group and bind a role to the group:

# dev-group is group name, developer is username
oc adm groups new dev-group
oc adm groups add-users dev-group developer
oc policy add-role-to-group edit dev-group

Note that oc policy can only manage role bindings, while oc adm policy can manage both role bindings and cluster role bindings. Since no name was given when creating the binding, the role name becomes the role binding name.
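
To double-check what got created, list the bindings in the project; the output shows the role each binding points to and its subjects:

oc get rolebindings -o wide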

Temporarily remove the ability to create projects from all users:

oc adm policy remove-cluster-role-from-group  self-provisioner system:authenticated:oauth
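
To restore self-provisioning later, add the role back to the same group:

oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth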

Deployment

To create a deployment directly from a Git repo, use the following command:

oc new-app --name hello-world-nginx https://github.com/RedHatTraining/DO280-apps --context-dir hello-world-nginx
oc expose service hello-world-nginx --hostname hello-world.apps.ocp4.example.com

oc automatically maps the service port to the ports exposed in the Dockerfile.
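
A quick sanity check of the service and route, reusing the hostname from above:

oc get route hello-world-nginx
curl http://hello-world.apps.ocp4.example.com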

Application Security

Note: By default, OpenShift prevents pods from starting services that listen on ports lower than 1024. This is because the random user ID used by the restricted SCC cannot start a service that listens on a privileged network port (port numbers less than 1024).
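
You can see the arbitrary UID in action by running id inside a pod from the deployment above; the UID is taken from the project's openshift.io/sa.scc.uid-range annotation:

oc rsh deployment/hello-world-nginx id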

Secret and ENV

Secrets can be created from literal values or files (generic), or from a cert/key pair (tls).

oc create secret generic secret-name --from-literal key1=secret1 --from-literal key2=secret2
oc create secret generic ssh-keys --from-file id_rsa=/path-to/id_rsa --from-file id_rsa.pub=/path-to/id_rsa.pub
oc create secret tls secret-tls --cert /path-to-certificate --key /path-to-key

The secret can then be accessed by using a YAML env definition:

env:
  - name: MYSQL_ROOT_PASSWORD 
    valueFrom:
      secretKeyRef: 
        name: demo-secret    #secret name
        key: root_password   #secret content key name

or set as env:

oc set env deployment/demo --from secret/demo-secret  --prefix MYSQL_

or mount as volume:

oc set volume deployment/demo \
    --add --name=v1 --type secret \
    --secret-name demo-secret \
    --mount-path /app-secrets

results in:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  template:
    spec:
      volumes:
        - name: v1
          secret:
            secretName: demo-secret
            defaultMode: 420
      containers:
        - volumeMounts:
            - name: v1
              mountPath: /app-secrets
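
Inside the pod, each key of the secret shows up as a file under the mount path:

oc rsh deployment/demo ls /app-secrets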

oc new-app comes with a templated Deployment and Service, auto-filling fields like labels, strategy, and spec, but it leaves env empty. Running oc set env --from secret/review-secret --prefix MYSQL_ on top of it makes the system generate one env entry per key in the secret, each referencing the secret through secretKeyRef. The --prefix MYSQL_ option prepends the prefix to the uppercased key names, so a secret with database, password, and user keys ends up as MYSQL_DATABASE, MYSQL_PASSWORD, and MYSQL_USER, which is what the Red Hat MySQL image expects. If a key does not exist in the secret, no corresponding env var is injected into the pod.

# secret review-secret needs to have database, password, and user key fields 
env:
  - name: MYSQL_DATABASE
    valueFrom:
      secretKeyRef:
        name: review-secret   # secret name
        key: database         # key inside the secret
  - name: MYSQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: review-secret
        key: password
  - name: MYSQL_USER
    valueFrom:
      secretKeyRef:
        name: review-secret
        key: user

Of course, you can also set all of these env vars directly with:

oc new-app --name mysql --image registry.redhat.io/rhel8/mysql-80:1 MYSQL_DATABASE=xxx MYSQL_PASSWORD=yyy MYSQL_USER=zzz
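
Whichever way the variables were set, oc set env --list shows what will be injected into the containers:

oc set env deployment/mysql --list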

SCC

A lot of the images downloaded from public registries are not compatible with OCP out of the box, mainly because of SCC. When a pod can't load properly, try scc-subject-review to check whether the pod requires a special SCC; most importantly, this requires cluster-admin privileges to show the correct SCC result. Take a look at the following example: a pod fails to load under the restricted SCC, and the regular project admin can't see the correct required SCC name, only cluster-admin gets it right.

# cluster-admin
oc get pod/wordpress-68c49c9d4-wq46g -o yaml | oc adm policy scc-subject-review -f -
RESOURCE                        ALLOWED BY
Pod/wordpress-68c49c9d4-wq46g   anyuid
# project-admin
oc get pod/wordpress-68c49c9d4-wq46g -o yaml | oc adm policy scc-subject-review -f -
RESOURCE                        ALLOWED BY
Pod/wordpress-68c49c9d4-wq46g   restricted

The solution is to assign the proper SCC to a service account and run the pod with it:

oc create serviceaccount wordpress-sa
oc adm policy add-scc-to-user anyuid -z wordpress-sa
oc set serviceaccount deploy/wordpress  wordpress-sa
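
After the rollout, the pod annotation shows which SCC admitted it (the pod name changes after the redeploy, so grep across the namespace):

oc get pods -o yaml | grep openshift.io/scc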

CNI

As with most Kubernetes resources, CNI network policies exert their control through label selectors.

Network Policy example:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

Here podSelector is empty, meaning all pods in the namespace are selected; since both Ingress and Egress are listed with no rules, all traffic to and from those pods is denied.

The following example uses an empty egress rule, meaning all egress is allowed from all pods. Since a namespace with no policies allows everything by default, and this policy only lists Egress in policyTypes, it won't block ingress traffic.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress

Notice the following difference in network policy YAML:

# AND
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: dev
      podSelector:
        matchLabels:
          app: mobile
# OR
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: dev
    - podSelector:
        matchLabels:
          app: mobile

The first example combines namespaceSelector and podSelector in a single from entry, which results in a logical AND; the second puts them as separate entries in the from list, which ends up as a logical OR.
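
As a fuller sketch of the AND form, here it is inside a complete policy; the policy name, the protected label network=backend, and port 8080 are made up for illustration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-mobile-from-dev
spec:
  podSelector:
    matchLabels:
      network: backend          # pods this policy protects
  ingress:
  - from:
    - namespaceSelector:        # namespace AND pod selector in one entry
        matchLabels:
          name: dev
      podSelector:
        matchLabels:
          app: mobile
    ports:
    - protocol: TCP
      port: 8080
  policyTypes:
  - Ingress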

If the default Ingress Controller uses the HostNetwork endpoint publishing strategy, then the default namespace requires the network.openshift.io/policy-group=ingress label.
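
If that applies to your cluster, the label is added with:

oc label namespace default network.openshift.io/policy-group=ingress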

Scheduling and Quota

Pod scheduling is based on nodeSelector or affinity rules, as I mentioned in another blog. OpenShift adds a project-level knob on top of this: if a project has openshift.io/node-selector=xxx in its annotations, that selector is applied to every pod in the project in addition to whatever is defined in the deployment YAML.
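
For example (project name and selector value are placeholders):

oc adm new-project demo --node-selector "tier=gold"
# or on an existing project
oc annotate namespace demo openshift.io/node-selector="tier=gold" --overwrite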

A few ways to limit resource usage in a project/namespace:

  • Resource limits: set directly on a deployment/pod, with requests and limits keys.
  • Quotas: applied to a namespace/project, defining the maximum amount of resources that can be consumed. A ClusterResourceQuota can also be used cluster-wide across multiple projects.
  • Limit Ranges: define how much resource a pod gets by default if it is not set in resource limits (see the sketch after this list).
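
A minimal LimitRange sketch (name and values are arbitrary), giving containers defaults when a deployment does not declare its own:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:              # default limit applied when none is set
      cpu: 500m
      memory: 256Mi
    defaultRequest:       # default request applied when none is set
      cpu: 100m
      memory: 128Mi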

To set up a project quota:

oc create quota project-quota --hard cpu="3",memory="1G",configmaps="3" -n schedule-limit
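
Usage against the quota can then be checked with:

oc describe quota project-quota -n schedule-limit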

HPA

The following command sets up an HPA policy:

oc autoscale deployment/loadtest --name loadtest --min 2 --max 40 --cpu-percent 70

It is equivalent to generating this YAML:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: loadtest
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: loadtest
  minReplicas: 2
  maxReplicas: 40
  targetCPUUtilizationPercentage: 70
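
Note that CPU-based scaling only works if the containers declare a CPU request; once the HPA exists, check its status with:

oc get hpa loadtest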