- Bug
- Resolution: Done-Errata
- Major
- None
- 4.14
- None
- Important
- Yes
- False
Description of problem:
After running a PCI-DSS scan with the Compliance Operator on OCP 4.14, several checks report FAIL and remain FAIL after a rescan; the file paths and permissions the rules check do not match what is actually present on the nodes (details below).
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
[root@m1326001 content]# oc get ccr | grep FAIL
ocp4-pci-dss-api-server-api-priority-gate-enabled           FAIL   medium
ocp4-pci-dss-kubeadmin-removed                              FAIL   medium
ocp4-pci-dss-node-master-file-groupowner-ovs-conf-db-lock   FAIL   medium
ocp4-pci-dss-node-master-file-groupowner-ovs-sys-id-conf    FAIL   medium
ocp4-pci-dss-node-master-file-permissions-cni-conf          FAIL   medium
ocp4-pci-dss-node-worker-file-groupowner-ovs-conf-db-lock   FAIL   medium
ocp4-pci-dss-node-worker-file-groupowner-ovs-sys-id-conf    FAIL   medium
ocp4-pci-dss-node-worker-file-permissions-cni-conf          FAIL   medium
ocp4-pci-dss-ocp-allowed-registries                         FAIL   medium
ocp4-pci-dss-ocp-allowed-registries-for-import              FAIL   medium

1) Look at these failed rules on a node:

[root@m1326001 content]# oc debug node/master-0.ocp-m1326001.lnxero1.boe
Temporary namespace openshift-debug-gbd6n is created for debugging node...
Starting pod/master-0ocp-m1326001lnxero1boe-debug-92gjg ...
To use host binaries, run `chroot /host`
Pod IP: 10.13.26.3
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# ls -l /etc/openvswitch/.conf.db.~lock~
-rw-------. 1 openvswitch openvswitch 0 Apr 17 10:05 /etc/openvswitch/.conf.db.~lock~

Note: on the node the lock file is the hidden dotfile /etc/openvswitch/.conf.db.~lock~, not /etc/openvswitch/conf.db.~lock~. This file layout is affecting the scan, and the rule is not getting remediated.

[root@m1326001 content]# oc describe ccr/ocp4-pci-dss-node-master-file-permissions-cni-conf
Name:         ocp4-pci-dss-node-master-file-permissions-cni-conf
Namespace:    openshift-compliance
Labels:       compliance.openshift.io/check-severity=medium
              compliance.openshift.io/check-status=FAIL
              compliance.openshift.io/scan-name=ocp4-pci-dss-node-master
              compliance.openshift.io/suite=pci-compliance
Annotations:  compliance.openshift.io/rule: file-permissions-cni-conf
API Version:  compliance.openshift.io/v1alpha1
Description:  Verify Permissions on the OpenShift Container Network Interface Files
              To properly set the permissions of /etc/cni/net.d/*, run the command:
              $ sudo chmod 0600 /etc/cni/net.d/*
Id:           xccdf_org.ssgproject.content_rule_file_permissions_cni_conf
Instructions: To check the permissions of /etc/cni/net.d/*, you'll need to log into a node in the cluster.
              As a user with administrator privileges, log into a node in the relevant pool:
              $ oc debug node/$NODE_NAME
              At the sh-4.4# prompt, run:
              # chroot /host
              Then, run the command:
              $ ls -l /etc/cni/net.d/*
              If properly configured, the output should indicate the following permissions: -rw-------
Kind:         ComplianceCheckResult
Metadata:
  Creation Timestamp:  2024-04-30T10:58:26Z
  Generation:          1
  Owner References:
    API Version:           compliance.openshift.io/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  ComplianceScan
    Name:                  ocp4-pci-dss-node-master
    UID:                   9321c8fe-b873-4bea-8399-c364fead7764
  Resource Version:        6257674
  UID:                     89ca4140-e8d5-4364-837f-647c91fe585c
Rationale:    CNI (Container Network Interface) files consist of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. Allowing writeable access to the files could allow an attacker to modify the networking configuration, potentially adding a rogue network connection.
Severity:     medium
Status:       FAIL
Events:       <none>

[root@m1326001 content]# oc debug node/master-1.ocp-m1326001.lnxero1.boe
Temporary namespace openshift-debug-tclhm is created for debugging node...
Starting pod/master-1ocp-m1326001lnxero1boe-debug-wfbr5 ...
To use host binaries, run `chroot /host`
Pod IP: 10.13.26.4
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# ls -l /etc/cni/net.d/*
-rw-r--r--. 1 root root 469 Apr 29 05:53 /etc/cni/net.d/100-crio-bridge.conflist
-rw-r--r--. 1 root root 129 Apr 29 05:53 /etc/cni/net.d/200-loopback.conflist
sh-5.1# exit
sh-4.4# exit
Removing debug pod ...
Temporary namespace openshift-debug-tclhm was removed.

[root@m1326001 content]# oc debug node/master-0.ocp-m1326001.lnxero1.boe
Temporary namespace openshift-debug-qtvkf is created for debugging node...
Starting pod/master-0ocp-m1326001lnxero1boe-debug-5ggv4 ...
To use host binaries, run `chroot /host`
Pod IP: 10.13.26.3
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# ls -l /etc/cni/net.d/*
-rw-r--r--. 1 root root 469 Apr 29 05:41 /etc/cni/net.d/100-crio-bridge.conflist
-rw-r--r--. 1 root root 129 Apr 29 05:41 /etc/cni/net.d/200-loopback.conflist
sh-5.1# exit
sh-4.4# exit
Removing debug pod ...
Temporary namespace openshift-debug-qtvkf was removed.

[root@m1326001 content]# oc debug node/master-2.ocp-m1326001.lnxero1.boe
Temporary namespace openshift-debug-chms5 is created for debugging node...
Starting pod/master-2ocp-m1326001lnxero1boe-debug-zsngn ...
To use host binaries, run `chroot /host`
Pod IP: 10.13.26.5
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# ls -l /etc/cni/net.d/*
-rw-r--r--. 1 root root 469 Apr 29 05:47 /etc/cni/net.d/100-crio-bridge.conflist
-rw-r--r--. 1 root root 129 Apr 29 05:47 /etc/cni/net.d/200-loopback.conflist
sh-5.1# exit
sh-4.4# exit
Removing debug pod ...
Temporary namespace openshift-debug-chms5 was removed.
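Going back to the OVS lock-file check from step 1: the file on the node is the hidden dotfile /etc/openvswitch/.conf.db.~lock~, so any rule or remediation that targets the non-hidden path will never see it. A minimal offline sketch of that mismatch, using a temporary directory to stand in for the node's /etc/openvswitch (the real node is not touched; this is illustration only):

```shell
# Simulate the node layout: only the hidden dotfile exists,
# as observed on master-0 earlier. (Temp dir stands in for /etc/openvswitch.)
dir=$(mktemp -d)
touch "$dir/.conf.db.~lock~"

# A check against the non-hidden path finds nothing:
[ -e "$dir/conf.db.~lock~" ] && echo "non-hidden path: found" || echo "non-hidden path: not found"

# The actual lock file is the hidden variant:
[ -e "$dir/.conf.db.~lock~" ] && echo "hidden path: found" || echo "hidden path: not found"

rm -rf "$dir"
```

This would explain why the file-groupowner-ovs-conf-db-lock rules keep failing even though the dotfile itself already has the expected openvswitch:openvswitch ownership.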
[root@m1326001 content]# oc debug node/worker-0.ocp-m1326001.lnxero1.boe
Temporary namespace openshift-debug-zzqzf is created for debugging node...
Starting pod/worker-0ocp-m1326001lnxero1boe-debug-svksf ...
To use host binaries, run `chroot /host`
Pod IP: 10.13.26.6
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# ls -l /etc/cni/net.d/*
-rw-r--r--. 1 root root 469 Apr 29 05:40 /etc/cni/net.d/100-crio-bridge.conflist
-rw-r--r--. 1 root root 129 Apr 29 05:40 /etc/cni/net.d/200-loopback.conflist
sh-5.1# exit
sh-4.4# exit
Removing debug pod ...
Temporary namespace openshift-debug-zzqzf was removed.

[root@m1326001 content]# oc debug node/worker-1.ocp-m1326001.lnxero1.boe
Temporary namespace openshift-debug-p5hj7 is created for debugging node...
Starting pod/worker-1ocp-m1326001lnxero1boe-debug-jl6dr ...
To use host binaries, run `chroot /host`
Pod IP: 10.13.26.7
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# ls -l /etc/cni/net.d/*
-rw-r--r--. 1 root root 469 Apr 29 05:44 /etc/cni/net.d/100-crio-bridge.conflist
-rw-r--r--. 1 root root 129 Apr 29 05:44 /etc/cni/net.d/200-loopback.conflist
sh-5.1# exit
sh-4.4# exit
Removing debug pod ...
Temporary namespace openshift-debug-p5hj7 was removed.
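Every master and worker node shows the CNI files at mode 0644 (-rw-r--r--), while the file-permissions-cni-conf rule expects 0600 (-rw-------), so the FAIL verdict itself is consistent with the on-node state. A small helper (hypothetical, for illustration only — not the operator's actual check logic) that maps an `ls -l` mode string to the verdict the rule would produce:

```shell
# check_mode: classify an `ls -l` mode string the way the
# file-permissions-cni-conf rule does. The rule expects 0600,
# i.e. a mode string of "-rw-------".
check_mode() {
  case "$1" in
    -rw-------*) echo PASS ;;
    *)           echo FAIL ;;
  esac
}

check_mode "-rw-------."   # the mode the rule expects     -> PASS
check_mode "-rw-r--r--."   # the mode seen on every node   -> FAIL
```

So for this rule the open question is not the verdict but why the remediation (chmod 0600 on /etc/cni/net.d/*) is not being applied, or does not stick, on any of the nodes.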
2) Trigger a rescan on both node scans:

[root@m1326001 content]# oc annotate compliancescans/ocp4-pci-dss-node-worker compliance.openshift.io/rescan=
compliancescan.compliance.openshift.io/ocp4-pci-dss-node-worker annotated
[root@m1326001 content]# oc annotate compliancescans/ocp4-pci-dss-node-master compliance.openshift.io/rescan=
compliancescan.compliance.openshift.io/ocp4-pci-dss-node-master annotated

But the checks still fail after the rescan:

[root@m1326001 content]# oc get ccr | grep FAIL
ocp4-pci-dss-api-server-api-priority-gate-enabled           FAIL   medium
ocp4-pci-dss-kubeadmin-removed                              FAIL   medium
ocp4-pci-dss-node-master-file-groupowner-ovs-conf-db-lock   FAIL   medium
ocp4-pci-dss-node-master-file-groupowner-ovs-sys-id-conf    FAIL   medium
ocp4-pci-dss-node-master-file-permissions-cni-conf          FAIL   medium
ocp4-pci-dss-node-worker-file-groupowner-ovs-conf-db-lock   FAIL   medium
ocp4-pci-dss-node-worker-file-groupowner-ovs-sys-id-conf    FAIL   medium
ocp4-pci-dss-node-worker-file-permissions-cni-conf          FAIL   medium
ocp4-pci-dss-ocp-allowed-registries                         FAIL   medium
ocp4-pci-dss-ocp-allowed-registries-for-import              FAIL   medium

This is not as expected.

[root@m1326001 content]# oc describe ccr/ocp4-pci-dss-api-server-api-priority-gate-enabled
Name:         ocp4-pci-dss-api-server-api-priority-gate-enabled
Namespace:    openshift-compliance
Labels:       compliance.openshift.io/check-severity=medium
              compliance.openshift.io/check-status=FAIL
              compliance.openshift.io/scan-name=ocp4-pci-dss
              compliance.openshift.io/suite=pci-compliance
Annotations:  compliance.openshift.io/rule: api-server-api-priority-gate-enabled
API Version:  compliance.openshift.io/v1alpha1
Description:  Enable the APIPriorityAndFairness feature gate
              To limit the rate at which the API Server accepts requests, make sure that the API Priority and Fairness feature is enabled. Using the APIPriorityAndFairness feature provides a fine-grained way to control the behaviour of the Kubernetes API server in an overload situation.
              To enable the APIPriorityAndFairness feature gate, make sure that the feature-gates API server argument, typically set in the config configMap in the openshift-kube-apiserver namespace, contains APIPriorityAndFairness=true. Note that since Kubernetes 1.20, this feature gate is enabled by default. As a result, this rule is only applicable to OpenShift releases prior to 4.7, which was the first OCP release to ship Kubernetes 1.20.
Id:           xccdf_org.ssgproject.content_rule_api_server_api_priority_gate_enabled
Instructions: To verify that APIPriorityAndFairness is enabled, run the following command:
              oc get kubeapiservers.operator.openshift.io cluster -o json | jq '.spec.observedConfig.apiServerArguments["feature-gates"]'
              The output should contain "APIPriorityAndFairness=true"
Kind:         ComplianceCheckResult
Metadata:
  Creation Timestamp:  2024-04-30T10:57:14Z
  Generation:          1
  Owner References:
    API Version:           compliance.openshift.io/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  ComplianceScan
    Name:                  ocp4-pci-dss
    UID:                   f371b4c9-9eac-4784-a9a7-56f220856c74
  Resource Version:        6256903
  UID:                     8e7beab3-c988-42c6-999b-167c5f38f3bc
Rationale:    The APIPriorityAndFairness feature gate enables the use of the FlowSchema API objects, which enforce a limit on the number of events that the API Server will accept in a given time slice. In a large multi-tenant cluster, there might be a small percentage of misbehaving tenants which could have a significant impact on the performance of the cluster overall. It is recommended to limit the rate of events that the API Server will accept.
Severity:     medium
Status:       FAIL
Events:       <none>

[root@m1326001 content]# oc get kubeapiservers.operator.openshift.io cluster -o json | jq '.spec.observedConfig.apiServerArguments["feature-gates"]'
[
  "AdminNetworkPolicy=false",
  "AdmissionWebhookMatchConditions=false",
  "AlibabaPlatform=true",
  "AutomatedEtcdBackup=false",
  "AzureWorkloadIdentity=true",
  "BuildCSIVolumes=true",
  "CSIDriverSharedResource=false",
  "CloudDualStackNodeIPs=true",
  "DynamicResourceAllocation=false",
  "EventedPLEG=false",
  "ExternalCloudProvider=false",
  "ExternalCloudProviderAzure=true",
  "ExternalCloudProviderExternal=true",
  "ExternalCloudProviderGCP=false",
  "GCPLabelsTags=false",
  "GatewayAPI=false",
  "InsightsConfigAPI=false",
  "MachineAPIOperatorDisableMachineHealthCheckController=false",
  "MachineAPIProviderOpenStack=false",
  "MaxUnavailableStatefulSet=false",
  "NodeSwap=false",
  "OpenShiftPodSecurityAdmission=false",
  "PrivateHostedZoneAWS=true",
  "RetroactiveDefaultStorageClass=false",
  "RouteExternalCertificate=false",
  "SigstoreImageVerification=false",
  "VSphereStaticIPs=false",
  "ValidatingAdmissionPolicy=false"
]
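The feature-gates list above contains no APIPriorityAndFairness entry at all, which is what the rule's check trips on: per the rule's own description the gate has been on by default since Kubernetes 1.20, so on 4.14 the explicit "=true" string is simply absent from the arguments. The rule's instruction reduces to a substring match, sketched here both as the live-cluster command and against a truncated sample of the captured output (offline part only, for illustration):

```shell
# On a live cluster (assumes oc and jq are available):
#   oc get kubeapiservers.operator.openshift.io cluster -o json \
#     | jq '.spec.observedConfig.apiServerArguments["feature-gates"]' \
#     | grep -q 'APIPriorityAndFairness=true' && echo PASS || echo FAIL

# Offline, against a truncated sample of the output captured above:
gates='["AdminNetworkPolicy=false", "AdmissionWebhookMatchConditions=false", "NodeSwap=false"]'
printf '%s\n' "$gates" | grep -q 'APIPriorityAndFairness=true' && echo PASS || echo FAIL
```

This prints FAIL for the sample, matching the ComplianceCheckResult, even though the gate itself is effectively enabled by default on this release.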
Actual results:
The listed checks still report FAIL after the rescan; the remediations are not applied to the nodes.
Expected results:
The failing checks are remediated and report PASS after the rescan.
Additional info:
- links to: RHBA-2025:3728 OpenShift Compliance Operator 1.7.0
- mentioned on