Increase memory limits for cluster components¶
When any MOSK component reaches its memory limit, the affected pod may be terminated by the OOM killer to prevent memory leaks from further destabilizing resource distribution.
An occasional recreation of a pod killed by the OOM killer, for example once a day or week, is normal. However, if the frequency of such alerts increases, or pods fail to start and enter the CrashLoopBackOff state, adjust the default memory limits to fit your cluster needs and prevent interruption of critical workloads.
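To confirm that the OOM killer is indeed the cause, you can inspect the last termination state of containers. A minimal sketch, assuming kubectl access to the affected cluster and jq installed:

```shell
# List pods whose last container termination reason was OOMKilled
kubectl get pods --all-namespaces -o json \
  | jq -r '.items[]
      | select(.status.containerStatuses[]?.lastState.terminated.reason == "OOMKilled")
      | "\(.metadata.namespace)/\(.metadata.name)"'
```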
Note
For StackLight resource limits, refer to StackLight configuration parameters.
To increase memory limits on a MOSK cluster:
Open the Cluster object for editing:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <projectName> edit cluster <clusterName>
In the spec:providerSpec:value: section, add the resources:limits parameters with the required values for the necessary MOSK components.
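Instead of the interactive edit, the same change can be applied non-interactively with kubectl patch. A hedged sketch, assuming the target release sits at index 0 of the helmReleases list and already has a values:resources:limits structure; verify the index against your Cluster object first:

```shell
# Non-interactive alternative to kubectl edit. The list index 0 is an
# assumption - inspect the Cluster object to find the right release:
#   kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <projectName> \
#     get cluster <clusterName> -o yaml
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <projectName> \
  patch cluster <clusterName> --type=json -p '[
    {"op": "replace",
     "path": "/spec/providerSpec/value/helmReleases/0/values/resources/limits/memory",
     "value": "500Mi"}
  ]'
```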
The location of the limits key in the Cluster object differs depending on the component. Different cluster types have different sets of components for which you can adjust limits.
The following sections describe the components that relate to a specific cluster type, with the corresponding limits key location shown in the configuration examples. The limit values in the examples correspond to the product defaults.
Note
For StackLight resource limits, refer to Resource limits.
Limits for common components of any cluster type¶
No limits are set for the following components:
storage-discovery
The memory limits for the following components can be increased on the management and MOSK clusters:
- client-certificate-controller
- metrics-server
- metallb
Note
For helm-controller, limits configuration is not supported.
For metallb, the limits key in cluster.yaml differs from the other common components.
Component name: client-certificate-controller
Configuration example:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: client-certificate-controller
        values:
          resources:
            limits:
              memory: 500Mi

Component name: metallb
Configuration example:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: metallb
        values:
          controller:
            resources:
              limits:
                memory: 200Mi
          speaker:
            resources:
              limits:
                memory: 500Mi
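Once the Cluster object is saved, the new limit is rolled out to the component's Deployment. A way to verify, with the caveat that the namespace below (kaas) is an assumption and may differ in your environment:

```shell
# Check that the configured limit reached the running Deployment.
# Locate the deployment first if the namespace differs:
#   kubectl get deploy -A | grep client-certificate-controller
kubectl get deployment client-certificate-controller -n kaas \
  -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}{"\n"}'
```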
Limits for management cluster components¶
No limits are set for the following components:
- baremetal-operator
- baremetal-provider
- cert-manager
The memory limits for components in the
spec:providerSpec:value:kaas:management:helmReleases: section can be
increased on a management cluster.
The memory limits for other components can be increased on a management cluster in the following sections:
- spec:providerSpec:value:kaas:regional:provider:baremetal:helmReleases:
- spec:providerSpec:value:kaas:regionalHelmReleases:
Component name: release-controller
Configuration example:

spec:
  providerSpec:
    value:
      kaas:
        management:
          helmReleases:
          - name: release-controller
            values:
              resources:
                limits:
                  memory: 200Mi

Component name: baremetal-provider
Configuration example:

spec:
  providerSpec:
    value:
      kaas:
        regional:
        - provider: baremetal
          helmReleases:
          - name: baremetal-provider
            values:
              cluster_api_provider_baremetal:
                resources:
                  requests:
                    cpu: 500m
                    memory: 500Mi

Component name: lcm-controller
Configuration example:

spec:
  providerSpec:
    value:
      kaas:
        regionalHelmReleases:
        - name: lcm-controller
          values:
            resources:
              limits:
                memory: 1Gi

Component name: mcc-cache
Configuration example:

spec:
  providerSpec:
    value:
      kaas:
        regionalHelmReleases:
        - name: mcc-cache
          values:
            nginx:
              resources:
                limits:
                  memory: 500Mi
            registry:
              resources:
                limits:
                  memory: 500Mi
            kproxy:
              resources:
                limits:
                  memory: 300Mi

Component name: squid-proxy
Configuration example:

spec:
  providerSpec:
    value:
      kaas:
        regional:
        - provider: baremetal
          helmReleases:
          - name: squid-proxy
            values:
              resources:
                limits:
                  memory: 1Gi
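After saving the Cluster object, the controllers reconcile the Helm releases and recreate the affected pods with the new limits. A hedged way to confirm that a changed component has stopped OOM-looping; the kaas namespace and the app label below are assumptions, so locate the workload in your cluster first:

```shell
# Confirm that the restart counter of the adjusted component has settled.
# List pods first if the namespace or label differs:
#   kubectl get pods -A | grep lcm-controller
kubectl get pods -n kaas -l app=lcm-controller \
  -o custom-columns='NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount'
```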