Setting up Mobile Foundation on IBM Cloud Kubernetes Cluster using Helm
Overview
Follow the instructions below to configure a MobileFirst Server instance, a MobileFirst Analytics instance, and a MobileFirst Application Center instance on an IBM Cloud Kubernetes Service (IKS) cluster using Helm charts.
Below are the basic steps that will get you started:
- Complete the prerequisites
- Download the Passport Advantage Archive (PPA Archive) of IBM Mobile Foundation for IBM Cloud Private
- Load the PPA archive in IBM Cloud Kubernetes Cluster
- Configure and install the MobileFirst Server, MobileFirst Analytics (optional) and MobileFirst Application Center (optional)
Jump to:
- Prerequisites
- Download the IBM Mobile Foundation Passport Advantage Archive
- Using Mobile Foundation images from the Entitled Registry
- Load the IBM Mobile Foundation Passport Advantage Archive
- Environment variables
- Install and configure IBM Mobile Foundation Helm Charts
- Installing Helm Charts
- Verifying the Installation
- Sample application
- Deploying Elasticsearch helm chart for Mobile Foundation Analytics
- Upgrading Helm Charts and Releases
- Uninstall
- Troubleshooting
Prerequisites
You should have an IBM Cloud account and a provisioned IBM Cloud Kubernetes cluster.
To manage the containers and images, install the following on your host machine as part of the IBM Cloud CLI plug-ins setup:
- IBM Cloud CLI (ibmcloud)
- Kubernetes CLI (kubectl)
- IBM Cloud Container Registry plug-in (cr)
- IBM Cloud Container Service plug-in (ks)
- Docker (install and set up)
- Helm (helm)
To work with the Kubernetes cluster using the CLI, configure the ibmcloud client:
- Log in to the Clusters page. (Note: An IBMid account is required.)
- Click the Kubernetes cluster to which the IBM Mobile Foundation chart needs to be deployed.
- Once the cluster is created, follow the instructions in the Access tab.
Note: Cluster creation takes a few minutes. After the cluster is successfully created, click the Worker Nodes tab and make a note of the Public IP.
To access the IBM Cloud Kubernetes cluster using the CLI, you should configure the IBM Cloud client. Learn more.
Download the IBM Mobile Foundation Passport Advantage Archive
The Passport Advantage Archive (PPA) of IBM Mobile Foundation is available here. The PPA archive of Mobile Foundation will contain the docker images and Helm Charts of the following Mobile Foundation components:
- MobileFirst Server
- MobileFirst Push
- MobileFirst Live Update
- MobileFirst Analytics
- MobileFirst Analytics Receiver
- MobileFirst Application Center
A MobileFirst DB Initialization component is used for facilitating the database initialization tasks. It takes care of creating the Mobile Foundation schema and tables in the database, if they do not already exist.
Using Mobile Foundation images from the Entitled Registry
Apart from loading the PPA images into the OpenShift internal image registry or any other external registry, one can use the images from the Entitled Registry (ER).
- Get a key to the Entitled Registry. After you order IBM Cloud Pak for Applications, an entitlement key for the Cloud Pak software is associated with your MyIBM account. Get the entitlement key that is assigned to your ID:
  - Log in to MyIBM Container Software Library with your IBMid and password, which is associated with the entitled software.
  - In the Entitlement keys section, select Copy key to copy the entitlement key to the clipboard.
- Extract the installation configuration from the installer image on the Entitled Registry. Using a command line, run the following commands:
- Set the Entitled Registry information. Run export commands that set ENTITLED_REGISTRY to cp.icr.io, ENTITLED_REGISTRY_USER to cp, and ENTITLED_REGISTRY_KEY to the entitlement key that you obtained in the previous step.

export ENTITLED_REGISTRY=cp.icr.io
export ENTITLED_REGISTRY_USER=cp
export ENTITLED_REGISTRY_KEY=<apikey>
- Make sure you are able to log in to the Entitled Registry with the following docker login command:

docker login "$ENTITLED_REGISTRY" -u "$ENTITLED_REGISTRY_USER" -p "$ENTITLED_REGISTRY_KEY"
- Generate an image pull secret using the Entitled Registry details:

oc create secret docker-registry -n <my_project_name> er-image-pullsecret --docker-server=cp.icr.io --docker-username=<my_username> --docker-password=<my_api_key>
- Add the pull secrets to the values.yaml of the ibm-mobilefoundation-prod and ibm-es-prod Helm charts.
- For detailed information, see Deploy Mobile Foundation to an existing Red Hat OpenShift Container Platform.
Load the IBM Mobile Foundation Passport Advantage Archive
Follow the steps given below to load the PPA Archive into IBM Cloud Kubernetes Cluster:
- Log in to the cluster using the IBM Cloud plug-in. Refer to the IBM Cloud CLI documentation (https://cloud.ibm.com/docs/cli?topic=cloud-cli-getting-started#overview) for the command reference. For example:

ibmcloud login -a cloud.ibm.com

Include the --sso option if you are using a federated ID. Optionally, to skip SSL validation, add the --skip-ssl-validation flag to the above command. This bypasses SSL validation of HTTP requests; using this parameter might cause security problems.
- Log in to the IBM Cloud Container Registry and initialize the Container Service using the following commands:

ibmcloud cr login
ibmcloud ks init
- Set the region of the deployment using the following command (e.g., us-south):

ibmcloud cr region-set <region>
- Follow the steps below to gain access to your cluster:
- Download and install a few CLI tools and the Kubernetes Service plug-in.
curl -sL https://ibm.biz/idt-installer | bash
- Download the kubeconfig files for your cluster.
ibmcloud ks cluster-config --cluster my_cluster_name
- Set the KUBECONFIG environment variable. Copy the output from the previous command and paste it in your terminal. The command output looks similar to the following example:
export KUBECONFIG=/Users/$USER/.bluemix/plugins/container-service/clusters/my_namespace/kube-config-dal10-my_namespace.yml
- Verify that you can connect to your cluster by listing your worker nodes.
kubectl get nodes
- Load the PPA Archive of Mobile Foundation using the following steps:
- Extract the PPA archive
- Tag the loaded images with the IBM Cloud Container registry namespace and with the right version
- Push the image
- [Optional] Create and Push the manifests, if the worker nodes are based on a combination of architectures (such as amd64, ppc64le, s390x).
Below is an example for loading the mfpf-server and mfpf-push images to the Worker Nodes based on amd64 architecture. You should follow the same process for mfpf-appcenter and mfpf-analytics.
# 1. Extract the PPA archive
mkdir -p ppatmp ; cd ppatmp
tar -xvzf ibm-mobilefirst-foundation-icp.tar.gz
cd ./images
for i in *; do docker load -i $i; done
# 2. Tag the loaded images with the IBM Cloud Container Registry namespace and with the right version
docker tag mfpf-server:1.1.0-amd64 us.icr.io/my_namespace/mfpf-server:1.1.0
docker tag mfpf-dbinit:1.1.0-amd64 us.icr.io/my_namespace/mfpf-dbinit:1.1.0
docker tag mfpf-push:1.1.0-amd64 us.icr.io/my_namespace/mfpf-push:1.1.0
# 3. Push all the images
docker push us.icr.io/my_namespace/mfpf-server:1.1.0
docker push us.icr.io/my_namespace/mfpf-dbinit:1.1.0
docker push us.icr.io/my_namespace/mfpf-push:1.1.0
# 4. Clean up the extracted archive
cd ../.. ; rm -rf ppatmp
Below is an example of loading the mfpf-server and mfpf-push images for worker nodes based on multiple architectures. You should follow the same process for mfpf-appcenter and mfpf-analytics.
# 1. Extract the PPA archive
mkdir -p ppatmp ; cd ppatmp
tar -xvzf ibm-mobilefirst-foundation-icp.tar.gz
cd ./images
for i in *; do docker load -i $i; done
# 2. Tag the loaded images with the IBM Cloud Container Registry namespace and with the right version
## 2.1 Tag mfpf-server
docker tag mfpf-server:1.1.0-amd64 us.icr.io/my_namespace/mfpf-server:1.1.0-amd64
docker tag mfpf-server:1.1.0-s390x us.icr.io/my_namespace/mfpf-server:1.1.0-s390x
docker tag mfpf-server:1.1.0-ppc64le us.icr.io/my_namespace/mfpf-server:1.1.0-ppc64le
## 2.2 Tag mfpf-dbinit
docker tag mfpf-dbinit:1.1.0-amd64 us.icr.io/my_namespace/mfpf-dbinit:1.1.0-amd64
docker tag mfpf-dbinit:1.1.0-s390x us.icr.io/my_namespace/mfpf-dbinit:1.1.0-s390x
docker tag mfpf-dbinit:1.1.0-ppc64le us.icr.io/my_namespace/mfpf-dbinit:1.1.0-ppc64le
## 2.3 Tag mfpf-push
docker tag mfpf-push:1.1.0-amd64 us.icr.io/my_namespace/mfpf-push:1.1.0-amd64
docker tag mfpf-push:1.1.0-s390x us.icr.io/my_namespace/mfpf-push:1.1.0-s390x
docker tag mfpf-push:1.1.0-ppc64le us.icr.io/my_namespace/mfpf-push:1.1.0-ppc64le
# 3. Push all the images
## 3.1 Push mfpf-server images
docker push us.icr.io/my_namespace/mfpf-server:1.1.0-amd64
docker push us.icr.io/my_namespace/mfpf-server:1.1.0-s390x
docker push us.icr.io/my_namespace/mfpf-server:1.1.0-ppc64le
## 3.2 Push mfpf-dbinit images
docker push us.icr.io/my_namespace/mfpf-dbinit:1.1.0-amd64
docker push us.icr.io/my_namespace/mfpf-dbinit:1.1.0-s390x
docker push us.icr.io/my_namespace/mfpf-dbinit:1.1.0-ppc64le
## 3.3 Push mfpf-push images
docker push us.icr.io/my_namespace/mfpf-push:1.1.0-amd64
docker push us.icr.io/my_namespace/mfpf-push:1.1.0-s390x
docker push us.icr.io/my_namespace/mfpf-push:1.1.0-ppc64le
# 4. [Optional] Create and push the manifests
## 4.1 Create manifest lists
docker manifest create us.icr.io/my_namespace/mfpf-server:1.1.0 us.icr.io/my_namespace/mfpf-server:1.1.0-amd64 us.icr.io/my_namespace/mfpf-server:1.1.0-s390x us.icr.io/my_namespace/mfpf-server:1.1.0-ppc64le --amend
docker manifest create us.icr.io/my_namespace/mfpf-dbinit:1.1.0 us.icr.io/my_namespace/mfpf-dbinit:1.1.0-amd64 us.icr.io/my_namespace/mfpf-dbinit:1.1.0-s390x us.icr.io/my_namespace/mfpf-dbinit:1.1.0-ppc64le --amend
docker manifest create us.icr.io/my_namespace/mfpf-push:1.1.0 us.icr.io/my_namespace/mfpf-push:1.1.0-amd64 us.icr.io/my_namespace/mfpf-push:1.1.0-s390x us.icr.io/my_namespace/mfpf-push:1.1.0-ppc64le --amend
## 4.2 Annotate the manifests
### mfpf-server
docker manifest annotate us.icr.io/my_namespace/mfpf-server:1.1.0 us.icr.io/my_namespace/mfpf-server:1.1.0-amd64 --os linux --arch amd64
docker manifest annotate us.icr.io/my_namespace/mfpf-server:1.1.0 us.icr.io/my_namespace/mfpf-server:1.1.0-s390x --os linux --arch s390x
docker manifest annotate us.icr.io/my_namespace/mfpf-server:1.1.0 us.icr.io/my_namespace/mfpf-server:1.1.0-ppc64le --os linux --arch ppc64le
### mfpf-dbinit
docker manifest annotate us.icr.io/my_namespace/mfpf-dbinit:1.1.0 us.icr.io/my_namespace/mfpf-dbinit:1.1.0-amd64 --os linux --arch amd64
docker manifest annotate us.icr.io/my_namespace/mfpf-dbinit:1.1.0 us.icr.io/my_namespace/mfpf-dbinit:1.1.0-s390x --os linux --arch s390x
docker manifest annotate us.icr.io/my_namespace/mfpf-dbinit:1.1.0 us.icr.io/my_namespace/mfpf-dbinit:1.1.0-ppc64le --os linux --arch ppc64le
### mfpf-push
docker manifest annotate us.icr.io/my_namespace/mfpf-push:1.1.0 us.icr.io/my_namespace/mfpf-push:1.1.0-amd64 --os linux --arch amd64
docker manifest annotate us.icr.io/my_namespace/mfpf-push:1.1.0 us.icr.io/my_namespace/mfpf-push:1.1.0-s390x --os linux --arch s390x
docker manifest annotate us.icr.io/my_namespace/mfpf-push:1.1.0 us.icr.io/my_namespace/mfpf-push:1.1.0-ppc64le --os linux --arch ppc64le
## 4.3 Push the manifest lists
docker manifest push us.icr.io/my_namespace/mfpf-server:1.1.0
docker manifest push us.icr.io/my_namespace/mfpf-dbinit:1.1.0
docker manifest push us.icr.io/my_namespace/mfpf-push:1.1.0
# 5. Clean up the extracted archive
cd ../.. ; rm -rf ppatmp
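The repetitive tag-and-push sequence above can also be scripted. Below is a minimal sketch (the registry namespace and version are placeholders) that generates the docker commands as a dry run so you can inspect them before executing:

```shell
# Dry-run generator for the tag/push commands: prints each docker command
# into docker-commands.txt instead of executing it. Remove the `echo`s
# (or pipe the file to `sh`) once the output looks right.
REG=us.icr.io/my_namespace   # placeholder registry namespace
VER=1.1.0
for img in mfpf-server mfpf-dbinit mfpf-push; do
  for arch in amd64 s390x ppc64le; do
    echo "docker tag ${img}:${VER}-${arch} ${REG}/${img}:${VER}-${arch}"
    echo "docker push ${REG}/${img}:${VER}-${arch}"
  done
done > docker-commands.txt
cat docker-commands.txt
```

This keeps the image list and version in one place, so adding mfpf-appcenter or mfpf-analytics is a one-word change.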
Note:
- The ibmcloud cr ppa-archive-load command does not support PPA packages with multi-arch support. Hence, the package has to be extracted and pushed manually to the IBM Cloud Container Registry as shown above. Users of older PPA versions can load the archive with the following command, where archive_name is the name of the Mobile Foundation PPA archive downloaded from IBM Passport Advantage:

ibmcloud cr ppa-archive-load --archive <archive_name> --namespace <namespace> [--clustername <cluster_name>]

- Multi-architecture refers to architectures including Intel (amd64), Power (ppc64le) and s390x. Multi-arch is supported from ICP 3.1.1 onwards.
The Helm charts are stored locally on the client (unlike ICP, where Helm charts are stored in the IBM Cloud Private Helm repository). The charts can be found within the ppa-import/charts (or charts) directory.
Install and configure IBM Mobile Foundation Helm Charts
Before you install and configure MobileFirst Server, you should have the following:
This section summarizes the steps for creating secrets.
Secret objects let you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Putting this information in a Secret is safer and more flexible than putting it in a Pod definition or in a container image.
-
[Mandatory] A DB2 database instance should be configured and ready to use. You will need the database information to configure the MobileFirst Server Helm chart. MobileFirst Server requires a schema and tables, which will be created in this database if they do not already exist.
-
[Mandatory] Create database secrets for Server, Push and Application Center. This section outlines the security mechanisms for controlling access to the database. Create a secret using the commands below and provide the created secret name under the database details.
Run the code snippet below to create a database secret for Mobile Foundation server:
# Create mfpserver secret
cat <<EOF | kubectl apply -f -
apiVersion: v1
data:
  MFPF_ADMIN_DB_USERNAME: encoded_uname
  MFPF_ADMIN_DB_PASSWORD: encoded_password
  MFPF_RUNTIME_DB_USERNAME: encoded_uname
  MFPF_RUNTIME_DB_PASSWORD: encoded_password
  MFPF_PUSH_DB_USERNAME: encoded_uname
  MFPF_PUSH_DB_PASSWORD: encoded_password
  MFPF_LIVEUPDATE_DB_USERNAME: encoded_uname
  MFPF_LIVEUPDATE_DB_PASSWORD: encoded_password
kind: Secret
metadata:
  name: mfpserver-dbsecret
type: Opaque
EOF
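If you prefer to review the manifest before applying it, the same Secret can be generated locally first. A sketch, assuming placeholder credentials (db2inst1/mypassword) that you would replace with your own; pipe the resulting file to kubectl apply -f when ready:

```shell
# Build the mfpserver-dbsecret manifest locally with base64-encoded
# credentials (values here are placeholders), then print it for review.
DB_USER=$(echo -n "db2inst1" | base64)
DB_PASS=$(echo -n "mypassword" | base64)
cat > mfpserver-dbsecret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: mfpserver-dbsecret
type: Opaque
data:
  MFPF_ADMIN_DB_USERNAME: ${DB_USER}
  MFPF_ADMIN_DB_PASSWORD: ${DB_PASS}
  MFPF_RUNTIME_DB_USERNAME: ${DB_USER}
  MFPF_RUNTIME_DB_PASSWORD: ${DB_PASS}
  MFPF_PUSH_DB_USERNAME: ${DB_USER}
  MFPF_PUSH_DB_PASSWORD: ${DB_PASS}
  MFPF_LIVEUPDATE_DB_USERNAME: ${DB_USER}
  MFPF_LIVEUPDATE_DB_PASSWORD: ${DB_PASS}
EOF
cat mfpserver-dbsecret.yaml
```

Apply it with `kubectl apply -f mfpserver-dbsecret.yaml` once the values are correct.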
Run the code snippet below to create a database secret for Application Center:
# create appcenter secret
cat <<EOF | kubectl apply -f -
apiVersion: v1
data:
  APPCNTR_DB_USERNAME: encoded_uname
  APPCNTR_DB_PASSWORD: encoded_password
kind: Secret
metadata:
  name: appcenter-dbsecret
type: Opaque
EOF
NOTE: You may encode the username and password details using the commands below:
export MY_USER_NAME=<myuser>
export MY_PASSWORD=<mypassword>
echo -n $MY_USER_NAME | base64
echo -n $MY_PASSWORD | base64
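As a quick sanity check, an encoded value should decode back to the original. A sketch with a placeholder username:

```shell
# Encode a placeholder username, then decode it again to confirm the
# round trip (GNU coreutils base64 syntax).
MY_USER_NAME=myuser
ENCODED=$(echo -n "$MY_USER_NAME" | base64)
echo "$ENCODED"
echo -n "$ENCODED" | base64 --decode
```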
-
[Optional] A pre-created Login Secret can be provided for Server, Analytics and Application Center console login. For example:
kubectl create secret generic serverlogin --from-literal=MFPF_ADMIN_USER=admin --from-literal=MFPF_ADMIN_PASSWORD=admin
For Analytics.
kubectl create secret generic analyticslogin --from-literal=ANALYTICS_ADMIN_USER=admin --from-literal=ANALYTICS_ADMIN_PASSWORD=admin
For Application Center.
kubectl create secret generic appcenterlogin --from-literal=APPCENTER_ADMIN_USER=admin --from-literal=APPCENTER_ADMIN_PASSWORD=admin
NOTE: If these secrets are not provided, they are created with the default username and password admin/admin during the deployment of the Mobile Foundation Helm chart.
-
[Optional] You can provide your own keystore and truststore to the Server, Push, Analytics and Application Center deployments by creating a secret with your own keystore and truststore. Pre-create a secret containing keystore.jks and truststore.jks along with their passwords, supplied via the literals KEYSTORE_PASSWORD and TRUSTSTORE_PASSWORD, and provide the secret name in the keystoreSecret field of the respective component. For example:

kubectl create secret generic server --from-file=./keystore.jks --from-file=./truststore.jks --from-literal=KEYSTORE_PASSWORD=worklight --from-literal=TRUSTSTORE_PASSWORD=worklight

NOTE: The names of the files and literals should be the same as in the command above. Provide this secret name in the keystoreSecret (or keystoresSecretName) input field of the respective component to override the default keystores when configuring the Helm chart.
-
[Optional] Mobile Foundation components can be configured with hostname-based Ingress so that external clients can reach them using a hostname. The Ingress can be secured by using a TLS private key and certificate, which must be defined in a secret with the key names tls.key and tls.crt. The secret mf-tls-secret has to be created in the same namespace as the Ingress resource by using the following command:

kubectl create secret tls mf-tls-secret --key=/path/to/tls.key --cert=/path/to/tls.crt

The ingress hostname and the name of the secret are then provided in the field global.ingress.secret. Modify the values.yaml to add the appropriate ingress hostname and ingress secret name while deploying the Helm chart.
NOTE: Avoid reusing an ingress hostname that was already used for another Helm release.
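Putting the ingress settings together, a minimal illustrative values.yaml fragment (the hostname is a placeholder; consult the chart's packaged values.yaml for the exact field layout):

```yaml
global:
  ingress:
    hostname: mobilefoundation.example.com   # placeholder external hostname
    secret: mf-tls-secret                    # TLS secret created above
    sslPassThrough: false
```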
-
[Optional] Mobile Foundation Server is predefined with confidential clients for the Admin service. The credentials for these clients are provided in the mfpserver.adminClientSecret and mfpserver.pushClientSecret fields. These secrets can be created as follows:

kubectl create secret generic mf-admin-client --from-literal=MFPF_ADMIN_AUTH_CLIENTID=admin --from-literal=MFPF_ADMIN_AUTH_SECRET=admin
kubectl create secret generic mf-push-client --from-literal=MFPF_PUSH_AUTH_CLIENTID=admin --from-literal=MFPF_PUSH_AUTH_SECRET=admin

If Liveupdate is enabled, then:

kubectl create secret generic mf-liveupdate-client --from-literal=MFPF_LIVEUPDATE_AUTH_CLIENTID=admin --from-literal=MFPF_LIVEUPDATE_AUTH_SECRET=admin

NOTE: If values for the fields mfpserver.adminClientSecret, mfpserver.pushClientSecret and mfpserver.liveupdateClientSecret are not provided during the Mobile Foundation Helm chart deployment, default client ID / client secret pairs of admin/nimda for mfpserver.adminClientSecret, push/hsup for mfpserver.pushClientSecret and liveupdate/etadpuevil for mfpserver.liveupdateClientSecret are generated and used.
- [Mandatory] Before you begin the installation of Mobile Foundation Analytics Chart, configure the Persistent Volume and Persistent Volume Claim accordingly. Provide the Persistent Volume to configure Mobile Foundation Analytics. Follow the steps detailed in IBM Cloud Kubernetes documentation to create Persistent Volume.
[OPTIONAL] Custom Server Configuration
To customise the configuration (example: modifying a log trace setting, adding a new jndi property and so on), you will have to create a configmap with the configuration XML file. This allows you to add a new configuration setting or override the existing configurations of the Mobile Foundation components.
The custom configuration is accessed by the Mobile Foundation components through a ConfigMap (mfpserver-custom-config), which can be created as follows:
kubectl create configmap mfpserver-custom-config --from-file=<configuration file in XML format>
The configmap created using the above command should be provided in the Custom Server Configuration in the Helm chart while deploying Mobile Foundation.
Below is an example of setting the trace log specification to warning (The default setting is info) using mfpserver-custom-config configmap.
- Sample config XML (logging.xml)
<server>
    <logging maxFiles="5" traceSpecification="com.ibm.mfp.*=debug:*=warning" maxFileSize="20" />
</server>
- Create the ConfigMap and add it during the Helm chart deployment:
kubectl create configmap mfpserver-custom-config --from-file=logging.xml
- Notice the change in the messages.log of the Mobile Foundation components - the property traceSpecification will be set to com.ibm.mfp.*=debug:*=warning.
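The same mechanism can carry other Liberty server configuration besides logging. As an illustrative sketch (the JNDI property name and value below are hypothetical, not Mobile Foundation-defined properties), a file adding a custom JNDI entry:

```xml
<!-- custom-jndi.xml: illustrative only; the property name and value are hypothetical -->
<server>
    <jndiEntry jndiName="custom/sample.property" value="sample-value"/>
</server>
```

The ConfigMap is then created the same way: kubectl create configmap mfpserver-custom-config --from-file=custom-jndi.xml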
Environment variables
The table below lists the environment variables used in the MobileFirst Server instance, MobileFirst Analytics, and MobileFirst Application Center.
Qualifier | Parameter | Definition | Allowed Value |
---|---|---|---|
Global Configuration |
|||
arch | amd64 | amd64 worker node scheduler preference in a hybrid cluster | 3 - Most preferred (Default). |
ppc64le | ppc64le worker node scheduler preference in a hybrid cluster | 2 - No preference (Default). | |
s390x | S390x worker node scheduler preference in a hybrid cluster | 2 - No preference (Default). | |
image | pullPolicy | Image Pull Policy | Defaults to IfNotPresent. |
pullSecret | Image Pull Secret | ||
global.ingress | hostname | The external hostname or IP address to be used by external clients | Leave blank to default to the IP address of the cluster proxy node |
secret | TLS secret name | Specifies the name of the secret for the certificate that has to be used in the Ingress definition. The secret has to be pre-created using the relevant certificate and key. Mandatory if SSL/TLS is enabled. Pre-create the secret with Certificate & Key before supplying the name here | |
sslPassThrough | Enable SSL passthrough | Specifies whether the SSL request should be passed through to the Mobile Foundation service, with SSL termination occurring in the Mobile Foundation service. Default: false | |
global.dbinit | enabled | Enable initialization of Server, Push and Application Center databases | Initializes databases and creates schemas / tables for the Server, Push and Application Center deployment (not required for Analytics). Default: true |
repository | Docker image repository for database initialization | Repository of the Mobile Foundation database docker image | |
tag | Docker image tag | See Docker tag description | |
MFP Server Configuration |
|||
mfpserver | enabled | Flag to enable Server | true (default) or false |
mfpserver.image | repository | Docker image repository | Repository of the Mobile Foundation Server docker image |
tag | Docker image tag | See Docker tag description | |
consoleSecret | A pre-created secret for login | Check Prerequisites section | |
mfpserver.db | host | IP address or hostname of the database where Mobile Foundation Server tables need to be configured. | IBM DB2® (default). |
port | Port where database is setup | ||
secret | A precreated secret which has database credentials | ||
name | Name of the Mobile Foundation Server database | ||
schema | Server db schema to be created. | If the schema already present, it will be used. Otherwise, it will be created. | |
ssl | Database connection type | Specify whether your database connection has to be http or https. Default value is false (http). Make sure that the database port is also configured for the same connection mode | |
driverPvc | Persistent Volume Claim to access the JDBC Database Driver | Specify the name of the persistent volume claim that hosts the JDBC database driver. Required if the database type selected is not DB2 | |
adminCredentialsSecret | MFPServer DB Admin Secret | If you have enabled DB initialization, then provide the secret to create database tables and schemas for Mobile Foundation components | |
mfpserver | adminClientSecret | Admin client secret | Specify the Client Secret name created. Refer #6 in Prerequisites |
pushClientSecret | Push client secret | Specify the Client Secret name created. Refer #6 in Prerequisites | |
liveupdateClientSecret | Liveupdate client secret | Specify the Client Secret name created. Refer here | |
mfpserver.replicas | The number of instances (pods) of Mobile Foundation Server that need to be created | Positive integer (Default: 3) | |
mfpserver.autoscaling | enabled | Specifies whether a horizontal pod autoscaler (HPA) is deployed. Note that enabling this field disables the replicas field. | false (default) or true |
minReplicas | Lower limit for the number of pods that can be set by the autoscaler. | Positive integer (default to 1) | |
maxReplicas | Upper limit for the number of pods that can be set by the autoscaler. Cannot be lower than min. | Positive integer (default to 10) | |
targetCPUUtilizationPercentage | Target average CPU utilization (represented as a percentage of requested CPU) over all the pods. | Integer between 1 and 100(default to 50) | |
mfpserver.pdb | enabled | Specify whether to enable/disable PDB. | true (default) or false |
min | minimum available pods | Positive integer (default to 1) | |
mfpserver.customConfiguration | Custom server configuration (Optional) | Provide server specific additional configuration reference to a pre-created config map | |
mfpserver | keystoreSecret | Refer the configuration section to pre-create the secret with keystores and their passwords. | |
mfpserver.resources | limits.cpu | Describes the maximum amount of CPU allowed. | Default is 2000m. See Kubernetes - meaning of CPU |
limits.memory | Describes the maximum amount of memory allowed. | Default is 4096Mi. See Kubernetes - meaning of Memory | |
requests.cpu | Describes the minimum amount of CPU required - if not specified will default to limit (if specified) or otherwise implementation-defined value. | Default is 1000m. See Kubernetes - meaning of CPU | |
requests.memory | Describes the minimum amount of memory required. If not specified, the memory amount will default to the limit (if specified) or the implementation-defined value. | Default is 2048Mi. See Kubernetes - meaning of Memory | |
MFP Push Configuration |
|||
mfppush | enabled | Flag to enable Mobile Foundation Push | true (default) or false |
repository | Docker image repository | Repository of the Mobile Foundation Push docker image | |
tag | Docker image tag | See Docker tag description | |
mfppush.replicas | The number of instances (pods) of Mobile Foundation Server that need to be created | Positive integer (Default: 3) | |
mfppush.autoscaling | enabled | Specifies whether a horizontal pod autoscaler (HPA) is deployed. Note that enabling this field disables the replicaCount field. | false (default) or true |
minReplicas | Lower limit for the number of pods that can be set by the autoscaler. | Positive integer (default to 1) | |
maxReplicas | Upper limit for the number of pods that can be set by the autoscaler. Cannot be lower than minReplicas. | Positive integer (default to 10) | |
targetCPUUtilizationPercentage | Target average CPU utilization (represented as a percentage of requested CPU) over all the pods. | Integer between 1 and 100(default to 50) | |
mfppush.pdb | enabled | Specify whether to enable/disable PDB. | true (default) or false |
min | minimum available pods | Positive integer (default to 1) | |
mfppush.customConfiguration | Custom configuration (Optional) | Provide Push specific additional configuration reference to a pre-created config map | |
mfppush | keystoresSecretName | Refer the configuration section to pre-create the secret with keystores and their passwords. | |
mfppush.resources | limits.cpu | Describes the maximum amount of CPU allowed. | Default is 2000m. See Kubernetes - meaning of CPU |
limits.memory | Describes the maximum amount of memory allowed. | Default is 4096Mi. See Kubernetes - meaning of Memory | |
requests.cpu | Describes the minimum amount of CPU required - if not specified will default to limit (if specified) or otherwise implementation-defined value. | Default is 1000m. See Kubernetes - meaning of CPU | |
requests.memory | Describes the minimum amount of memory required. If not specified, the memory amount will default to the limit (if specified) or the implementation-defined value. | Default is 2048Mi. See Kubernetes - meaning of Memory | |
MFP Liveupdate Configuration |
|||
mfpliveupdate | enabled | Flag to enable Liveupdate | false (default) or true |
mfpliveupdate.image | repository | Docker image repository | Repository of the Mobile Foundation Liveupdate docker image |
tag | Docker image tag | See Docker tag description | |
consoleSecret | A pre-created secret for login | Refer here | |
mfpliveupdate.db | type | Supported database vendor name. | DB2 (default) / MySQL / Oracle |
host | IP address or hostname of the database where Mobile Foundation Server tables need to be configured. | ||
port | Port where database is setup | ||
secret | A precreated secret which has database credentials | ||
name | Name of the Mobile Foundation Server database | ||
schema | Server db schema to be created. | If the schema already present, it will be used. Otherwise, it will be created. | |
ssl | Database connection type | Specify whether your database connection has to be http or https. Default value is false (http). Make sure that the database port is also configured for the same connection mode | |
driverPvc | Persistent Volume Claim to access the JDBC Database Driver | Specify the name of the persistent volume claim that hosts the JDBC database driver. Required if the database type selected is not DB2 | |
adminCredentialsSecret | MFPServer DB Admin Secret | If you have enabled DB initialization, then provide the secret to create database tables and schemas for Mobile Foundation components. | |
mfpliveupdate.replicas | The number of instances (pods) of Mobile Foundation Liveupdate that need to be created | Positive integer (Default: 2) | |
mfpliveupdate.autoscaling | enabled | Specifies whether a horizontal pod autoscaler (HPA) is deployed. Note that enabling this field disables the replicas field. | false (default) or true |
minReplicas | Lower limit for the number of pods that can be set by the autoscaler. | Positive integer (default to 1) | |
maxReplicas | Upper limit for the number of pods that can be set by the autoscaler. Cannot be lower than min. | Positive integer (default to 10) | |
targetCPUUtilizationPercentage | Target average CPU utilization (represented as a percentage of requested CPU) over all the pods. | Integer between 1 and 100(default to 50) | |
mfpliveupdate.pdb | enabled | Specify whether to enable/disable PDB. | true (default) or false |
min | minimum available pods | Positive integer (default to 1) | |
mfpliveupdate.customConfiguration | Custom server configuration (Optional) | Provide server specific additional configuration reference to a pre-created config map. | |
mfpliveupdate | keystoreSecret | Refer the configuration section to pre-create the secret with keystores and their passwords. | |
mfpliveupdate.resources | limits.cpu | Describes the maximum amount of CPU allowed. | Default is 1000m. See Kubernetes - meaning of CPU |
limits.memory | Describes the maximum amount of memory allowed. | Default is 2048Mi. See Kubernetes - meaning of Memory | |
requests.cpu | Describes the minimum amount of CPU required - if not specified will default to limit (if specified) or otherwise implementation-defined value. | Default is 750m. See Kubernetes - meaning of CPU | |
requests.memory | Describes the minimum amount of memory required. If not specified, the memory amount will default to the limit (if specified) or the implementation-defined value. | Default is 1024Mi. See Kubernetes - meaning of Memory | |
MFP Analytics Configuration | | | |
mfpanalytics | enabled | Flag to enable Analytics | false (default) or true |
mfpanalytics.image | repository | Docker image repository | Repository of the Mobile Foundation Operational Analytics Docker image |
| | tag | Docker image tag | See Docker tag description |
| | consoleSecret | A pre-created secret for login | Check the Prerequisites section |
mfpanalytics.replicas | | The number of instances (pods) of Mobile Foundation Operational Analytics to be created | Positive integer (default: 2) |
mfpanalytics.autoscaling | enabled | Specifies whether a horizontal pod autoscaler (HPA) is deployed. Note that enabling this field disables the replicaCount field. | false (default) or true |
| | minReplicas | Lower limit for the number of pods that can be set by the autoscaler. | Positive integer (default: 1) |
| | maxReplicas | Upper limit for the number of pods that can be set by the autoscaler. Cannot be lower than minReplicas. | Positive integer (default: 10) |
| | targetCPUUtilizationPercentage | Target average CPU utilization (represented as a percentage of requested CPU) over all the pods. | Integer between 1 and 100 (default: 50) |
mfpanalytics.shards | | Number of Elasticsearch shards for Mobile Foundation Analytics | Default: 2 |
mfpanalytics.replicasPerShard | | Number of Elasticsearch replicas to be maintained per shard for Mobile Foundation Analytics | Default: 2 |
mfpanalytics.persistence | claimName | Provide an existing PersistentVolumeClaim | nil |
| | storageClassName | Storage class of the backing PersistentVolumeClaim | nil |
| | size | Size of the data volume | 20Gi |
mfpanalytics.pdb | enabled | Specify whether to enable/disable PDB (PodDisruptionBudget). | true (default) or false |
| | min | Minimum available pods | Positive integer (default: 1) |
mfpanalytics | esrepo | Docker image repository | Repository of the Mobile Foundation Elasticsearch Docker image |
| | estag | Docker image tag | See Docker tag description |
| | esnamespace | Namespace to deploy Elasticsearch | |
| | esmasterReplicas | Master replica count | Positive integer (default: 2) |
| | esclientReplicas | Client replica count | Positive integer (default: 1) |
| | esdataReplicas | Data replica count | Positive integer (default: 2) |
mfpanalytics.esdataresources | limits.cpu | Describes the maximum amount of CPU allowed. | Default is 2000m. See Kubernetes - meaning of CPU |
| | limits.memory | Describes the maximum amount of memory allowed. | Default is 10Gi. See Kubernetes - meaning of Memory |
| | requests.cpu | Describes the minimum amount of CPU required; if not specified, defaults to the limit (if specified) or an implementation-defined value. | Default is 1000m. See Kubernetes - meaning of CPU |
| | requests.memory | Describes the minimum amount of memory required; if not specified, defaults to the limit (if specified) or an implementation-defined value. | Default is 2048Mi. See Kubernetes - meaning of Memory |
mfpanalytics.customConfiguration | Custom configuration (Optional) | Provide Analytics-specific additional configuration as a reference to a pre-created config map | |
mfpanalytics | keystoreSecret | Refer to the configuration section to pre-create the secret with keystores and their passwords. | |
mfpanalytics.resources | limits.cpu | Describes the maximum amount of CPU allowed. | Default is 2000m. See Kubernetes - meaning of CPU |
| | limits.memory | Describes the maximum amount of memory allowed. | Default is 4096Mi. See Kubernetes - meaning of Memory |
| | requests.cpu | Describes the minimum amount of CPU required; if not specified, defaults to the limit (if specified) or an implementation-defined value. | Default is 1000m. See Kubernetes - meaning of CPU |
| | requests.memory | Describes the minimum amount of memory required; if not specified, defaults to the limit (if specified) or an implementation-defined value. | Default is 2048Mi. See Kubernetes - meaning of Memory |
MFP Analytics Receiver Configuration | | | |
mfpanalytics_recvr | enabled | Flag to enable the Analytics Receiver | false (default) or true |
mfpanalytics_recvr.image | repository | Docker image repository | Repository of the Mobile Foundation Operational Analytics Receiver Docker image |
| | tag | Docker image tag | See Docker tag description |
mfpanalytics_recvr.replicas | | The number of instances (pods) of Mobile Foundation Operational Analytics Receiver to be created | Positive integer (default: 2) |
mfpanalytics_recvr.autoscaling | enabled | Specifies whether a horizontal pod autoscaler (HPA) is deployed. Note that enabling this field disables the replicaCount field. | false (default) or true |
| | minReplicas | Lower limit for the number of pods that can be set by the autoscaler. | Positive integer (default: 1) |
| | maxReplicas | Upper limit for the number of pods that can be set by the autoscaler. Cannot be lower than minReplicas. | Positive integer (default: 10) |
| | targetCPUUtilizationPercentage | Target average CPU utilization (represented as a percentage of requested CPU) over all the pods. | Integer between 1 and 100 (default: 50) |
mfpanalytics_recvr.pdb | enabled | Specify whether to enable/disable PDB (PodDisruptionBudget). | true (default) or false |
| | min | Minimum available pods | Positive integer (default: 1) |
mfpanalytics_recvr.analyticsRecvrSecret | Analytics Receiver Secret (Optional) | Provide a pre-created Analytics Receiver secret | |
mfpanalytics_recvr.customConfiguration | Custom configuration (Optional) | Provide Analytics-specific additional configuration as a reference to a pre-created config map | |
mfpanalytics_recvr | keystoreSecret | Refer to the configuration section to pre-create the secret with keystores and their passwords. | |
mfpanalytics_recvr.resources | limits.cpu | Describes the maximum amount of CPU allowed. | Default is 2000m. See Kubernetes - meaning of CPU |
| | limits.memory | Describes the maximum amount of memory allowed. | Default is 4096Mi. See Kubernetes - meaning of Memory |
| | requests.cpu | Describes the minimum amount of CPU required; if not specified, defaults to the limit (if specified) or an implementation-defined value. | Default is 1000m. See Kubernetes - meaning of CPU |
| | requests.memory | Describes the minimum amount of memory required; if not specified, defaults to the limit (if specified) or an implementation-defined value. | Default is 2048Mi. See Kubernetes - meaning of Memory |
MFP Application Center Configuration | | | |
mfpappcenter | enabled | Flag to enable Application Center | false (default) or true |
mfpappcenter.image | repository | Docker image repository | Repository of the Mobile Foundation Application Center Docker image |
| | tag | Docker image tag | See Docker tag description |
| | consoleSecret | A pre-created secret for login | Check the Prerequisites section |
mfpappcenter.db | host | IP address or hostname of the database where the Application Center database needs to be configured | |
| | port | Port of the database | |
| | name | Name of the database to be used | The database has to be pre-created. |
| | secret | A pre-created secret which has database credentials | |
| | schema | Application Center database schema to be created. | If the schema already exists, it will be used. If not, one will be created. |
| | ssl | Database connection type | Specify whether your database connection has to be http or https. Default value is false (http). Make sure that the database port is also configured for the same connection mode. |
| | driverPvc | Persistent Volume Claim to access the JDBC database driver | Specify the name of the persistent volume claim that hosts the JDBC database driver. Required if the database type selected is not DB2. |
| | adminCredentialsSecret | Application Center DB admin secret | If you have enabled DB initialization, then provide the secret to create database tables and schemas for Mobile Foundation components |
mfpappcenter.autoscaling | enabled | Specifies whether a horizontal pod autoscaler (HPA) is deployed. Note that enabling this field disables the replicaCount field. | false (default) or true |
| | minReplicas | Lower limit for the number of pods that can be set by the autoscaler. | Positive integer (default: 1) |
| | maxReplicas | Upper limit for the number of pods that can be set by the autoscaler. Cannot be lower than minReplicas. | Positive integer (default: 10) |
| | targetCPUUtilizationPercentage | Target average CPU utilization (represented as a percentage of requested CPU) over all the pods. | Integer between 1 and 100 (default: 50) |
mfpappcenter.pdb | enabled | Specify whether to enable/disable PDB (PodDisruptionBudget). | true (default) or false |
| | min | Minimum available pods | Positive integer (default: 1) |
mfpappcenter.customConfiguration | Custom configuration (Optional) | Provide Application Center-specific additional configuration as a reference to a pre-created config map | |
mfpappcenter | keystoreSecret | Refer to the configuration section to pre-create the secret with keystores and their passwords. | |
mfpappcenter.resources | limits.cpu | Describes the maximum amount of CPU allowed. | Default is 1000m. See Kubernetes - meaning of CPU |
| | limits.memory | Describes the maximum amount of memory allowed. | Default is 1024Mi. See Kubernetes - meaning of Memory |
| | requests.cpu | Describes the minimum amount of CPU required; if not specified, defaults to the limit (if specified) or an implementation-defined value. | Default is 1000m. See Kubernetes - meaning of CPU |
| | requests.memory | Describes the minimum amount of memory required; if not specified, defaults to the limit (if specified) or an implementation-defined value. | Default is 1024Mi. See Kubernetes - meaning of Memory |
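To tie the parameters above together, here is a minimal, illustrative values.yaml fragment that enables Analytics with the defaults listed in the table. This is a sketch only; confirm the exact key layout against the values.yaml shipped inside the chart archive before using it:

```yaml
mfpanalytics:
  enabled: true
  replicas: 2
  autoscaling:
    enabled: false
  shards: 2
  replicasPerShard: 2
  resources:
    limits:
      cpu: 2000m
      memory: 4096Mi
    requests:
      cpu: 1000m
      memory: 2048Mi
```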
For the tutorial on analyzing logs using Kibana, see here.
Installing Mobile Foundation Helm Chart
Before you begin the installation, ensure that you have covered all the Mandatory sections under [Install and configure IBM Mobile Foundation Helm Charts](#configure-install-mf-helmcharts).
Follow the steps below to install and configure IBM Mobile Foundation on an IBM Cloud Kubernetes Cluster.
- To configure the Kubernetes Cluster, execute the command below:
ibmcloud cs cluster-config <iks-cluster-name>
- Get the default helm chart values using the following command.
helm inspect values <ibm-mobilefoundation-prod-<version>.tgz> > values.yaml
Example:
helm inspect values ibm-mobilefoundation-prod-8.1.0.tgz > values.yaml
- Modify the values.yaml to add appropriate values before deploying the helm chart. Make sure database details, ingress hostname, secrets, etc. are added, then save the values.yaml.
Refer to the section Environment variables for more details.
- To deploy the helm chart run the following command:
helm install -n <helm-release-name> -f values.yaml <ibm-mobilefoundation-prod-<version>.tgz>
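Before running helm install, it can help to scan values.yaml for angle-bracket placeholders that were never replaced. The sketch below is illustrative only: the file path and the keys in the sample fragment are hypothetical, and the grep pattern is a simple heuristic, not an exhaustive check.

```shell
# Write a sample values.yaml fragment with one placeholder left in (illustrative keys)
cat > /tmp/values-check.yaml <<'EOF'
mfpserver:
  db:
    host: 192.0.2.10
    port: <db-port>
EOF

# List any unreplaced <placeholder> markers; a fully edited values.yaml should match nothing
if grep -nE '<[a-z-]+>' /tmp/values-check.yaml; then
  echo "placeholders remain - fix values.yaml before helm install"
fi
```

Running this against the sample file flags the `<db-port>` line, which is exactly the kind of leftover that causes the chart deployment to fail later.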
Verifying the Installation
After you have installed and configured the Mobile Foundation components, you can verify your installation and the status of the deployed pods by using IBM Cloud CLI, Kubernetes CLI and helm commands.
See the CLI Command Reference in IBM Cloud CLI documentation and Helm CLI from Helm documentation.
From the IBM Cloud Kubernetes Cluster page on IBM Cloud Portal, one can use the Kubernetes Dashboard button to open the Kubernetes console to manage the cluster artifacts.
Accessing console
After successful installation, you can access the IBM MobileFirst Operational Console using <protocol>://<ingress_host>/mfpconsole.
The IBM MobileFirst Analytics console can be accessed using <protocol>://<ingress_host>/analytics/console.
The protocol can be http or https.
For more information on accessing the service via Ingress, see here.
Follow the steps below to access the console:
- Go to the IBM Cloud Dashboard.
- Choose the Kubernetes Cluster on which Analytics/Server/AppCenter has been deployed and open the Overview page.
- Locate the Ingress subdomain for the ingress hostname and access the consoles as follows.
- Access the IBM Mobile Foundation Operational Console using:
<protocol>://<ingress-hostname>/mfpconsole
- Access the IBM Mobile Foundation Analytics Console using:
<protocol>://<ingress-hostname>/analytics/console
- Access the IBM Mobile Foundation Application Center Console using:
<protocol>://<ingress-hostname>/appcenterconsole
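Putting these pieces together, the three console URLs can be composed from the Ingress subdomain in a small shell sketch. The hostname below is a placeholder, not a real cluster; substitute the Ingress subdomain noted from your cluster's Overview page:

```shell
# Hypothetical Ingress subdomain from the cluster's Overview page
INGRESS_HOST="mycluster.us-south.containers.appdomain.cloud"
PROTOCOL="https"

# Console endpoints described above
echo "${PROTOCOL}://${INGRESS_HOST}/mfpconsole"
echo "${PROTOCOL}://${INGRESS_HOST}/analytics/console"
echo "${PROTOCOL}://${INGRESS_HOST}/appcenterconsole"
```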
- The SSL services support is disabled by default on the nginx ingress. You may notice connectivity issues while accessing the console through https. Follow the steps below to enable SSL services on the ingress:
- From IBM Cloud Kubernetes Cluster page, launch the Kubernetes dashboard
- On the left-hand panel, click the option Ingresses
- Select the Ingress name
- Click on the Edit button on your top right
- Modify the yaml file and add the ssl-services annotation. Example:
"annotations": {
  "ingress.bluemix.net/ssl-services": "ssl-service=my_service_name1;ssl-service=my_service_name2",
  .....
}
- Click Update
Note: The port 9600 is exposed internally in the Kubernetes service and is used by the Analytics instances as the transport port.
Sample application
See the tutorials to deploy the sample adapter and to run the sample application on an IBM MobileFirst Server running on IBM Cloud Kubernetes Cluster.
Deploying Elasticsearch helm chart for Mobile Foundation Analytics
Starting from iFix IF202006151151, the Elasticsearch helm chart (ibm-es-prod) is a prerequisite for deploying Mobile Foundation Analytics on an IKS cluster.
Prerequisites
A pre-created PersistentVolume and PersistentVolumeClaim, or a StorageClass, must be available.
Installing Elasticsearch Helm Chart for Analytics
-
Create ER Secret:
kubectl create secret docker-registry cp-docker-secret --docker-server=cp.icr.io --docker-username=<apikey> --docker-password=<password>
Update the command with the following details:
- Replace <apikey> and <password> with your Entitled Registry API key and password.
- Update pullSecret in values.yaml, which can be extracted from the helm chart archive.
-
Configure a persistent volume (PV):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    name: mfanalyticspv
  name: mfanalyticspv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: <nfs-mount-volume-path>
    server: <nfs-server-hostname-or-ip>
EOF
-
Configure a persistent volume claim (PVC):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mfanalyticsvolclaim
  namespace: <projectname-or-namespace>
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  selector:
    matchLabels:
      name: mfanalyticspv
  volumeName: mfanalyticspv
EOF
-
For the Elasticsearch deployment, specify either claimName (PVC) or an existing storageClassName in the values.yaml of the ibm-es-prod helm chart:
persistence:
  storageClassName: ""
  claimName: ""
To use claimName, you must have already configured the PV and PVC.
Note: Make sure you add the <nfs-server-hostname-or-ip> and <nfs-mount-volume-path> entries in the PersistentVolume definition above, and ensure the PVC is in the "Bound" state. -
Run the following command to deploy Elasticsearch helm chart:
helm install -f values.yaml <ibm-es-prod-<version>.tgz>
After the deployment is complete, Elasticsearch continues to run as an internal service and can be used by Mobile Foundation Analytics. While deploying Mobile Foundation Analytics, update esnamespace in the values.yaml of the ibm-mobilefoundation-prod helm chart with the project name where Elasticsearch is deployed.
Backup and recovery of Mobile Foundation Analytics data
The Mobile Foundation Analytics data is available as part of a Kubernetes PersistentVolume or PersistentVolumeClaim. You may be using one of the volume plugins that Kubernetes offers; backup and restore depend on the volume plugin that you use, and you can back up or restore the volume using various tools. Kubernetes provides VolumeSnapshot, VolumeSnapshotContent, and restore options, so you may take a copy of a volume that has been provisioned by an administrator. Use the sample yaml files to test the snapshot feature. You can also leverage other tools to take a backup of the volume and restore it.
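As an illustration of the VolumeSnapshot option mentioned above, a minimal manifest might look like the following. This is a sketch only: it assumes your cluster uses a CSI driver with snapshot support, and the VolumeSnapshotClass name is a placeholder; the claim name matches the mfanalyticsvolclaim PVC created earlier in this guide:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mfanalytics-snapshot
spec:
  volumeSnapshotClassName: <csi-snapshot-class>
  source:
    persistentVolumeClaimName: mfanalyticsvolclaim
```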
- IBM Cloud Automation Manager: Leverage the capabilities of IBM Cloud Automation Manager and its Backup/Restore, High Availability (HA), and Disaster Recovery (DR) strategies for CAM instances.
- Portworx: A storage solution designed for applications that are deployed as containers or via container orchestrators such as Kubernetes.
- Stash by AppsCode: Using Stash you can back up the volumes in Kubernetes.
Upgrading Helm Charts and Releases
Please refer to Upgrading bundled products for instructions on how to upgrade helm charts/releases.
Sample scenarios for Helm release upgrades
- To upgrade a helm release with changes to values in values.yaml, use the helm upgrade command with the --set flag. You can specify the --set flag multiple times; priority is given to the rightmost --set specified on the command line.
helm upgrade --set <name>=<value> --set <name>=<value> <existing-helm-release-name> <path of new helm chart>
- To upgrade a helm release by providing values in a file, use the helm upgrade command with the -f flag. You can use the --values or -f flag multiple times; priority is given to the rightmost file specified on the command line. In the following example, if both myvalues.yaml and override.yaml contain a key called Test, the value set in override.yaml takes precedence.
helm upgrade -f myvalues.yaml -f override.yaml <existing-helm-release-name> <path of new helm chart>
- To upgrade helm release by reusing the values from the last release and overriding some of them, a command such as below can be used:
helm upgrade --reuse-values --set <name>=<value> --set <name>=<value> <existing-helm-release-name> <path of new helm chart>
Uninstall
To uninstall MobileFirst Server and MobileFirst Analytics, use the Helm CLI. Use the following command to completely delete the installed charts and the associated deployments:
helm delete --purge <release_name>
release_name is the deployed release name of the Helm Chart.
Troubleshooting
This section guides you in identifying and resolving the likely error scenarios you might encounter while deploying Mobile Foundation.
- Helm install failed.
Error: could not find a ready tiller pod
- Run the following commands as-is and retry the helm install
helm init
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade
- Unable to pull images while deploying the Helm chart -
Failed to pull image, Error: ErrImagePull
- Make sure the image pullSecret has been added to the values.yaml before helm deployment. If image pull secret doesn’t exist, create a pull secret and assign it to
image.pullSecret
in the values.yaml file.
Example for creating a pull secret:
kubectl create secret docker-registry iks-secret-name --docker-server=us.icr.io --docker-username=iamapikey --docker-password=Your_IBM_Cloud_API_key --docker-email=your_email_id
Note: Keep the value of --docker-username=iamapikey as it is if you are using the IBM Cloud API key for authentication.
- Connectivity issues while accessing the console through ingress
-
To resolve the issue, launch the Kubernetes dashboard and select the option 'Ingresses'. Edit the Ingress yaml and add the Ingress host details as below.
Example :
"spec": { "tls": [ { "hosts": [ “ingress_host_name” ], "secretName": "ingress-secret-name" } ], "rules": [ { …. ….
Inclusive terminology note: The Mobile First Platform team is making changes to support the IBM® initiative to replace racially biased and other discriminatory language in our code and content with more inclusive language. While IBM values the use of inclusive language, terms that are outside of IBM's direct influence are sometimes required for the sake of maintaining user understanding. As other industry leaders join IBM in embracing the use of inclusive language, IBM will continue to update the documentation to reflect those changes.