FME Flow: 2024.1
Deploying to Google Cloud Using Google Kubernetes Engine and Cloud Filestore
The following steps explain how to deploy FME Flow on Kubernetes, with multiple FME Flow Core nodes, to Google Cloud using Google Kubernetes Engine (GKE) and Cloud Filestore for storage.
- Launch the Cluster
- Navigate to the Google Kubernetes Engine section of the Google Cloud Platform console.
- Select Create Cluster to deploy a new Kubernetes Cluster.
- If not already selected, select Switch to Standard cluster at the top right, and set the following parameters:
- Name: Name of the cluster.
- Location type: For test purposes, select Zonal.
- Zone: Select the zone closest to you or your data.
- Master version: Keep the default value.
- Node pool: For each FME Flow Core you want to run, you need approximately 8GB of memory. For a single-Core deployment, we recommend 2 vCPUs, 8GB RAM, and 2 nodes. For a two-Core deployment, we recommend 4 vCPUs, 16GB RAM, and 3 nodes. (An example gcloud command for the single-Core sizing follows this list.)
Note: Running more than one Core container requires a minimum of two nodes, due to a constraint in the Helm chart that prevents Core containers from residing on the same node. To ensure fault tolerance, we recommend three nodes: if a node is lost, two Cores can still run on the remaining two nodes.
- Click Create.
- Alternatively, to deploy an Autopilot cluster, navigate to the Google Kubernetes Engine section of the Google Cloud Platform console.
- Select Create Cluster to deploy a new Kubernetes Cluster.
- If not already selected, select Switch to Autopilot cluster at the top right, and set the following parameters:
- Name: Name of the cluster.
- Region: Select the region closest to you or your data.
- Networking: Choose the correct network settings for your environment. You must ensure that the cluster is accessible to any other services you may use, such as Filestore or Cloud SQL.
- Click Create.
- Connect to the Cluster
- Click Connect beside the Kubernetes cluster you just launched.
- Copy the "Command-line access" command and run it in Google Cloud Shell, or execute it in your local terminal.
- To check if you are connected to the cluster, run:
- Install the NGINX ingress controller:
- Set Up Shared Storage for the FME Flow System Share
- Go to "Filestore" in the Google Cloud Platform console.
- Click Create Instance, and set the following:
- Instance ID: Identifier for the instance.
- Instance tier: Standard
- Location: The same region and zone to which you deployed the Kubernetes cluster.
- Fileshare name: Specify a name.
- Fileshare capacity: 1TB (default)
- Click Create.
- To deploy a storage class in the cluster that uses the new Filestore instance as an NFS server, use the nfs-subdir-external-provisioner Helm chart. First, go to the details of your Filestore instance and note the IP address and file share name.
- Run:
- Deploy FME Flow
- Add the Safe Software Helm repository:
- Get the IP address of the Ingress Controller:
- Create a namespace in which to deploy. Run:
- Create a values.yaml file that will pass user-supplied parameter values into the Helm chart.
- (Optional) Update parameters in values.yaml.
- Install FME Flow.
- To access FME Flow after it deploys, open the external IP address in your browser.
Note: If you are using the external IP address, and not a DNS, to access the deployment, you must update the URLs for each of the FME Flow Web Services to that address. To update, see Services.
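As an alternative to the console steps under Launch the Cluster, above, a Standard cluster matching the single-Core sizing can also be created from the command line. The following is a minimal sketch: the cluster name and zone are placeholders, and the machine type (e2-standard-2, which provides 2 vCPUs and 8GB RAM) is an assumption to adjust for your workload.
gcloud container clusters create fmeflow-cluster \
  --zone us-central1-a \
  --num-nodes 2 \
  --machine-type e2-standard-2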



Run the following commands in Cloud Shell (recommended). Alternatively, configure your local machine to run kubectl and helm commands to deploy resources to the cluster.
kubectl get nodes
You should see all nodes in the cluster.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-<version>/deploy/static/provider/cloud/deploy.yaml
Replace <version> with the most recent version listed in the corresponding script in the NGINX Ingress Controller Installation Guide, and incorporate any other changes from the updated script.
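For example, if v1.10.1 were the most recent controller version (the current version will likely differ, so check the installation guide first):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml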

Using Cloud Filestore, storage for the FME Flow System Share is accessible across all nodes. This procedure creates a Cloud Filestore instance to host an NFS server for the Kubernetes cluster to use.
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=<ip_address> \
--set nfs.path=<fileshare_name>
Replace <ip_address> and <fileshare_name> with the values from your Filestore instance. For example:
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=10.135.210.250 \
--set nfs.path=/fmeflow-storage
This command creates a storage class in the Kubernetes cluster called "nfs-client" that uses this Filestore NFS server for volumes in Kubernetes.
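To confirm the storage class was created, list the storage classes in the cluster; "nfs-client" should appear alongside the default GKE classes:
kubectl get storageclass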

helm repo add safesoftware https://safesoftware.github.io/helm-charts/
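If the repository was added previously, refresh it so the latest chart versions are available. You can also list the published FME Flow chart versions (the chart name fmeflow matches the one used later in this procedure):
helm repo update
helm search repo safesoftware/fmeflow --versions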
kubectl -n ingress-nginx get svc
Note the "EXTERNAL-IP" of the "ingress-nginx-controller". This is the ingress to the cluster. In a production environment, set up a DNS that points to this IP address and use that as the hostname when deploying FME Flow. In this procedure, we access FME Flow directly using the IP.
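If you prefer to capture only the IP address, a jsonpath query such as the following should work once the load balancer IP has been assigned:
kubectl -n ingress-nginx get svc ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'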
kubectl create namespace <namespace_name>
For example:
kubectl create namespace fmeflow
To get the default Helm chart values file, run:
helm show values <chart>
For example:
helm show values safesoftware/fmeflow
To write the values to a file:
helm show values safesoftware/fmeflow >> values.yaml
For more information on value files, see the Helm Docs.
A current list of supported parameters for FME Flow can be found on GitHub.
The following are parameters you may want to change in the values.yaml file:

Parameter | Example or Possible Value | Description
---|---|---
fmeflow.image.tag | 2024.0 | Set to the FME Flow major release version you want to deploy, such as 2024.0 or 2024.1.
fmeflow.engines[].name | "standard-group" | The name of the engine group. This can be changed, particularly if creating multiple engine groups.
fmeflow.engines[].engines | 2 | Controls the number of engine pods to start.
fmeflow.engines[].type | STANDARD or DYNAMIC | Controls the type of engine to start.
fmeflow.healthcheck.liveness.initialDelaySeconds | 60 | For Autopilot clusters, set this value to 900, as FME Flow pods take longer to come online when using Autopilot. This ensures the pods have enough time to start without premature health checks.
fmeflow.healthcheck.readiness.initialDelaySeconds | 60 | For Autopilot clusters, set this value to 900, as FME Flow pods take longer to come online when using Autopilot. This ensures the pods have enough time to start without premature health checks.
deployment.hostname | localhost | If a DNS has been configured to point to the external IP address, set to that value. If using the external IP address only, leave as localhost and set deployment.useHostnameIngress to false.
deployment.numCores | 2 | Starts two FME Flow Core pods for fault tolerance and load balancing.
deployment.useHostnameIngress | true or false | If DNS is set up for the ingress controller, specify the DNS name for deployment.hostname and leave this parameter set to true. If not, specify false.
storage.fmeflow.class | nfs-client | Uses the Cloud Filestore NFS server set up previously.
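To illustrate, a values.yaml for a two-Core deployment accessed directly by IP address might override the parameters above as follows. This is a sketch based on the parameter names in the table; confirm the exact structure against the defaults returned by helm show values.
fmeflow:
  image:
    tag: "2024.1"
  engines:
    - name: "standard-group"
      engines: 2
      type: "STANDARD"
deployment:
  hostname: localhost
  numCores: 2
  useHostnameIngress: false
storage:
  fmeflow:
    class: nfs-client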
The following resource requests have been configured to comply with minimum resource requests allowed by Autopilot.
For more information, see Resource requests in Autopilot.

engines:
  groups:
    - name: "standard-group"
      resources:
        requests:
          memory: 512Mi
          cpu: 250m
resources:
  core:
    requests:
      memory: 1.5Gi
      cpu: 250m
  web:
    requests:
      memory: 1Gi
      cpu: 250m
    limits:
      memory:
      cpu:
  queue:
    requests:
      memory: 512Mi
      cpu: 500m
  websocket:
    requests:
      memory: 512Mi
      cpu: 250m
postgresql:
  primary:
    persistence:
      existingClaim: fmeserver-postgresql
    resources:
      requests:
        memory: 512Mi
        cpu: 250m
helm install <name> <chart> -f values.yaml -n <namespace>
For example:
helm install fmeflow safesoftware/fmeflow -f values.yaml -n fmeflow
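Deployment can take several minutes. To monitor progress, watch the pods in the namespace until they report Running and Ready (using the namespace created above):
kubectl get pods -n fmeflow -w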
If you make changes to the values.yaml file after FME Flow is installed (such as to scale FME Flow engines), use the helm upgrade command:
helm upgrade <name> <chart> -f values.yaml -n <namespace>
For example:
helm upgrade fmeflow safesoftware/fmeflow -f values.yaml -n fmeflow
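As a sketch of a targeted change, the number of engines in the first engine group could also be overridden at upgrade time with --set; the parameter path follows the table above and assumes the standard group is the first entry in the engines list:
helm upgrade fmeflow safesoftware/fmeflow -f values.yaml -n fmeflow --set "fmeflow.engines[0].engines=4"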