FluxCD GitOps Made Simple: A Follow-up (bis)
In my previous posts, "FluxCD GitOps Made Simple: My Journey to Automated Kubernetes Deployments" and its follow-up, I documented my journey from a basic FluxCD setup to a layered configuration approach using Git submodules and ConfigMap generation. However, as I continued to scale and improve my GitOps implementation, I ran into a few challenges with Kustomization validation, ConfigMap dependency management, and the complexity of maintaining multiple environment configurations while maximizing the automation I was aiming for.
More precisely, this next evolution of my FluxCD setup was driven by the need to solve three core challenges: Kustomization validation errors that were blocking proper deployments (although, at the time, they had limited effect on my deployments), ConfigMap dependency management that required careful orchestration between component updates and FluxCD reconciliation (which wasn't working properly in all cases), and monitoring and health checks to keep the entire system in a consistent state (not fully achieved, but much better than before).
My current setup uses a configMapGenerator implementation that automatically creates ConfigMaps from component-specific values files stored in Git submodules of my main FluxCD repository. Each submodule points to its component-specific Git repository, which is where those values files actually live. This approach ensures that all configuration changes are version-controlled and follow GitOps principles, while the valuesFrom feature of HelmRelease resources enables layered configuration management across multiple environments. The key breakthrough came when I restructured my repository to separate ConfigMap generation into dedicated Kustomization resources with proper health checks, ensuring that HelmReleases only deploy after their dependent ConfigMaps have been successfully created and are available.
This post will walk through the specific changes made to my repository structure, the implementation of proper dependency management, and the lessons learned about Kustomization's validation requirements. I'll also share the monitoring strategies I've implemented to track ConfigMap updates and ensure smooth deployments across both local and sandbox environments.
Note: my personal objective with these tools is always the same: maintain a basic implementation of each component in use, allowing maximum flexibility and reusability while keeping the implementation simple and easy to understand. In other words, use the tools the way they were designed to be used! (or at least try to)
The Problem: Kustomization Validation and Dependency Chaos
As my FluxCD implementation grew from a simple setup to a multi-component, multi-environment system, I encountered several issues that threatened the reliability of my GitOps workflow:
1. ConfigMap changes were not being detected
This was THE major issue, preventing the system from triggering the deployment of the HelmRelease. It worked properly in some scenarios, but not in others.
2. Kustomization Validation Errors
The most frustrating challenge was Kustomization's strict validation rules. When I initially tried to include configMapGenerator directly in my main environment Kustomization files, I encountered validation errors that prevented deployments. The error messages were often cryptic, pointing to issues with file paths, namespace configurations, or resource references that weren't immediately obvious to me.
3. ConfigMap Dependency Race Conditions
Even when ConfigMaps were successfully generated, HelmReleases would sometimes attempt to deploy before their dependent ConfigMaps were fully available in the cluster. This created race conditions where deployments would fail with "ConfigMap not found" errors, requiring manual intervention and reconciliation.
4. Lack of Visibility between ConfigMap generation and HelmRelease deployment
Without proper monitoring and health checks, it was difficult to determine whether ConfigMap generation had succeeded or whether dependencies were properly satisfied. This lack of visibility made troubleshooting challenging and made deployments far more unpredictable and unreliable.
The Solution: Restructured Repository with Dedicated Config Management
To address these challenges, I completely restructured my FluxCD repository to separate concerns and implement proper dependency management. The key insight was to create dedicated Kustomization resources for ConfigMap generation with explicit health checks and dependency chains.
Repository Structure Evolution
My current repository structure now follows a clear separation of concerns:
fluxcdboucio/
├── .gitmodules # Git submodule definitions
├── clusters/
│ ├── base/ # Base configurations
│ │ ├── apps/examples/ # Base HelmRelease definitions
│ │ └── infrastructure/ # Base infrastructure components
│ ├── components/ # Git submodules for each component
│ │ ├── static-website/ # Static website component, a submodule pointing to the actual static-website Git repository
│ │ │ ├── base.values.yaml # Base configuration
│ │ │ ├── lcl.values.yaml # Local environment values
│ │ │ └── snbx.values.yaml # Sandbox environment values
│ │ ├── api-java/ # Java API component...
│ │ ├── chatbot-ui/ # Chatbot UI component...
│ │ └── ... # Other components
│ ├── local/ # Local environment
│ │ ├── config/ # ConfigMap generation
│ │ │ └── kustomization.yaml # ConfigMap generators
│ │ ├── apps/examples/ # Local-specific HelmRelease patches
│ │ ├── infrastructure/ # Local infrastructure patches
│ │ ├── flux-system/ # FluxCD system resources
│ │ │ ├── config-kustomization.yaml # Config management
│ │ │ ├── apps-kustomization.yaml # Apps management
│ │ │ └── infra-kustomization.yaml # Infrastructure management
│ │ └── kustomization.yaml # Main local kustomization
│ └── sandbox/ # Sandbox environment (similar structure)
└── README.md
The folder structure is quite simple and straightforward, and it allows me to keep the configuration of each component in a separate Git repository while still using the same FluxCD repository for all the components. The main FluxCD configuration for a given environment lives in that environment's flux-system directory, for example clusters/local/flux-system/ for the local environment; it first triggers the Kustomization that generates the ConfigMaps, then reconciles the apps and infrastructure components via their HelmRelease definitions. The same goes for the sandbox environment, with the clusters/sandbox/flux-system/ directory.
ConfigMap Generation Strategy
The core of my solution is the dedicated ConfigMap generation in the config/ directory.
Here's how it works.
Terminology: base = default values shared across environments; level = environment-specific overrides (e.g., local, sandbox); inflight = near-term or runtime overrides applied during deployment.
# clusters/local/config/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: static-web-base-values
    namespace: default
    files:
      - base.values.yaml=../../components/static-website/base.values.yaml
  - name: static-web-level-values
    namespace: default
    files:
      - lcl.values.yaml=../../components/static-website/lcl.values.yaml
  # ... similar entries for all components
generatorOptions:
  disableNameSuffixHash: true
commonLabels:
  config-managed-by: flux
In effect, each ConfigMap is created from a sequence of values YAML files that live in the component's own Git repository, which lets the developer of that component maintain their content. First, the base values provide the common defaults for the component; then the "level" values provide either the local or the sandbox overrides. Finally, the values defined in the HelmRelease itself are merged on top and take precedence over the values from the ConfigMaps. Those HelmRelease values are usually maintained by FluxCD itself and, for the most part, are used to automatically update the component's image tags.
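To make the layering concrete, here is a minimal sketch of what a HelmRelease consuming those generated ConfigMaps can look like. Only the ConfigMap names and keys match the generator above; the chart name, HelmRepository, and image tag are assumptions for illustration, not my actual manifests.
# Hypothetical HelmRelease sketch (chart and source names are assumed)
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: static-web-example
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: static-web                # assumed chart name
      sourceRef:
        kind: HelmRepository
        name: example-charts           # assumed source name
  # valuesFrom entries are merged in order: base defaults first, then level overrides
  valuesFrom:
    - kind: ConfigMap
      name: static-web-base-values
      valuesKey: base.values.yaml      # key created by the configMapGenerator
    - kind: ConfigMap
      name: static-web-level-values
      valuesKey: lcl.values.yaml
  # inline values are merged last and take precedence (the "inflight" layer),
  # typically maintained by Flux image automation for image tags
  values:
    image:
      tag: "1.0.0"                     # placeholder tag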
Dependency Management with Health Checks
An important item to get right was implementing proper dependency management through FluxCD's dependsOn feature, combined with comprehensive health checks:
# clusters/local/flux-system/config-kustomization.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: local-config
  namespace: flux-system
spec:
  interval: 1m
  path: "./clusters/local/config"
  sourceRef:
    kind: GitRepository
    name: flux-system
  prune: true
  timeout: 5m
  force: true # Forces regeneration when component files change
  # Health checks to ensure ConfigMaps are created successfully
  healthChecks:
    - apiVersion: v1
      kind: ConfigMap
      name: static-web-base-values
      namespace: default
    - apiVersion: v1
      kind: ConfigMap
      name: static-web-level-values
      namespace: default
    # ... health checks for all ConfigMaps
This was key, as it ensures that the ConfigMaps are created before the application is deployed. Without it, the application would in some cases be deployed before its ConfigMaps existed and would fail to start. Before this change, deployments would sometimes work and sometimes fail (and I still can't explain why!).
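One way to gain the visibility mentioned earlier is Flux's notification controller, which can push reconciliation failures to a chat channel. Here is a minimal sketch, assuming a Slack webhook stored in a secret; the provider, channel, and secret names are illustrative, not part of my actual setup.
# Hypothetical alerting sketch using Flux's notification controller
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Provider
metadata:
  name: slack
  namespace: flux-system
spec:
  type: slack
  channel: gitops-alerts               # assumed channel
  secretRef:
    name: slack-webhook-url            # assumed secret holding the webhook URL
---
apiVersion: notification.toolkit.fluxcd.io/v1beta3
kind: Alert
metadata:
  name: config-alerts
  namespace: flux-system
spec:
  providerRef:
    name: slack
  eventSeverity: error                 # only surface failures
  eventSources:
    - kind: Kustomization
      name: local-config               # the ConfigMap-generating Kustomization
    - kind: Kustomization
      name: local-apps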
Apps Dependency on Config
Applications are configured to depend on the ConfigMap generation, ensuring proper deployment order:
# clusters/local/flux-system/apps-kustomization.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: local-apps
  namespace: flux-system
spec:
  interval: 1m
  path: "./clusters/local/apps"
  sourceRef:
    kind: GitRepository
    name: flux-system
  prune: true
  timeout: 10m
  # THIS IS THE KEY - Apps depend on config
  dependsOn:
    - name: local-config
  # Post-build variable substitution for dynamic values
  postBuild:
    substitute:
      CONFIG_TIMESTAMP: "$(date +%Y%m%d-%H%M%S)"
    substituteFrom:
      - kind: ConfigMap
        name: static-web-base-values
        optional: true
      # ... other ConfigMap references
The dependency was working well, but one trick was required: the CONFIG_TIMESTAMP variable substitution. It forces Flux to detect ConfigMap changes after the initial creation, when only the values files have been updated, ensuring reliable reconciliation and triggering the HelmRelease deployments.
Why: it forces a detectable content change so Flux reconciles even when only the values update. Trade-off: it introduces a synthetic diff and mild churn, which is acceptable for the gain in reliability.
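For completeness, here is a hypothetical illustration of how such a postBuild variable can be consumed by a manifest under clusters/local/apps/; the resource name and annotation key are made up. Flux's post-build substitution replaces ${CONFIG_TIMESTAMP} in the built manifests at apply time, producing the content diff described above.
# Hypothetical example: any manifest in the Kustomization's path can reference
# the variable; postBuild.substitute replaces ${CONFIG_TIMESTAMP} at apply time
apiVersion: v1
kind: ConfigMap
metadata:
  name: deploy-marker                  # made-up name, for illustration only
  namespace: default
  annotations:
    example.org/config-timestamp: "${CONFIG_TIMESTAMP}"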
Specific Configuration Examples
Here are some specific examples from my implementation that demonstrate the layered configuration approach and will hopefully help you reproduce it if you need to.
Base Configuration (Common Across Environments)
# clusters/components/static-website/base.values.yaml
replicaCount: 1
image:
  pullPolicy: Always
service:
  type: ClusterIP
  port: 80
  targetPort: 80
resources:
  requests:
    cpu: 1m
    memory: 48Mi
autoscaling:
  minReplicas: 1
  maxReplicas: 2
  targetCPUUtilizationPercentage: 80
Environment-Specific Overrides
# clusters/components/static-website/lcl.values.yaml (Local)
image:
  repository: static-web-example
imagePullSecrets:
environment:
  LOG_LEVEL: "debug"
autoscaling:
  enabled: false
# clusters/components/static-website/snbx.values.yaml (Sandbox)
image:
  repository: REPO_URL/static-web-example
imagePullSecrets: "gitlab-registry-key"
autoscaling:
  enabled: true
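For illustration, the effective values for the local environment after layering would look roughly like this. This is not an actual file, just the result of merging base.values.yaml with lcl.values.yaml, where later layers override earlier ones.
# Illustrative merge result: base.values.yaml + lcl.values.yaml (local)
replicaCount: 1                        # from base
image:
  repository: static-web-example       # from lcl
  pullPolicy: Always                   # from base
service:
  type: ClusterIP
  port: 80
  targetPort: 80
resources:
  requests:
    cpu: 1m
    memory: 48Mi
environment:
  LOG_LEVEL: "debug"                   # from lcl
autoscaling:
  enabled: false                       # lcl override wins
  minReplicas: 1
  maxReplicas: 2
  targetCPUUtilizationPercentage: 80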
Git Submodule Configuration
# .gitmodules
[submodule "clusters/components/static-website"]
    path = clusters/components/static-website
    url = URL_TO_THE_COMPONENT_SPECIFIC_GIT_REPOSITORY/static-web-chart.git
[submodule "clusters/components/api-java"]
    path = clusters/components/api-java
    url = URL_TO_THE_COMPONENT_SPECIFIC_GIT_REPOSITORY/api-java-chart.git
# ... other components
Key Lessons Learned
Throughout this journey, I've learned several lessons about FluxCD and Kustomization, and especially about Kustomize, as I improved my setup and my understanding of them:
1. I'm still internalizing Kustomize's loading and composition model
Path resolution and resource layering are subtle; small structural choices significantly change outcomes. I still don't have a complete understanding of the model, but I'm getting there. I was only able to make progress with the help of an AI assistant and a lot of trial and error.
2. Git submodules add friction, but clarify ownership
Letting component owners ship their own values and configuration separates Dev and Ops cleanly, even with the extra sync step. Depending on your team setup, this might be undesirable or unnecessary; in my case, it was preferred.
3. AI assistants unlocked the apps/infra/config split
The suggested layout with explicit dependencies shaped the design and allowed me to adjust the structure to my needs. In fact, it was the combination of a few AI assistants and a lot of trial and error that allowed me to make progress, once again proving how critical AI assistants have become within my own workflow. Now, is it the perfect approach? I'm still learning and validating that as I learn more.
4. Base/level/inflight values stored in ConfigMaps and composed via FluxCD are incredibly practical
Layered values provide sane defaults, environment overrides, and safe runtime tweaks while staying GitOps-friendly. Combining ConfigMap generation with the FluxCD HelmRelease valuesFrom feature gave me a very practical and effective solution that remains simple to understand and maintain.
5. Kustomization composition for YAML configuration
Once the initial setup is in place, reusing the installation pattern, a Kustomization loading the content of a folder (in effect, a "kubectl apply" of that content), is fairly easy to understand.
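As a hypothetical illustration of that pattern, a folder-level kustomization.yaml simply lists what the folder pulls in; Flux builds the folder and applies the result. The resource path and patch file name below are assumptions based on the structure shown earlier, not my actual files.
# Hypothetical sketch of clusters/local/apps/examples/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../../base/apps/examples        # reuse the base HelmRelease definitions
patches:
  - path: static-web-helmrelease-patch.yaml   # assumed local-specific patch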
Conclusion
The journey from a basic FluxCD setup to a sophisticated, multi-component GitOps implementation has been both challenging and rewarding. The key to success, at least for now, was combining FluxCD with Kustomization for specific aspects of my implementation. I'm still not sure it's the best possible approach, but for now it's working very well for my needs.
The current implementation provides several significant benefits:
- Reliability: Proper dependency management eliminates race conditions and ensures consistent deployments
- Maintainability: Separation of concerns makes the system easier to understand and modify
- Scalability: The modular structure makes it easy to add new components and environments
- Visibility: Comprehensive monitoring and health checks provide clear insight into system status
- Automation: The complete workflow is automated, reducing manual intervention and human error
While the initial setup was complex, the resulting system provides a solid foundation for managing Kubernetes deployments across multiple environments with full GitOps compliance. The layered configuration approach with Git submodules and ConfigMap generation has proven to be both flexible and robust.
Next Steps
Looking forward, I plan to continue evolving this GitOps implementation in several areas and to keep learning along the way:
1. Enhanced Secret Management
Implement more sophisticated secret management using tools like Sealed Secrets, Vault or others to handle sensitive configuration data while maintaining GitOps principles.
2. Infrastructure as Code Integration
Extend the GitOps approach to infrastructure components like cert-manager, Istio, Keycloak, OAuth2-proxy, and other critical services whose configuration is still managed outside the FluxCD workflow. These components are currently installed by FluxCD, but I would like to bring their configuration values under the same management as well, including the inter-relationships between them (e.g., the Keycloak client ID and secret required by OAuth2-proxy).
I believe the foundation is now in place for a robust, scalable GitOps implementation that can grow with my needs while maintaining the principles of declarative configuration, version control, and automated deployment. I'm sure there are still many things to improve and learn, but I'm happy with the progress I've made so far. It was a long journey, but it was worth it.
References and Further Reading
This implementation builds upon several key resources and best practices:
- FluxCD Repository Structure Guide - Official best practices for organizing FluxCD repositories
- Kustomize configMapGenerator Documentation - Detailed guide for ConfigMap generation
- FluxCD HelmRelease valuesFrom Documentation - Official documentation for layered configuration
- Git Submodules Guide - Comprehensive documentation on Git submodules
- FluxCD Kustomize Helm Example - Official example demonstrating Kustomize integration
This blog post represents the culmination of weeks and months of experimentation and refinement in GitOps practices. I hope the lessons learned here will help others avoid similar challenges and build more robust Kubernetes deployment workflows. It is in no way the absolute, best-of-all approach, just my personal experience with it.
AI Usage Disclosure
This document was created with assistance from AI tools. The content has been reviewed and edited by a human. For more information on the extent and nature of AI usage, please contact the author.