As I prepare to move from Italy to the United Kingdom, I’ve started reassessing not just the physical belongings I’ll bring along, but also my personal infrastructure. My goal is to travel light, so reducing both physical and digital baggage has become a priority.

As an OpenShift contributor and enthusiast of the Kubernetes and CoreOS ecosystems, I began exploring whether MicroShift—a lightweight Kubernetes distribution built from OpenShift and optimized for small form factors and edge computing—could fulfill my needs.

MicroShift’s design focuses on:

  • Making frugal use of system resources (CPU, memory, network, storage)
  • Tolerating severe network constraints
  • Updating securely, seamlessly, and without disrupting workloads
  • Integrating well with edge-optimized OSes like RHEL for Edge
  • Offering a consistent OpenShift development and management experience

This lean architecture not only serves edge cases but also makes MicroShift a strong candidate for resource-constrained environments, such as Kubernetes development setups on lightweight systems or for deploying Kubernetes control planes in a minimal footprint.

To build this environment, I decided to run MicroShift on Fedora CoreOS, leveraging its lightweight, immutable, and secure OS structure, versioned through OSTree Native Containerization, as I detailed in [a previous post](/post/rpm-ostree-container-native-fedora-silverblue-kinoite-dual-boot/).

Although MicroShift isn’t officially supported on Fedora CoreOS (it’s designed for RHEL and RHEL for Edge), I’ve managed to embed it into Fedora CoreOS using OSTree Native Containerization. In this post, I’ll walk through the setup, including configuring CoreOS Dex as an OAuth provider backed by GitHub, integrating the OpenShift console, setting up ArgoCD for GitOps, and using cert-manager for automatic SSL certificate provisioning for OpenShift Routes.

My Current Infrastructure

My current setup revolves around a single server—a Dell PowerEdge R710—equipped with two Intel Xeon E5645 CPUs, 48GB of RAM, and 4x4TB RAID10 HDDs. It currently runs Proxmox VE and hosts a range of VMs and containers, which include:

  • An Nginx reverse proxy
  • Home Assistant
  • Nextcloud
  • Vaultwarden
  • Samba shares for file access
  • The UniFi Controller for network management
  • An Nginx server serving static content, including my CV
  • VMs and persistent containers supporting various research projects

I’m aiming to transition all of these services to a Kubernetes-based solution, consolidating my infrastructure for easier management and scaling.

Initial Challenges

MicroShift is distributed as RPM packages for RHEL and RHEL for Edge, specifically with dev-preview versions available from OpenShift’s mirror. However, there are a few hurdles in using MicroShift on Fedora CoreOS (FCOS):

  • Lack of an OKD MicroShift Version: there is no official OKD build of MicroShift; nevertheless, running MicroShift on FCOS will display the OKD branding in the OpenShift console, since FCOS is based on Fedora.

  • Dependency Gaps: the MicroShift RPMs depend on packages that are not readily available on FCOS. For example:

    • The networking stack relies on openvswitch3.1, a RHEL package whose name embeds the version; the Fedora equivalent is simply openvswitch (at version 3.1), with the version excluded from the package name.
    • The openshift-clients package, which provides the oc command-line tool, isn’t available outside of the OpenShift repositories. I plan to use the OKD version of oc to align with my FCOS environment.
  • Missing support and downstream-upstream alignment:

    • MicroShift is not officially supported on FCOS, and the project is not aligned with the Fedora community: its focus is on RHEL and RHEL for Edge, with no active effort to support Fedora CoreOS.
    • Some packages in FCOS have diverged significantly from the older versions available in RHEL, making it harder to run MicroShift smoothly on FCOS.

To address these, I’ll start by building a layered FCOS image with a configuration suited for running MicroShift smoothly. This image will be created through a Containerfile, enabling me to layer in dependencies and resolve the package naming issues while preserving FCOS’s immutability.
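
As a minimal sketch of what such a Containerfile can look like (the overlay names anticipate the repository structure described in the next section, while the package set and the repository wiring, handled inside the overlays, are illustrative rather than my exact definition):

FROM quay.io/fedora/fedora-coreos:stable

# Layer the shared and the MicroShift-specific overlays into the image.
COPY overlay.d/01-common/ /
COPY overlay.d/05-systemd/ /
COPY overlay.d/10-fcos/ /
COPY overlay.d/15-microshift/ /

# Install MicroShift from the repositories configured by the overlays
# (illustrative; the real build also handles the dependency gaps above),
# then seal the changes into the OSTree commit.
RUN rpm-ostree install microshift && \
    ostree container commit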

The Repository Structure

To support the integration of MicroShift into my infrastructure, I’ve restructured my my-ostree-config repository, which I initially built for my workstations. This reorganization introduced a modular approach inspired by the OKD Machine OS image definition pattern.

In this setup, I’ve added an overlay.d directory, which contains layered configurations to construct different images according to specific needs. This allows me to reuse shared content across images while enabling distinct configurations where required—making the configuration both versatile and maintainable. This structure facilitates MicroShift deployment while preserving consistency and modularity in the image build process.

graph LR
  subgraph Containerfiles
    os_desktop.Containerfile
    os_microshift.Containerfile
    os_toolbox.Containerfile
  end
  subgraph overlay.d
    overlay.d-1
    overlay.d-2
    overlay.d-3
  end
  os_desktop.Containerfile --> overlay.d-1
  os_microshift.Containerfile --> overlay.d-1
  os_microshift.Containerfile --> overlay.d-2
  os_desktop.Containerfile --> overlay.d-3
  os_toolbox.Containerfile --> overlay.d-1
.
├── overlay.d
│   ├── 00-temp
│   │   └── usr
│   ├── 01-common
│   │   ├── etc
│   │   └── usr
│   ├── 05-systemd
│   │   └── usr
│   ├── 10-desktop
│   │   ├── etc
│   │   └── usr
│   ├── 10-fcos
│   │   ├── etc
│   │   └── usr
│   └── 15-microshift
│       ├── etc
│       └── usr

In the process of preparing Fedora CoreOS for MicroShift, I’ve had to set up a few temporary and custom configurations due to compatibility gaps, particularly in handling certain packages. The overlay.d/00-temp folder, for instance, holds temporary files needed to bridge these gaps, including adjustments for missing packages required by MicroShift.

I initially had to create a workaround for the openvswitch3.x package, though I recently contributed a change to remove this dependency. Now merged and included in MicroShift 4.15.0, this update eliminates the need for the openvswitch3.x workaround in future setups.

To make this configuration modular, I’ve structured my my-ostree-config repository with several layered folders under overlay.d:

  • overlay.d/01-common: Holds common configurations like vim and zsh setups, used across all Containerfiles.

  • overlay.d/05-systemd: Contains systemd configurations shared across builds, including:

    • Journal log retention settings
    • A unit to disable mitigations for Intel CPU vulnerabilities
    • A slice to limit resource consumption of lower-priority services (e.g., rpm-ostreed and certain flatpak updates for desktop images)
  • overlay.d/10-fcos: Inspired by the OKD Machine OS image definition, preps FCOS for Kubernetes and MicroShift by including:

    • SELinux policy adjustments
    • Disabling zincati (unnecessary for my update method from the image registry)
    • Ownership modifications for directories used by openvswitch (/var/lib/openvswitch, /etc/openvswitch, and /var/run/openvswitch) to ensure correct permissions for the ovs user

These modular layers allow me to manage the environment efficiently while adapting Fedora CoreOS to handle MicroShift with minimal changes.

The MicroShift overlay.d folder

The overlay.d/15-microshift folder in my my-ostree-config repository holds configuration files tailored for MicroShift images. This includes:

  • Systemd Units: Required units to run MicroShift, along with their default enabled state.
  • Configurations for Core Components: Settings for crio, NetworkManager, and the repository configurations needed to install the MicroShift RPMs.
  • Initial GitOps Manifests: Basic manifests to initialize a GitOps-based environment on MicroShift. These provide the minimal setup for deploying components, with flexibility to adjust them for specific needs.

Some manifests are labeled as 99-<component>.example.yaml to avoid exposing sensitive data, such as OAuth2Client secrets. These sensitive configurations will be provisioned dynamically at setup via the Ignition file.


%%{ init: { 'theme': 'neutral', 'themeVariables': { 'primaryColor': '#BB2528', 'primaryTextColor': '#fff', 'primaryBorderColor': '#7C0000', 'lineColor': '#F8B229', 'secondaryColor': '#00FF00', 'secondaryTextColor': '#ff0000', 'tertiaryColor': '#fff', 'tertiaryTextColor': '#000', 'clusterBkg': '#f9f9f9', 'clusterText': '#000', 'titleColor': '#FF2528', 'noteTextColor': '#000', 'nodeTextColor': '#000' } } }%%
graph LR
  subgraph MicroShift Container Native Image's Workloads
    dex
    argocd
    cert-manager
    openshift-console
  end
  dex -.Auth provider for.-> argocd
  dex -.Auth provider for.-> cert-manager
  dex -.Auth provider for.-> openshift-console
  cert-manager -.SSL provisioning.-> route/argocd
  cert-manager -.SSL provisioning.-> route/openshift-console
  cert-manager -.SSL provisioning.-> route/dex
  openshift-console --FE--> route/openshift-console
  argocd --FE--> route/argocd
  dex --FE--> route/dex
  route/openshift-console <----> user
  route/argocd <----> user
  route/dex <----> user
  GitHub --OAuth2--> dex
  CloudFlare --DNS01--> cert-manager

Cert Manager

Cert-manager, a Kubernetes add-on for managing and issuing TLS certificates, plays a key role in my MicroShift setup by automating SSL certificate provisioning for OpenShift Routes. This includes certificates for the OpenShift Console, the ArgoCD server, and any additional workloads I deploy on MicroShift.

In my configuration, cert-manager is set up with a ClusterIssuer that uses Let’s Encrypt. It employs DNS-01 challenges via the Cloudflare DNS provider, which I use for my domain, ensuring secure, automated certificate management across my environment.
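
Assuming the cert-manager support for OpenShift Routes is deployed alongside cert-manager itself, requesting a certificate for a Route then comes down to annotating it with the issuer to use; in this sketch, the route and namespace names are placeholders:

oc -n my-namespace annotate route my-route \
  cert-manager.io/issuer-kind=ClusterIssuer \
  cert-manager.io/issuer-name=letsencrypt-prod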

CoreOS Dex

CoreOS Dex, an OpenID Connect (OAuth2) identity provider, serves as the backbone for authentication in my MicroShift environment. Integrated with GitHub, it enables seamless sign-ins to both the OpenShift console and ArgoCD, providing a streamlined and secure authentication experience across my setup.

OpenShift Console

I’m using the OpenShift Console as a UI to manage the MicroShift cluster. Its functionality will be limited, as MicroShift is a lightweight distribution of OpenShift.

Authentication is backed by CoreOS Dex rather than the internal OpenShift OAuth provider, which is not available in MicroShift. This is a dev-oriented setup: review it against the security requirements of your environment before adopting it.

Configure OAuth Clients backed by Kubernetes and Dex

To streamline Dex configuration in my setup, I created a small Golang CLI tool that generates the main configuration objects needed, particularly the OAuth2Client objects for applications authenticating via Dex. By directly generating these configurations, I avoid the complexities of using Dex’s APIs within the Ignition setup or relying on Kustomize.

This CLI tool, available in my-ostree-config, simplifies OAuth client creation, making it easier to configure authentication for the OpenShift Console and other applications in my MicroShift environment.

go run main.go -c "OpenShift Console" -i "openshift-console" -r "https://dex-dex.apps.$BASE_DOMAIN/callback" -s "<change-with-your-openshift-console-secret>"
    - path: /etc/microshift/manifests.d/20-console/99-oauth-client.yaml
      mode: 0640
      user:
        id: 0
      group: 
        id: 0
      contents: 
        inline: |
          apiVersion: dex.coreos.com/v1
          kind: OAuth2Client
          metadata:
            # The name is generated through the FNV non-cryptographic hash function on the OAuth2Client.ID value.
            # See https://github.com/aleskandro/my-ostree-config/tree/utils/storagedex
            # and https://github.com/dexidp/dex/blob/master/storage/kubernetes/client.go#L72-L78
            name: n5ygk3ttnbuwm5bnmnxw443pnrs4x4u44scceizf
            # The oauth2client CRD is namespaced, and dex reads it from the namespace it is running in.
            namespace: dex
          ID: openshift-console
          Name: OpenShift Console
          Secret: <secret> # This secret must match the one in the OpenShift Console configuration. See below.
          RedirectURIs:
            - https://web-console.apps.$BASE_DOMAIN/oauth2/callback          

The secret is a 64-character-long string that you can generate any way you like, for example:

tr -dc A-Za-z0-9 </dev/urandom | head -c 64; echo

The secret and ID must match the ones in the OpenShift Console configuration of the OAuth Proxy Secret. Likewise, any further OAuth2Client objects you generate must match the secret and ID set in the component that will communicate with Dex.

In your GitHub profile, you will need to:

  • Create an OAuth App whose authorization callback URL is the Dex callback endpoint (https://dex-dex.apps.$BASE_DOMAIN/callback); its Client ID and Client Secret will feed the Dex configuration shown below.

  • Create an organization and grant its users permission to use the app. All the users that will authenticate via the OAuth2Client objects must be part of this organization to get access to the OpenShift Console, ArgoCD, and any further application you’ll deploy on top of MicroShift configured to authenticate via this Dex instance.

The Ignition file

The Ignition file is what provisions the system with all of the above configuration at installation time.

You will install a plain Fedora CoreOS image on your server, providing the Ignition file to the installer.

At the first boot, the content of the Ignition file is applied to the system, and one of the systemd units takes care of rebasing the system to the MicroShift image in my repository (or your own fork).

To generate the Ignition file, we first define a Butane configuration that you can process with:

podman run -i quay.io/coreos/butane:release --strict --pretty < microshift-remote.bu > ignition.ign
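
Before using the result, you can sanity-check it with ignition-validate:

podman run --rm -i quay.io/coreos/ignition-validate:release - < ignition.ign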

The Butane configuration

I’ll present the Butane configuration in separate parts to comment on each, redacting sensitive data such as the OAuth2Client secrets and the GitHub Client ID and Client Secret.

Basic configuration


variant: fcos
version: 1.5.0
passwd:
 users:
  - name: core
    ssh_authorized_keys:
     - <your ssh public key>
    password_hash: <your password hash> # Generate with podman run -ti --rm quay.io/coreos/mkpasswd --method=yescrypt
storage:
 files:
  - path: /etc/ostree/image.env
    mode: 0640
    user:
     id: 0
    group:
     id: 0
    contents:
     inline: |
      # This is the image that will be rebased at the first boot after the installation of CoreOS and the application of the ignition. See https://github.com/aleskandro/my-ostree-config/blob/073be9b9cb2cffd071158330cac9650b7f8f84ed/overlay.d/15-microshift/usr/lib/systemd/system/first-boot-rebase.service
      IMAGE=quay.io/aleskandrox/fedora:coreos-microshift-custom      
  - path: /etc/hostname
    mode: 0644
    user:
     id: 0
    group:
     id: 0
    contents:
     inline: >-
            <FQDN of your Host>

  - path: /etc/resolv.conf
    mode: 0644
    user:
     id: 0
    group:
     id: 0
    contents:
     inline: |
      # Network Manager rc-manager is disabled.
      nameserver 8.8.8.8
      nameserver 1.1.1.1      
  - path: /etc/microshift/config.yaml
    contents:
     inline: |
      dns:
        baseDomain: <baseDomain>
      network:
        clusterNetwork:
        - 10.42.0.0/16
        serviceNetwork:
        - 10.43.0.0/16
        serviceNodePortRange: 30000-32767
      apiServer:
        subjectAltNames:
        - <FQDN of your Host>
        - api.<baseDomain>
      debugging:
        logLevel: Normal      
    mode: 0644
    user:
     id: 0
    group:
     id: 0
  - path: /etc/crio/openshift-pull-secret
    mode: 0600
    user:
     id: 0
    group:
     id: 0
    contents:
     inline: |
            << PULL SECRET TO ACCESS THE RED HAT REGISTRY - if OKD builds of MicroShift were available, this would not be needed >>
systemd:
  units:
   - name: zincati.service
     enabled: false
   - name: first-rebase.service
     enabled: true
     contents: |
      [Unit]
      Description=Rebase the system on first boot
      ConditionPathExists=!/var/.first-rebase-completed
      After=network.target
      Wants=network.target
      [Service]
      Type=oneshot
      EnvironmentFile=/etc/ostree/image.env
      ExecStartPre=/usr/bin/rpm-ostree rebase ostree-unverified-registry:${IMAGE}
      ExecStartPre=/usr/bin/touch /var/.first-rebase-completed
      ExecStart=/usr/bin/systemctl reboot
      StandardOutput=journal+console
      StandardError=journal+console
      Restart=on-failure
      RestartSec=15
      [Install]
      WantedBy=default.target      
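
After the first-rebase unit has run and the machine has rebooted, you can confirm the host is tracking the custom image (the hostname is a placeholder):

ssh core@<FQDN of your Host> rpm-ostree status
# The booted deployment should reference
# ostree-unverified-registry:quay.io/aleskandrox/fedora:coreos-microshift-custom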

Kubernetes Manifests

These manifests complement the ones shipped in /usr/lib/microshift/manifests.d/ and deploy the components we need for a working GitOps environment.
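
Once MicroShift is up, you can check that they were applied through the kubeadmin kubeconfig that MicroShift generates on the host:

sudo -i
export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
oc get pods -A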

Cert Manager

    - path: /etc/microshift/manifests.d/05-cert-manager/99-cluster-issuer.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
          apiVersion: cert-manager.io/v1
          kind: ClusterIssuer
          metadata:
            name: letsencrypt-prod
          spec:
            acme:
              email: <your-email>
              preferredChain: ""
              privateKeySecretRef:
                name: letsencrypt-prod
              server: https://acme-v02.api.letsencrypt.org/directory
              solvers:
                - dns01:
                    cloudflare:
                      apiTokenSecretRef:
                        name: cloudflare
                        key: api-token          
    - path: /etc/microshift/manifests.d/05-cert-manager/99-secret.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
          kind: Secret
          apiVersion: v1
          metadata:
            name: cloudflare
            namespace: cert-manager
          stringData:
            api-token: <your-cloudflare-api-token> # You can generate it in the Cloudflare dashboard          
    - path: /etc/microshift/manifests.d/05-cert-manager/kustomization.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
          apiVersion: kustomize.config.k8s.io/v1beta1
          kind: Kustomization
          namespace: cert-manager
          resources:
            - 99-secret.yaml
            - 99-cluster-issuer.yaml          
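
Once the cluster is up, a quick way to check that the ClusterIssuer is ready:

oc get clusterissuer letsencrypt-prod \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'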

Dex

The Dex configuration will include URLs and secrets that you get from the GitHub OAuth App you created.

    - path: /etc/microshift/manifests.d/10-dex/99-configmap.yaml
      mode: 0640
      user:
       id: 0
      group:
       id: 0
      contents:
       inline: |
        kind: ConfigMap
        apiVersion: v1
        metadata:
          name: dex
          namespace: dex
        data:
          config.yaml: |
            issuer: https://dex-dex.apps.$BASE_DOMAIN/
            storage:
              type: kubernetes
              config:
                inCluster: true
            web:
              https: 0.0.0.0:5556
              tlsCert: /etc/dex/tls/tls.crt
              tlsKey: /etc/dex/tls/tls.key
            connectors:
              - type: github
                id: github
                name: GitHub
                config:
                  clientID: $GITHUB_CLIENT_ID
                  clientSecret: $GITHUB_CLIENT_SECRET
                  redirectURI: https://dex-dex.apps.$BASE_DOMAIN/callback
                  orgs:
                    - name: <your-github-organization>
                rootCA: /etc/dex/ca.crt
            oauth2:
              skipApprovalScreen: true        
    - path: /etc/microshift/manifests.d/10-dex/99-secret.yaml
      mode: 0640
      user:
       id: 0
      group:
       id: 0
      contents:
       inline: |
        apiVersion: v1
        kind: Secret
        metadata:
          name: github-client
          namespace: dex
        stringData:
          client-id: #################### # A 20-character-long secret
          client-secret: ######################################## # A 64-character-long secret
    - path: /etc/microshift/manifests.d/10-dex/kustomization.yaml
      mode: 0640
      user:
       id: 0
      group:
       id: 0
      contents:
       inline: |
        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        namespace: dex
        resources:
        - 99-configmap.yaml
        - 99-secret.yaml        
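
Once Dex is running and its Route has a valid certificate, the standard OIDC discovery endpoint makes for a quick health check:

curl -s https://dex-dex.apps.$BASE_DOMAIN/.well-known/openid-configuration | jq .issuer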

OpenShift Console

    - path: /etc/microshift/manifests.d/20-console/99-oauth-proxy-configmap.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: oauth-proxy-config
            namespace: console
          data:
            redirect-url: https://web-console.apps.$BASE_DOMAIN/oauth2/callback
            oidc-issuer-url: https://dex-dex.apps.$BASE_DOMAIN/          
    - path: /etc/microshift/manifests.d/20-console/99-oauth-proxy-secret.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
          apiVersion: v1
          kind: Secret
          metadata:
            name: oauth-proxy-secret
            namespace: console
          stringData:
            session-secret: <32-characters-long-secret>
            client-secret: <64-characters-long-secret>          
    - path: /etc/microshift/manifests.d/20-console/99-oauth-client.yaml
      mode: 0640
      user:
        id: 0
      group: 
        id: 0
      contents: 
        inline: |
          apiVersion: dex.coreos.com/v1
          kind: OAuth2Client
          metadata:
            # The name is generated through the FNV non-cryptographic hash function on the client ID.
            # See https://github.com/aleskandro/my-ostree-config/tree/utils/storagedex
            # and https://github.com/dexidp/dex/blob/master/storage/kubernetes/client.go#L72-L78
            name: n5ygk3ttnbuwm5bnmnxw443pnrs4x4u44scceizf
            # The oauth2client CRD is namespaced, and dex reads it from the namespace it is running in.
            namespace: dex
          ID: openshift-console
          Name: OpenShift Console
          Secret: <the-64-characters-long-secret> # This secret must match the one in the oauth-proxy-secret Secret above.
          RedirectURIs:
            - https://web-console.apps.$BASE_DOMAIN/oauth2/callback          
    - path: /etc/microshift/manifests.d/20-console/namespace.transformer.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
          apiVersion: builtin
          kind: NamespaceTransformer
          metadata:
            name: notImportantHere
          namespace: console
          unsetOnly: true
          setRoleBindingSubjects: allServiceAccounts
          # https://github.com/kubernetes-sigs/kustomize/blob/fdf8f44c90f0a8f159cbfccce28d9ab0ab765085/plugin/builtin/namespacetransformer/NamespaceTransformer.go#L4          
    - path: /etc/microshift/manifests.d/20-console/kustomization.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
          apiVersion: kustomize.config.k8s.io/v1beta1
          kind: Kustomization

          transformers:
          - namespace.transformer.yaml
          
          resources:
          - 99-oauth-proxy-configmap.yaml
          - 99-oauth-proxy-secret.yaml
          - 99-oauth-client.yaml          

ArgoCD

    - path: /etc/microshift/manifests.d/40-argocd/99-oauth2client.yaml
      mode: 0640
      user:
          id: 0
      group:
          id: 0
      contents:
          inline: |
            apiVersion: dex.coreos.com/v1
            kind: OAuth2Client
            metadata:
              # The name is generated through the FNV non-cryptographic hash function on the client ID.
              # See https://github.com/aleskandro/my-ostree-config/tree/utils/storagedex
              # and https://github.com/dexidp/dex/blob/master/storage/kubernetes/client.go#L72-L78
              # The oauth2client CRD is namespaced, and dex reads it from the namespace it is running in.
              namespace: dex
              name: mfzgo33dmtf7fhheqqrcgji
            name: ArgoCD Console
            ID: argocd
            public: false
            redirectURIs:
              - https://console-argocd.apps.$BASE_DOMAIN/auth/callback
            secret: <64-characters-long-secret>            
    - path: /etc/microshift/manifests.d/40-argocd/99-rbac-cm.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
          apiVersion: v1
          kind: ConfigMap
          data:
            policy.csv: |
              g, <your-email>, role:admin
            scopes: '[email]'
          metadata:
            labels:
              app.kubernetes.io/name: argocd-rbac-cm
              app.kubernetes.io/part-of: argocd
            name: argocd-rbac-cm
            namespace: argocd          
    - path: /etc/microshift/manifests.d/40-argocd/99-argocd-cm-configmap.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
         apiVersion: v1
         kind: ConfigMap
         metadata:
           name: argocd-cm
           namespace: argocd
           labels:
             app.kubernetes.io/name: argocd-cm
             app.kubernetes.io/part-of: argocd
         data:
           admin.enabled: "false"
           url: https://console-argocd.apps.$BASE_DOMAIN
           oidc.config: |-
             name: ArgoCD Dex
             issuer: $oidc.issuer
             clientID: argocd
             clientSecret: $oidc.clientSecret
             requestedScopes: ["openid", "profile", "email", "groups"]
             getUserInfo: true         
    - path: /etc/microshift/manifests.d/40-argocd/99-secret.yaml
      mode: 0640
      user:
        id: 0
      group:
        id: 0
      contents:
        inline: |
         apiVersion: v1
         kind: Secret
         metadata:
           name: argocd-secret
           namespace: argocd
           labels:
             app.kubernetes.io/component: server
             app.kubernetes.io/name: argocd-server
             app.kubernetes.io/part-of: argocd
         stringData:
          oidc.issuer: https://dex-dex.apps.$BASE_DOMAIN/
          oidc.clientID: argocd # Same as the one in the OAuth2Client object above
          oidc.clientSecret: <64-characters-long-secret> # Same as the one in the OAuth2Client object above         
    - path: /etc/microshift/manifests.d/40-argocd/kustomization.yaml
      mode: 0640
      user:
       id: 0
      group:
       id: 0
      contents:
       inline: |
        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        resources:
          - 99-oauth2client.yaml
          - 99-rbac-cm.yaml
          - 99-argocd-cm-configmap.yaml
          - 99-secret.yaml        
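
With the OAuth2Client and the ConfigMaps above in place, logging into ArgoCD through Dex should work both from the web UI and, for example, from the CLI via SSO:

argocd login console-argocd.apps.$BASE_DOMAIN --sso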

Apps of Apps pattern, ArgoCD and Butane

Declaratively specify one Argo CD app that consists only of other apps.

The Apps of Apps pattern in ArgoCD is integral to my cluster setup. By defining all the applications in a single Git repository, ArgoCD can deploy them in sequence, starting from the initial manifest. I’ve added this primary manifest to the Ignition file, ensuring that it is applied automatically at the first boot, making the setup seamless and efficient.

Any changes to the monitored repository will trigger a sync in ArgoCD, ensuring that the applications are always up-to-date and in sync with the desired state, without the need to manually create or update each ArgoCD application object.

    - path: /etc/microshift/manifests.d/50-app-of-apps/application.yaml
      mode: 0640
      user:
       id: 0
      group:
       id: 0
      contents:
       inline: |
        apiVersion: argoproj.io/v1alpha1
        kind: Application
        metadata:
          name: root-app
          namespace: argocd
        spec:
          destination:
            name: ''
            namespace: 'argocd'
            server: 'https://kubernetes.default.svc'
          source:
            path: envs/my-env
            repoURL: git@github.com:aleskandro/my-private-repo.git
            targetRevision: master
          sources: []
          project: default
          syncPolicy:
            automated:
              selfHeal: true
            retry:
              limit: 5
              backoff:
                duration: 5s
                maxDuration: 10m0s
                factor: 2        
    - path: /etc/microshift/manifests.d/50-app-of-apps/secret.yaml
      mode: 0640
      user:
       id: 0
      group:
       id: 0
      contents:
       inline: |
        apiVersion: v1
        stringData:
          name: cluster
          project: default
          type: git
          url: git@github.com:aleskandro/my-private-repo.git
          sshPrivateKey: |
            -----BEGIN OPENSSH PRIVATE KEY-----
            <your-private-key>
            -----END OPENSSH PRIVATE KEY-----
        kind: Secret
        metadata:
          annotations:
            managed-by: argocd.argoproj.io
          labels:
            argocd.argoproj.io/secret-type: repository
          name: repo-secret
          namespace: argocd
        type: Opaque        
    - path: /etc/microshift/manifests.d/50-app-of-apps/kustomization.yaml
      mode: 0640
      user:
       id: 0
      group:
       id: 0
      contents:
       inline: |
        apiVersion: kustomize.config.k8s.io/v1beta1
        kind: Kustomization
        namespace: argocd
        resources:
          - application.yaml
          - secret.yaml        

my-private-repo is a private repository containing the applications you want to deploy in your cluster, grouped in a Kustomize-like structure. Its root is envs/my-env/kustomization.yaml, which can look like:

kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
namespace: argocd
resources:
  - ../../apps/hello-openshift

apps/hello-openshift is a folder containing a kustomization project with the manifests of the application you want to deploy in your cluster.
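
One way to lay it out, in line with the app-of-apps pattern, is for the folder to expose an ArgoCD Application that points at the actual workload manifests. A sketch, where every name and path is hypothetical:

kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
resources:
  - application.yaml

and application.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: hello-openshift
  namespace: argocd
spec:
  destination:
    namespace: hello-openshift
    server: https://kubernetes.default.svc
  project: default
  source:
    # The workload manifests live elsewhere in the same repository.
    path: manifests/hello-openshift
    repoURL: git@github.com:aleskandro/my-private-repo.git
    targetRevision: master
  syncPolicy:
    automated:
      selfHeal: true
    syncOptions:
      - CreateNamespace=true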

Conclusion

Using the Ignition file generated from a Butane configuration like the one above, together with the CoreOS container image in the repository, you can quickly provision a MicroShift cluster with the components needed for a working GitOps environment with the OpenShift Console and ArgoCD.

The process boils down to the following steps:

  1. Boot the Fedora CoreOS ISO on your server
  2. In gdisk /dev/sda, assuming a new GPT partition table, create a partition at a higher index to use for persistent volumes:
    n <enter>
    Partition number (1-128, default 1): 8
    First sector (....) or {+-}size{KMGTP}: +100G
    Last sector (.....) or {+-}size{KMGTP}: +350G
    Hex code or GUID (L to show codes, Enter = 8300): 8300
    
  3. Create the physical volume and volume group for the topolvm storage class:
    pvcreate /dev/sda8
    vgcreate topolvm /dev/sda8
    
  4. Install:
    coreos-installer install /dev/sda --ignition-file=ignition.ign --save-partindex=8
    
  5. I also like to enlarge the boot partition to allow more room for upgrades and for pinning OSTree deployments:
     echo "<NewSectorForRootPartition>, 4G" | sfdisk --move-data /dev/sda -N 4 # 4 is the root partition, on the right of the boot one to be resized
     echo ", +" | sfdisk --move-data /dev/sda -N 3 # 3 is the boot partition
     e2fsck -f /dev/sda3
     resize2fs /dev/sda3
    
  6. Reboot
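
After the machine settles on the MicroShift image (the first boot triggers one more reboot for the rebase), you can verify that everything came up:

sudo systemctl status microshift
sudo oc get nodes --kubeconfig /var/lib/microshift/resources/kubeadmin/kubeconfig
sudo oc get routes -A --kubeconfig /var/lib/microshift/resources/kubeadmin/kubeconfig
# The routes for the console, ArgoCD, and Dex should appear once the manifests
# have been reconciled.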

In the future, I’ll delve into the structure of the repository and the applications I deployed on the MicroShift cluster. I’ll also investigate whether the process described above can be simplified by using bootc. Stay tuned!