Compare commits

No commits in common. "5eb52e7adba7564d887e80f725cdbebbe5120120" and "a9858952258f76c26260dd6407f51ac9e7d938c9" have entirely different histories.

38 changed files with 96 additions and 37125 deletions

.gitignore (vendored)

@@ -1,5 +1,8 @@
-**/vault_password
-**/vault.yaml
-**/*secrets.yaml
-**/*secret.yaml
-.vscode/*
+ansible/vault_password
+ansible/inventory/host_vars/*/vault.yaml
+ansible/roles/k8s_network/files/calico
+ansible/roles/k8s_storage_rook/files/rook
+ansible/roles/k8s_control/files/core-dns
+ansible/roles/k8s_storage_ebs_manifests/files/ebs
+.vscode
+*/vault.yaml

.vscode/settings.json (deleted)

@@ -1,6 +0,0 @@
{
  "yaml.schemas": {
    "https://raw.githubusercontent.com/ansible/schemas/main/f/ansible.json": "file:///home/michael/Code/home/IaC/ansible/roles/vm_deploy/tasks/deploy.yaml",
    "kubernetes://schema/storage.k8s.io/v1@storageclass": "file:///home/michael/Code/home/IaC/ansible/roles/k8s_storage_deploy/files/config/blockpool_ssd_replica.yaml"
  }
}

README.md (new file)

@@ -0,0 +1,32 @@
The general idea is to bootstrap a bare metal host into a functioning kubernetes cluster.
These playbooks/roles, in their current state, create all kubernetes nodes on a single host; this is for lab/testing/learning scenarios.
With some adjustments, though, they could be used to provision multiple hypervisors, ideally with each running 2 VMs: a control-plane node and a worker node. If you've got the hardware or the cloud budget for that, then lucky you! :smile:
An outline of the steps, which are roughly broken up by playbook:
- [ ] Install Arch linux on the bare metal
- [x] Configure the bare metal Arch host as a hypervisor (qemu/kvm) - [Link](https://code.balsillie.net/michael/IaC/src/branch/master/ansible/playbooks/02_hypervisor.yaml)
- [ ] Install Arch linux into a VM on the hypervisor then convert it to a template.
- [x] Deploy 3 (or more) VMs from the template (uses backing store qcow images) - [Link](https://code.balsillie.net/michael/IaC/src/branch/master/ansible/playbooks/04_vm_deploy.yaml)
- [x] Create a kubernetes cluster from those 3 VMs - [Link](https://code.balsillie.net/michael/IaC/src/branch/master/ansible/playbooks/05_k8s_deploy.yaml)
- [x] Install calico networking into the cluster.
- [x] Remove the taint from control plane nodes. <-- Optional
- [x] Configure cluster storage using rook. <-- This didn't work due to hardware limitations (3 x VHDs on a single spinning HDD)
- [ ] Possible storage setup using [openEBS](https://openebs.io/docs/#quickstart-guides) zfs or device local PV
- [ ] Example PVC backups using one of [stash](https://stash.run/)/[velero](https://velero.io/)/[gemini](https://github.com/FairwindsOps/gemini) or other
- [ ] Deploy workloads into the cluster
What you don't see here is the setup/configuration of an Opnsense VM to act as a firewall; that part is not realistically automatable.
Opnsense provides the firewall, routing (including BGP peering to the calico nodes) and DNS, and acts as an HAProxy load balancer in front of the kubernetes nodes. I'll add [notes](https://code.balsillie.net/michael/IaC/src/branch/master/notes/opnsense.md) at some point on how to configure opnsense, but it's not something that can be done sensibly with ansible.
What you'll also need:
- Clone the git repo.
- Create a vault_password file (chmod 600) under the ansible directory.
- Ensure .gitignore is correctly set up so that vault_password doesn't get committed to source control.
- Create an ansible vault in your inventory directory tree to hold sensitive variables such as 'ansible_become_pass'. Again, .gitignore should ensure this vault file remains only on your workstation.
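As a sketch of those last two items (the paths mirror the .gitignore entries above; the value is a placeholder), the vault might look like this before encryption:

```yaml
# Hypothetical plaintext of ansible/inventory/host_vars/<host>/vault.yaml.
# Encrypt it in place before the first commit:
#   ansible-vault encrypt --vault-password-file ansible/vault_password \
#     ansible/inventory/host_vars/<host>/vault.yaml
ansible_become_pass: "changeme"  # placeholder; use your real sudo password
```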
Check the defaults files for roles carefully. Variables are a scattered mess right now and need to be properly amalgamated.
The ansible roles were written to work from an Arch linux workstation; some tasks are intended to install packages to localhost (such as kubectl) and use pacman modules to do so. If you encounter problems with these steps, change those tasks to use the package manager module relevant to your distribution, e.g. apt or yum.
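For example, a minimal sketch of such a swap, assuming the generic package module is acceptable (task name is illustrative, not from the repo):

```yaml
# Distro-agnostic replacement for a pacman-specific task:
# ansible.builtin.package dispatches to the host's native
# package manager (pacman, apt, yum, ...).
- name: install kubectl on the workstation
  ansible.builtin.package:
    name: kubectl  # package name may differ per distribution
    state: present
  become: true
```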

Encrypted ansible vault (deleted)

@@ -1,13 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
32663239363537353936346439323334373561303531343365356338626336626237386562376335
3637303166393236323236623637613632313831373065620a646639336130613534666633643633
33393032356261393764646166643465366164356236666464333439333039633934643732616666
6537396433663666650a316266393334656534323135643939336662626563646461363131336437
32383963366163323065376230633366383830626539396563323661643266643139316334616237
35633264626637346635613262383236396530313335346139653239316433646338613339303638
65326134306438333265636337376538313337356164663865653036343666353335663336376463
61616465333461656461313464623635336533363132626534373230633139373064636634613136
33633134313538326662323534386533363833326337383837393036653637663561323837373162
32613733353637313862323837653663343134323761363339333032383239643633666632663563
39366362663334316634346339663337386439386162636639393137306138303163333538616664
64333366663134356435

Kubernetes deployment playbook (modified)

@@ -20,27 +20,19 @@
 # roles:
 #   - k8s_taint
-# - name: configure ebs storage operator
+# - name: configure storage operator
 #   hosts: localhost
 #   gather_facts: false
 #   become: false
 #   roles:
 #     - k8s_storage_ebs_deploy
-- name: configure smb storage provider
+- name: configure ingress controller
   hosts: localhost
   gather_facts: false
   become: false
   roles:
-    - k8s_storage_smb_deploy
-# - name: configure ingress controller
-#   hosts: localhost
-#   gather_facts: false
-#   become: false
-#   roles:
-#     - k8s_ingress_controller
+    - k8s_ingress_controller
 # - name: configure cert manager
 #   hosts: localhost
@@ -48,5 +40,3 @@
 #   become: false
 #   roles:
 #     - k8s_cert_manager

k8s_cert_manager role defaults (modified)

@@ -1,16 +1,2 @@
 ---
 cert_manager_version: v1.10.1
-cert_manager_dns_address: 10.96.244.86
-cert_manager_dns_port: 53
-cert_manager_tsig_name: rndc
-cert_manager_tsig_algo: HMACSHA256
-cert_manager_tsig_keyname: rndc
-cert_manager_acme_providers:
-  - provider: lets-encrypt
-    environment: staging
-    url: https://acme-staging-v02.api.letsencrypt.org/directory
-    email: lets-encrypt@balsillie.email
-  - provider: lets-encrypt
-    environment: production
-    url: https://acme-v02.api.letsencrypt.org/directory
-    email: lets-encrypt@balsillie.email

cert-manager production ClusterIssuer manifest (deleted)

@@ -1,19 +0,0 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: acme-lets-encrypt-production
spec:
  acme:
    email: lets-encrypt@balsillie.email
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cert-manager-secret-acme-lets-encrypt-production
    solvers:
      - dns01:
          rfc2136:
            nameserver: 10.96.244.86:53
            tsigKeyName: rndc
            tsigAlgorithm: HMACSHA256
            tsigSecretSecretRef:
              name: cert-manager-secret-tsig
              key: rndc

cert-manager staging ClusterIssuer manifest (deleted)

@@ -1,19 +0,0 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: acme-lets-encrypt-staging
spec:
  acme:
    email: lets-encrypt@balsillie.email
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cert-manager-secret-acme-lets-encrypt-staging
    solvers:
      - dns01:
          rfc2136:
            nameserver: 10.96.244.86:53
            tsigKeyName: rndc
            tsigAlgorithm: HMACSHA256
            tsigSecretSecretRef:
              name: cert-manager-secret-tsig
              key: rndc
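For context, a hypothetical Certificate (resource names and domain assumed, not part of this repo) that would exercise the staging issuer above:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert        # hypothetical name
  namespace: default
spec:
  secretName: example-cert-tls   # where the signed cert lands
  issuerRef:
    name: acme-lets-encrypt-staging
    kind: ClusterIssuer
  dnsNames:
    - example.balsillie.net      # placeholder domain
```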

k8s_cert_manager role tasks (modified)

@@ -1,37 +1,51 @@
 ---
-# - name: download the cert manager manifest
-#   ansible.builtin.uri:
-#     url: https://github.com/cert-manager/cert-manager/releases/download/{{ cert_manager_version }}/cert-manager.yaml
-#     dest: "{{ ansible_search_path[0] }}/files/cert_manager_{{ cert_manager_version }}.yaml"
-#     creates: "{{ ansible_search_path[0] }}/files/cert_manager_{{ cert_manager_version }}.yaml"
-#     mode: 0664
-# - name: install cert manager manifest to cluster
-#   kubernetes.core.k8s:
-#     state: present
-#     src: "{{ ansible_search_path[0] }}/files/cert_manager_{{ cert_manager_version }}.yaml"
-- name: template out the cert manager secrets definition file
-  ansible.builtin.template:
-    src: cert-manager-secrets.yaml.j2
-    dest: "{{ ansible_search_path[0] }}/files/cert-manager-secrets.yaml"
-- name: apply cert manager secrets definition
-  kubernetes.core.k8s:
-    state: present
-    src: "{{ ansible_search_path[0] }}/files/cert-manager-secrets.yaml"
-- name: template out the cert manager issuer definition files
-  ansible.builtin.template:
-    src: cert-manager-issuer-acme.yaml.j2
-    dest: "{{ ansible_search_path[0] }}/files/cert-manager-issuer-acme-{{ item.provider }}-{{ item.environment }}.yaml"
-  with_items:
-    "{{ cert_manager_acme_providers }}"
-- name: apply cert manager issuer definition files
-  kubernetes.core.k8s:
-    state: present
-    src: "{{ ansible_search_path[0] }}/files/cert-manager-issuer-acme-{{ item.provider }}-{{ item.environment }}.yaml"
-  with_items:
-    "{{ cert_manager_acme_providers }}"
+- name: download the cert manager manifest
+  ansible.builtin.uri:
+    url: https://github.com/cert-manager/cert-manager/releases/download/{{ cert_manager_version }}/cert-manager.yaml
+    dest: "{{ ansible_search_path[0] }}/files/cert_manager_{{ cert_manager_version }}.yaml"
+    creates: "{{ ansible_search_path[0] }}/files/cert_manager_{{ cert_manager_version }}.yaml"
+    mode: 0664
+- name: install cert manager manifest to cluster
+  kubernetes.core.k8s:
+    state: present
+    src: "{{ ansible_search_path[0] }}/files/cert_manager_{{ cert_manager_version }}.yaml"
+- name: set fact for acme account secret
+  ansible.builtin.set_fact:
+    cert_manager_acme_secret:
+- name: set fact for dns tsig secret
+  ansible.builtin.set_fact:
+    cert_manager_secret_tsig:
+      apiVersion: v1
+      kind: Secret
+      metadata:
+        name: cert-manager-secret-acme
+        namespace:
+      type: Opaque
+      stringData: |
+        key:
+- name: set cert issuer fact
+  ansible.builtin.set_fact:
+    cert_issuer:
+      apiVersion: cert-manager.io/v1
+      kind: ClusterIssuer
+      metadata:
+        name: lets-encrypt-staging
+      spec:
+        acme:
+          email: lets-encrypt@balsillie.email
+          server: https://acme-staging-v02.api.letsencrypt.org/directory
+          privateKeySecretRef:
+            name: cert-manager-secret-acme
+          solvers:
+            - dns01:
+                rfc2136:
+                  nameserver: 2a01:4f8:13b:f203::ecc:53
+                  tsigKeyName: cert-manager-tsig
+                  tsigAlgorithm: HMACSHA512
+                  tsigSecretSecretRef:
+                    name: cert-manager-secret-tsig
+                    key: key
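One way the facts above could be wired up — a minimal sketch, not code from this diff — is to pass them straight to kubernetes.core.k8s, which accepts an inline definition as an alternative to a src file:

```yaml
# Sketch: apply the issuer dict built by set_fact without writing it to disk.
# Assumes cert_issuer is defined exactly as in the tasks above.
- name: apply the cert issuer from the in-memory fact
  kubernetes.core.k8s:
    state: present
    definition: "{{ cert_issuer }}"
```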

cert-manager-issuer-acme.yaml.j2 template (deleted)

@@ -1,19 +0,0 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: acme-{{ item.provider }}-{{ item.environment }}
spec:
  acme:
    email: {{ item.email }}
    server: {{ item.url }}
    privateKeySecretRef:
      name: cert-manager-secret-acme-{{ item.provider }}-{{ item.environment }}
    solvers:
      - dns01:
          rfc2136:
            nameserver: {{ cert_manager_dns_address }}:{{ cert_manager_dns_port }}
            tsigKeyName: {{ cert_manager_tsig_keyname }}
            tsigAlgorithm: {{ cert_manager_tsig_algo }}
            tsigSecretSecretRef:
              name: cert-manager-secret-tsig
              key: {{ cert_manager_tsig_keyname }}

cert-manager-secrets.yaml.j2 template (deleted)

@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
  name: cert-manager-secret-tsig
  namespace: cert-manager
type: Opaque
stringData:
  {{ cert_manager_tsig_keyname }}: {{ cert_manager_tsig_keyvalue }}

kube-dns upstream nameservers ConfigMap (deleted)

@@ -1,8 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ['192.168.199.254', '2a01:4f8:13b:f201::254']

Stub YAML file (deleted)

@@ -1 +0,0 @@
---

k8s_ingress_controller role tasks (modified)

@@ -2,16 +2,14 @@
 - name: download the ingress controller manifest
   ansible.builtin.uri:
     url: https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v{{ ingress_controller_version | string }}/deploy/static/provider/cloud/deploy.yaml
-    dest: "{{ ansible_search_path[0] }}/files/ingress_controller_v{{ ingress_controller_version }}.yaml"
-    creates: "{{ ansible_search_path[0] }}/files/ingress_controller_v{{ ingress_controller_version }}.yaml"
+    dest: "{{ ansible_search_path[0] }}/files/ingress_controller_{{ ingress_controller_version }}.yaml"
+    creates: "{{ ansible_search_path[0] }}/files/ingress_controller_{{ ingress_controller_version }}.yaml"
     mode: 0664
 - name: install ingress controller manifest to cluster
   kubernetes.core.k8s:
     state: present
-    src: "{{ ansible_search_path[0] }}/files/ingress_controller_v{{ ingress_controller_version | string }}.yaml"
+    src: "{{ ansible_search_path[0] }}/files/ingress_controller_{{ ingress_controller_version | string }}.yaml"
 - name: create replacement fact for ingress controller service
   ansible.builtin.set_fact:

Calico APIServer manifest (deleted)

@@ -1,5 +0,0 @@
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Calico BGPConfiguration manifest (deleted)

@@ -1,8 +0,0 @@
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  serviceClusterIPs:
    - cidr: 10.96.0.0/16
    - cidr: 2a01:4f8:13b:f203::00/116

Calico BGPPeer manifest (deleted)

@@ -1,7 +0,0 @@
apiVersion: crd.projectcalico.org/v1
kind: BGPPeer
metadata:
  name: opnsense
spec:
  asNumber: 64612
  peerIP: 192.168.199.254

Calico BGPPeer manifest, IPv4 (deleted)

@@ -1,7 +0,0 @@
apiVersion: crd.projectcalico.org/v1
kind: BGPPeer
metadata:
  name: opnsense-v4
spec:
  asNumber: 64612
  peerIP: 192.168.199.254

Calico BGPPeer manifest, IPv6 (deleted)

@@ -1,7 +0,0 @@
apiVersion: crd.projectcalico.org/v1
kind: BGPPeer
metadata:
  name: opnsense-v6
spec:
  asNumber: 64612
  peerIP: 2a01:4f8:13b:f201::254

Calico Installation manifest (deleted)

@@ -1,22 +0,0 @@
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    bgp: Enabled
    ipPools:
      - blockSize: 20
        cidr: 10.128.0.0/16
        encapsulation: None
        natOutgoing: Enabled
        nodeSelector: all()
    linuxDataplane: Iptables
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

kubernetes-services-endpoint ConfigMap (deleted)

@@ -1,4 +0,0 @@
apiVersion: v1
data: {KUBERNETES_SERVICE_HOST: 192.168.199.240, KUBERNETES_SERVICE_PORT: '6443'}
kind: ConfigMap
metadata: {name: kubernetes-services-endpoint, namespace: tigera-operator}

Calico dual-stack Installation manifest (deleted)

@@ -1,20 +0,0 @@
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    bgp: Enabled
    hostPorts: Enabled
    ipPools:
      - blockSize: 20
        cidr: 10.128.0.0/16
        encapsulation: None
        natOutgoing: Disabled
        nodeSelector: all()
      - blockSize: 120
        cidr: 2a01:4f8:13b:f202::00/64
        encapsulation: None
        natOutgoing: Disabled
        nodeSelector: all()
    linuxDataplane: Iptables

tigera-operator Namespace manifest (deleted)

@@ -1,5 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
  labels: {name: tigera-operator}
  name: tigera-operator

Calico custom-resources manifest (deleted)

@@ -1,27 +0,0 @@
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
      - blockSize: 26
        cidr: 192.168.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Calico custom-resources manifest, second copy (deleted)

@@ -1,27 +0,0 @@
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
      - blockSize: 26
        cidr: 192.168.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Calico custom-resources manifest, third copy (deleted)

@@ -1,27 +0,0 @@
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
      - blockSize: 26
        cidr: 192.168.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

OpenEBS hostpath StorageClass, HDD (modified)

@@ -10,6 +10,6 @@ metadata:
         value: "hostpath"
       - name: BasePath
         value: "/ebs/hdd/"
-volumeBindingMode: WaitForFirstConsumer
+volumeBindingMode: Immediate
 allowVolumeExpansion: true
-reclaimPolicy: Delete
+reclaimPolicy: Retain

OpenEBS hostpath StorageClass, SSD (modified)

@@ -10,6 +10,6 @@ metadata:
         value: "hostpath"
       - name: BasePath
         value: "/ebs/ssd/"
-volumeBindingMode: WaitForFirstConsumer
+volumeBindingMode: Immediate
 allowVolumeExpansion: true
-reclaimPolicy: Delete
+reclaimPolicy: Retain
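To see what the two changed fields mean in practice, here is a hypothetical claim (claim name and storage class name assumed, not confirmed by the diff) against one of these classes: with volumeBindingMode: Immediate the PV is provisioned as soon as the claim is created rather than waiting for the first consuming pod, and with reclaimPolicy: Retain the PV survives deletion of the claim.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-hdd-claim              # hypothetical
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-hostpath-hdd  # assumed class name
  resources:
    requests:
      storage: 5Gi
```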

k8s_storage_smb_deploy role defaults (deleted)

@@ -1,8 +0,0 @@
csi_smb_version: v1.9.0
csi_smb_username: kube
csi_smb_storage_classes:
  - name: userdata
    server: 192.168.199.253
    share: userdata
    username: "{{ csi_smb_username }}"
    password: "{{ csi_smb_password }}"

csi-smb-controller.yaml (deleted)

@@ -1,109 +0,0 @@
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: csi-smb-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-smb-controller
  template:
    metadata:
      labels:
        app: csi-smb-controller
    spec:
      dnsPolicy: Default  # available values: Default, ClusterFirstWithHostNet, ClusterFirst
      serviceAccountName: csi-smb-controller-sa
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/controlplane"
          operator: "Exists"
          effect: "NoSchedule"
        - key: "node-role.kubernetes.io/control-plane"
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: csi-provisioner
          image: registry.k8s.io/sig-storage/csi-provisioner:v3.2.0
          args:
            - "-v=2"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election"
            - "--leader-election-namespace=kube-system"
            - "--extra-create-metadata=true"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
          resources:
            limits:
              cpu: 1
              memory: 300Mi
            requests:
              cpu: 10m
              memory: 20Mi
        - name: liveness-probe
          image: registry.k8s.io/sig-storage/livenessprobe:v2.7.0
          args:
            - --csi-address=/csi/csi.sock
            - --probe-timeout=3s
            - --health-port=29642
            - --v=2
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
          resources:
            limits:
              cpu: 1
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 20Mi
        - name: smb
          image: registry.k8s.io/sig-storage/smbplugin:v1.9.0
          imagePullPolicy: IfNotPresent
          args:
            - "--v=5"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--metrics-address=0.0.0.0:29644"
          ports:
            - containerPort: 29642
              name: healthz
              protocol: TCP
            - containerPort: 29644
              name: metrics
              protocol: TCP
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 30
            timeoutSeconds: 10
            periodSeconds: 30
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 10m
              memory: 20Mi
      volumes:
        - name: socket-dir
          emptyDir: {}

csi-smb-driver.yaml (deleted)

@@ -1,8 +0,0 @@
---
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: smb.csi.k8s.io
spec:
  attachRequired: false
  podInfoOnMount: true

csi-smb-node-windows.yaml (deleted)

@@ -1,160 +0,0 @@
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-smb-node-win
  namespace: kube-system
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: csi-smb-node-win
  template:
    metadata:
      labels:
        app: csi-smb-node-win
    spec:
      tolerations:
        - key: "node.kubernetes.io/os"
          operator: "Exists"
          effect: "NoSchedule"
      nodeSelector:
        kubernetes.io/os: windows
      priorityClassName: system-node-critical
      serviceAccountName: csi-smb-node-sa
      containers:
        - name: liveness-probe
          volumeMounts:
            - mountPath: C:\csi
              name: plugin-dir
          image: registry.k8s.io/sig-storage/livenessprobe:v2.7.0
          args:
            - --csi-address=$(CSI_ENDPOINT)
            - --probe-timeout=3s
            - --health-port=29643
            - --v=2
          env:
            - name: CSI_ENDPOINT
              value: unix://C:\\csi\\csi.sock
          resources:
            limits:
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 40Mi
        - name: node-driver-registrar
          image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1
          args:
            - --v=2
            - --csi-address=$(CSI_ENDPOINT)
            - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
          livenessProbe:
            exec:
              command:
                - /csi-node-driver-registrar.exe
                - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
                - --mode=kubelet-registration-probe
            initialDelaySeconds: 60
            timeoutSeconds: 30
          env:
            - name: CSI_ENDPOINT
              value: unix://C:\\csi\\csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: C:\\var\\lib\\kubelet\\plugins\\smb.csi.k8s.io\\csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: kubelet-dir
              mountPath: "C:\\var\\lib\\kubelet"
            - name: plugin-dir
              mountPath: C:\csi
            - name: registration-dir
              mountPath: C:\registration
          resources:
            limits:
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 40Mi
        - name: smb
          image: registry.k8s.io/sig-storage/smbplugin:v1.9.0
          imagePullPolicy: IfNotPresent
          args:
            - --v=5
            - --endpoint=$(CSI_ENDPOINT)
            - --nodeid=$(KUBE_NODE_NAME)
            - "--metrics-address=0.0.0.0:29645"
            - "--remove-smb-mapping-during-unmount=true"
          ports:
            - containerPort: 29643
              name: healthz
              protocol: TCP
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 30
            timeoutSeconds: 10
            periodSeconds: 30
          env:
            - name: CSI_ENDPOINT
              value: unix://C:\\csi\\csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: kubelet-dir
              mountPath: "C:\\var\\lib\\kubelet"
            - name: plugin-dir
              mountPath: C:\csi
            - name: csi-proxy-fs-pipe-v1
              mountPath: \\.\pipe\csi-proxy-filesystem-v1
            - name: csi-proxy-smb-pipe-v1
              mountPath: \\.\pipe\csi-proxy-smb-v1
            # these paths are still included for compatibility, they're used
            # only if the node has still the beta version of the CSI proxy
            - name: csi-proxy-fs-pipe-v1beta1
              mountPath: \\.\pipe\csi-proxy-filesystem-v1beta1
            - name: csi-proxy-smb-pipe-v1beta1
              mountPath: \\.\pipe\csi-proxy-smb-v1beta1
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 10m
              memory: 40Mi
      volumes:
        - name: csi-proxy-fs-pipe-v1
          hostPath:
            path: \\.\pipe\csi-proxy-filesystem-v1
        - name: csi-proxy-smb-pipe-v1
          hostPath:
            path: \\.\pipe\csi-proxy-smb-v1
        # these paths are still included for compatibility, they're used
        # only if the node has still the beta version of the CSI proxy
        - name: csi-proxy-fs-pipe-v1beta1
          hostPath:
            path: \\.\pipe\csi-proxy-filesystem-v1beta1
        - name: csi-proxy-smb-pipe-v1beta1
          hostPath:
            path: \\.\pipe\csi-proxy-smb-v1beta1
        - name: registration-dir
          hostPath:
            path: C:\var\lib\kubelet\plugins_registry\
            type: Directory
        - name: kubelet-dir
          hostPath:
            path: C:\var\lib\kubelet\
            type: Directory
        - name: plugin-dir
          hostPath:
            path: C:\var\lib\kubelet\plugins\smb.csi.k8s.io\
            type: DirectoryOrCreate

csi-smb-node.yaml (deleted)

@@ -1,130 +0,0 @@
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-smb-node
  namespace: kube-system
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  selector:
    matchLabels:
      app: csi-smb-node
  template:
    metadata:
      labels:
        app: csi-smb-node
    spec:
      hostNetwork: true
      dnsPolicy: Default  # available values: Default, ClusterFirstWithHostNet, ClusterFirst
      serviceAccountName: csi-smb-node-sa
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      tolerations:
        - operator: "Exists"
      containers:
        - name: liveness-probe
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
          image: registry.k8s.io/sig-storage/livenessprobe:v2.7.0
          args:
            - --csi-address=/csi/csi.sock
            - --probe-timeout=3s
            - --health-port=29643
            - --v=2
          resources:
            limits:
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 20Mi
        - name: node-driver-registrar
          image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1
          args:
            - --csi-address=$(ADDRESS)
            - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
            - --v=2
          livenessProbe:
            exec:
              command:
                - /csi-node-driver-registrar
                - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
                - --mode=kubelet-registration-probe
            initialDelaySeconds: 30
            timeoutSeconds: 15
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/smb.csi.k8s.io/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
          resources:
            limits:
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 20Mi
        - name: smb
          image: registry.k8s.io/sig-storage/smbplugin:v1.9.0
          imagePullPolicy: IfNotPresent
          args:
            - "--v=5"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--nodeid=$(KUBE_NODE_NAME)"
            - "--metrics-address=0.0.0.0:29645"
          ports:
            - containerPort: 29643
              name: healthz
              protocol: TCP
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 30
            timeoutSeconds: 10
            periodSeconds: 30
          env:
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
            - mountPath: /var/lib/kubelet/
              mountPropagation: Bidirectional
              name: mountpoint-dir
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 10m
              memory: 20Mi
      volumes:
        - hostPath:
            path: /var/lib/kubelet/plugins/smb.csi.k8s.io
            type: DirectoryOrCreate
          name: socket-dir
        - hostPath:
            path: /var/lib/kubelet/
            type: DirectoryOrCreate
          name: mountpoint-dir
        - hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: DirectoryOrCreate
          name: registration-dir
---

rbac-csi-smb.yaml (deleted)

@@ -1,56 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-smb-controller-sa
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-smb-node-sa
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: smb-external-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: smb-csi-provisioner-binding
subjects:
  - kind: ServiceAccount
    name: csi-smb-controller-sa
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: smb-external-provisioner-role
  apiGroup: rbac.authorization.k8s.io
k8s_storage_smb_deploy role tasks (deleted)

@@ -1,40 +0,0 @@
---
- name: download the csi-smb manifests
  become: false
  ansible.builtin.uri:
    url: "https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/{{ csi_smb_version }}/deploy/{{ item }}"
    dest: "{{ ansible_search_path[0] }}/files/{{ item }}"
    creates: "{{ ansible_search_path[0] }}/files/{{ item }}"
    mode: 0664
  with_items:
    - rbac-csi-smb.yaml
    - csi-smb-driver.yaml
    - csi-smb-controller.yaml
    - csi-smb-node.yaml
    - csi-smb-node-windows.yaml
- name: install the csi-smb manifests
  kubernetes.core.k8s:
    src: "{{ ansible_search_path[0] }}/files/{{ item }}"
    state: present
  with_items:
    - rbac-csi-smb.yaml
    - csi-smb-driver.yaml
    - csi-smb-controller.yaml
    - csi-smb-node.yaml
    - csi-smb-node-windows.yaml
# - name: template out the csi-smb storage class definitions
#   ansible.builtin.template:
#     src: smb_storage_class.yaml.j2
#     dest: "{{ ansible_search_path[0] }}/files/smb_storage_class_{{ item.name }}.yaml"
#   with_items:
#     "{{ csi_smb_storage_classes }}"
# - name: install the csi-smb storage classes
#   kubernetes.core.k8s:
#     src: "{{ ansible_search_path[0] }}/files/smb_storage_class_{{ item.name }}.yaml"
#     state: present
#   with_items:
#     "{{ csi_smb_storage_classes }}"

smb_storage_class.yaml.j2 template (deleted)

@@ -1,24 +0,0 @@
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: smb
provisioner: smb.csi.k8s.io
parameters:
  source: "//smb-server.default.svc.cluster.local/share"
  # if csi.storage.k8s.io/provisioner-secret is provided, will create a sub directory
  # with PV name under source
  csi.storage.k8s.io/provisioner-secret-name: "smbcreds"
  csi.storage.k8s.io/provisioner-secret-namespace: "default"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "default"
volumeBindingMode: Immediate
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=1001
  - gid=1001
  - noperm
  - mfsymlinks
  - cache=strict
  - noserverino # required to prevent data corruption
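The class above references a secret named smbcreds in the default namespace; a hypothetical shape for it (values are placeholders, keys follow the smb csi driver's username/password convention) would be:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: smbcreds
  namespace: default
type: Opaque
stringData:
  username: kube       # placeholder; mirrors the csi_smb_username default above
  password: changeme   # placeholder; keep the real value in the ansible vault
```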