Testing support for kubevirt #1213

Merged · 14 commits · Mar 13, 2024
9 changes: 6 additions & 3 deletions ramenctl/ramenctl/config.py
@@ -25,7 +25,7 @@ def run(args):
cloud_secret = generate_cloud_credentials_secret(env["clusters"][0], args)

if env["hub"]:
hub_cm = generate_config_map("hub", env["clusters"], args)
hub_cm = generate_config_map("hub", env, args)

wait_for_ramen_hub_operator(env["hub"], args)

@@ -38,7 +38,7 @@ def run(args):
wait_for_dr_clusters(env["hub"], env["clusters"], args)
wait_for_dr_policy(env["hub"], args)
else:
dr_cluster_cm = generate_config_map("dr-cluster", env["clusters"], args)
dr_cluster_cm = generate_config_map("dr-cluster", env, args)

for cluster in env["clusters"]:
create_ramen_s3_secrets(cluster, s3_secrets)
@@ -89,7 +89,9 @@ def create_cloud_credentials_secret(cluster, yaml):
kubectl.apply("--filename=-", input=yaml, context=cluster, log=command.debug)


def generate_config_map(controller, clusters, args):
def generate_config_map(controller, env, args):
clusters = env["clusters"]
volsync = env["features"].get("volsync", True)
template = drenv.template(command.resource("configmap.yaml"))
return template.substitute(
name=f"ramen-{controller}-operator-config",
@@ -98,6 +100,7 @@ def generate_config_map(controller, clusters, args):
cluster2=clusters[1],
minio_url_cluster1=minio.service_url(clusters[0]),
minio_url_cluster2=minio.service_url(clusters[1]),
volsync_disabled="false" if volsync else "true",
namespace=args.ramen_namespace,
)
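Not part of the diff, but for illustration: the new `features` lookup treats volsync as enabled by default, and the template receives the inverted string value. A minimal sketch of that behavior, assuming an environment file that declares a `features` mapping:

```
def volsync_disabled_value(env):
    # Mirrors the lookup above: volsync defaults to enabled when the
    # environment's "features" mapping omits the key.
    volsync = env["features"].get("volsync", True)
    return "false" if volsync else "true"

assert volsync_disabled_value({"features": {"volsync": False}}) == "true"
assert volsync_disabled_value({"features": {}}) == "false"
```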

2 changes: 2 additions & 0 deletions ramenctl/ramenctl/resources/configmap.yaml
@@ -34,6 +34,8 @@ data:
clusterServiceVersionName: ramen-dr-cluster-operator.v0.0.1
kubeObjectProtection:
veleroNamespaceName: velero
volSync:
disabled: $volsync_disabled
s3StoreProfiles:
- s3ProfileName: minio-on-$cluster1
s3Bucket: bucket
32 changes: 31 additions & 1 deletion test/addons/cdi/cr/kustomization.yaml
@@ -4,4 +4,34 @@
# yamllint disable rule:line-length
---
resources:
- https://github.com/kubevirt/containerized-data-importer/releases/download/v1.57.0/cdi-cr.yaml
- https://github.com/kubevirt/containerized-data-importer/releases/download/v1.58.1/cdi-cr.yaml
patches:
# Allow pulling from local insecure registry.
- target:
kind: CDI
name: cdi
patch: |-
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
name: not-used
spec:
config:
insecureRegistries:
- host.minikube.internal:5000
# Increase certificate duration to avoid certificate renewals while a cluster
# is suspended and resumed.
- target:
kind: CDI
name: cdi
patch: |-
apiVersion: cdi.kubevirt.io/v1beta1
kind: CDI
metadata:
name: not-used
spec:
certConfig:
ca:
duration: 168h
server:
duration: 168h
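As an aside (not part of the diff), one way to confirm that the patched settings were applied could be a jsonpath query against the deployed CDI resource; the resource name comes from the patch target above, and the context name is a placeholder:

```
kubectl get cdi.cdi.kubevirt.io/cdi --context dr1 \
    -o jsonpath='{.spec.config.insecureRegistries}{"\n"}{.spec.certConfig.ca.duration}{"\n"}'
```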
2 changes: 1 addition & 1 deletion test/addons/cdi/disk/source.yaml
@@ -9,4 +9,4 @@ metadata:
spec:
source:
registry:
url: "docker://quay.io/alitke/cirros:latest"
url: "docker://quay.io/nirsof/cirros:0.6.2-1"
2 changes: 1 addition & 1 deletion test/addons/cdi/operator/kustomization.yaml
@@ -4,4 +4,4 @@
# yamllint disable rule:line-length
---
resources:
- https://github.com/kubevirt/containerized-data-importer/releases/download/v1.57.0/cdi-operator.yaml
- https://github.com/kubevirt/containerized-data-importer/releases/download/v1.58.1/cdi-operator.yaml
8 changes: 8 additions & 0 deletions test/addons/cdi/start
@@ -37,6 +37,14 @@ def wait(cluster):
"--timeout=600s",
context=cluster,
)
print("Waiting until cdi cr finished progressing")
kubectl.wait(
"cdi.cdi.kubevirt.io/cdi",
"--for=condition=progressing=False",
f"--namespace={NAMESPACE}",
"--timeout=300s",
context=cluster,
)
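For reference (not part of the change), the added wait corresponds roughly to the following kubectl invocation; the namespace and context values here are placeholders:

```
kubectl wait cdi.cdi.kubevirt.io/cdi \
    --for=condition=progressing=False \
    --namespace=cdi \
    --timeout=300s \
    --context=dr1
```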


if len(sys.argv) != 2:
20 changes: 19 additions & 1 deletion test/addons/kubevirt/cr/kustomization.yaml
@@ -4,4 +4,22 @@
# yamllint disable rule:line-length
---
resources:
- https://github.com/kubevirt/kubevirt/releases/download/v1.0.1/kubevirt-cr.yaml
- https://github.com/kubevirt/kubevirt/releases/download/v1.2.0/kubevirt-cr.yaml
patches:
# Increase certificate duration to avoid certificate renewals while a cluster
# is suspended and resumed.
- target:
kind: KubeVirt
name: kubevirt
patch: |-
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
name: not-used
spec:
certificateRotateStrategy:
selfSigned:
ca:
duration: 168h
server:
duration: 168h
2 changes: 1 addition & 1 deletion test/addons/kubevirt/operator/kustomization.yaml
@@ -4,4 +4,4 @@
# yamllint disable rule:line-length
---
resources:
- https://github.com/kubevirt/kubevirt/releases/download/v1.0.1/kubevirt-operator.yaml
- https://github.com/kubevirt/kubevirt/releases/download/v1.2.0/kubevirt-operator.yaml
11 changes: 11 additions & 0 deletions test/configs/kubevirt/vm-pvc-k8s-regional.yaml
@@ -0,0 +1,11 @@
# SPDX-FileCopyrightText: The RamenDR authors
# SPDX-License-Identifier: Apache-2.0

---
repo: https://github.com/ramendr/ocm-ramen-samples.git
path: subscription/kubevirt/vm-pvc-k8s-regional
branch: main
name: vm-pvc
namespace: vm-pvc
dr_policy: dr-policy
pvc_label: vm
51 changes: 43 additions & 8 deletions test/drenv/__main__.py
@@ -30,7 +30,16 @@ def main():
p = argparse.ArgumentParser(prog="drenv")
p.add_argument("-v", "--verbose", action="store_true", help="Be more verbose")
p.add_argument(
"--skip-tests", dest="run_tests", action="store_false", help="Skip self tests"
"--skip-tests",
dest="run_tests",
action="store_false",
help="Skip addons 'test' hooks",
)
p.add_argument(
"--skip-addons",
dest="run_addons",
action="store_false",
help="Skip addons 'start' and 'stop' hooks",
)
p.add_argument("command", choices=commands, help="Command to run")
p.add_argument("--name-prefix", help="Prefix profile names")
@@ -57,7 +66,12 @@ def main():
def cmd_start(env, args):
start = time.monotonic()
logging.info("[%s] Starting environment", env["name"])
hooks = ["start", "test"] if args.run_tests else ["start"]

hooks = []
if args.run_addons:
hooks.append("start")
if args.run_tests:
hooks.append("test")

# Delaying `minikube start` ensures cluster start order.
execute(
@@ -67,7 +81,9 @@ def cmd_start(env, args):
hooks=hooks,
args=args,
)
execute(run_worker, env["workers"], hooks=hooks)

if hooks:
execute(run_worker, env["workers"], hooks=hooks)

if "ramen" in env:
ramen.dump_e2e_config(env)
@@ -82,7 +98,8 @@ def cmd_start(env, args):
def cmd_stop(env, args):
start = time.monotonic()
logging.info("[%s] Stopping environment", env["name"])
execute(stop_cluster, env["profiles"])
hooks = ["stop"] if args.run_addons else []
execute(stop_cluster, env["profiles"], hooks=hooks)
logging.info(
"[%s] Environment stopped in %.2f seconds",
env["name"],
@@ -107,6 +124,18 @@ def cmd_delete(env, args):
)


def cmd_suspend(env, args):
logging.info("[%s] Suspending environment", env["name"])
for profile in env["profiles"]:
run("virsh", "-c", "qemu:///system", "suspend", profile["name"])


def cmd_resume(env, args):
logging.info("[%s] Resuming environment", env["name"])
for profile in env["profiles"]:
run("virsh", "-c", "qemu:///system", "resume", profile["name"])


def cmd_dump(env, args):
yaml.dump(env, sys.stdout)
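The new suspend and resume commands drive virsh directly. Illustratively, assuming the same `drenv <command> <envfile>` convention, a kubevirt test cycle could pause and resume the cluster VMs like this:

```
drenv suspend envs/regional-dr-kubevirt.yaml
# ... later, after the host comes back ...
drenv resume envs/regional-dr-kubevirt.yaml
```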

@@ -152,17 +181,23 @@ def start_cluster(profile, hooks=(), args=None, **options):
if is_restart:
wait_for_deployments(profile)

execute(run_worker, profile["workers"], max_workers=args.max_workers, hooks=hooks)
if hooks:
execute(
run_worker,
profile["workers"],
max_workers=args.max_workers,
hooks=hooks,
)


def stop_cluster(profile, **options):
def stop_cluster(profile, hooks=(), **options):
cluster_status = cluster.status(profile["name"])

if cluster_status == cluster.READY:
if cluster_status == cluster.READY and hooks:
execute(
run_worker,
profile["workers"],
hooks=["stop"],
hooks=hooks,
reverse=True,
allow_failure=True,
)
2 changes: 2 additions & 0 deletions test/envs/regional-dr-hubless.yaml
@@ -9,6 +9,8 @@ ramen:
hub: null
clusters: [dr1, dr2]
topology: regional-dr
features:
volsync: true

templates:
- name: "dr-cluster"
8 changes: 2 additions & 6 deletions test/envs/regional-dr-kubevirt.yaml
@@ -9,6 +9,8 @@ ramen:
hub: hub
clusters: [dr1, dr2]
topology: regional-dr
features:
volsync: false

templates:
- name: "dr-cluster"
@@ -25,7 +27,6 @@ templates:
extra_disks: 1
disk_size: "50g"
addons:
- volumesnapshots
- csi-hostpath-driver
workers:
- addons:
@@ -57,8 +58,6 @@ templates:
- name: ocm-controller
- name: cert-manager
- name: olm
- name: submariner
args: ["hub", "dr1", "dr2"]

profiles:
- name: "dr1"
@@ -72,6 +71,3 @@ workers:
- addons:
- name: rbd-mirror
args: ["dr1", "dr2"]
- addons:
- name: volsync
args: ["dr1", "dr2"]
2 changes: 2 additions & 0 deletions test/envs/regional-dr.yaml
@@ -9,6 +9,8 @@ ramen:
hub: hub
clusters: [dr1, dr2]
topology: regional-dr
features:
volsync: true

templates:
- name: "dr-cluster"
88 changes: 88 additions & 0 deletions test/gitlap/README.md
@@ -0,0 +1,88 @@
# Setting up a local git server

## Initial setup

1. Install lighttpd

```
sudo dnf install lighttpd
```

1. Create the git repo

Create a directory where the git repositories will be served:

```
sudo mkdir /var/www/gitlap
cd /var/www/gitlap
sudo git clone --bare https://github.com/nirs/ocm-kubevirt-samples.git
```

Set the git repo permissions so that you can push changes and the web
server can serve the repo.

```
sudo chown -R $USER:lighttpd /var/www/gitlap
```

1. Copy the vhost configuration

```
sudo cp gitlap.conf /etc/lighttpd/vhosts.d/
```
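The gitlap.conf shipped alongside this README is not reproduced here; a minimal lighttpd vhost along these lines would serve the bare repositories over plain HTTP (illustrative sketch, not the actual file):

```
# Sketch only; the actual gitlap.conf ships in the repo.
$HTTP["host"] == "host.minikube.internal" {
    server.document-root = "/var/www/gitlap"
    dir-listing.activate = "enable"
}
```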

1. Uncomment the vhost include in /etc/lighttpd/lighttpd.conf

```
include conf_dir + "/vhosts.d/*.conf"
```

1. Enable and start the service

```
sudo systemctl enable --now lighttpd
```

1. Allow http access in the libvirt zone

```
sudo firewall-cmd --zone=libvirt --add-service=http --permanent
sudo firewall-cmd --reload
```

## Testing the server

1. Add an entry in /etc/hosts for testing locally

```
192.168.122.1 host.minikube.internal
```

1. Check that git clone works

```
git clone http://host.minikube.internal/ocm-kubevirt-samples.git
rm -rf ocm-kubevirt-samples
```

1. Check git clone in a minikube cluster

```
minikube ssh -p dr1
git clone http://host.minikube.internal/ocm-kubevirt-samples.git
rm -rf ocm-kubevirt-samples
```

## Updating the git repo

1. Add a remote to your working repo

```
git remote add gitlap file:///var/www/gitlap/ocm-kubevirt-samples.git
```

1. Push changes to the remote

```
git push -f gitlap main
```