Implementation TOBIAS Kubernetes S3 Version
To run a job you have to send a YAML file with the configuration to the cluster. To get the files for the calculation onto the cluster and the results back to your local machine you need the S3 storage. First you upload your data to the S3, then the job on the cluster starts and downloads the files from the S3. When the calculation has finished, the job uploads the results to the S3, and your VM can then download them from the S3 to your machine.
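As an illustration, a job manifest of this kind might look as follows; the image name, bucket, and commands are assumptions made for this sketch, not the actual values used by the pipeline:

```yaml
# Hypothetical Kubernetes Job: download input from S3, run the calculation,
# upload the results back to S3. All names here are example values.
apiVersion: batch/v1
kind: Job
metadata:
  name: tobias-example-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: example/tobias-worker:latest      # assumed image name
          env:
            - name: S3_BUCKET
              value: "s3://example-tobias-bucket"  # assumed bucket name
          command: ["/bin/sh", "-c"]
          args:
            - >
              aws s3 cp "$S3_BUCKET/input/" /data/ --recursive &&
              run_calculation /data/ /results/ &&
              aws s3 cp /results/ "$S3_BUCKET/results/" --recursive
```

Such a manifest would be submitted with `kubectl apply -f job.yaml` and monitored with `kubectl get jobs`.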
The most time-consuming part of calculating jobs on the cluster is transferring the data through the S3 storage to the cluster. For this reason the bigwig files are stored on the cluster on an NFS volume, so that every pod can get the files from there. The plotting on Kubernetes is split into 3 processes.
1. In the first process the bigwig files from ATACorrect are sent to the NFS volume on the cluster.
2. When the bigwigs are present on the NFS, the pipeline starts the plotting on the cluster. All plots for one motif run in one pod. When the plotting has finished, the plots are uploaded to the S3 storage.
3. In the last step, when a pod on the cluster has finished, the files are downloaded from the S3 back to your VM.
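The three steps above can be sketched as a small driver script. The helper-pod name, NFS path, bucket name, and job-file layout are all assumptions for illustration, not the pipeline's real names:

```python
# Hypothetical sketch of the three plotting steps; every path, pod name,
# and bucket below is an assumed example, not taken from the real pipeline.
import subprocess

S3_BUCKET = "s3://example-tobias-bucket"  # assumed bucket name
NFS_BIGWIG_DIR = "/nfs/bigwigs"           # assumed NFS mount inside the cluster


def upload_bigwigs_cmd(local_dir: str) -> list[str]:
    """Step 1: copy the ATACorrect bigwig files to the NFS volume via a helper pod."""
    return ["kubectl", "cp", local_dir, f"nfs-helper:{NFS_BIGWIG_DIR}"]


def start_plot_job_cmd(motif: str) -> list[str]:
    """Step 2: one Kubernetes job (one pod) runs all plots for a single motif."""
    return ["kubectl", "apply", "-f", f"jobs/plot-{motif}.yaml"]


def download_results_cmd(motif: str, local_dir: str) -> list[str]:
    """Step 3: fetch the finished plots for a motif from S3 back to the VM."""
    return ["aws", "s3", "cp", f"{S3_BUCKET}/results/{motif}/", local_dir, "--recursive"]


def run(cmd: list[str]) -> None:
    """Execute one command, raising if it fails."""
    subprocess.run(cmd, check=True)
```

The command-building functions are kept separate from `run()` so the generated `kubectl`/`aws` calls can be inspected without a cluster at hand.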