This lesson is in the early stages of development (Alpha version)

Getting Started with Snakemake

Learn to tame your unruly data processing workflow with Snakemake, a tool for creating reproducible and scalable data analyses. Workflows are described in a human-readable, Python-based language, and can be scaled seamlessly to server, cluster, grid, and cloud environments without modifying the workflow definition. In this lesson, you will build up a reproducible, automated, and efficient workflow step by step with Snakemake. Along the way, you will learn the benefits of modern workflow engines and how to apply them to your own work.
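As a taste of what is to come, a Snakemake workflow is a set of rules, each declaring its input files, its output files, and the command that produces one from the other. A minimal, hypothetical sketch (the file and rule names here are illustrative, not taken from the lesson):

```snakemake
# Snakefile: a single rule that counts the words in a text file.
# Snakemake reruns the "shell" command only when the output is
# missing or older than the input.
rule count_words:
    input:
        "book.txt"
    output:
        "book.dat"
    shell:
        "wc -w {input} > {output}"
```

Asking Snakemake for `book.dat` (for example with `snakemake --cores 1 book.dat`) makes it work out which rules to run, and in what order, from the declared inputs and outputs.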

The example workflow will launch several cluster jobs running the Amdahl program from the Introduction to High-Performance Computing lesson with different numbers of processors, collect the output from each job, and create a graph of "speedup" (the reference runtime, usually measured with one processor, node, or GPU, divided by the runtime with more compute resources) as a function of processor count. You will use this data to analyze the performance of the program and compare it to the predictions made by Amdahl's Law. This example was chosen over a more complex, real-world scientific workflow because the goal is to focus on building the workflow without getting distracted by the underlying science domain.
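Amdahl's Law itself fits in a few lines of code: if a fraction p of a program's runtime can be parallelised, the speedup on n processors is 1 / ((1 - p) + p/n). A minimal sketch (the parallel fraction used below is an illustrative value, not one measured from the Amdahl program):

```python
def amdahl_speedup(n_procs, parallel_fraction):
    """Predicted speedup on n_procs processors for a program whose
    runtime is parallel_fraction parallelisable (Amdahl's Law)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_procs)

def measured_speedup(reference_runtime, runtime):
    """Speedup as the lesson defines it: reference runtime divided by
    the runtime obtained with increased compute resources."""
    return reference_runtime / runtime

# With 80% of the work parallelisable, 4 processors give only 2.5x,
# and no processor count can ever exceed 1 / 0.2 = 5x.
print(amdahl_speedup(4, 0.8))  # 2.5
```

Plotting `amdahl_speedup` alongside the measured speedups from the cluster jobs is exactly the comparison this workflow automates.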

At the end of this lesson, you will be able to build a reproducible, automated workflow with Snakemake and scale it from your own machine to an HPC cluster.

Prerequisites

Setup

Please follow the instructions in the Setup page.

The files used in this lesson can be downloaded:

Once downloaded, please extract the archive into the directory where you wish to work for all the hands-on exercises.

Solutions for most episodes can be found in the .solutions directory inside the code download.

A requirements.txt file is included in the download. This can be used to install the required Python packages.
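For example, the packages can be installed into a virtual environment so they stay separate from your system Python (the environment name below is just an illustration):

```shell
# Create and activate a virtual environment, then install the
# lesson's dependencies from the included requirements file.
python -m venv snakemake-lesson
source snakemake-lesson/bin/activate
pip install -r requirements.txt
```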

Schedule

Setup  Download files required for the lesson
00:00  1. Manual Data Processing Workflow
       How can I make my results easier to reproduce?
00:30  2. Snakefiles
       How do I write a simple workflow?
01:10  3. Wildcards
       How can I abbreviate the rules in my pipeline?
02:00  4. Pattern Rules
       How can I define rules to operate on similar files?
02:20  5. Snakefiles are Python code
       How can I automatically manage dependencies and outputs?
       How can I use Python code to add features to my pipeline?
03:20  6. Completing the Pipeline
       How do I move generated files into a subdirectory?
       How do I add new processing rules to a Snakefile?
       What are some common practices for Snakemake?
       How can I get my workflow to clean up generated files?
       What is a default rule?
04:00  7. Resources and parallelism
       How do I scale a pipeline across multiple cores?
       How do I manage access to resources while working in parallel?
04:45  8. Make your workflow portable and reduce duplication
       How can I eliminate duplicated file names and paths in my workflows?
       How can I make my workflows portable and easily shared?
05:20  9. Scaling a pipeline across a cluster
       How do I run my workflow on an HPC system?
06:05  10. Final notes
       What are some tips and tricks I can use to make this easier?
06:25  Finish

The actual schedule may vary slightly depending on the topics and exercises chosen by the instructor.